| modelId (string, 4-111 chars) | lastModified (string, 24 chars) | tags (list) | pipeline_tag (string, 5-30 chars, nullable) | author (string, 2-34 chars, nullable) | config (null) | securityStatus (null) | id (string, 4-111 chars) | likes (int64, 0-9.53k) | downloads (int64, 2-73.6M) | library_name (string, 2-84 chars, nullable) | created (timestamp[us]) | card (string, 101-901k chars) | card_len (int64, 101-901k) | embeddings (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
DucHaiten/DucHaitenJourney | 2023-06-15T12:58:48.000Z | [
"diffusers",
"stable-diffusion",
"text-to-image",
"image-to-image",
"en",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | DucHaiten | null | null | DucHaiten/DucHaitenJourney | 8 | 3,377 | diffusers | 2023-03-12T16:01:27 | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
- image-to-image
- diffusers
license: creativeml-openrail-m
inference: true
---
Recommended sampler: DPM++ 2S a Karras, CFG scale 10.
Results are better at larger resolutions such as 768x768; 512x512 will give poor quality.
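As a minimal, unofficial sketch (not part of the original card): the model could presumably be loaded as a diffusers `StableDiffusionPipeline` with the settings above, passing the negative prompt listed below via `negative_prompt`. The example prompt and the use of `DPMSolverSinglestepScheduler` as an approximation of DPM++ 2S a Karras are assumptions.
```python
# Unofficial sketch, not from the original model card.
import torch
from diffusers import StableDiffusionPipeline, DPMSolverSinglestepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "DucHaiten/DucHaitenJourney", torch_dtype=torch.float16
).to("cuda")
# Approximate "DPM++ 2S a Karras"; there is no exact ancestral equivalent in diffusers.
pipe.scheduler = DPMSolverSinglestepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

negative_prompt = "illustration, painting, cartoons, sketch, (worst quality:2), ..."  # use the full negative prompt listed below
image = pipe(
    "portrait of an astronaut in a flower field, highly detailed",  # example prompt (assumption)
    negative_prompt=negative_prompt,
    guidance_scale=10,  # cfg 10, as recommended above
    height=768,
    width=768,          # larger resolution recommended above
).images[0]
image.save("journey.png")
```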
negative prompt:
illustration, painting, cartoons, sketch, (worst quality:2), (low quality:2), (normal quality:2), lowres, bad anatomy, bad hands, ((monochrome)), ((grayscale)), collapsed eyeshadow, multiple eyeblows, vaginas in breasts, (cropped), oversaturated, extra limb, missing limbs, deformed hands, long neck, long body, imperfect, (bad hands), signature, watermark, username, artist name, conjoined fingers, deformed fingers, ugly eyes, imperfect eyes, skewed eyes, unnatural face, unnatural body, error | 754 | [
[
-0.0478515625,
-0.03839111328125,
0.046173095703125,
0.0269622802734375,
-0.065673828125,
0.0039005279541015625,
0.0282440185546875,
-0.050933837890625,
0.02880859375,
0.02874755859375,
-0.048736572265625,
-0.01143646240234375,
-0.08258056640625,
0.021469116... |
wissamantoun/araelectra-base-artydiqa | 2023-03-20T13:06:59.000Z | [
"transformers",
"pytorch",
"safetensors",
"electra",
"question-answering",
"ar",
"dataset:tydiqa",
"arxiv:2012.15516",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | question-answering | wissamantoun | null | null | wissamantoun/araelectra-base-artydiqa | 8 | 3,376 | transformers | 2022-03-02T23:29:05 | ---
language: ar
datasets:
- tydiqa
widget:
- text: "ما هو نظام الحكم في لبنان؟"
context: "لبنان أو (رسميا: الجمهورية اللبنانية)، هي دولة عربية واقعة في الشرق الأوسط في غرب القارة الآسيوية. تحدها سوريا من الشمال و الشرق، و فلسطين المحتلة - إسرائيل من الجنوب، وتطل من جهة الغرب على البحر الأبيض المتوسط. هو بلد ديمقراطي جمهوري طوائفي. معظم سكانه من العرب المسلمين و المسيحيين. وبخلاف غالبية الدول العربية هناك وجود فعال للمسيحيين في الحياة العامة والسياسية. هاجر وانتشر أبناؤه حول العالم منذ أيام الفينيقيين، وحاليا فإن عدد اللبنانيين المهاجرين يقدر بضعف عدد اللبنانيين المقيمين. واجه لبنان منذ القدم تعدد الحضارات التي عبرت فيه أو احتلت أراضيه وذلك لموقعه الوسطي بين الشمال الأوروبي والجنوب العربي والشرق الآسيوي والغرب الأفريقي، ويعد هذا الموقع المتوسط من أبرز الأسباب لتنوع الثقافات في لبنان، وفي الوقت ذاته من الأسباب المؤدية للحروب والنزاعات على مر العصور تجلت بحروب أهلية ونزاع مصيري مع إسرائيل. ويعود أقدم دليل على استيطان الإنسان في لبنان ونشوء حضارة على أرضه إلى أكثر من 7000 سنة. في القدم، سكن الفينيقيون أرض لبنان الحالية مع جزء من أرض سوريا و فلسطين، وهؤلاء قوم ساميون اتخذوا من الملاحة والتجارة مهنة لهم، وازدهرت حضارتهم طيلة 2500 سنة تقريبا (من حوالي سنة 3000 حتى سنة 539 ق.م). وقد مرت على لبنان عدة حضارات وشعوب استقرت فيه منذ عهد الفينيقين، مثل المصريين القدماء، الآشوريين، الفرس، الإغريق، الرومان، الروم البيزنطيين، العرب، الصليبيين، الأتراك العثمانيين، فالفرنسيين."
---
<img src="https://raw.githubusercontent.com/WissamAntoun/arabic-wikipedia-qa-streamlit/main/is2alni_logo.png" width="150" align="center"/>
# Arabic QA
AraELECTRA-powered Arabic Wikipedia QA system with Streamlit: [Open in Streamlit](https://share.streamlit.io/wissamantoun/arabic-wikipedia-qa-streamlit/main)
This model is trained on the Arabic section of ArTyDiQA using this Colab notebook: [Open in Colab](https://colab.research.google.com/drive/1hik0L_Dxg6WwJFcDPP1v74motSkst4gE?usp=sharing)
# How to use:
```bash
git clone https://github.com/aub-mind/arabert
pip install pyarabic
```
```python
from arabert.preprocess import ArabertPreprocessor
from transformers import pipeline
prep = ArabertPreprocessor("aubmindlab/araelectra-base-discriminator")  # or an empty string; the behavior is the same
qa_pipe = pipeline("question-answering", model="wissamantoun/araelectra-base-artydiqa")
text = " ما هو نظام الحكم في لبنان؟"
context = """
لبنان أو (رسميًّا: الجُمْهُورِيَّة اللبنانيَّة)، هي دولة عربيّة واقِعَة في الشَرق الأوسط في غرب القارة الآسيويّة. تَحُدّها سوريا من الشمال والشرق، وفلسطين المحتلة - إسرائيل من الجنوب، وتطل من جهة الغرب على البحر الأبيض المتوسط. هو بلد ديمقراطي جمهوري طوائفي. مُعظم سكانه من العرب المسلمين والمسيحيين. وبخلاف غالبيّة الدول العربيّة هناك وجود فعّال للمسيحيين في الحياة العامّة والسياسيّة. هاجر وانتشر أبناؤه حول العالم منذ أيام الفينيقيين، وحاليًّا فإن عدد اللبنانيين المهاجرين يُقدَّر بضعف عدد اللبنانيين المقيمين.
واجه لبنان منذ القدم تعدد الحضارات التي عبرت فيه أو احتلّت أراضيه وذلك لموقعه الوسطي بين الشمال الأوروبي والجنوب العربي والشرق الآسيوي والغرب الأفريقي، ويعد هذا الموقع المتوسط من أبرز الأسباب لتنوع الثقافات في لبنان، وفي الوقت ذاته من الأسباب المؤدية للحروب والنزاعات على مر العصور تجلت بحروب أهلية ونزاع مصيري مع إسرائيل. ويعود أقدم دليل على استيطان الإنسان في لبنان ونشوء حضارة على أرضه إلى أكثر من 7000 سنة.
في القدم، سكن الفينيقيون أرض لبنان الحالية مع جزء من أرض سوريا وفلسطين، وهؤلاء قوم ساميون اتخذوا من الملاحة والتجارة مهنة لهم، وازدهرت حضارتهم طيلة 2500 سنة تقريبًا (من حوالي سنة 3000 حتى سنة 539 ق.م). وقد مرّت على لبنان عدّة حضارات وشعوب استقرت فيه منذ عهد الفينيقين، مثل المصريين القدماء، الآشوريين، الفرس، الإغريق، الرومان، الروم البيزنطيين، العرب، الصليبيين، الأتراك العثمانيين، فالفرنسيين.
"""
context = prep.preprocess(context)  # don't forget to preprocess both the question and the context to get optimal results
result = qa_pipe(question=text,context=context)
"""
{'answer': 'ديمقراطي جمهوري طوائفي',
'end': 241,
'score': 0.4910127818584442,
'start': 219}
"""
```
# If you use this model, please cite us as:
```
@misc{antoun2020araelectra,
title={AraELECTRA: Pre-Training Text Discriminators for Arabic Language Understanding},
author={Wissam Antoun and Fady Baly and Hazem Hajj},
year={2020},
eprint={2012.15516},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 4,392 | [
[
-0.053863525390625,
-0.04412841796875,
0.030731201171875,
0.011627197265625,
-0.0306854248046875,
-0.01120758056640625,
0.01708984375,
-0.024139404296875,
0.037750244140625,
0.0181121826171875,
-0.02288818359375,
-0.038787841796875,
-0.0565185546875,
0.01786... |
microsoft/tapex-large-finetuned-wtq | 2023-03-14T11:51:54.000Z | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"tapex",
"table-question-answering",
"en",
"dataset:wikitablequestions",
"arxiv:2107.07653",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | table-question-answering | microsoft | null | null | microsoft/tapex-large-finetuned-wtq | 31 | 3,374 | transformers | 2022-03-10T05:06:08 | ---
language: en
tags:
- tapex
- table-question-answering
datasets:
- wikitablequestions
license: mit
---
# TAPEX (large-sized model)
TAPEX was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. The original repo can be found [here](https://github.com/microsoft/Table-Pretraining).
## Model description
TAPEX (**Ta**ble **P**re-training via **Ex**ecution) is a conceptually simple and empirically powerful pre-training approach to empower existing models with *table reasoning* skills. TAPEX realizes table pre-training by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically synthesizing executable SQL queries.
TAPEX is based on the BART architecture, the transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder.
This model is the `tapex-large` model fine-tuned on the [WikiTableQuestions](https://huggingface.co/datasets/wikitablequestions) dataset.
## Intended Uses
You can use the model for table question answering on *complex* questions. Some **solvable** questions are shown below (corresponding tables not shown):
| Question | Answer |
|:---: |:---:|
| according to the table, what is the last title that spicy horse produced? | Akaneiro: Demon Hunters |
| what is the difference in runners-up from coleraine academical institution and royal school dungannon? | 20 |
| what were the first and last movies greenstreet acted in? | The Maltese Falcon, Malaya |
| in which olympic games did arasay thondike not finish in the top 20? | 2012 |
| which broadcaster hosted 3 titles but they had only 1 episode? | Channel 4 |
### How to Use
Here is how to use this model in transformers:
```python
from transformers import TapexTokenizer, BartForConditionalGeneration
import pandas as pd
tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-large-finetuned-wtq")
model = BartForConditionalGeneration.from_pretrained("microsoft/tapex-large-finetuned-wtq")
data = {
"year": [1896, 1900, 1904, 2004, 2008, 2012],
"city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)
# tapex accepts uncased input since it is pre-trained on the uncased corpus
query = "In which year did beijing host the Olympic Games?"
encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model.generate(**encoding)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# [' 2008.0']
```
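As an unofficial follow-up sketch (not from the original card), a more complex, aggregation-style question in the spirit of the examples above can be posed against the same toy table; the question text is an assumption and the output is not verified here.
```python
# Reuses `tokenizer`, `model` and `table` from the snippet above.
query = "how many times did athens host the olympic games?"  # example question (assumption)
encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model.generate(**encoding)
# Expected to decode to an aggregate answer (a count); output not verified here.
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```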
### How to Eval
Please find the eval script [here](https://github.com/huggingface/transformers/tree/main/examples/research_projects/tapex).
### BibTeX entry and citation info
```bibtex
@inproceedings{
liu2022tapex,
title={{TAPEX}: Table Pre-training via Learning a Neural {SQL} Executor},
author={Qian Liu and Bei Chen and Jiaqi Guo and Morteza Ziyadi and Zeqi Lin and Weizhu Chen and Jian-Guang Lou},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=O50443AsCP}
}
``` | 3,197 | [
[
-0.03851318359375,
-0.0516357421875,
0.036468505859375,
-0.009368896484375,
-0.01568603515625,
0.003597259521484375,
-0.0162506103515625,
-0.0076751708984375,
0.0289459228515625,
0.03662109375,
-0.038238525390625,
-0.041778564453125,
-0.036346435546875,
-0.0... |
timm/vit_base_patch16_224.augreg_in1k | 2023-05-06T00:00:30.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2106.10270",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/vit_base_patch16_224.augreg_in1k | 1 | 3,372 | timm | 2022-12-22T07:24:52 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for vit_base_patch16_224.augreg_in1k
A Vision Transformer (ViT) image classification model. Trained on ImageNet-1k (with additional augmentation and regularization) in JAX by paper authors, ported to PyTorch by Ross Wightman.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 86.6
- GMACs: 16.9
- Activations (M): 16.5
- Image size: 224 x 224
- **Papers:**
- How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers: https://arxiv.org/abs/2106.10270
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/google-research/vision_transformer
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_base_patch16_224.augreg_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_base_patch16_224.augreg_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 197, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
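Not part of the original card: one common use of these embeddings is comparing images. A minimal sketch, assuming the `model` (with `num_classes=0`) and `transforms` objects from the snippet above and two hypothetical PIL images `img1` and `img2`:
```python
import torch
import torch.nn.functional as F

# Assumes `model` (num_classes=0) and `transforms` from the snippet above,
# plus two hypothetical PIL images `img1` and `img2`.
with torch.no_grad():
    emb1 = model(transforms(img1).unsqueeze(0))  # (1, num_features)
    emb2 = model(transforms(img2).unsqueeze(0))

similarity = F.cosine_similarity(emb1, emb2)  # tensor of shape (1,)
print(similarity.item())
```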
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{steiner2021augreg,
title={How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers},
author={Steiner, Andreas and Kolesnikov, Alexander and Zhai, Xiaohua and Wightman, Ross and Uszkoreit, Jakob and Beyer, Lucas},
journal={arXiv preprint arXiv:2106.10270},
year={2021}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 3,796 | [
[
-0.03875732421875,
-0.03106689453125,
-0.0036296844482421875,
0.006702423095703125,
-0.0298004150390625,
-0.027069091796875,
-0.0207977294921875,
-0.03448486328125,
0.0141754150390625,
0.0246124267578125,
-0.04144287109375,
-0.038238525390625,
-0.047760009765625... |
textattack/bert-base-uncased-yelp-polarity | 2021-05-20T07:49:07.000Z | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | textattack | null | null | textattack/bert-base-uncased-yelp-polarity | 2 | 3,371 | transformers | 2022-03-02T23:29:05 | ## TextAttack Model Card
This `bert-base-uncased` model was fine-tuned for sequence classification using TextAttack
and the yelp_polarity dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 16, a learning
rate of 5e-05, and a maximum sequence length of 256.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.9699473684210527, as measured by the
eval set accuracy, found after 4 epochs.
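Not part of the original card: for quick inference, the model can be used with a `transformers` text-classification pipeline. This is a hedged sketch; the example review is an assumption, and which label corresponds to positive vs. negative polarity should be checked against `config.id2label`.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="textattack/bert-base-uncased-yelp-polarity",
)
print(classifier("The food was great and the staff were friendly."))
# e.g. [{'label': 'LABEL_1', 'score': ...}]; the mapping of LABEL_0/LABEL_1 to
# negative/positive polarity is an assumption, check classifier.model.config.id2label.
```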
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
| 632 | [
[
-0.01605224609375,
-0.02587890625,
0.029693603515625,
0.0007410049438476562,
-0.039215087890625,
-0.0008273124694824219,
-0.017974853515625,
-0.033782958984375,
-0.005992889404296875,
0.0294036865234375,
-0.043243408203125,
-0.05596923828125,
-0.036346435546875,... |
timm/maxvit_base_tf_384.in1k | 2023-05-10T23:55:57.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2204.01697",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/maxvit_base_tf_384.in1k | 1 | 3,367 | timm | 2022-12-02T21:48:32 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for maxvit_base_tf_384.in1k
An official MaxViT image classification model. Trained in TensorFlow on ImageNet-1k by paper authors.
Ported from the official TensorFlow implementation (https://github.com/google-research/maxvit) to PyTorch by Ross Wightman.
### Model Variants in [maxxvit.py](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/maxxvit.py)
MaxxViT covers a number of related model architectures that share a common structure including:
- CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages.
- MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid).
- CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT-V2 - A MaxxViT variation that removes the window block attention leaving only ConvNeXt blocks and grid attention w/ more width to compensate.
Aside from the major variants listed above, there are more subtle changes from model to model. Any model name with the string `rw` are `timm` specific configs w/ modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models so there are variations.
All models with the string `tf` are models exactly matching Tensorflow based models by the original paper authors with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 119.7
- GMACs: 73.8
- Activations (M): 332.9
- Image size: 384 x 384
- **Papers:**
- MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('maxvit_base_tf_384.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'maxvit_base_tf_384.in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 192, 192])
# torch.Size([1, 96, 96, 96])
# torch.Size([1, 192, 48, 48])
# torch.Size([1, 384, 24, 24])
# torch.Size([1, 768, 12, 12])
print(o.shape)
```
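Not part of the original card: when consuming these feature maps (for example to build a detection or segmentation neck), timm exposes per-stage metadata through `model.feature_info`. A short sketch, assuming the `features_only` model created above:
```python
# Assumes `model` was created with features_only=True as in the snippet above.
channels = model.feature_info.channels()     # e.g. [64, 96, 192, 384, 768]
reductions = model.feature_info.reduction()  # downsampling factor of each feature map
for ch, red in zip(channels, reductions):
    print(f"stage outputs {ch} channels at stride {red}")
```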
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'maxvit_base_tf_384.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 768, 12, 12) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
### By Top-1
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
### By Throughput (samples / sec)
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{tu2022maxvit,
title={MaxViT: Multi-Axis Vision Transformer},
author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
journal={ECCV},
year={2022},
}
```
```bibtex
@article{dai2021coatnet,
title={CoAtNet: Marrying Convolution and Attention for All Data Sizes},
author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing},
journal={arXiv preprint arXiv:2106.04803},
year={2021}
}
```
| 22,110 | [
[
-0.05224609375,
-0.030426025390625,
0.0009055137634277344,
0.03106689453125,
-0.0245208740234375,
-0.0172119140625,
-0.0110626220703125,
-0.023651123046875,
0.054718017578125,
0.0168304443359375,
-0.04119873046875,
-0.046966552734375,
-0.047515869140625,
-0.... |
deepmind/vision-perceiver-conv | 2021-12-11T13:12:42.000Z | [
"transformers",
"pytorch",
"perceiver",
"image-classification",
"dataset:imagenet",
"arxiv:2107.14795",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | image-classification | deepmind | null | null | deepmind/vision-perceiver-conv | 5 | 3,359 | transformers | 2022-03-02T23:29:05 | ---
license: apache-2.0
tags:
datasets:
- imagenet
---
# Perceiver IO for vision (convolutional processing)
Perceiver IO model pre-trained on ImageNet (14 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Jaegle et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/perceiver).
Disclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Perceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs.
To decode, the authors employ so-called decoder queries, which make it possible to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For image classification, the output is a tensor containing the logits, of shape (batch_size, num_labels).
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/perceiver_architecture.jpg" alt="drawing" width="600"/>
<small> Perceiver IO architecture.</small>
As the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model directly on raw pixel values, rather than on patches as is done in ViT. This particular model employs a simple 2D conv+maxpool preprocessing network on the pixel values, before using the inputs for cross-attention with the latents.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by replacing the classification decoder.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=deepmind/perceiver) to look for other fine-tuned versions on a task that may interest you.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import PerceiverFeatureExtractor, PerceiverForImageClassificationConvProcessing
import requests
from PIL import Image
feature_extractor = PerceiverFeatureExtractor.from_pretrained("deepmind/vision-perceiver-conv")
model = PerceiverForImageClassificationConvProcessing.from_pretrained("deepmind/vision-perceiver-conv")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# prepare input
inputs = feature_extractor(image, return_tensors="pt").pixel_values
# forward pass
outputs = model(inputs)
logits = outputs.logits
print("Predicted class:", model.config.id2label[logits.argmax(-1).item()])
# should print: Predicted class: tabby, tabby cat
```
## Training data
This model was pretrained on [ImageNet](http://www.image-net.org/), a dataset consisting of 14 million images and 1k classes.
## Training procedure
### Preprocessing
Images are center cropped and resized to a resolution of 224x224 and normalized across the RGB channels. Note that data augmentation was used during pre-training, as explained in Appendix H of the [paper](https://arxiv.org/abs/2107.14795).
### Pretraining
Hyperparameter details can be found in Appendix H of the [paper](https://arxiv.org/abs/2107.14795).
## Evaluation results
This model is able to achieve a top-1 accuracy of 82.1 on ImageNet-1k.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2107-14795,
author = {Andrew Jaegle and
Sebastian Borgeaud and
Jean{-}Baptiste Alayrac and
Carl Doersch and
Catalin Ionescu and
David Ding and
Skanda Koppula and
Daniel Zoran and
Andrew Brock and
Evan Shelhamer and
Olivier J. H{\'{e}}naff and
Matthew M. Botvinick and
Andrew Zisserman and
Oriol Vinyals and
Jo{\~{a}}o Carreira},
title = {Perceiver {IO:} {A} General Architecture for Structured Inputs {\&}
Outputs},
journal = {CoRR},
volume = {abs/2107.14795},
year = {2021},
url = {https://arxiv.org/abs/2107.14795},
eprinttype = {arXiv},
eprint = {2107.14795},
timestamp = {Tue, 03 Aug 2021 14:53:34 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2107-14795.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | 4,989 | [
[
-0.052276611328125,
-0.0478515625,
0.0245513916015625,
-0.000774383544921875,
-0.01470947265625,
-0.038818359375,
-0.0042266845703125,
-0.06121826171875,
0.01042938232421875,
0.0115203857421875,
-0.0345458984375,
-0.0175018310546875,
-0.04718017578125,
-0.00... |
DucHaiten/DucHaitenDreamWorld | 2023-03-28T15:19:02.000Z | [
"diffusers",
"stable-diffusion",
"text-to-image",
"image-to-image",
"en",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | DucHaiten | null | null | DucHaiten/DucHaitenDreamWorld | 24 | 3,358 | diffusers | 2023-02-06T16:26:07 | ---
license: creativeml-openrail-m
language:
- en
tags:
- stable-diffusion
- text-to-image
- image-to-image
- diffusers
inference: true
---
After many days of not eating well and sleeping only four hours a night, version 2.4.1 of the DucHaitenDreamWorld model is finally complete. It is a huge improvement; just looking at the sample images is enough to see how much better it is. At least it is not as bad as the previous version :)
Dream World is my model for Disney- and Pixar-style art.
Recommended: xformers on, no VAE (I haven't tried it with a VAE, so I don't know whether it is good or bad).
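As a hypothetical usage sketch (not from the original card), the model can presumably be loaded as a diffusers `StableDiffusionPipeline` with xformers attention enabled, as suggested above; the example prompt is an assumption.
```python
# Unofficial sketch, not from the original model card.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "DucHaiten/DucHaitenDreamWorld", torch_dtype=torch.float16
).to("cuda")
pipe.enable_xformers_memory_efficient_attention()  # "xformers on", as suggested above

prompt = "a cozy cottage in a dream world, Disney/Pixar style, highly detailed"  # example prompt (assumption)
image = pipe(prompt).images[0]
image.save("dreamworld.png")
```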
Please support me by becoming a patron:
https://www.patreon.com/duchaitenreal







![00376-1484770875-[uploaded e621], by Pino Daeni, by Ruan Jia, by Fumiko, by Alayna Lemmer, by Carlo Galli Bibiena, solo female ((Vulpix)) with ((.png](https://s3.amazonaws.com/moonup/production/uploads/1676126509917-630b58b279d18d5e53e3a5a9.png)



| 3,380 | [
[
-0.0433349609375,
-0.0267791748046875,
0.0222015380859375,
0.01134490966796875,
-0.0307159423828125,
0.0175018310546875,
0.0214080810546875,
-0.052734375,
0.06280517578125,
0.06298828125,
-0.06005859375,
-0.02972412109375,
-0.047210693359375,
0.0109710693359... |
setu4993/LEALLA-small | 2023-10-19T06:22:00.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"feature-extraction",
"sentence_embedding",
"multilingual",
"google",
"sentence-similarity",
"lealla",
"labse",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"bo",
"bs",
"ca",
"ceb",
"co",
"cs",
"c... | sentence-similarity | setu4993 | null | null | setu4993/LEALLA-small | 3 | 3,357 | transformers | 2023-05-21T08:17:47 | ---
pipeline_tag: sentence-similarity
language:
- af
- am
- ar
- as
- az
- be
- bg
- bn
- bo
- bs
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- he
- hi
- hmn
- hr
- ht
- hu
- hy
- id
- ig
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- no
- ny
- or
- pa
- pl
- pt
- ro
- ru
- rw
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tk
- tl
- tr
- tt
- ug
- uk
- ur
- uz
- vi
- wo
- xh
- yi
- yo
- zh
- zu
tags:
- bert
- sentence_embedding
- multilingual
- google
- sentence-similarity
- lealla
- labse
license: apache-2.0
datasets:
- CommonCrawl
- Wikipedia
---
# LEALLA-small
## Model description
LEALLA is a collection of lightweight language-agnostic sentence embedding models supporting 109 languages, distilled from [LaBSE](https://ai.googleblog.com/2020/08/language-agnostic-bert-sentence.html). The model is useful for getting multilingual sentence embeddings and for bi-text retrieval.
- Model: [HuggingFace's model hub](https://huggingface.co/setu4993/LEALLA-small).
- Paper: [arXiv](https://arxiv.org/abs/2302.08387).
- Original model: [TensorFlow Hub](https://tfhub.dev/google/LEALLA/LEALLA-small/1).
- Conversion from TensorFlow to PyTorch: [GitHub](https://github.com/setu4993/convert-labse-tf-pt).
This is migrated from the v1 model on the TF Hub. The embeddings produced by both versions of the model are [equivalent](https://github.com/setu4993/convert-labse-tf-pt/blob/c0d4fbce789b0709a9664464f032d2e9f5368a86/tests/test_conversion_lealla.py#L31). However, for some languages (like Japanese), the LEALLA models appear to require higher tolerances when comparing embeddings and similarities.
## Usage
Using the model:
```python
import torch
from transformers import BertModel, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("setu4993/LEALLA-small")
model = BertModel.from_pretrained("setu4993/LEALLA-small")
model = model.eval()
english_sentences = [
"dog",
"Puppies are nice.",
"I enjoy taking long walks along the beach with my dog.",
]
english_inputs = tokenizer(english_sentences, return_tensors="pt", padding=True)
with torch.no_grad():
english_outputs = model(**english_inputs)
```
To get the sentence embeddings, use the pooler output:
```python
english_embeddings = english_outputs.pooler_output
```
Output for other languages:
```python
italian_sentences = [
"cane",
"I cuccioli sono carini.",
"Mi piace fare lunghe passeggiate lungo la spiaggia con il mio cane.",
]
japanese_sentences = ["犬", "子犬はいいです", "私は犬と一緒にビーチを散歩するのが好きです"]
italian_inputs = tokenizer(italian_sentences, return_tensors="pt", padding=True)
japanese_inputs = tokenizer(japanese_sentences, return_tensors="pt", padding=True)
with torch.no_grad():
italian_outputs = model(**italian_inputs)
japanese_outputs = model(**japanese_inputs)
italian_embeddings = italian_outputs.pooler_output
japanese_embeddings = japanese_outputs.pooler_output
```
For similarity between sentences, an L2-norm is recommended before calculating the similarity:
```python
import torch.nn.functional as F
def similarity(embeddings_1, embeddings_2):
normalized_embeddings_1 = F.normalize(embeddings_1, p=2)
normalized_embeddings_2 = F.normalize(embeddings_2, p=2)
return torch.matmul(
normalized_embeddings_1, normalized_embeddings_2.transpose(0, 1)
)
print(similarity(english_embeddings, italian_embeddings))
print(similarity(english_embeddings, japanese_embeddings))
print(similarity(italian_embeddings, japanese_embeddings))
```
## Details
Details about data, training, evaluation and performance metrics are available in the [original paper](https://arxiv.org/abs/2302.08387).
### BibTeX entry and citation info
```bibtex
@inproceedings{mao-nakagawa-2023-lealla,
title = "{LEALLA}: Learning Lightweight Language-agnostic Sentence Embeddings with Knowledge Distillation",
author = "Mao, Zhuoyuan and
Nakagawa, Tetsuji",
booktitle = "Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.eacl-main.138",
doi = "10.18653/v1/2023.eacl-main.138",
pages = "1886--1894",
abstract = "Large-scale language-agnostic sentence embedding models such as LaBSE (Feng et al., 2022) obtain state-of-the-art performance for parallel sentence alignment. However, these large-scale models can suffer from inference speed and computation overhead. This study systematically explores learning language-agnostic sentence embeddings with lightweight models. We demonstrate that a thin-deep encoder can construct robust low-dimensional sentence embeddings for 109 languages. With our proposed distillation methods, we achieve further improvements by incorporating knowledge from a teacher model. Empirical results on Tatoeba, United Nations, and BUCC show the effectiveness of our lightweight models. We release our lightweight language-agnostic sentence embedding models LEALLA on TensorFlow Hub.",
}
```
| 5,554 | [
[
-0.0033512115478515625,
-0.06658935546875,
0.0447998046875,
0.012725830078125,
-0.005771636962890625,
-0.01715087890625,
-0.039703369140625,
-0.01947021484375,
0.0216064453125,
-0.00046539306640625,
-0.0287322998046875,
-0.041656494140625,
-0.043426513671875,
... |
asahi417/tner-xlm-roberta-base-ontonotes5 | 2022-11-04T03:24:37.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"en",
"arxiv:2209.12616",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | token-classification | asahi417 | null | null | asahi417/tner-xlm-roberta-base-ontonotes5 | 3 | 3,352 | transformers | 2022-03-02T23:29:05 |
---
language:
- en
---
# Model Card for XLM-RoBERTa for NER
XLM-RoBERTa finetuned on NER.
# Model Details
## Model Description
XLM-RoBERTa finetuned on NER.
- **Developed by:** Asahi Ushio
- **Shared by [Optional]:** Hugging Face
- **Model type:** Token Classification
- **Language(s) (NLP):** en
- **License:** More information needed
- **Related Models:** XLM-RoBERTa
- **Parent Model:** XLM-RoBERTa
- **Resources for more information:**
- [GitHub Repo](https://github.com/asahi417/tner)
- [Associated Paper](https://arxiv.org/abs/2209.12616)
- [Space](https://huggingface.co/spaces/akdeniz27/turkish-named-entity-recognition)
# Uses
## Direct Use
Token Classification
## Downstream Use [Optional]
This model can be used in conjunction with the [tner library](https://github.com/asahi417/tner).
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
An NER dataset contains a sequence of tokens and tags for each split (usually `train`/`validation`/`test`),
```python
{
'train': {
'tokens': [
['@paulwalk', 'It', "'s", 'the', 'view', 'from', 'where', 'I', "'m", 'living', 'for', 'two', 'weeks', '.', 'Empire', 'State', 'Building', '=', 'ESB', '.', 'Pretty', 'bad', 'storm', 'here', 'last', 'evening', '.'],
['From', 'Green', 'Newsfeed', ':', 'AHFA', 'extends', 'deadline', 'for', 'Sage', 'Award', 'to', 'Nov', '.', '5', 'http://tinyurl.com/24agj38'], ...
],
'tags': [
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], ...
]
},
'validation': ...,
'test': ...,
}
```
with a dictionary to map a label to its index (`label2id`) as below.
```python
{"O": 0, "B-ORG": 1, "B-MISC": 2, "B-PER": 3, "I-PER": 4, "B-LOC": 5, "I-ORG": 6, "I-MISC": 7, "I-LOC": 8}
```
## Training Procedure
### Preprocessing
More information needed
### Speeds, Sizes, Times
**Layer_norm_eps:** 1e-05,
**Num_attention_heads:** 12,
**Num_hidden_layers:** 12,
**Vocab_size:** 250002
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
See [dataset card](https://github.com/asahi417/tner/blob/master/DATASET_CARD.md) for full dataset lists
### Factors
More information needed
### Metrics
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed
# Citation
**BibTeX:**
```
@inproceedings{ushio-camacho-collados-2021-ner,
title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
author = "Ushio, Asahi and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.eacl-demos.7",
pages = "53--62",
}
```
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Asahi Ushio in collaboration with Ezi Ozoani and the Hugging Face team.
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-ontonotes5")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-ontonotes5")
```
</details>
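Not part of the original card: a hedged inference sketch using a `transformers` token-classification pipeline. The example sentence is an assumption, and the exact entity label strings depend on this model's `config.id2label`.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="asahi417/tner-xlm-roberta-base-ontonotes5",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Jacob Collier won a Grammy award in Los Angeles."))
# Returns person / location style entity spans; the exact label strings depend on
# this model's config.id2label.
```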
| 5,189 | [
[
-0.039337158203125,
-0.039093017578125,
0.0180206298828125,
0.002857208251953125,
-0.011474609375,
-0.018585205078125,
-0.0199127197265625,
-0.0254058837890625,
0.0236358642578125,
0.0333251953125,
-0.039093017578125,
-0.04791259765625,
-0.05560302734375,
0.... |
IDEA-CCNL/Taiyi-CLIP-Roberta-102M-Chinese | 2023-05-25T09:44:14.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"clip",
"zh",
"image-text",
"feature-extraction",
"arxiv:2209.02970",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | feature-extraction | IDEA-CCNL | null | null | IDEA-CCNL/Taiyi-CLIP-Roberta-102M-Chinese | 45 | 3,349 | transformers | 2022-07-09T07:11:05 | ---
license: apache-2.0
# inference: false
# pipeline_tag: zero-shot-image-classification
pipeline_tag: feature-extraction
# inference:
# parameters:
tags:
- clip
- zh
- image-text
- feature-extraction
---
# Taiyi-CLIP-Roberta-102M-Chinese
- Main Page:[Fengshenbang](https://fengshenbang-lm.com/)
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
## 简介 Brief Introduction
首个开源的中文CLIP模型,1.23亿图文对上进行预训练的文本端RoBERTa-base。
The first open source Chinese CLIP, pre-training on 123M image-text pairs, the text encoder: RoBERTa-base.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 特殊 Special | 多模态 Multimodal | 太乙 Taiyi | CLIP (Roberta) | 102M | 中文 Chinese |
## 模型信息 Model Information
我们遵循CLIP的实验设置,以获得强大的视觉-语言表征。在训练中文版的CLIP时,我们使用[chinese-roberta-wwm](https://huggingface.co/hfl/chinese-roberta-wwm-ext)作为语言的编码器,并将[CLIP](https://github.com/openai/CLIP)中的ViT-B-32应用于视觉的编码器。为了快速且稳定地进行预训练,我们冻结了视觉编码器并且只微调语言编码器。此外,我们将[Noah-Wukong](https://wukong-dataset.github.io/wukong-dataset/)数据集(100M)和[Zero](https://zero.so.com/)数据集(23M)用作预训练的数据集,训练了24个epoch,在在A100x32上训练了7天。据我们所知,我们的Taiyi-CLIP是目前Huggingface社区中首个的开源中文CLIP。
We follow the experimental setup of CLIP to obtain powerful visual-language intelligence. To obtain the CLIP for Chinese, we employ [chinese-roberta-wwm](https://huggingface.co/hfl/chinese-roberta-wwm-ext) for the language encoder, and apply the ViT-B-32 in [CLIP](https://github.com/openai/CLIP) for the vision encoder. We freeze the vision encoder and tune the language encoder to speed up and stabilize the pre-training process. Moreover, we apply [Noah-Wukong](https://wukong-dataset.github.io/wukong-dataset/) dataset (100M) and [Zero](https://zero.so.com/) dataset (23M) as the pre-training datasets. We train 24 epochs, which takes 7 days to train on A100x16. To the best of our knowledge, our TaiyiCLIP is currently the only open-sourced Chinese CLIP in the huggingface community.
### 下游效果 Performance
**Zero-Shot Classification**
| model | dataset | Top1 | Top5 |
| ---- | ---- | ---- | ---- |
| Taiyi-CLIP-Roberta-102M-Chinese | ImageNet1k-CN | 42.85% | 71.48% |
**Zero-Shot Text-to-Image Retrieval**
| model | dataset | Top1 | Top5 | Top10 |
| ---- | ---- | ---- | ---- | ---- |
| Taiyi-CLIP-Roberta-102M-Chinese | Flickr30k-CNA-test | 46.32% | 74.58% | 83.44% |
| Taiyi-CLIP-Roberta-102M-Chinese | COCO-CN-test | 47.10% | 78.53% | 87.84% |
| Taiyi-CLIP-Roberta-102M-Chinese | wukong50k | 49.18% | 81.94% | 90.27% |
## 使用 Usage
```python3
from PIL import Image
import requests
import clip
import torch
from transformers import BertForSequenceClassification, BertConfig, BertTokenizer
from transformers import CLIPProcessor, CLIPModel
import numpy as np
query_texts = ["一只猫", "一只狗", '两只猫', '两只老虎', '一只老虎']  # example input texts ("a cat", "a dog", "two cats", "two tigers", "a tiger"); replace with any text
# load the Taiyi Chinese text encoder
text_tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Taiyi-CLIP-Roberta-102M-Chinese")
text_encoder = BertForSequenceClassification.from_pretrained("IDEA-CCNL/Taiyi-CLIP-Roberta-102M-Chinese").eval()
text = text_tokenizer(query_texts, return_tensors='pt', padding=True)['input_ids']
url = "http://images.cocodataset.org/val2017/000000039769.jpg" # 这里可以换成任意图片的url
# 加载CLIP的image encoder
clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
image = processor(images=Image.open(requests.get(url, stream=True).raw), return_tensors="pt")
with torch.no_grad():
image_features = clip_model.get_image_features(**image)
text_features = text_encoder(text).logits
# Normalize the features
image_features = image_features / image_features.norm(dim=1, keepdim=True)
text_features = text_features / text_features.norm(dim=1, keepdim=True)
# Compute cosine similarity; logit_scale is the learned scale (temperature) factor
logit_scale = clip_model.logit_scale.exp()
logits_per_image = logit_scale * image_features @ text_features.t()
logits_per_text = logits_per_image.t()
probs = logits_per_image.softmax(dim=-1).cpu().numpy()
print(np.around(probs, 3))
```
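For the zero-shot text-to-image retrieval setting reported above, the same encoders can rank a set of candidate images against a text query. The sketch below reuses the objects loaded in the Usage example; the candidate image URLs and the ranking loop are illustrative assumptions, not part of the original card.
```python3
# Rank candidate images for one Chinese text query by cosine similarity.
# Assumes clip_model, processor, text_encoder and text_tokenizer from the Usage example above.
candidate_urls = [
    "http://images.cocodataset.org/val2017/000000039769.jpg",  # placeholder images
    "http://images.cocodataset.org/val2017/000000397133.jpg",
]
images = [Image.open(requests.get(u, stream=True).raw) for u in candidate_urls]
image_inputs = processor(images=images, return_tensors="pt")
query = text_tokenizer(["一只猫"], return_tensors="pt", padding=True)["input_ids"]
with torch.no_grad():
    image_features = clip_model.get_image_features(**image_inputs)
    text_features = text_encoder(query).logits
image_features = image_features / image_features.norm(dim=1, keepdim=True)
text_features = text_features / text_features.norm(dim=1, keepdim=True)
similarity = (text_features @ image_features.t()).squeeze(0)  # one score per candidate image
ranking = similarity.argsort(descending=True)
print([candidate_urls[i] for i in ranking])
```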
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
If you are using the resource for your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
| 5,287 | [
[
-0.0360107421875,
-0.056671142578125,
0.013519287109375,
0.031707763671875,
-0.036285400390625,
-0.01422882080078125,
-0.04156494140625,
-0.027069091796875,
0.034942626953125,
0.0118255615234375,
-0.040008544921875,
-0.042572021484375,
-0.043701171875,
0.003... |
digiplay/RunDiffusionFXPhotorealistic_v1 | 2023-10-20T02:46:45.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | digiplay | null | null | digiplay/RunDiffusionFXPhotorealistic_v1 | 8 | 3,347 | diffusers | 2023-06-04T16:02:01 | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
A realistic-style text-to-image model,
pretty good and very stable.
Model info :
https://civitai.com/models/82972/rundiffusion-fx-photorealistic
Original Author's DEMO images :



| 676 | [
[
-0.030029296875,
-0.07574462890625,
0.036529541015625,
0.0184478759765625,
-0.031341552734375,
0.002399444580078125,
-0.006015777587890625,
-0.020294189453125,
0.0303192138671875,
0.057098388671875,
-0.038421630859375,
-0.032073974609375,
-0.0233612060546875,
... |
artificialguybr/StickersRedmond | 2023-09-12T06:21:30.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"license:creativeml-openrail-m",
"has_space",
"region:us"
] | text-to-image | artificialguybr | null | null | artificialguybr/StickersRedmond | 10 | 3,346 | diffusers | 2023-09-12T06:16:10 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Stickers, sticker
widget:
- text: Stickers, sticker
---
# Stickers.Redmond

Stickers.Redmond is here!
Introducing Stickers.Redmond, the ultimate LORA for creating Stickers images!
I'm grateful for the GPU time from Redmond.AI that allowed me to make this LORA! If you need GPU, then you need the great services from Redmond.AI.
It is based on SD XL 1.0 and fine-tuned on a large dataset.
The LORA has a high capacity to generate Stickers images!
The tag for the model: Stickers, Sticker
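A minimal SDXL + LoRA usage sketch (it assumes the repository exposes LoRA weights in a diffusers-loadable format; the prompt and generation settings are illustrative, not official instructions):
```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model and attach the Stickers.Redmond LoRA.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("artificialguybr/StickersRedmond")

# Use the trigger words "Stickers, sticker" in the prompt.
image = pipe(
    "Stickers, sticker, a cute corgi wearing sunglasses, white background",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("sticker_sample.png")
```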
I really hope you like the LORA and use it.
If you like the model and think it's worth it, you can make a donation to my Patreon or Ko-fi.
Patreon: https://www.patreon.com/user?u=81570187
Ko-fi: https://ko-fi.com/artificialguybr
Buy Me a Coffee: https://www.buymeacoffee.com/jvkape
Follow me on Twitter to be the first to know about new models:
https://twitter.com/artificialguybr/ | 1,076 | [
[
-0.0384521484375,
-0.053070068359375,
0.0297088623046875,
0.02862548828125,
-0.03173828125,
0.00215911865234375,
0.0212249755859375,
-0.051422119140625,
0.07623291015625,
0.0367431640625,
-0.03363037109375,
-0.035369873046875,
-0.025787353515625,
-0.03344726... |
EleutherAI/pythia-160m-v0 | 2023-07-09T16:03:26.000Z | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"pythia_v0",
"en",
"dataset:the_pile",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | EleutherAI | null | null | EleutherAI/pythia-160m-v0 | 7 | 3,342 | transformers | 2022-10-16T17:40:11 | ---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- the_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-160M
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.ai](mailto:contact@eleuther.ai).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-160M for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-160M as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-160M has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-160M will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-160M to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-160M may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-160M.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
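The same pattern applies to Pythia-160M itself. For instance, a minimal sketch loading the final checkpoint of this model (the revision name follows the `stepXXXX` scheme described above; the prompt is illustrative):
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# step143000 corresponds to the checkpoint on the `main` branch.
model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/pythia-160m-v0", revision="step143000")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-160m-v0", revision="step143000")

inputs = tokenizer("The Pile is a dataset of", return_tensors="pt")
tokens = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(tokens[0]))
```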
### Training
#### Training data
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-160M.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure> | 11,782 | [
[
-0.0265350341796875,
-0.062103271484375,
0.0207672119140625,
0.00360107421875,
-0.015869140625,
-0.01320648193359375,
-0.0178680419921875,
-0.033447265625,
0.0162200927734375,
0.01611328125,
-0.0229644775390625,
-0.0238494873046875,
-0.03515625,
-0.003671646... |
cyberagent/open-calm-small | 2023-05-18T01:10:33.000Z | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"japanese",
"causal-lm",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"dataset:mc4",
"license:cc-by-sa-4.0",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | cyberagent | null | null | cyberagent/open-calm-small | 15 | 3,342 | transformers | 2023-05-15T06:40:15 | ---
license: cc-by-sa-4.0
datasets:
- wikipedia
- cc100
- mc4
language:
- ja
tags:
- japanese
- causal-lm
inference: false
---
# OpenCALM-Small
## Model Description
OpenCALM is a suite of decoder-only language models pre-trained on Japanese datasets, developed by CyberAgent, Inc.
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("cyberagent/open-calm-small", device_map="auto", torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("cyberagent/open-calm-small")
inputs = tokenizer("AIによって私達の暮らしは、", return_tensors="pt").to(model.device)
with torch.no_grad():
tokens = model.generate(
**inputs,
max_new_tokens=64,
do_sample=True,
temperature=0.7,
top_p=0.9,
repetition_penalty=1.05,
pad_token_id=tokenizer.pad_token_id,
)
output = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(output)
```
## Model Details
|Model|Params|Layers|Dim|Heads|Dev ppl|
|:---:|:---: |:---:|:---:|:---:|:---:|
|[cyberagent/open-calm-small](https://huggingface.co/cyberagent/open-calm-small)|160M|12|768|12|19.7|
|[cyberagent/open-calm-medium](https://huggingface.co/cyberagent/open-calm-medium)|400M|24|1024|16|13.8|
|[cyberagent/open-calm-large](https://huggingface.co/cyberagent/open-calm-large)|830M|24|1536|16|11.3|
|[cyberagent/open-calm-1b](https://huggingface.co/cyberagent/open-calm-1b)|1.4B|24|2048|16|10.3|
|[cyberagent/open-calm-3b](https://huggingface.co/cyberagent/open-calm-3b)|2.7B|32|2560|32|9.7|
|[cyberagent/open-calm-7b](https://huggingface.co/cyberagent/open-calm-7b)|6.8B|32|4096|32|8.2|
* **Developed by**: [CyberAgent, Inc.](https://www.cyberagent.co.jp/)
* **Model type**: Transformer-based Language Model
* **Language**: Japanese
* **Library**: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
* **License**: OpenCALM is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License ([CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)). When using this model, please provide appropriate credit to CyberAgent, Inc.
* Example (en): This model is a fine-tuned version of OpenCALM-XX developed by CyberAgent, Inc. The original model is released under the CC BY-SA 4.0 license, and this model is also released under the same CC BY-SA 4.0 license. For more information, please visit: https://creativecommons.org/licenses/by-sa/4.0/
* Example (ja): 本モデルは、株式会社サイバーエージェントによるOpenCALM-XXをファインチューニングしたものです。元のモデルはCC BY-SA 4.0ライセンスのもとで公開されており、本モデルも同じくCC BY-SA 4.0ライセンスで公開します。詳しくはこちらをご覧ください: https://creativecommons.org/licenses/by-sa/4.0/
## Training Dataset
* Wikipedia (ja)
* Common Crawl (ja)
## Author
[Ryosuke Ishigami](https://huggingface.co/rishigami)
## Citations
```bibtex
@software{gpt-neox-library,
title = {{GPT-NeoX: Large Scale Autoregressive Language Modeling in PyTorch}},
author = {Andonian, Alex and Anthony, Quentin and Biderman, Stella and Black, Sid and Gali, Preetham and Gao, Leo and Hallahan, Eric and Levy-Kramer, Josh and Leahy, Connor and Nestler, Lucas and Parker, Kip and Pieler, Michael and Purohit, Shivanshu and Songz, Tri and Phil, Wang and Weinbach, Samuel},
url = {https://www.github.com/eleutherai/gpt-neox},
doi = {10.5281/zenodo.5879544},
month = {8},
year = {2021},
version = {0.0.1},
}
``` | 3,375 | [
[
-0.0303955078125,
-0.054473876953125,
0.021240234375,
0.00377655029296875,
-0.01128387451171875,
-0.0242767333984375,
-0.030975341796875,
-0.032867431640625,
0.01560211181640625,
0.03717041015625,
-0.039215087890625,
-0.0550537109375,
-0.034637451171875,
0.0... |
cross-encoder/qnli-electra-base | 2021-08-05T08:41:23.000Z | [
"transformers",
"pytorch",
"electra",
"text-classification",
"arxiv:1804.07461",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | cross-encoder | null | null | cross-encoder/qnli-electra-base | 1 | 3,339 | transformers | 2022-03-02T23:29:05 | ---
license: apache-2.0
---
# Cross-Encoder for QNLI (Question Answering NLI)
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
## Training Data
Given a question and paragraph, can the question be answered by the paragraph? The models have been trained on the [GLUE QNLI](https://arxiv.org/abs/1804.07461) dataset, which transformed the [SQuAD dataset](https://rajpurkar.github.io/SQuAD-explorer/) into an NLI task.
## Performance
For performance results of this model, see [SBERT.net Pre-trained Cross-Encoders](https://www.sbert.net/docs/pretrained_cross-encoders.html).
## Usage
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/qnli-electra-base')
scores = model.predict([('Query1', 'Paragraph1'), ('Query2', 'Paragraph2')])
#e.g.
scores = model.predict([('How many people live in Berlin?', 'Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.'), ('What is the size of New York?', 'New York City is famous for the Metropolitan Museum of Art.')])
```
## Usage with Transformers AutoModel
You can use the model also directly with Transformers library (without SentenceTransformers library):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/qnli-electra-base')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/qnli-electra-base')
features = tokenizer(['How many people live in Berlin?', 'What is the size of New York?'], ['Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = torch.nn.functional.sigmoid(model(**features).logits)
print(scores)
``` | 1,996 | [
[
-0.02069091796875,
-0.056427001953125,
0.0277862548828125,
0.0195465087890625,
-0.015289306640625,
-0.006275177001953125,
0.004894256591796875,
-0.014923095703125,
0.01302337646484375,
0.04376220703125,
-0.044677734375,
-0.035247802734375,
-0.039703369140625,
... |
zohaib99k/Bert_Arabic-SQuADv2-QA | 2023-05-04T07:42:02.000Z | [
"transformers",
"pytorch",
"electra",
"question-answering",
"ar",
"dataset:ZeyadAhmed/Arabic-SQuADv2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | zohaib99k | null | null | zohaib99k/Bert_Arabic-SQuADv2-QA | 0 | 3,336 | transformers | 2023-05-04T07:37:13 | ---
datasets:
- ZeyadAhmed/Arabic-SQuADv2.0
language:
- ar
metrics:
  - name: exact_match
    type: exact_match
    value: 65.12
  - name: F1
    type: f1
    value: 71.49
---
# AraElectra for Question Answering on Arabic-SQuADv2
This is the [AraElectra](https://huggingface.co/aubmindlab/araelectra-base-discriminator) model, fine-tuned using the [Arabic-SQuADv2.0](https://huggingface.co/datasets/ZeyadAhmed/Arabic-SQuADv2.0) dataset. It has been trained on question-answer pairs, including unanswerable questions, for the task of Question Answering, with the help of an [AraElectra classifier](https://huggingface.co/ZeyadAhmed/AraElectra-Arabic-SQuADv2-CLS) to predict unanswerable questions.
## Overview
**Language model:** AraElectra <br>
**Language:** Arabic <br>
**Downstream-task:** Extractive QA <br>
**Training data:** Arabic-SQuADv2.0 <br>
**Eval data:** Arabic-SQuADv2.0 <br>
**Test data:** Arabic-SQuADv2.0 <br>
**Code:** [See More Info on Github](https://github.com/zeyadahmed10/Arabic-MRC) <br>
**Infrastructure:** 1x Tesla K80
## Hyperparameters
```
batch_size = 8
n_epochs = 4
base_LM_model = "AraElectra"
learning_rate = 3e-5
optimizer = AdamW
padding = dynamic
```
## Online Demo on Arabic Wikipedia and User Provided Contexts
See model in action hosted on streamlit [](https://share.streamlit.io/wissamantoun/arabic-wikipedia-qa-streamlit/main)
## Usage
For best results use the AraBert [preprocessor](https://github.com/aub-mind/arabert/blob/master/preprocess.py) by aub-mind
```python
from transformers import ElectraForQuestionAnswering, ElectraForSequenceClassification, AutoTokenizer, pipeline
from preprocess import ArabertPreprocessor
prep_object = ArabertPreprocessor("araelectra-base-discriminator")
question = prep_object('ما هي جامعة الدول العربية ؟')
context = prep_object('''
جامعة الدول العربية هيمنظمة إقليمية تضم دولاً عربية في آسيا وأفريقيا.
ينص ميثاقها على التنسيق بين الدول الأعضاء في الشؤون الاقتصادية، ومن ضمنها العلاقات التجارية الاتصالات، العلاقات الثقافية، الجنسيات ووثائق وأذونات السفر والعلاقات الاجتماعية والصحة. المقر الدائم لجامعة الدول العربية يقع في القاهرة، عاصمة مصر (تونس من 1979 إلى 1990).
''')
# a) Get predictions
qa_modelname = 'ZeyadAhmed/AraElectra-Arabic-SQuADv2-QA'
cls_modelname = 'ZeyadAhmed/AraElectra-Arabic-SQuADv2-CLS'
qa_pipe = pipeline('question-answering', model=qa_modelname, tokenizer=qa_modelname)
cls_pipe = pipeline('text-classification', model=cls_modelname, tokenizer=cls_modelname)
QA_input = {
'question': question,
'context': context
}
CLS_input = {
'text': question,
'text_pair': context
}
qa_res = qa_pipe(QA_input)
cls_res = cls_pipe(CLS_input)
threshold = 0.5 #hyperparameter can be tweaked
## Note: in the classification result, label0 is the probability the question can be answered and label1 the probability it cannot.
## If the label1 probability > threshold, treat the QA output as an empty string; otherwise keep qa_res.
# b) Load model & tokenizer
qa_model = ElectraForQuestionAnswering.from_pretrained(qa_modelname)
cls_model = ElectraForSequenceClassification.from_pretrained(cls_modelname)
tokenizer = AutoTokenizer.from_pretrained(qa_modelname)
```
## Performance
Evaluated on the Arabic-SQuAD 2.0 test set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/), except for small changes in the preprocessing to fit the Arabic language; see [the modified eval script](https://github.com/zeyadahmed10/Arabic-MRC/blob/main/evaluatev2.py).
```
"exact": 65.11555277951281,
"f1": 71.49042547237256,,
"total": 9606,
"HasAns_exact": 56.14535768645358,
"HasAns_f1": 67.79623803036668,
"HasAns_total": 5256,
"NoAns_exact": 75.95402298850574,
"NoAns_f1": 75.95402298850574,
"NoAns_total": 4350
```
| 3,760 | [
[
-0.04150390625,
-0.051239013671875,
0.026519775390625,
0.003448486328125,
-0.0109405517578125,
0.01319122314453125,
-0.0013360977172851562,
-0.021759033203125,
0.005916595458984375,
0.0200958251953125,
-0.038482666015625,
-0.0377197265625,
-0.03631591796875,
... |
algolet/mt5-base-chinese-qg | 2022-03-03T02:18:05.000Z | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | algolet | null | null | algolet/mt5-base-chinese-qg | 11 | 3,332 | transformers | 2022-03-02T23:29:05 | <h3 align="center">
<p>MT5 Base Model for Chinese Question Generation</p>
</h3>
<h3 align="center">
<p>基于mt5的中文问题生成任务</p>
</h3>
#### Get started by installing the question-generation package
```
pip install question-generation
```
For usage instructions, see the GitHub project: https://github.com/algolet/question_generation
#### Online demo
You can use our model directly online: https://www.algolet.com/applications/qg
#### Usage via transformers
``` python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("algolet/mt5-base-chinese-qg")
model = AutoModelForSeq2SeqLM.from_pretrained("algolet/mt5-base-chinese-qg")
model.eval()
text = "在一个寒冷的冬天,赶集完回家的农夫在路边发现了一条冻僵了的蛇。他很可怜蛇,就把它放在怀里。当他身上的热气把蛇温暖以后,蛇很快苏醒了,露出了残忍的本性,给了农夫致命的伤害——咬了农夫一口。农夫临死之前说:“我竟然救了一条可怜的毒蛇,就应该受到这种报应啊!”"
text = "question generation: " + text
inputs = tokenizer(text,
return_tensors='pt',
truncation=True,
max_length=512)
with torch.no_grad():
outs = model.generate(input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
max_length=128,
no_repeat_ngram_size=4,
num_beams=4)
question = tokenizer.decode(outs[0], skip_special_tokens=True)
questions = [q.strip() for q in question.split("<sep>") if len(q.strip()) > 0]
print(questions)
['在寒冷的冬天,农夫在哪里发现了一条可怜的蛇?', '农夫是如何看待蛇的?', '当农夫遇到蛇时,他做了什么?']
```
#### Metrics
rouge-1: 0.4041
rouge-2: 0.2104
rouge-l: 0.3843
---
language:
- zh
tags:
- mt5
- question generation
metrics:
- rouge
---
| 1,563 | [
[
-0.0211639404296875,
-0.0572509765625,
0.0096435546875,
0.019744873046875,
-0.031341552734375,
-0.0172119140625,
0.003376007080078125,
0.00408172607421875,
-0.0005297660827636719,
0.0245361328125,
-0.05401611328125,
-0.04864501953125,
-0.02581787109375,
0.00... |
geolocal/StreetCLIP | 2023-09-13T00:03:57.000Z | [
"transformers",
"pytorch",
"clip",
"zero-shot-image-classification",
"geolocalization",
"geolocation",
"geographic",
"street",
"climate",
"urban",
"rural",
"multi-modal",
"geoguessr",
"en",
"arxiv:2302.00275",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"has_space",
"region:u... | zero-shot-image-classification | geolocal | null | null | geolocal/StreetCLIP | 21 | 3,327 | transformers | 2023-01-26T18:16:02 | ---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: zero-shot-image-classification
widget:
- src: https://huggingface.co/geolocal/StreetCLIP/resolve/main/nagasaki.jpg
candidate_labels: China, South Korea, Japan, Phillipines, Taiwan, Vietnam, Cambodia
example_title: Countries
- src: https://huggingface.co/geolocal/StreetCLIP/resolve/main/sanfrancisco.jpeg
candidate_labels: San Jose, San Diego, Los Angeles, Las Vegas, San Francisco, Seattle
example_title: Cities
library_name: transformers
tags:
- geolocalization
- geolocation
- geographic
- street
- climate
- clip
- urban
- rural
- multi-modal
- geoguessr
---
# Model Card for StreetCLIP
StreetCLIP is a robust foundation model for open-domain image geolocalization and other
geographic and climate-related tasks.
Trained on an original dataset of 1.1 million street-level urban and rural geo-tagged images, it achieves
state-of-the-art performance on multiple open-domain image geolocalization benchmarks in zero-shot,
outperforming supervised models trained on millions of images.
# Model Description
StreetCLIP is a model pretrained by deriving image captions synthetically from image class labels using
a domain-specific caption template. This allows StreetCLIP to transfer its generalized zero-shot learning
capabilities to a specific domain (i.e. the domain of image geolocalization).
StreetCLIP builds on the OpenAI's pretrained large version of CLIP ViT, using 14x14 pixel
patches and images with a 336 pixel side length.
## Model Details
- **Model type:** [CLIP](https://openai.com/blog/clip/)
- **Language:** English
- **License:** Creative Commons Attribution Non Commercial 4.0
- **Trained from model:** [openai/clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336)
## Model Sources
- **Paper:** [Preprint](https://arxiv.org/abs/2302.00275)
- **Cite preprint as:**
```bibtex
@misc{haas2023learning,
title={Learning Generalized Zero-Shot Learners for Open-Domain Image Geolocalization},
author={Lukas Haas and Silas Alberti and Michal Skreta},
year={2023},
eprint={2302.00275},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
# Uses
StreetCLIP has a deep understanding of the visual features found in street-level urban and rural scenes
and knows how to relate these concepts to specific countries, regions, and cities. Given its training setup,
the following use cases are recommended for StreetCLIP.
## Direct Use
StreetCLIP can be used out-of-the box using zero-shot learning to infer the geolocation of images on a country, region,
or city level. Given that StreetCLIP was pretrained on a dataset of street-level urban and rural images,
the best performance can be expected on images from a similar distribution.
Broader direct use cases are any zero-shot image classification tasks that rely on urban and rural street-level
understanding or geographical information relating visual clues to their region of origin.
## Downstream Use
StreetCLIP can be finetuned for any downstream applications that require geographic or street-level urban or rural
scene understanding. Examples of use cases are the following:
**Understanding the Built Environment**
- Analyzing building quality
- Building type classification
- Building energy efficiency classification
**Analyzing Infrastructure**
- Analyzing road quality
- Utility pole maintenance
- Identifying damage from natural disasters or armed conflicts
**Understanding the Natural Environment**
- Mapping vegetation
- Vegetation classification
- Soil type classification
- Tracking deforestation
**General Use Cases**
- Street-level image segmentation
- Urban and rural scene classification
- Object detection in urban or rural environments
- Improving navigation and self-driving car technology
## Out-of-Scope Use
Any use cases attempting to geolocate users' private images are out-of-scope and discouraged.
# Bias, Risks, and Limitations
StreetCLIP was not trained on social media images or images of identifiable people for a reason. As such, any use case
attempting to geolocalize users' private images is out of scope and discouraged.
## Recommendations
We encourage the community to apply StreetCLIP to applications with significant social impact, of which there are many.
The first three categories of potential use cases under Downstream Use list potential use cases with social impact
to explore.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from PIL import Image
import requests
from transformers import CLIPProcessor, CLIPModel
model = CLIPModel.from_pretrained("geolocal/StreetCLIP")
processor = CLIPProcessor.from_pretrained("geolocal/StreetCLIP")
url = "https://huggingface.co/geolocal/StreetCLIP/resolve/main/sanfrancisco.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
choices = ["San Jose", "San Diego", "Los Angeles", "Las Vegas", "San Francisco"]
inputs = processor(text=choices, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
```
# Training Details
## Training Data
StreetCLIP was trained on an original, unreleased street-level dataset of 1.1 million real-world,
urban and rural images. The data used to train the model comes from 101 countries, biased towards
western countries and not including India and China.
## Preprocessing
Same preprocessing as [openai/clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336).
## Training Procedure
StreetCLIP is initialized with OpenAI's pretrained large version of CLIP ViT and then pretrained using the synthetic
caption domain-specific pretraining method described in the paper corresponding to this work. StreetCLIP was trained
for 3 epochs using an AdamW optimizer with a learning rate of 1e-6 on 3 NVIDIA A100 80GB GPUs, a batch size of 32,
and gradient accumulation of 12 steps.
StreetCLIP was trained with the goal of matching images in the batch
with the caption corresponding to the correct city, region, and country of the images' origins.
# Evaluation
StreetCLIP was evaluated in zero-shot on two open-domain image geolocalization benchmarks using a
technique called hierarchical linear probing. Hierarchical linear probing sequentially attempts to
identify the correct country and then city of geographical image origin.
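A rough sketch of this two-stage procedure is shown below, reusing the `model`, `processor`, and `image` objects from the getting-started example above; the candidate country and city lists and the helper function are illustrative assumptions, not the exact evaluation code.
```python
# Two-stage (hierarchical) zero-shot prediction: first the country, then a city within it.
def zero_shot_best(image, labels, model, processor):
    inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
    probs = model(**inputs).logits_per_image.softmax(dim=1)[0]
    return labels[probs.argmax().item()]

countries = ["United States", "Japan", "Brazil"]  # placeholder candidate set
cities_by_country = {
    "United States": ["San Francisco", "New York", "Chicago"],
    "Japan": ["Tokyo", "Osaka", "Nagasaki"],
    "Brazil": ["Sao Paulo", "Rio de Janeiro", "Brasilia"],
}

country = zero_shot_best(image, countries, model, processor)
city = zero_shot_best(image, cities_by_country[country], model, processor)
print(country, city)
```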
## Testing Data and Metrics
### Testing Data
StreetCLIP was evaluated on the following two open-domain image geolocalization benchmarks.
* [IM2GPS](http://graphics.cs.cmu.edu/projects/im2gps/).
* [IM2GPS3K](https://github.com/lugiavn/revisiting-im2gps)
### Metrics
The objective of the listed benchmark datasets is to predict the images' coordinates of origin with as
little deviation as possible. A common metric set forth in prior literature is called Percentage at Kilometer (% @ KM).
The Percentage at Kilometer metric first calculates the distance in kilometers between the predicted coordinates
to the ground truth coordinates and then looks at what percentage of error distances are below a certain kilometer threshold.
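As a concrete illustration, a minimal sketch of the metric (using the standard haversine great-circle distance, which is an assumption about how distances are computed here):
```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points in kilometers.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def percentage_at_km(predictions, ground_truths, threshold_km):
    # predictions and ground_truths are lists of (lat, lon) tuples.
    hits = sum(
        haversine_km(p[0], p[1], t[0], t[1]) <= threshold_km
        for p, t in zip(predictions, ground_truths)
    )
    return 100.0 * hits / len(predictions)
```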
## Results
**IM2GPS**
| Model | 25km | 200km | 750km | 2,500km |
|----------|:-------------:|:------:|:------:|:------:|
| PlaNet (2016) | 24.5 | 37.6 | 53.6 | 71.3 |
| ISNs (2018) | 43.0 | 51.9 | 66.7 | 80.2 |
| TransLocator (2022) | **48.1** | **64.6** | **75.6** | 86.7 |
| **Zero-Shot CLIP (ours)** | 27.0 | 42.2 | 71.7 | 86.9 |
| **Zero-Shot StreetCLIP (ours)** | 28.3 | 45.1 | 74.7 | **88.2** |
Metric: Percentage at Kilometer (% @ KM)
**IM2GPS3K**
| Model | 25km | 200km | 750km | 2,500km |
|----------|:-------------:|:------:|:------:|:------:|
| PlaNet (2016) | 24.8 | 34.3 | 48.4 | 64.6 |
| ISNs (2018) | 28.0 | 36.6 | 49.7 | 66.0 |
| TransLocator (2022) | **31.1** | **46.7** | 58.9 | 80.1 |
| **Zero-Shot CLIP (ours)** | 19.5 | 34.0 | 60.0 | 78.1 |
| **Zero-Shot StreetCLIP (ours)** | 22.4 | 37.4 | **61.3** | **80.4** |
Metric: Percentage at Kilometer (% @ KM)
### Summary
Our experiments demonstrate that our synthetic caption pretraining method is capable of significantly
improving CLIP's generalized zero-shot capabilities applied to open-domain image geolocalization while
achieving state-of-the-art performance on a selection of benchmark metrics.
# Environmental Impact
- **Hardware Type:** 4 NVIDIA A100 GPUs
- **Hours used:** 12
# Citation
Cite preprint as:
```bibtex
@misc{haas2023learning,
title={Learning Generalized Zero-Shot Learners for Open-Domain Image Geolocalization},
author={Lukas Haas and Silas Alberti and Michal Skreta},
year={2023},
eprint={2302.00275},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
| 8,858 | [
[
-0.046630859375,
-0.0538330078125,
0.04278564453125,
0.0048065185546875,
-0.037841796875,
0.0016565322875976562,
-0.01284027099609375,
-0.045745849609375,
0.035064697265625,
0.0194549560546875,
-0.036468505859375,
-0.06353759765625,
-0.05474853515625,
-0.010... |
healx/gpt-2-pubmed-medium | 2020-12-11T21:43:41.000Z | [
"transformers",
"pytorch",
"arxiv:2004.13845",
"endpoints_compatible",
"region:us"
] | null | healx | null | null | healx/gpt-2-pubmed-medium | 2 | 3,325 | transformers | 2022-03-02T23:29:05 | GPT-2 (355M model) finetuned on 0.5m PubMed abstracts. Used in the [writemeanabstract.com](writemeanabstract.com) and the following preprint:
[Papanikolaou, Yannis, and Andrea Pierleoni. "DARE: Data Augmented Relation Extraction with GPT-2." arXiv preprint arXiv:2004.13845 (2020).](https://arxiv.org/abs/2004.13845)
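A minimal generation sketch, assuming the repository ships a standard GPT-2 checkpoint (config, weights, and tokenizer) that transformers can load; the prompt is illustrative:
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("healx/gpt-2-pubmed-medium")
model = GPT2LMHeadModel.from_pretrained("healx/gpt-2-pubmed-medium")

# Generate the continuation of an abstract-style prompt.
inputs = tokenizer("We report a randomized controlled trial of", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```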
| 318 | [
[
-0.0196990966796875,
-0.040618896484375,
0.05224609375,
0.020233154296875,
-0.0224609375,
-0.058135986328125,
0.0033054351806640625,
-0.04443359375,
0.0014858245849609375,
0.0273895263671875,
-0.0416259765625,
-0.016510009765625,
-0.0360107421875,
0.00061655... |
consciousAI/cai-lunaris-text-embeddings | 2023-06-22T21:33:52.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"mteb",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | sentence-similarity | consciousAI | null | null | consciousAI/cai-lunaris-text-embeddings | 0 | 3,324 | sentence-transformers | 2023-06-22T18:08:54 | ---
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- mteb
model-index:
- name: cai-lunaris-text-embeddings
results:
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.07
- type: map_at_10
value: 29.372999999999998
- type: map_at_100
value: 30.79
- type: map_at_1000
value: 30.819999999999997
- type: map_at_3
value: 24.395
- type: map_at_5
value: 27.137
- type: mrr_at_1
value: 17.923000000000002
- type: mrr_at_10
value: 29.695
- type: mrr_at_100
value: 31.098
- type: mrr_at_1000
value: 31.128
- type: mrr_at_3
value: 24.704
- type: mrr_at_5
value: 27.449
- type: ndcg_at_1
value: 17.07
- type: ndcg_at_10
value: 37.269000000000005
- type: ndcg_at_100
value: 43.716
- type: ndcg_at_1000
value: 44.531
- type: ndcg_at_3
value: 26.839000000000002
- type: ndcg_at_5
value: 31.845000000000002
- type: precision_at_1
value: 17.07
- type: precision_at_10
value: 6.3020000000000005
- type: precision_at_100
value: 0.922
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 11.309
- type: precision_at_5
value: 9.246
- type: recall_at_1
value: 17.07
- type: recall_at_10
value: 63.016000000000005
- type: recall_at_100
value: 92.24799999999999
- type: recall_at_1000
value: 98.72
- type: recall_at_3
value: 33.926
- type: recall_at_5
value: 46.23
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 53.44266265900711
- type: mrr
value: 66.54695950402322
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 75.9652953730204
- type: cos_sim_spearman
value: 73.96554077670989
- type: euclidean_pearson
value: 75.68477255792381
- type: euclidean_spearman
value: 74.59447076995703
- type: manhattan_pearson
value: 75.94984623881341
- type: manhattan_spearman
value: 74.72218452337502
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 14.119000000000002
- type: map_at_10
value: 19.661
- type: map_at_100
value: 20.706
- type: map_at_1000
value: 20.848
- type: map_at_3
value: 17.759
- type: map_at_5
value: 18.645
- type: mrr_at_1
value: 17.166999999999998
- type: mrr_at_10
value: 23.313
- type: mrr_at_100
value: 24.263
- type: mrr_at_1000
value: 24.352999999999998
- type: mrr_at_3
value: 21.412
- type: mrr_at_5
value: 22.313
- type: ndcg_at_1
value: 17.166999999999998
- type: ndcg_at_10
value: 23.631
- type: ndcg_at_100
value: 28.427000000000003
- type: ndcg_at_1000
value: 31.862000000000002
- type: ndcg_at_3
value: 20.175
- type: ndcg_at_5
value: 21.397
- type: precision_at_1
value: 17.166999999999998
- type: precision_at_10
value: 4.549
- type: precision_at_100
value: 0.8370000000000001
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 9.68
- type: precision_at_5
value: 6.981
- type: recall_at_1
value: 14.119000000000002
- type: recall_at_10
value: 32.147999999999996
- type: recall_at_100
value: 52.739999999999995
- type: recall_at_1000
value: 76.67
- type: recall_at_3
value: 22.019
- type: recall_at_5
value: 25.361
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 16.576
- type: map_at_10
value: 22.281000000000002
- type: map_at_100
value: 23.066
- type: map_at_1000
value: 23.166
- type: map_at_3
value: 20.385
- type: map_at_5
value: 21.557000000000002
- type: mrr_at_1
value: 20.892
- type: mrr_at_10
value: 26.605
- type: mrr_at_100
value: 27.229
- type: mrr_at_1000
value: 27.296
- type: mrr_at_3
value: 24.809
- type: mrr_at_5
value: 25.927
- type: ndcg_at_1
value: 20.892
- type: ndcg_at_10
value: 26.092
- type: ndcg_at_100
value: 29.398999999999997
- type: ndcg_at_1000
value: 31.884
- type: ndcg_at_3
value: 23.032
- type: ndcg_at_5
value: 24.634
- type: precision_at_1
value: 20.892
- type: precision_at_10
value: 4.885
- type: precision_at_100
value: 0.818
- type: precision_at_1000
value: 0.126
- type: precision_at_3
value: 10.977
- type: precision_at_5
value: 8.013
- type: recall_at_1
value: 16.576
- type: recall_at_10
value: 32.945
- type: recall_at_100
value: 47.337
- type: recall_at_1000
value: 64.592
- type: recall_at_3
value: 24.053
- type: recall_at_5
value: 28.465
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.604
- type: map_at_10
value: 28.754999999999995
- type: map_at_100
value: 29.767
- type: map_at_1000
value: 29.852
- type: map_at_3
value: 26.268
- type: map_at_5
value: 27.559
- type: mrr_at_1
value: 24.326
- type: mrr_at_10
value: 31.602000000000004
- type: mrr_at_100
value: 32.46
- type: mrr_at_1000
value: 32.521
- type: mrr_at_3
value: 29.415000000000003
- type: mrr_at_5
value: 30.581000000000003
- type: ndcg_at_1
value: 24.326
- type: ndcg_at_10
value: 33.335
- type: ndcg_at_100
value: 38.086
- type: ndcg_at_1000
value: 40.319
- type: ndcg_at_3
value: 28.796
- type: ndcg_at_5
value: 30.758999999999997
- type: precision_at_1
value: 24.326
- type: precision_at_10
value: 5.712
- type: precision_at_100
value: 0.893
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 13.208
- type: precision_at_5
value: 9.329
- type: recall_at_1
value: 20.604
- type: recall_at_10
value: 44.505
- type: recall_at_100
value: 65.866
- type: recall_at_1000
value: 82.61800000000001
- type: recall_at_3
value: 31.794
- type: recall_at_5
value: 36.831
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.280999999999999
- type: map_at_10
value: 11.636000000000001
- type: map_at_100
value: 12.363
- type: map_at_1000
value: 12.469
- type: map_at_3
value: 10.415000000000001
- type: map_at_5
value: 11.144
- type: mrr_at_1
value: 9.266
- type: mrr_at_10
value: 12.838
- type: mrr_at_100
value: 13.608999999999998
- type: mrr_at_1000
value: 13.700999999999999
- type: mrr_at_3
value: 11.507000000000001
- type: mrr_at_5
value: 12.343
- type: ndcg_at_1
value: 9.266
- type: ndcg_at_10
value: 13.877
- type: ndcg_at_100
value: 18.119
- type: ndcg_at_1000
value: 21.247
- type: ndcg_at_3
value: 11.376999999999999
- type: ndcg_at_5
value: 12.675
- type: precision_at_1
value: 9.266
- type: precision_at_10
value: 2.226
- type: precision_at_100
value: 0.47200000000000003
- type: precision_at_1000
value: 0.077
- type: precision_at_3
value: 4.859
- type: precision_at_5
value: 3.6380000000000003
- type: recall_at_1
value: 8.280999999999999
- type: recall_at_10
value: 19.872999999999998
- type: recall_at_100
value: 40.585
- type: recall_at_1000
value: 65.225
- type: recall_at_3
value: 13.014000000000001
- type: recall_at_5
value: 16.147
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.1209999999999996
- type: map_at_10
value: 7.272
- type: map_at_100
value: 8.079
- type: map_at_1000
value: 8.199
- type: map_at_3
value: 6.212
- type: map_at_5
value: 6.736000000000001
- type: mrr_at_1
value: 5.721
- type: mrr_at_10
value: 9.418
- type: mrr_at_100
value: 10.281
- type: mrr_at_1000
value: 10.385
- type: mrr_at_3
value: 8.126
- type: mrr_at_5
value: 8.779
- type: ndcg_at_1
value: 5.721
- type: ndcg_at_10
value: 9.673
- type: ndcg_at_100
value: 13.852999999999998
- type: ndcg_at_1000
value: 17.546999999999997
- type: ndcg_at_3
value: 7.509
- type: ndcg_at_5
value: 8.373
- type: precision_at_1
value: 5.721
- type: precision_at_10
value: 2.04
- type: precision_at_100
value: 0.48
- type: precision_at_1000
value: 0.093
- type: precision_at_3
value: 4.022
- type: precision_at_5
value: 3.06
- type: recall_at_1
value: 4.1209999999999996
- type: recall_at_10
value: 15.201
- type: recall_at_100
value: 33.922999999999995
- type: recall_at_1000
value: 61.529999999999994
- type: recall_at_3
value: 8.869
- type: recall_at_5
value: 11.257
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 14.09
- type: map_at_10
value: 19.573999999999998
- type: map_at_100
value: 20.580000000000002
- type: map_at_1000
value: 20.704
- type: map_at_3
value: 17.68
- type: map_at_5
value: 18.64
- type: mrr_at_1
value: 17.227999999999998
- type: mrr_at_10
value: 23.152
- type: mrr_at_100
value: 24.056
- type: mrr_at_1000
value: 24.141000000000002
- type: mrr_at_3
value: 21.142
- type: mrr_at_5
value: 22.201
- type: ndcg_at_1
value: 17.227999999999998
- type: ndcg_at_10
value: 23.39
- type: ndcg_at_100
value: 28.483999999999998
- type: ndcg_at_1000
value: 31.709
- type: ndcg_at_3
value: 19.883
- type: ndcg_at_5
value: 21.34
- type: precision_at_1
value: 17.227999999999998
- type: precision_at_10
value: 4.3790000000000004
- type: precision_at_100
value: 0.826
- type: precision_at_1000
value: 0.128
- type: precision_at_3
value: 9.496
- type: precision_at_5
value: 6.872
- type: recall_at_1
value: 14.09
- type: recall_at_10
value: 31.580000000000002
- type: recall_at_100
value: 54.074
- type: recall_at_1000
value: 77.092
- type: recall_at_3
value: 21.601
- type: recall_at_5
value: 25.333
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.538
- type: map_at_10
value: 15.75
- type: map_at_100
value: 16.71
- type: map_at_1000
value: 16.838
- type: map_at_3
value: 13.488
- type: map_at_5
value: 14.712
- type: mrr_at_1
value: 13.813
- type: mrr_at_10
value: 19.08
- type: mrr_at_100
value: 19.946
- type: mrr_at_1000
value: 20.044
- type: mrr_at_3
value: 16.838
- type: mrr_at_5
value: 17.951
- type: ndcg_at_1
value: 13.813
- type: ndcg_at_10
value: 19.669
- type: ndcg_at_100
value: 24.488
- type: ndcg_at_1000
value: 27.87
- type: ndcg_at_3
value: 15.479000000000001
- type: ndcg_at_5
value: 17.229
- type: precision_at_1
value: 13.813
- type: precision_at_10
value: 3.916
- type: precision_at_100
value: 0.743
- type: precision_at_1000
value: 0.122
- type: precision_at_3
value: 7.534000000000001
- type: precision_at_5
value: 5.822
- type: recall_at_1
value: 10.538
- type: recall_at_10
value: 28.693
- type: recall_at_100
value: 50.308
- type: recall_at_1000
value: 74.44
- type: recall_at_3
value: 16.866999999999997
- type: recall_at_5
value: 21.404999999999998
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 11.044583333333332
- type: map_at_10
value: 15.682833333333335
- type: map_at_100
value: 16.506500000000003
- type: map_at_1000
value: 16.623833333333334
- type: map_at_3
value: 14.130833333333333
- type: map_at_5
value: 14.963583333333332
- type: mrr_at_1
value: 13.482833333333332
- type: mrr_at_10
value: 18.328500000000002
- type: mrr_at_100
value: 19.095416666666665
- type: mrr_at_1000
value: 19.18241666666666
- type: mrr_at_3
value: 16.754749999999998
- type: mrr_at_5
value: 17.614749999999997
- type: ndcg_at_1
value: 13.482833333333332
- type: ndcg_at_10
value: 18.81491666666667
- type: ndcg_at_100
value: 22.946833333333334
- type: ndcg_at_1000
value: 26.061083333333336
- type: ndcg_at_3
value: 15.949333333333332
- type: ndcg_at_5
value: 17.218333333333334
- type: precision_at_1
value: 13.482833333333332
- type: precision_at_10
value: 3.456583333333333
- type: precision_at_100
value: 0.6599166666666666
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 7.498833333333332
- type: precision_at_5
value: 5.477166666666667
- type: recall_at_1
value: 11.044583333333332
- type: recall_at_10
value: 25.737750000000005
- type: recall_at_100
value: 44.617916666666666
- type: recall_at_1000
value: 67.56524999999999
- type: recall_at_3
value: 17.598249999999997
- type: recall_at_5
value: 20.9035
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.362
- type: map_at_10
value: 13.414000000000001
- type: map_at_100
value: 14.083000000000002
- type: map_at_1000
value: 14.168
- type: map_at_3
value: 12.098
- type: map_at_5
value: 12.803999999999998
- type: mrr_at_1
value: 11.043
- type: mrr_at_10
value: 15.158
- type: mrr_at_100
value: 15.845999999999998
- type: mrr_at_1000
value: 15.916
- type: mrr_at_3
value: 13.88
- type: mrr_at_5
value: 14.601
- type: ndcg_at_1
value: 11.043
- type: ndcg_at_10
value: 16.034000000000002
- type: ndcg_at_100
value: 19.686
- type: ndcg_at_1000
value: 22.188
- type: ndcg_at_3
value: 13.530000000000001
- type: ndcg_at_5
value: 14.704
- type: precision_at_1
value: 11.043
- type: precision_at_10
value: 2.791
- type: precision_at_100
value: 0.5
- type: precision_at_1000
value: 0.077
- type: precision_at_3
value: 6.237
- type: precision_at_5
value: 4.5089999999999995
- type: recall_at_1
value: 9.362
- type: recall_at_10
value: 22.396
- type: recall_at_100
value: 39.528999999999996
- type: recall_at_1000
value: 58.809
- type: recall_at_3
value: 15.553
- type: recall_at_5
value: 18.512
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.657
- type: map_at_10
value: 8.273
- type: map_at_100
value: 8.875
- type: map_at_1000
value: 8.977
- type: map_at_3
value: 7.32
- type: map_at_5
value: 7.792000000000001
- type: mrr_at_1
value: 7.02
- type: mrr_at_10
value: 9.966999999999999
- type: mrr_at_100
value: 10.636
- type: mrr_at_1000
value: 10.724
- type: mrr_at_3
value: 8.872
- type: mrr_at_5
value: 9.461
- type: ndcg_at_1
value: 7.02
- type: ndcg_at_10
value: 10.199
- type: ndcg_at_100
value: 13.642000000000001
- type: ndcg_at_1000
value: 16.643
- type: ndcg_at_3
value: 8.333
- type: ndcg_at_5
value: 9.103
- type: precision_at_1
value: 7.02
- type: precision_at_10
value: 1.8929999999999998
- type: precision_at_100
value: 0.43
- type: precision_at_1000
value: 0.08099999999999999
- type: precision_at_3
value: 3.843
- type: precision_at_5
value: 2.884
- type: recall_at_1
value: 5.657
- type: recall_at_10
value: 14.563
- type: recall_at_100
value: 30.807000000000002
- type: recall_at_1000
value: 53.251000000000005
- type: recall_at_3
value: 9.272
- type: recall_at_5
value: 11.202
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.671999999999999
- type: map_at_10
value: 14.651
- type: map_at_100
value: 15.406
- type: map_at_1000
value: 15.525
- type: map_at_3
value: 13.461
- type: map_at_5
value: 14.163
- type: mrr_at_1
value: 12.407
- type: mrr_at_10
value: 16.782
- type: mrr_at_100
value: 17.562
- type: mrr_at_1000
value: 17.653
- type: mrr_at_3
value: 15.47
- type: mrr_at_5
value: 16.262
- type: ndcg_at_1
value: 12.407
- type: ndcg_at_10
value: 17.251
- type: ndcg_at_100
value: 21.378
- type: ndcg_at_1000
value: 24.689
- type: ndcg_at_3
value: 14.915000000000001
- type: ndcg_at_5
value: 16.1
- type: precision_at_1
value: 12.407
- type: precision_at_10
value: 2.91
- type: precision_at_100
value: 0.573
- type: precision_at_1000
value: 0.096
- type: precision_at_3
value: 6.779
- type: precision_at_5
value: 4.888
- type: recall_at_1
value: 10.671999999999999
- type: recall_at_10
value: 23.099
- type: recall_at_100
value: 41.937999999999995
- type: recall_at_1000
value: 66.495
- type: recall_at_3
value: 16.901
- type: recall_at_5
value: 19.807
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.364
- type: map_at_10
value: 17.772
- type: map_at_100
value: 18.659
- type: map_at_1000
value: 18.861
- type: map_at_3
value: 16.659
- type: map_at_5
value: 17.174
- type: mrr_at_1
value: 16.996
- type: mrr_at_10
value: 21.687
- type: mrr_at_100
value: 22.313
- type: mrr_at_1000
value: 22.422
- type: mrr_at_3
value: 20.652
- type: mrr_at_5
value: 21.146
- type: ndcg_at_1
value: 16.996
- type: ndcg_at_10
value: 21.067
- type: ndcg_at_100
value: 24.829
- type: ndcg_at_1000
value: 28.866999999999997
- type: ndcg_at_3
value: 19.466
- type: ndcg_at_5
value: 19.993
- type: precision_at_1
value: 16.996
- type: precision_at_10
value: 4.071000000000001
- type: precision_at_100
value: 0.9329999999999999
- type: precision_at_1000
value: 0.183
- type: precision_at_3
value: 9.223
- type: precision_at_5
value: 6.4030000000000005
- type: recall_at_1
value: 13.364
- type: recall_at_10
value: 25.976
- type: recall_at_100
value: 44.134
- type: recall_at_1000
value: 73.181
- type: recall_at_3
value: 20.503
- type: recall_at_5
value: 22.409000000000002
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.151
- type: map_at_10
value: 9.155000000000001
- type: map_at_100
value: 9.783999999999999
- type: map_at_1000
value: 9.879
- type: map_at_3
value: 7.825
- type: map_at_5
value: 8.637
- type: mrr_at_1
value: 5.915
- type: mrr_at_10
value: 10.34
- type: mrr_at_100
value: 10.943999999999999
- type: mrr_at_1000
value: 11.033
- type: mrr_at_3
value: 8.934000000000001
- type: mrr_at_5
value: 9.812
- type: ndcg_at_1
value: 5.915
- type: ndcg_at_10
value: 11.561
- type: ndcg_at_100
value: 14.971
- type: ndcg_at_1000
value: 17.907999999999998
- type: ndcg_at_3
value: 8.896999999999998
- type: ndcg_at_5
value: 10.313
- type: precision_at_1
value: 5.915
- type: precision_at_10
value: 2.1069999999999998
- type: precision_at_100
value: 0.414
- type: precision_at_1000
value: 0.074
- type: precision_at_3
value: 4.128
- type: precision_at_5
value: 3.327
- type: recall_at_1
value: 5.151
- type: recall_at_10
value: 17.874000000000002
- type: recall_at_100
value: 34.174
- type: recall_at_1000
value: 56.879999999999995
- type: recall_at_3
value: 10.732999999999999
- type: recall_at_5
value: 14.113000000000001
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.101
- type: map_at_10
value: 5.434
- type: map_at_100
value: 6.267
- type: map_at_1000
value: 6.418
- type: map_at_3
value: 4.377000000000001
- type: map_at_5
value: 4.841
- type: mrr_at_1
value: 7.166
- type: mrr_at_10
value: 12.012
- type: mrr_at_100
value: 13.144
- type: mrr_at_1000
value: 13.229
- type: mrr_at_3
value: 9.826
- type: mrr_at_5
value: 10.921
- type: ndcg_at_1
value: 7.166
- type: ndcg_at_10
value: 8.687000000000001
- type: ndcg_at_100
value: 13.345
- type: ndcg_at_1000
value: 16.915
- type: ndcg_at_3
value: 6.276
- type: ndcg_at_5
value: 7.013
- type: precision_at_1
value: 7.166
- type: precision_at_10
value: 2.9250000000000003
- type: precision_at_100
value: 0.771
- type: precision_at_1000
value: 0.13999999999999999
- type: precision_at_3
value: 4.734
- type: precision_at_5
value: 3.8830000000000005
- type: recall_at_1
value: 3.101
- type: recall_at_10
value: 11.774999999999999
- type: recall_at_100
value: 28.819
- type: recall_at_1000
value: 49.886
- type: recall_at_3
value: 5.783
- type: recall_at_5
value: 7.692
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.758
- type: map_at_10
value: 5.507
- type: map_at_100
value: 7.1819999999999995
- type: map_at_1000
value: 7.652
- type: map_at_3
value: 4.131
- type: map_at_5
value: 4.702
- type: mrr_at_1
value: 28.499999999999996
- type: mrr_at_10
value: 37.693
- type: mrr_at_100
value: 38.657000000000004
- type: mrr_at_1000
value: 38.704
- type: mrr_at_3
value: 34.792
- type: mrr_at_5
value: 36.417
- type: ndcg_at_1
value: 20.625
- type: ndcg_at_10
value: 14.771999999999998
- type: ndcg_at_100
value: 16.821
- type: ndcg_at_1000
value: 21.546000000000003
- type: ndcg_at_3
value: 16.528000000000002
- type: ndcg_at_5
value: 15.573
- type: precision_at_1
value: 28.499999999999996
- type: precision_at_10
value: 12.25
- type: precision_at_100
value: 3.7600000000000002
- type: precision_at_1000
value: 0.86
- type: precision_at_3
value: 19.167
- type: precision_at_5
value: 16.25
- type: recall_at_1
value: 2.758
- type: recall_at_10
value: 9.164
- type: recall_at_100
value: 21.022
- type: recall_at_1000
value: 37.053999999999995
- type: recall_at_3
value: 5.112
- type: recall_at_5
value: 6.413
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 28.53554681148413
- type: mrr
value: 29.290078704990325
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 76.52926207453477
- type: cos_sim_spearman
value: 68.98528351149498
- type: euclidean_pearson
value: 73.7744559091218
- type: euclidean_spearman
value: 69.03481995814735
- type: manhattan_pearson
value: 73.72818267270651
- type: manhattan_spearman
value: 69.00576442086793
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 61.71540153163407
- type: cos_sim_spearman
value: 58.502746406116614
- type: euclidean_pearson
value: 60.82817999438477
- type: euclidean_spearman
value: 58.988494433752756
- type: manhattan_pearson
value: 60.87147859170236
- type: manhattan_spearman
value: 59.03527382025516
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 72.89990498692094
- type: cos_sim_spearman
value: 74.03028513377879
- type: euclidean_pearson
value: 73.8252088833803
- type: euclidean_spearman
value: 74.15554246478399
- type: manhattan_pearson
value: 73.80947397334666
- type: manhattan_spearman
value: 74.13117958176566
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 70.67974206005906
- type: cos_sim_spearman
value: 66.18263558486296
- type: euclidean_pearson
value: 69.5048876024341
- type: euclidean_spearman
value: 66.36380457878391
- type: manhattan_pearson
value: 69.4895372451589
- type: manhattan_spearman
value: 66.36941569935124
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 73.99856913569187
- type: cos_sim_spearman
value: 75.54712054246464
- type: euclidean_pearson
value: 74.55692573876115
- type: euclidean_spearman
value: 75.34499056740096
- type: manhattan_pearson
value: 74.59342318869683
- type: manhattan_spearman
value: 75.35708317926819
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 72.3343670787494
- type: cos_sim_spearman
value: 73.7136650302399
- type: euclidean_pearson
value: 73.86004257913046
- type: euclidean_spearman
value: 73.9557418048638
- type: manhattan_pearson
value: 73.78919091538661
- type: manhattan_spearman
value: 73.86316425954108
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 79.08159601556619
- type: cos_sim_spearman
value: 80.13910828685532
- type: euclidean_pearson
value: 79.39197806617453
- type: euclidean_spearman
value: 79.85692277871196
- type: manhattan_pearson
value: 79.32452246324705
- type: manhattan_spearman
value: 79.70120373587193
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 62.29720207747786
- type: cos_sim_spearman
value: 65.65260681394685
- type: euclidean_pearson
value: 64.49002165983158
- type: euclidean_spearman
value: 65.25917651158736
- type: manhattan_pearson
value: 64.49981108236335
- type: manhattan_spearman
value: 65.20426825202405
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 71.1871068550574
- type: cos_sim_spearman
value: 71.40167034949341
- type: euclidean_pearson
value: 72.2373684855404
- type: euclidean_spearman
value: 71.90255429812984
- type: manhattan_pearson
value: 72.23173532049509
- type: manhattan_spearman
value: 71.87843489689064
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 68.65000574464773
- type: mrr
value: 88.29363084265044
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 40.76107749144358
- type: mrr
value: 41.03689202953908
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 28.68520527813894
- type: cos_sim_spearman
value: 29.017620841627433
- type: dot_pearson
value: 29.25380949876322
- type: dot_spearman
value: 29.33885250837327
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
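For the semantic-search use case mentioned above, a minimal sketch could rank a few passages against a query by cosine similarity (the query and passages below are made up; replace `{MODEL_NAME}` with the actual checkpoint name):
```python
from sentence_transformers import SentenceTransformer, util
model = SentenceTransformer('{MODEL_NAME}')
query = "How do I install the library?"
passages = [
    "Run pip install -U sentence-transformers to install the package.",
    "The weather is nice today.",
]
# Encode query and passages into the dense vector space, then rank by cosine similarity
query_emb = model.encode(query, convert_to_tensor=True)
passage_embs = model.encode(passages, convert_to_tensor=True)
scores = util.cos_sim(query_emb, passage_embs)[0]
for passage, score in sorted(zip(passages, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.4f}\t{passage}")
```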
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
| 35,450 | [
[
-0.0220489501953125,
-0.053253173828125,
0.01459503173828125,
0.0279541015625,
-0.027862548828125,
-0.033782958984375,
-0.0130767822265625,
0.00539398193359375,
0.0214385986328125,
0.03094482421875,
-0.042633056640625,
-0.03717041015625,
-0.04986572265625,
-... |
JosephusCheung/Qwen-LLaMAfied-7B-Chat | 2023-10-22T18:39:39.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"qwen",
"llama-2",
"en",
"zh",
"license:gpl-3.0",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | JosephusCheung | null | null | JosephusCheung/Qwen-LLaMAfied-7B-Chat | 99 | 3,321 | transformers | 2023-08-04T08:43:39 | ---
language:
- en
- zh
tags:
- qwen
- llama
- llama-2
license: gpl-3.0
---
NEW VERSIONS: [https://huggingface.co/CausalLM/14B](https://huggingface.co/CausalLM/14B)
This is the LLaMAfied replica of [Qwen/Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat) (Original Version before 25.09.2023), recalibrated to fit the original LLaMA/LLaMA-2-like model structure.
You can use LlamaForCausalLM for model inference, just as with LLaMA/LLaMA-2 models (using a GPT2Tokenizer converted from the original tiktoken tokenizer by [vonjack](https://huggingface.co/vonjack)).
The model has been edited to be white-labelled, meaning the model will no longer call itself a Qwen.
Up until now, the model has undergone numerical alignment of weights and preliminary reinforcement learning in order to align with the original model. Some errors and outdated knowledge have been addressed through model editing methods. This model remains completely equivalent to the original version, without having any dedicated supervised finetuning on downstream tasks or other extensive conversation datasets.
PROMPT FORMAT: [chatml](https://github.com/openai/openai-python/blob/main/chatml.md)
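A rough sketch of inference with the ChatML format might look as follows (the system prompt, question and generation settings are illustrative assumptions, not values prescribed by the author):
```python
from transformers import AutoTokenizer, LlamaForCausalLM
model_id = "JosephusCheung/Qwen-LLaMAfied-7B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = LlamaForCausalLM.from_pretrained(model_id)
# ChatML-style prompt, as linked above
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhat is the capital of France?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```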
CURRENT MMLU: 53.48
CURRENT CEval (val): 54.13
```
MMLU - stem ACC: 46.40 Humanities ACC: 47.61 other ACC: 61.31 social ACC: 61.78 AVERAGE ACC:53.48
CEval (val) - STEM acc: 45.28 Social Science acc: 66.19 Humanities acc: 58.76 Other acc: 54.62 Hard acc:28.64 AVERAGE acc:54.13
```
Issue: Compared to the original Qwen-7B-Chat, which scores 53.90 on MMLU and 54.18 on CEval (val), our scores dropped slightly [-0.42 on MMLU, -0.05 on CEval (val)] due to insufficient realignment.
这是 [通义千问 Qwen/Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat) (在 25.09.2023 之前的原始版本) 的 LLaMA 化版本,经过重新校准以适应原始的类似 LLaMA/LLaMA-2 的模型结构。
您可以使用 LlamaCausalLM 进行模型推理,和 LLaMA/LLaMA-2 保持一致(使用由 [vonjack](https://huggingface.co/vonjack) 从原始 tiktoken 转换而来的 GPT2Tokenizer 分词器)。
模型已经被编辑实现白标化,不再自称通义千问。
到目前为止,该模型已经进行了权重的数值对齐和初步的强化学习,以与原始模型保持一致。 一些错误和过时的知识已通过模型编辑方法得到解决。 该模型与原始版本完全等效,尚未对下游任务或其他广泛的对话数据集进行任何专门的监督微调。
PROMPT 格式: [chatml](https://github.com/openai/openai-python/blob/main/chatml.md)
当前的 MMLU: 53.48
当前的 CEval (val): 54.13
```
MMLU - stem ACC: 46.40 Humanities ACC: 47.61 other ACC: 61.31 social ACC: 61.78 AVERAGE ACC:53.48
CEval (val) - STEM acc: 45.28 Social Science acc: 66.19 Humanities acc: 58.76 Other acc: 54.62 Hard acc:28.64 AVERAGE acc:54.13
```
问题:相比原本的 Qwen-7B-Chat 的 MMLU 分数 53.90 和 CEval (val) 分数 54.18,由于不够充分的重新对齐,分数都略有下降 [-0.42 in MMLU, -0.05 in CEval (val)]。 | 2,548 | [
[
-0.0246734619140625,
-0.0618896484375,
0.01033782958984375,
0.0208740234375,
-0.02850341796875,
-0.00760650634765625,
-0.01251220703125,
-0.049163818359375,
0.031768798828125,
0.024139404296875,
-0.047393798828125,
-0.0343017578125,
-0.041473388671875,
0.002... |
Yntec/a-ZovyaRPGArtistV2VAE | 2023-08-03T17:08:46.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"Zovya",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Yntec | null | null | Yntec/a-ZovyaRPGArtistV2VAE | 0 | 3,320 | diffusers | 2023-08-03T04:39:20 | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- Zovya
---
# A-Zovya RPG Artist Tools V2 Art
This is the A-Zovya RPG Artist Tools V2 Art model with the Color 101 VAE baked in.
Original page:
https://civitai.com/models/8124?modelVersionId=42992
| 332 | [
[
0.006801605224609375,
0.0038890838623046875,
0.0310821533203125,
0.0217742919921875,
-0.01486968994140625,
-0.01064300537109375,
0.050628662109375,
-0.0036067962646484375,
0.032318115234375,
0.056182861328125,
-0.0771484375,
-0.03546142578125,
-0.008316040039062... |
cross-encoder/msmarco-MiniLM-L12-en-de-v1 | 2021-08-05T08:40:18.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | cross-encoder | null | null | cross-encoder/msmarco-MiniLM-L12-en-de-v1 | 2 | 3,318 | transformers | 2022-03-02T23:29:05 | ---
license: apache-2.0
---
# Cross-Encoder for MS MARCO - EN-DE
This is a cross-lingual Cross-Encoder model for EN-DE that can be used for passage re-ranking. It was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.
The model can be used for Information Retrieval: See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html).
The training code is available in this repository, see `train_script.py`.
## Usage with SentenceTransformers
When you have [SentenceTransformers](https://www.sbert.net/) installed, you can use the model like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('model_name', max_length=512)
query = 'How many people live in Berlin?'
docs = ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.']
pairs = [(query, doc) for doc in docs]
scores = model.predict(pairs)
```
## Usage with Transformers
With the transformers library, you can use the model like this:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('model_name')
tokenizer = AutoTokenizer.from_pretrained('model_name')
features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
print(scores)
```
## Performance
The performance was evaluated on three datasets:
- **TREC-DL19 EN-EN**: The original [TREC 2019 Deep Learning Track](https://microsoft.github.io/msmarco/TREC-Deep-Learning-2019.html): Given an English query and 1000 documents (retrieved by BM25 lexical search), rank the documents according to their relevance. We compute NDCG@10. BM25 achieves a score of 45.46, a perfect re-ranker can achieve a score of 95.47.
- **TREC-DL19 DE-EN**: The English queries of TREC-DL19 have been translated by a German native speaker to German. We rank the German queries versus the English passages from the original TREC-DL19 setup. We compute NDCG@10.
- **GermanDPR DE-DE**: The [GermanDPR](https://www.deepset.ai/germanquad) dataset provides German queries and German passages from Wikipedia. We indexed the 2.8 Million paragraphs from German Wikipedia and retrieved for each query the top 100 most relevant passages using BM25 lexical search with Elasticsearch. We compute MRR@10. BM25 achieves a score of 35.85, a perfect re-ranker can achieve a score of 76.27.
We also check the performance of bi-encoders using the same evaluation: The retrieved documents from BM25 lexical search are re-ranked using query & passage embeddings with cosine-similarity. Bi-Encoders can also be used for end-to-end semantic search.
| Model-Name | TREC-DL19 EN-EN | TREC-DL19 DE-EN | GermanDPR DE-DE | Docs / Sec |
| ------------- |:-------------:| :-----: | :---: | :----: |
| BM25 | 45.46 | - | 35.85 | -|
| **Cross-Encoder Re-Rankers** | | | |
| [cross-encoder/msmarco-MiniLM-L6-en-de-v1](https://huggingface.co/cross-encoder/msmarco-MiniLM-L6-en-de-v1) | 72.43 | 65.53 | 46.77 | 1600 |
| [cross-encoder/msmarco-MiniLM-L12-en-de-v1](https://huggingface.co/cross-encoder/msmarco-MiniLM-L12-en-de-v1) | 72.94 | 66.07 | 49.91 | 900 |
| [svalabs/cross-electra-ms-marco-german-uncased](https://huggingface.co/svalabs/cross-electra-ms-marco-german-uncased) (DE only) | - | - | 53.67 | 260 |
| [deepset/gbert-base-germandpr-reranking](https://huggingface.co/deepset/gbert-base-germandpr-reranking) (DE only) | - | - | 53.59 | 260 |
| **Bi-Encoders (re-ranking)** | | | |
| [sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned](https://huggingface.co/sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned) | 63.38 | 58.28 | 37.88 | 940 |
| [sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-trained-scratch](https://huggingface.co/sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-trained-scratch) | 65.51 | 58.69 | 38.32 | 940 |
| [svalabs/bi-electra-ms-marco-german-uncased](https://huggingface.co/svalabs/bi-electra-ms-marco-german-uncased) (DE only) | - | - | 34.31 | 450 |
| [deepset/gbert-base-germandpr-question_encoder](https://huggingface.co/deepset/gbert-base-germandpr-question_encoder) (DE only) | - | - | 42.55 | 450 |
Note: Docs / Sec gives the number of (query, document) pairs we can re-rank within a second on a V100 GPU.
| 4,796 | [
[
-0.042816162109375,
-0.055023193359375,
0.0199432373046875,
0.0091552734375,
-0.0178680419921875,
0.005641937255859375,
-0.034576416015625,
-0.030029296875,
0.0306396484375,
0.01076507568359375,
-0.03265380859375,
-0.06475830078125,
-0.058135986328125,
0.023... |
Yntec/level4 | 2023-09-21T03:14:00.000Z | [
"diffusers",
"Photorealistic",
"Beautiful",
"Fantasy",
"AreThoseLevel4Plates",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Yntec | null | null | Yntec/level4 | 0 | 3,314 | diffusers | 2023-09-20T22:17:37 | ---
library_name: diffusers
pipeline_tag: text-to-image
license: creativeml-openrail-m
tags:
- Photorealistic
- Beautiful
- Fantasy
- AreThoseLevel4Plates
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# level 4 v3
Original page: https://civitai.com/models/17449?modelVersionId=21896
Sample and prompt:

Pretty cute girl. Detailed coffee table in the vaporwave mid century modern livingroom. highly detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration, artgerm, tomasz alen kopera, peter mohrbacher, little girl, donato giancola, joseph christian leyendecker, boris vallejo, wlop | 767 | [
[
-0.02783203125,
-0.03936767578125,
0.033477783203125,
0.0208740234375,
-0.0069732666015625,
-0.0094451904296875,
0.036376953125,
-0.035675048828125,
0.01290130615234375,
0.034942626953125,
-0.052154541015625,
-0.060821533203125,
-0.01140594482421875,
-0.0022... |
bert-large-cased-whole-word-masking-finetuned-squad | 2023-04-06T13:42:06.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"bert",
"question-answering",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | question-answering | null | null | null | bert-large-cased-whole-word-masking-finetuned-squad | 1 | 3,310 | transformers | 2022-03-02T23:29:04 | ---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# BERT large model (cased) whole word masking finetuned on SQuAD
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is cased: it makes a difference between english and English.
Differently to other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same.
The training is identical -- each masked WordPiece token is predicted independently.
After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. See below for more information regarding this fine-tuning.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
This model has the following configuration:
- 24-layer
- 1024 hidden dimension
- 16 attention heads
- 336M parameters.
## Intended uses & limitations
This model should be used as a question-answering model. You may use it in a question answering pipeline, or use it to output raw results given a query and a context. You may see other use cases in the [task summary](https://huggingface.co/transformers/task_summary.html#extractive-question-answering) of the transformers documentation.
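For instance, with the question-answering pipeline (the question/context pair below is a made-up illustration):
```python
from transformers import pipeline
qa_pipeline = pipeline(
    "question-answering",
    model="bert-large-cased-whole-word-masking-finetuned-squad",
)
# Made-up question/context pair for illustration
result = qa_pipeline(
    question="Which name is also used to describe the Amazon rainforest in English?",
    context="The Amazon rainforest, also known in English as Amazonia, covers most of the Amazon basin of South America.",
)
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': 'Amazonia'}
```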
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 30,000 (as this is a cased model, the texts are not lowercased). The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### Fine-tuning
After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. In order to reproduce the training, you may use the following command:
```
python -m torch.distributed.launch --nproc_per_node=8 ./examples/question-answering/run_qa.py \
--model_name_or_path bert-large-cased-whole-word-masking \
--dataset_name squad \
--do_train \
--do_eval \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir ./examples/models/wwm_cased_finetuned_squad/ \
--per_device_eval_batch_size=3 \
  --per_device_train_batch_size=3
```
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | 6,119 | [
[
-0.038299560546875,
-0.060089111328125,
0.01715087890625,
0.0249481201171875,
-0.0270233154296875,
-0.0017652511596679688,
-0.0220184326171875,
-0.03851318359375,
0.0190887451171875,
0.039306640625,
-0.06341552734375,
-0.026123046875,
-0.04345703125,
0.00862... |
vaddagonivyshnavi/my-pet-cat | 2023-11-03T11:06:05.000Z | [
"diffusers",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us",
"has_space"
] | text-to-image | vaddagonivyshnavi | null | null | vaddagonivyshnavi/my-pet-cat | 0 | 3,308 | diffusers | 2023-11-03T11:00:56 | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-cat Dreambooth model trained by vaddagonivyshnavi following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: MRCEW-235
Sample pictures of this concept:
.jpg)
| 406 | [
[
-0.06036376953125,
-0.0197296142578125,
0.0224761962890625,
0.00571441650390625,
-0.0157928466796875,
0.0447998046875,
0.0275726318359375,
-0.02996826171875,
0.060791015625,
0.040679931640625,
-0.04443359375,
-0.00656890869140625,
-0.00893402099609375,
0.017... |
Lykon/dreamshaper-8-inpainting | 2023-08-26T16:47:22.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"inpainting",
"art",
"artistic",
"anime",
"dreamshaper",
"en",
"license:creativeml-openrail-m",
"diffusers:StableDiffusionInpaintPipeline",
"region:us"
] | null | Lykon | null | null | Lykon/dreamshaper-8-inpainting | 3 | 3,307 | diffusers | 2023-08-26T16:47:21 | ---
language:
- en
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- inpainting
- art
- artistic
- diffusers
- anime
- dreamshaper
duplicated_from: lykon-models/dreamshaper-8-inpainting
---
# Dreamshaper 8 inpainting
`lykon-models/dreamshaper-8-inpainting` is a Stable Diffusion Inpainting model that has been fine-tuned on [runwayml/stable-diffusion-inpainting](https://huggingface.co/runwayml/stable-diffusion-inpainting).
Please consider supporting me:
- on [Patreon](https://www.patreon.com/Lykon275)
- or [buy me a coffee](https://snipfeed.co/lykon)
## Diffusers
For more general information on how to run inpainting models with 🧨 Diffusers, see [the docs](https://huggingface.co/docs/diffusers/using-diffusers/inpaint).
1. Installation
```
pip install diffusers transformers accelerate
```
2. Run
```py
from diffusers import AutoPipelineForInpainting, DEISMultistepScheduler
import torch
from diffusers.utils import load_image
pipe = AutoPipelineForInpainting.from_pretrained('lykon-models/dreamshaper-8-inpainting', torch_dtype=torch.float16, variant="fp16")
pipe.scheduler = DEISMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
image = load_image(img_url)
mask_image = load_image(mask_url)
prompt = "a majestic tiger sitting on a park bench"
generator = torch.manual_seed(33)
image = pipe(prompt, image=image, mask_image=mask_image, generator=generator, num_inference_steps=25).images[0]
image.save("./image.png")
```

## Notes
- **Version 8** focuses on improving what V7 started. Might be harder to do photorealism compared to realism focused models, as it might be hard to do anime compared to anime focused models, but it can do both pretty well if you're skilled enough. Check the examples!
- **Version 7** improves lora support, NSFW and realism. If you're interested in "absolute" realism, try AbsoluteReality.
- **Version 6** adds more lora support and more style in general. It should also be better at generating directly at 1024 height (but be careful with it). 6.x are all improvements.
- **Version 5** is the best at photorealism and has noise offset.
- **Version 4** is much better with anime (can do them with no LoRA) and booru tags. It might be harder to control if you're used to caption style, so you might still want to use version 3.31. V4 is also better with eyes at lower resolutions. Overall is like a "fix" of V3 and shouldn't be too much different.
| 2,762 | [
[
-0.027099609375,
-0.03448486328125,
0.040252685546875,
0.03558349609375,
-0.0260162353515625,
0.006439208984375,
0.0153045654296875,
-0.049407958984375,
0.031036376953125,
0.0472412109375,
-0.033843994140625,
-0.020050048828125,
-0.03411865234375,
-0.0060424... |
diffusers/ddpm_dummy | 2023-02-08T12:31:14.000Z | [
"transformers",
"hf_diffuse",
"endpoints_compatible",
"has_space",
"region:us"
] | null | diffusers | null | null | diffusers/ddpm_dummy | 0 | 3,305 | transformers | 2022-05-31T12:37:35 | ---
tags:
- hf_diffuse
---
# Dummy diffusion model following architecture of https://github.com/lucidrains/denoising-diffusion-pytorch
Run the model as follows:
```python
from diffusers import UNetModel, GaussianDiffusion
import torch
# 1. Load model
unet = UNetModel.from_pretrained("fusing/ddpm_dummy")
# 2. Do one denoising step with model
batch_size, num_channels, height, width = 1, 3, 32, 32
dummy_noise = torch.ones((batch_size, num_channels, height, width))
time_step = torch.tensor([10])
image = unet(dummy_noise, time_step)
# 3. Load sampler
sampler = GaussianDiffusion.from_config("fusing/ddpm_dummy")
# 4. Sample image from sampler passing the model
image = sampler.sample(unet, batch_size=1)
print(image)
``` | 730 | [
[
-0.0159454345703125,
-0.0634765625,
0.032958984375,
0.037384033203125,
-0.0245819091796875,
-0.01355743408203125,
0.0155487060546875,
0.00838470458984375,
-0.00823974609375,
0.0270233154296875,
-0.0305023193359375,
-0.034149169921875,
-0.032379150390625,
-0.... |
facebook/mms-1b | 2023-06-05T10:23:40.000Z | [
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"mms",
"ab",
"af",
"ak",
"am",
"ar",
"as",
"av",
"ay",
"az",
"ba",
"bm",
"be",
"bn",
"bi",
"bo",
"sh",
"br",
"bg",
"ca",
"cs",
"ce",
"cv",
"ku",
"cy",
"da",
"de",
"dv",
"dz",
"el",
"en",
"eo",... | null | facebook | null | null | facebook/mms-1b | 22 | 3,303 | transformers | 2023-05-22T19:39:11 | ---
tags:
- mms
language:
- ab
- af
- ak
- am
- ar
- as
- av
- ay
- az
- ba
- bm
- be
- bn
- bi
- bo
- sh
- br
- bg
- ca
- cs
- ce
- cv
- ku
- cy
- da
- de
- dv
- dz
- el
- en
- eo
- et
- eu
- ee
- fo
- fa
- fj
- fi
- fr
- fy
- ff
- ga
- gl
- gn
- gu
- zh
- ht
- ha
- he
- hi
- sh
- hu
- hy
- ig
- ia
- ms
- is
- it
- jv
- ja
- kn
- ka
- kk
- kr
- km
- ki
- rw
- ky
- ko
- kv
- lo
- la
- lv
- ln
- lt
- lb
- lg
- mh
- ml
- mr
- ms
- mk
- mg
- mt
- mn
- mi
- my
- zh
- nl
- 'no'
- 'no'
- ne
- ny
- oc
- om
- or
- os
- pa
- pl
- pt
- ms
- ps
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- ro
- rn
- ru
- sg
- sk
- sl
- sm
- sn
- sd
- so
- es
- sq
- su
- sv
- sw
- ta
- tt
- te
- tg
- tl
- th
- ti
- ts
- tr
- uk
- ms
- vi
- wo
- xh
- ms
- yo
- ms
- zu
- za
license: cc-by-nc-4.0
datasets:
- google/fleurs
metrics:
- wer
---
# Massively Multilingual Speech (MMS) - 1B
Facebook's MMS counting *1 billion* parameters.
MMS is Facebook AI's massive multilingual pretrained model for speech ("MMS").
It is pretrained with [Wav2Vec2's self-supervised training objective](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) on about 500,000 hours of speech data in over 1,400 languages.
When using the model make sure that your speech input is sampled at 16kHz.
**Note**: This model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Translation, or Classification. Check out the [**How to finetune**](#how-to-finetune) section or [**this blog**](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for more information about ASR.
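As a minimal sketch of loading the pretrained backbone and extracting features from 16kHz audio (the random waveform stands in for real speech, and we assume the checkpoint ships the usual Wav2Vec2 feature-extractor config):
```python
import torch
from transformers import AutoFeatureExtractor, AutoModel
model_id = "facebook/mms-1b"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
# Stand-in for one second of real speech sampled at 16kHz
waveform = torch.randn(16000).numpy()
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, frames, hidden_size)
```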
## Table Of Content
- [How to Finetune](#how-to-finetune)
- [Model details](#model-details)
- [Additional links](#additional-links)
## How to finetune
Coming soon...
## Model details
- **Developed by:** Vineel Pratap et al.
- **Model type:** Multi-Lingual Automatic Speech Recognition model
- **Language(s):** 1000+ languages
- **License:** CC-BY-NC 4.0 license
- **Num parameters**: 1 billion
- **Cite as:**
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
## Additional Links
- [Blog post]( )
- [Transformers documentation](https://huggingface.co/docs/transformers/main/en/model_doc/mms).
- [Paper](https://arxiv.org/abs/2305.13516)
- [GitHub Repository](https://github.com/facebookresearch/fairseq/tree/main/examples/mms#asr)
- [Other **MMS** checkpoints](https://huggingface.co/models?other=mms)
- MMS ASR fine-tuned checkpoints:
- [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all)
- [facebook/mms-1b-l1107](https://huggingface.co/facebook/mms-1b-l1107)
- [facebook/mms-1b-fl102](https://huggingface.co/facebook/mms-1b-fl102)
- [Official Space](https://huggingface.co/spaces/facebook/MMS)
| 3,166 | [
[
-0.049285888671875,
-0.036651611328125,
0.00800323486328125,
0.0311431884765625,
-0.0035991668701171875,
0.00420379638671875,
-0.007080078125,
-0.0300445556640625,
0.0159454345703125,
0.0240020751953125,
-0.083251953125,
-0.024078369140625,
-0.040802001953125,
... |
tomaarsen/span-marker-bert-tiny-conll03 | 2023-09-12T19:49:56.000Z | [
"span-marker",
"pytorch",
"safetensors",
"token-classification",
"ner",
"named-entity-recognition",
"en",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"region:us"
] | token-classification | tomaarsen | null | null | tomaarsen/span-marker-bert-tiny-conll03 | 1 | 3,300 | span-marker | 2023-04-05T07:45:11 | ---
language:
- en
license: apache-2.0
library_name: span-marker
tags:
- span-marker
- token-classification
- ner
- named-entity-recognition
datasets:
- conll2003
metrics:
- f1
- recall
- precision
pipeline_tag: token-classification
widget:
- text: Amelia Earhart flew her single engine Lockheed Vega 5B across the Atlantic
to Paris.
example_title: Amelia Earhart
base_model: prajjwal1/bert-tiny
model-index:
- name: SpanMarker w. bert-tiny on CoNLL03 by Tom Aarsen
results:
- task:
type: token-classification
name: Named Entity Recognition
dataset:
name: CoNLL03
type: conll2003
split: test
revision: 01ad4ad271976c5258b9ed9b910469a806ff3288
metrics:
- type: f1
value: 0.8093994778067886
name: F1
- type: precision
value: 0.8546048601184398
name: Precision
- type: recall
value: 0.7687362233651727
name: Recall
---
# SpanMarker for Named Entity Recognition
This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model that can be used for Named Entity Recognition. In particular, this SpanMarker model uses [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) as the underlying encoder.
## Note
This model is primarily used for efficient tests on the [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) GitHub repository.
## Usage
To use this model for inference, first install the `span_marker` library:
```bash
pip install span_marker
```
You can then run inference with this model like so:
```python
from span_marker import SpanMarkerModel
# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("tomaarsen/span-marker-bert-tiny-conll03")
# Run inference
entities = model.predict("Amelia Earhart flew her single engine Lockheed Vega 5B across the Atlantic to Paris.")
```
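The returned predictions can then be inspected directly (each entry describes one detected entity; the exact dictionary keys may vary between span_marker versions):
```python
# Print every detected entity with its span, label and confidence score
for entity in entities:
    print(entity)
```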
See the [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) repository for documentation and additional information on this library. | 1,966 | [
[
-0.0300140380859375,
-0.048370361328125,
0.0311126708984375,
0.0194091796875,
-0.0251922607421875,
-0.0164337158203125,
-0.008209228515625,
-0.049346923828125,
0.02490234375,
0.0197296142578125,
-0.04608154296875,
-0.0195159912109375,
-0.040740966796875,
0.0... |
Sukumarachari2580/my-fav-bird | 2023-11-04T04:37:33.000Z | [
"diffusers",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Sukumarachari2580 | null | null | Sukumarachari2580/my-fav-bird | 0 | 3,298 | diffusers | 2023-11-04T04:30:42 | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-fav-bird Dreambooth model trained by Sukumarachari2580 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: CVR-142
Sample pictures of this concept:
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
| 829 | [
[
-0.05523681640625,
-0.018768310546875,
-0.0011968612670898438,
0.017730712890625,
-0.02215576171875,
0.0138702392578125,
0.0302276611328125,
-0.0283966064453125,
0.0428466796875,
0.01702880859375,
-0.0313720703125,
-0.027130126953125,
-0.0335693359375,
0.023... |
Yntec/DreamWorld | 2023-10-15T07:11:12.000Z | [
"diffusers",
"Anime",
"Disney",
"Pixar",
"DucHaiten",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Yntec | null | null | Yntec/DreamWorld | 1 | 3,296 | diffusers | 2023-10-15T05:20:19 | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Anime
- Disney
- Pixar
- DucHaiten
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# DucHaitenDreamWorld v1.3
No-ema version of this model.
If you like this content, support DucHaiten at: https://linktr.ee/Duc_Haiten
Sample and prompts:


textured EYES, Portrait of Pretty CUTE LITTLE Girl dressed of coke clothes countryside country style country house fantasy character portrait, 1949, cinematic lighting. hayao miyazaki on canvas By design key visual and rossdraws and ross tran
Original page: https://civitai.com/models/7039?modelVersionId=8275 | 915 | [
[
-0.04156494140625,
-0.059783935546875,
0.03033447265625,
0.026519775390625,
-0.04913330078125,
-0.032073974609375,
0.01450347900390625,
-0.040069580078125,
0.0777587890625,
0.06622314453125,
-0.08673095703125,
-0.04827880859375,
-0.038665771484375,
0.0065155... |
timm/beitv2_base_patch16_224.in1k_ft_in22k | 2023-05-08T23:34:58.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-22k",
"arxiv:2208.06366",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/beitv2_base_patch16_224.in1k_ft_in22k | 0 | 3,294 | timm | 2022-12-23T02:33:15 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-22k
---
# Model card for beitv2_base_patch16_224.in1k_ft_in22k
A BEiT-v2 image classification model. Trained on ImageNet-1k with self-supervised masked image modelling (MIM) using a VQ-KD encoder as a visual tokenizer (via OpenAI CLIP B/16 teacher). Fine-tuned on ImageNet-22k.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 102.6
- GMACs: 17.6
- Activations (M): 23.9
- Image size: 224 x 224
- **Papers:**
- BEiT v2: Masked Image Modeling with Vector-Quantized Visual Tokenizers: https://arxiv.org/abs/2208.06366
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-22k
- **Original:** https://github.com/microsoft/unilm/tree/master/beit2
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('beitv2_base_patch16_224.in1k_ft_in22k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'beitv2_base_patch16_224.in1k_ft_in22k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 197, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{peng2022beit,
title={Beit v2: Masked image modeling with vector-quantized visual tokenizers},
author={Peng, Zhiliang and Dong, Li and Bao, Hangbo and Ye, Qixiang and Wei, Furu},
journal={arXiv preprint arXiv:2208.06366},
year={2022}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 3,758 | [
[
-0.0307464599609375,
-0.0290679931640625,
-0.0027675628662109375,
0.007904052734375,
-0.04058837890625,
-0.014129638671875,
-0.0070648193359375,
-0.038238525390625,
0.01248931884765625,
0.0291290283203125,
-0.0285491943359375,
-0.054534912109375,
-0.055755615234... |
apple/mobilevitv2-1.0-imagenet1k-256 | 2023-07-12T09:27:31.000Z | [
"transformers",
"pytorch",
"mobilevitv2",
"vision",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2206.02680",
"license:other",
"endpoints_compatible",
"region:us"
] | image-classification | apple | null | null | apple/mobilevitv2-1.0-imagenet1k-256 | 3 | 3,291 | transformers | 2023-06-05T14:46:34 | ---
datasets:
- imagenet-1k
library_name: transformers
pipeline_tag: image-classification
license: other
tags:
- vision
- image-classification
---
# MobileViTv2 (mobilevitv2-1.0-imagenet1k-256)
<!-- Provide a quick summary of what the model is/does. -->
MobileViTv2 is the second version of MobileViT. It was proposed in [Separable Self-attention for Mobile Vision Transformers](https://arxiv.org/abs/2206.02680) by Sachin Mehta and Mohammad Rastegari, and first released in [this](https://github.com/apple/ml-cvnets) repository. The license used is [Apple sample code license](https://github.com/apple/ml-cvnets/blob/main/LICENSE).
Disclaimer: The team releasing MobileViT did not write a model card for this model so this model card has been written by the Hugging Face team.
### Model Description
<!-- Provide a longer summary of what this model is. -->
MobileViTv2 is constructed by replacing the multi-headed self-attention in MobileViT with separable self-attention.
### Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=mobilevitv2) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoImageProcessor.from_pretrained("apple/mobilevitv2-1.0-imagenet1k-256")
model = AutoModelForImageClassification.from_pretrained("apple/mobilevitv2-1.0-imagenet1k-256")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The MobileViT model was pretrained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k), a dataset consisting of 1 million images and 1,000 classes.
### BibTeX entry and citation info
```bibtex
@inproceedings{vision-transformer,
title = {Separable Self-attention for Mobile Vision Transformers},
author = {Sachin Mehta and Mohammad Rastegari},
year = {2022},
URL = {https://arxiv.org/abs/2206.02680}
}
``` | 2,583 | [
[
-0.043304443359375,
-0.0075531005859375,
-0.0216522216796875,
0.0156402587890625,
-0.026123046875,
-0.0220184326171875,
0.00876617431640625,
-0.043212890625,
0.02008056640625,
0.0279693603515625,
-0.03936767578125,
-0.00887298583984375,
-0.044586181640625,
-... |
livingbox/incremental-test-02 | 2023-10-30T20:02:18.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us",
"has_space"
] | text-to-image | livingbox | null | null | livingbox/incremental-test-02 | 0 | 3,289 | diffusers | 2023-10-30T19:57:20 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Incremental-test-02 Dreambooth model trained by livingbox with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
| 510 | [
[
-0.0306549072265625,
-0.08245849609375,
0.031402587890625,
0.05072021484375,
-0.01317596435546875,
0.039398193359375,
0.028778076171875,
-0.0260162353515625,
0.038848876953125,
0.0038852691650390625,
-0.033172607421875,
-0.01396942138671875,
-0.024078369140625,
... |
microsoft/BioGPT-Large | 2023-02-05T06:18:14.000Z | [
"transformers",
"pytorch",
"biogpt",
"text-generation",
"medical",
"en",
"dataset:pubmed",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | text-generation | microsoft | null | null | microsoft/BioGPT-Large | 125 | 3,285 | transformers | 2023-02-03T16:17:26 | ---
license: mit
datasets:
- pubmed
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- medical
widget:
- text: COVID-19 is
inference:
parameters:
max_new_tokens: 50
---
## BioGPT
Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98%, 38.42% and 40.76% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.
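As a quick sketch, text generation with this checkpoint could look roughly like this (the prompt matches the widget example in the metadata above; generation settings are illustrative, and the tokenizer additionally needs the `sacremoses` package):
```python
from transformers import pipeline, set_seed
generator = pipeline("text-generation", model="microsoft/BioGPT-Large")
set_seed(42)
# Continue a biomedical prompt for up to 50 new tokens
outputs = generator("COVID-19 is", max_new_tokens=50, num_return_sequences=1)
print(outputs[0]["generated_text"])
```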
## Citation
If you find BioGPT useful in your research, please cite the following paper:
```latex
@article{10.1093/bib/bbac409,
author = {Luo, Renqian and Sun, Liai and Xia, Yingce and Qin, Tao and Zhang, Sheng and Poon, Hoifung and Liu, Tie-Yan},
title = "{BioGPT: generative pre-trained transformer for biomedical text generation and mining}",
journal = {Briefings in Bioinformatics},
volume = {23},
number = {6},
year = {2022},
month = {09},
abstract = "{Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98\%, 38.42\% and 40.76\% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2\% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.}",
issn = {1477-4054},
doi = {10.1093/bib/bbac409},
url = {https://doi.org/10.1093/bib/bbac409},
note = {bbac409},
eprint = {https://academic.oup.com/bib/article-pdf/23/6/bbac409/47144271/bbac409.pdf},
}
``` | 3,355 | [
[
-0.0170745849609375,
-0.06170654296875,
0.0433349609375,
0.01091766357421875,
-0.037322998046875,
0.002109527587890625,
-0.0067901611328125,
-0.04156494140625,
0.0004954338073730469,
0.0227203369140625,
-0.037261962890625,
-0.039642333984375,
-0.0509033203125,
... |
thibaud/controlnet-sd21-hed-diffusers | 2023-08-14T07:45:11.000Z | [
"diffusers",
"art",
"stable diffusion",
"controlnet",
"en",
"license:other",
"has_space",
"diffusers:ControlNetModel",
"region:us"
] | null | thibaud | null | null | thibaud/controlnet-sd21-hed-diffusers | 0 | 3,279 | diffusers | 2023-03-09T08:20:15 | ---
license: other
language:
- en
tags:
- art
- diffusers
- stable diffusion
- controlnet
---
Here's the first version of ControlNet for Stable Diffusion 2.1 in the diffusers format.
Trained on a subset of laion/laion-art.
License: refers to the licenses of the respective preprocessors.
### Hed:

### Misuse, Malicious Use, and Out-of-Scope Use
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
Thanks
- https://huggingface.co/lllyasviel/ControlNet for the implementation and the release of 1.5 models.
- https://huggingface.co/thepowefuldeez for the conversion script to diffusers | 908 | [
[
-0.017242431640625,
-0.017181396484375,
0.0009565353393554688,
0.04193115234375,
-0.037750244140625,
-0.03289794921875,
0.0098876953125,
-0.031280517578125,
0.01003265380859375,
0.055511474609375,
-0.0246734619140625,
-0.03167724609375,
-0.0626220703125,
-0.... |
seyonec/ChemBERTa-zinc-base-v1 | 2021-05-20T20:55:33.000Z | [
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"chemistry",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | fill-mask | seyonec | null | null | seyonec/ChemBERTa-zinc-base-v1 | 21 | 3,275 | transformers | 2022-03-02T23:29:05 | ---
tags:
- chemistry
---
# ChemBERTa: Training a BERT-like transformer model for masked language modelling of chemical SMILES strings.
Deep learning for chemistry and materials science remains a novel field with lots of potential. However, the transfer-learning-based methods that have become popular in areas such as NLP and computer vision have not yet been effectively developed for computational chemistry and machine learning. Using HuggingFace's suite of models and the ByteLevel tokenizer, we are able to train on a large corpus of 100k SMILES strings from a commonly known benchmark dataset, ZINC.
After training RoBERTa for 5 epochs, the model achieves a decent loss of 0.398, which would likely continue to decline if trained for a larger number of epochs. The model can predict tokens within a SMILES sequence/molecule, allowing for variants of a molecule within discoverable chemical space to be predicted.
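As a quick illustration of this masked-token prediction, here is a minimal sketch using the transformers fill-mask pipeline (the SMILES string and masked position below are arbitrary examples):
```python
from transformers import pipeline

# ChemBERTa is a RoBERTa-style model, so its mask token is <mask>
fill_mask = pipeline("fill-mask", model="seyonec/ChemBERTa-zinc-base-v1")

# predict plausible completions for a masked position in an aspirin-like SMILES string
for prediction in fill_mask("CC(=O)OC1=CC=CC=C1<mask>"):
    print(prediction["token_str"], round(prediction["score"], 4))
```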
By applying the representations of functional groups and atoms learned by the model, we can try to tackle problems of toxicity, solubility, drug-likeness, and synthesis accessibility on smaller datasets using the learned representations as features for graph convolution and attention models on the graph structure of molecules, as well as fine-tuning of BERT. Finally, we propose the use of attention visualization as a helpful tool for chemistry practitioners and students to quickly identify important substructures in various chemical properties.
Additionally, visualization of the attention mechanism has been shown in previous research to be incredibly valuable for chemical reaction classification. Open-sourcing large-scale transformer models such as RoBERTa with HuggingFace may help accelerate these individual research directions.
A link to a repository which includes the training, uploading and evaluation notebook (with sample predictions on compounds such as Remdesivir) can be found [here](https://github.com/seyonechithrananda/bert-loves-chemistry). All of the notebooks can be copied into a new Colab runtime for easy execution.
Thanks for checking this out!
- Seyone
| 2,132 | [
[
-0.0139007568359375,
-0.04632568359375,
0.047149658203125,
0.01065826416015625,
0.01546478271484375,
0.032379150390625,
-0.0284576416015625,
-0.035308837890625,
0.019805908203125,
0.0195465087890625,
-0.06201171875,
-0.041534423828125,
-0.04779052734375,
-0.... |
6DammK9/AstolfoMix | 2023-10-22T09:47:39.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"safetensors",
"en",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 6DammK9 | null | null | 6DammK9/AstolfoMix | 4 | 3,275 | diffusers | 2023-09-12T16:06:52 | ---
language:
- en
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- safetensors
inference: true
thumbnail: https://huggingface.co/6DammK9/AstolfoMix/resolve/main/231111-341693176-2688-1536-4-256-20231021050214.jpg
widget:
- text: >-
aesthetic, quality, 1girl, boy, astolfo
example_title: example 1girl boy
#datasets:
#- Crosstyan/BPDataset
library_name: diffusers
---
# AstolfoMix (Baseline / Extended) #
## Extended ##
- Is a 10-model ensemble robust enough? How about 20, with 10 more radical models?
- For EMB / LoRAs, the best fit will be models trained from NAI. ~~Just use them, most of them will work.~~

```
parameters
(aesthetic:0), (quality:0), (1girl:0), (boy:0), [[shirt]], [[midriff]], [[braid]], [astolfo], [[[[sydney opera house]]]]
Negative prompt: (worst:0), (low:0), (bad:0), (exceptional:0), (masterpiece:0), (comic:0), (extra:0), (lowres:0), (breasts:0.5)
Steps: 256, Sampler: Euler, CFG scale: 4, Seed: 341693176, Size: 1344x768, Model hash: 41429fdee1, Model: 20-bpcga9-lracrc2oh-b11i75pvc-gf34ym34-sd, VAE hash: 551eac7037, VAE: vae-ft-mse-840000-ema-pruned.ckpt, Denoising strength: 0.7, Clip skip: 2, FreeU Stages: "[{\"backbone_factor\": 1.2, \"skip_factor\": 0.9}, {\"backbone_factor\": 1.4, \"skip_factor\": 0.2}]", FreeU Schedule: "0.0, 1.0, 0.0", Hires upscale: 2, Hires steps: 64, Hires upscaler: Latent, Dynamic thresholding enabled: True, Mimic scale: 1, Separate Feature Channels: False, Scaling Startpoint: MEAN, Variability Measure: AD, Interpolate Phi: 0.7, Threshold percentile: 100, Version: v1.6.0
```
- Current version: `20-bpcga9-lracrc2oh-b11i75pvc-gf34ym34-sd.safetensors` (merge of 20 models)
- Recommended version: "20"
- Recommended CFG: ~~4.5~~ 4.0
## Baseline ##
- A (baseline) merge model focusing on [absurdres](https://www.urbandictionary.com/define.php?term=absurdres), *and let me wait for a big anime SDXL finetune.*
- Behind the "absurdres", the model should be very robust and capable for most LoRAs / embeddings / addons you can imagine.
- The image below is 2688x1536 without upscaler. With upscaler, it reaches 8K already.
- ~~The image below is 10752x6143, and it is a 3.25MB JPEG. "upscaler 4x". See PNG info below. Removed because it failed to preview on some browsers.~~

```
parameters
(aesthetic:0), (quality:0), (solo:0), (boy:0), (ushanka:0.98), [[braid]], [astolfo], [[moscow, russia]]
Negative prompt: (worst:0), (low:0), (bad:0), (exceptional:0), (masterpiece:0), (comic:0), (extra:0), (lowres:0), (breasts:0.5)
Steps: 256, Sampler: Euler, CFG scale: 4.5, Seed: 132385090, Size: 1344x768, Model hash: 6ffdb39acd, Model: 10-vcbpmtd8_cwlbdaw_eb5ms29-sd, VAE hash: 551eac7037, VAE: vae-ft-mse-840000-ema-pruned.ckpt, Denoising strength: 0.7, Clip skip: 2, FreeU Stages: "[{\"backbone_factor\": 1.2, \"skip_factor\": 0.9}, {\"backbone_factor\": 1.4, \"skip_factor\": 0.2}]", FreeU Schedule: "0.0, 1.0, 0.0", Hires upscale: 2, Hires steps: 64, Hires upscaler: Latent, Dynamic thresholding enabled: True, Mimic scale: 1, Separate Feature Channels: False, Scaling Startpoint: MEAN, Variability Measure: AD, Interpolate Phi: 0.7, Threshold percentile: 100, Version: v1.6.0
```
- Current version: `10-vcbpmtd8_cwlbdaw_eb5ms29-sd.safetensors` (merge of 10 models)
- Recommended version: "06a" or "10"
- [Receipe Models: Merging UNETs into SD V1.4](https://huggingface.co/6DammK9/bpmodel-sd14-merge)
- ["Roadmap" / "Theory" in my Github.](https://github.com/6DammK9/nai-anime-pure-negative-prompt/blob/main/ch05/README.MD)
- Recommended prompt: "SD 1.4's Text Encoder"
- Recommended resolution: **1024x1024 (native T2I), HiRes 1.75x (RTX 2080Ti 11GB)**
- It can generate images **up to 1280x1280 with HiRes 2.0x ([Tesla M40 24GB](https://github.com/6DammK9/nai-anime-pure-negative-prompt/blob/main/ch04/readme.md#chapter-04b-extra-making-use-of-m40-24gb-for-generating-large-images))**, but the yield will be very low and time consuming to generate a nice image.
- Recommended CFG: 4.5 (also tested on all base models), 6.0 (1280 mode)
## Recipe ##
- [Full recipe.](https://github.com/6DammK9/nai-anime-pure-negative-prompt/blob/main/ch05/receipe-20a.json)
- **Uniform merge.** M = 1 / "number of models in total".
|Index|M|Filename|
|---|---|---|
|02|0.5|[02-vbp23-cbp2-sd](https://huggingface.co/6DammK9/AstolfoMix/blob/main/02-vbp23-cbp2-sd.safetensors)|
|03|0.33|[03-vcbp-mzpikas_tmnd-sd](https://huggingface.co/6DammK9/AstolfoMix/blob/main/03-vcbp-mzpikas_tmnd-sd.safetensors)|
|04|0.25|[04-vcbp_mzpt_d8-sd](https://huggingface.co/6DammK9/AstolfoMix/blob/main/04-vcbp_mzpt_d8-sd.safetensors)|
|05|0.2|[05-vcbp_mtd8_cwl-sd](https://huggingface.co/6DammK9/AstolfoMix/blob/main/05-vcbp_mtd8_cwl-sd.safetensors)|
|06|0.167|[06-vcbp_mtd8cwl_bd-sd](https://huggingface.co/6DammK9/AstolfoMix/blob/main/06-vcbp_mtd8cwl_bd-sd.safetensors)|
|07|0.143|[07-vcbp_mtd8cwl_bdaw-sd](https://huggingface.co/6DammK9/AstolfoMix/blob/main/07-vcbp_mtd8cwl_bdaw-sd.safetensors)|
|08|0.125|[08-vcbpmt_d8cwlbd_aweb5-sd](https://huggingface.co/6DammK9/AstolfoMix/blob/main/08-vcbpmt_d8cwlbd_aweb5-sd.safetensors)|
|09|0.111|[09-majicmixRealistic_v6-sd](https://huggingface.co/6DammK9/AstolfoMix/blob/main/09-majicmixRealistic_v6-sd.safetensors)|
|10|0.1|[10-vcbpmtd8_cwlbdaw_eb5ms29-sd](https://huggingface.co/6DammK9/AstolfoMix/blob/main/10-vcbpmtd8_cwlbdaw_eb5ms29-sd.safetensors)|
|11|0.0909|[11-bp-sd](https://huggingface.co/6DammK9/AstolfoMix/blob/main/11-bp-sd.safetensors)|
|12|0.0833|[12-bpcga9-sd](https://huggingface.co/6DammK9/AstolfoMix/blob/main/12-bpcga9-sd.safetensors)|
|13|0.0769|[13-bpcga9-lra-sd](https://huggingface.co/6DammK9/AstolfoMix/blob/main/13-bpcga9-lra-sd.safetensors)|
|14|0.0714|[14-bpcga9-lracrc2-sd](https://huggingface.co/6DammK9/AstolfoMix/blob/main/14-bpcga9-lracrc2-sd.safetensors)|
|15|0.0667|[15-bpcga9-lracrc2-oh-sd](https://huggingface.co/6DammK9/AstolfoMix/blob/main/15-bpcga9-lracrc2-oh-sd.safetensors)|
|16|0.0625|[16-bpcga9-lracrc2-ohb11](https://huggingface.co/6DammK9/AstolfoMix/blob/main/16-bpcga9-lracrc2-ohb11.safetensors)|
|17|0.0588|[17-bpcga9-lracrc2-ohb11i75-sd](https://huggingface.co/6DammK9/AstolfoMix/blob/main/17-bpcga9-lracrc2-ohb11i75-sd.safetensors)|
|18|0.0555|[18-bpcga9-lracrc2-ohb11i75-pvc-sd](https://huggingface.co/6DammK9/AstolfoMix/blob/main/18-bpcga9-lracrc2-ohb11i75-pvc-sd.safetensors)|
|19|0.0526|[19-bpcga9-lracrc2-ohb11i75-pvcgf34-sd](https://huggingface.co/6DammK9/AstolfoMix/blob/main/19-bpcga9-lracrc2-ohb11i75-pvcgf34-sd.safetensors)|
|20|0.05|[20-bpcga9-lracrc2oh-b11i75pvc-gf34ym34-sd](https://huggingface.co/6DammK9/AstolfoMix/blob/main/20-bpcga9-lracrc2oh-b11i75pvc-gf34ym34-sd.safetensors)|
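To try one of the merged single-file checkpoints above with diffusers, a rough sketch is given below (it assumes a recent diffusers release with the `from_single_file` loader; the file name is the "20" merge from the table):
```python
import torch
from diffusers import StableDiffusionPipeline
from huggingface_hub import hf_hub_download

# download one of the single-file checkpoints listed in the recipe table
ckpt_path = hf_hub_download(
    repo_id="6DammK9/AstolfoMix",
    filename="20-bpcga9-lracrc2oh-b11i75pvc-gf34ym34-sd.safetensors",
)

pipe = StableDiffusionPipeline.from_single_file(ckpt_path, torch_dtype=torch.float16).to("cuda")

# guidance_scale 4.0 matches the recommended CFG above
image = pipe(
    "aesthetic, quality, 1girl, boy, astolfo",
    guidance_scale=4.0,
    num_inference_steps=30,
).images[0]
image.save("astolfomix.png")
```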
## Extra: Comparing with merges using the original Text Encoders ##
- **Uniform merge.** M = 1 / "number of models in total".
|Index|M|Filename|
|---|---|---|
|02|0.5|[02a-vbp23-cbp2](https://huggingface.co/6DammK9/AstolfoMix/blob/main/02a-vbp23-cbp2.safetensors)|
|03|0.33|[03a-vcbp-mzpikas_tmnd](https://huggingface.co/6DammK9/AstolfoMix/blob/main/03a-vcbp-mzpikas_tmnd.safetensors)|
|04|0.25|[04a-vcbp_mzpt_d8](https://huggingface.co/6DammK9/AstolfoMix/blob/main/04a-vcbp_mzpt_d8.safetensors)|
|05|0.2|[05a-vcbp_mtd8_cwl](https://huggingface.co/6DammK9/AstolfoMix/blob/main/05a-vcbp_mtd8_cwl.safetensors)|
|06|0.167|[06a-vcbp_mtd8cwl_bd](https://huggingface.co/6DammK9/AstolfoMix/blob/main/06a-vcbp_mtd8cwl_bd.safetensors)|
|07|0.143|[07a-vcbp_mtd8cwl_bdaw](https://huggingface.co/6DammK9/AstolfoMix/blob/main/07a-vcbp_mtd8cwl_bdaw.safetensors)|
|08|0.125|[08a-vcbpmt_d8cwlbd_aweb5](https://huggingface.co/6DammK9/AstolfoMix/blob/main/08a-vcbpmt_d8cwlbd_aweb5.safetensors)|
|09|0.111|[09a-majicmixRealistic_v6](https://huggingface.co/6DammK9/AstolfoMix/blob/main/09a-majicmixRealistic_v6.safetensors)|
|10|0.1|[10a-vcbpmtd8_cwlbdaw_eb5ms29](https://huggingface.co/6DammK9/AstolfoMix/blob/main/10a-vcbpmtd8_cwlbdaw_eb5ms29.safetensors)|
|11|0.0909|[11a-bp](https://huggingface.co/6DammK9/AstolfoMix/blob/main/11a-bp.safetensors)|
|12|0.0833|[12a-bpcga9](https://huggingface.co/6DammK9/AstolfoMix/blob/main/12a-bpcga9.safetensors)|
|13|0.0769|[13a-bpcga9-lra](https://huggingface.co/6DammK9/AstolfoMix/blob/main/13a-bpcga9-lra.safetensors)|
|14|0.0714|[14a-bpcga9-lracrc2](https://huggingface.co/6DammK9/AstolfoMix/blob/main/14a-bpcga9-lracrc2.safetensors)|
|15|0.0667|[15a-bpcga9-lracrc2-oh](https://huggingface.co/6DammK9/AstolfoMix/blob/main/15a-bpcga9-lracrc2-oh.safetensors)|
|16|0.0625|[16a-bpcga9-lracrc2-ohb11](https://huggingface.co/6DammK9/AstolfoMix/blob/main/16a-bpcga9-lracrc2-ohb11.safetensors)|
|17|0.0588|[17a-bpcga9-lracrc2-ohb11i75](https://huggingface.co/6DammK9/AstolfoMix/blob/main/17a-bpcga9-lracrc2-ohb11i75.safetensors)|
|18|0.0555|[18a-bpcga9-lracrc2-ohb11i75-pvc](https://huggingface.co/6DammK9/AstolfoMix/blob/main/18a-bpcga9-lracrc2-ohb11i75-pvc.safetensors)|
|19|0.0526|[19a-bpcga9-lracrc2-ohb11i75-pvcgf34](https://huggingface.co/6DammK9/AstolfoMix/blob/main/19a-bpcga9-lracrc2-ohb11i75-pvcgf34.safetensors)|
|20|0.05|[20a-bpcga9-lracrc2oh-b11i75pvc-gf34ym34](https://huggingface.co/6DammK9/AstolfoMix/blob/main/20a-bpcga9-lracrc2oh-b11i75pvc-gf34ym34-sd.safetensors)|
- Surprisingly, they look similar, with only minor differences in background and unnamed details (semantic relationships).








## License ##
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license [here](https://huggingface.co/spaces/CompVis/stable-diffusion-license/blob/main/license.txt) | 11,981 | [
[
-0.0712890625,
-0.037139892578125,
0.017364501953125,
0.0273590087890625,
-0.029205322265625,
-0.01348876953125,
0.0006480216979980469,
-0.0460205078125,
0.0570068359375,
0.01435089111328125,
-0.058837890625,
-0.04840087890625,
-0.045135498046875,
0.01460266... |
openmmlab/upernet-swin-large | 2023-04-24T09:48:31.000Z | [
"transformers",
"pytorch",
"safetensors",
"upernet",
"vision",
"image-segmentation",
"en",
"arxiv:1807.10221",
"arxiv:2103.14030",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-segmentation | openmmlab | null | null | openmmlab/upernet-swin-large | 0 | 3,270 | transformers | 2023-01-13T14:35:14 | ---
language: en
license: mit
tags:
- vision
- image-segmentation
model_name: openmmlab/upernet-swin-large
---
# UperNet, Swin Transformer large-sized backbone
UperNet framework for semantic segmentation, leveraging a Swin Transformer backbone. UperNet was introduced in the paper [Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/abs/1807.10221) by Xiao et al.
Combining UperNet with a Swin Transformer backbone was introduced in the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030).
Disclaimer: The team releasing UperNet + Swin Transformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
UperNet is a framework for semantic segmentation. It consists of several components, including a backbone, a Feature Pyramid Network (FPN) and a Pyramid Pooling Module (PPM).
Any visual backbone can be plugged into the UperNet framework. The framework predicts a semantic label per pixel.

## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=openmmlab/upernet) to look for
fine-tuned versions (with various backbones) on a task that interests you.
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/upernet#transformers.UperNetForSemanticSegmentation).
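For orientation, a rough sketch of running this checkpoint for semantic segmentation is shown below (mirroring the generic UperNet example in the docs; the sample image and post-processing here are illustrative assumptions):
```python
import torch
from urllib.request import urlopen
from PIL import Image
from transformers import AutoImageProcessor, UperNetForSemanticSegmentation

# an arbitrary example image (the standard COCO sample used in the transformers docs)
image = Image.open(urlopen(
    "http://images.cocodataset.org/val2017/000000039769.jpg"
)).convert("RGB")

processor = AutoImageProcessor.from_pretrained("openmmlab/upernet-swin-large")
model = UperNetForSemanticSegmentation.from_pretrained("openmmlab/upernet-swin-large")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# logits: (batch, num_labels, height, width); per-pixel class = argmax over labels
segmentation_map = outputs.logits.argmax(dim=1)[0]
print(segmentation_map.shape)
```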
| 1,632 | [
[
-0.034088134765625,
-0.0107269287109375,
0.020172119140625,
0.0411376953125,
-0.0161895751953125,
-0.017730712890625,
0.015625,
-0.043487548828125,
0.0234527587890625,
0.0523681640625,
-0.05706787109375,
-0.04010009765625,
-0.032379150390625,
-0.019363403320... |
ai-forever/ruGPT-3.5-13B | 2023-06-14T09:52:09.000Z | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"gpt3",
"en",
"ru",
"license:mit",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | ai-forever | null | null | ai-forever/ruGPT-3.5-13B | 153 | 3,270 | transformers | 2023-05-02T12:53:36 | ---
license: mit
language:
- en
- ru
tags:
- gpt3
- transformers
---
# 🗿 ruGPT-3.5 13B
Language model for Russian. The model has 13B parameters, as you can guess from its name. This is our biggest model so far, and it was used for training GigaChat (read more about it in the [article](https://habr.com/ru/companies/sberbank/articles/730108/)).
## Dataset
The model was pretrained on 300 GB of text from various domains, then additionally trained on 100 GB of code and legal documents. Here is the dataset structure:

The training data was deduplicated: the deduplication includes 64-bit hashing of each text in the corpus and keeps only texts with a unique hash. We also filter the documents based on their text compression rate using zlib. The most strongly and most weakly compressing deduplicated texts are discarded.
## Technical details
The model was trained using the DeepSpeed and Megatron libraries on a 300B-token dataset for 3 epochs, which took around 45 days on 512 V100 GPUs. After that, the model was finetuned for 1 epoch with a sequence length of 2048 on additional data (see above), which took around 20 days on 200 A100 GPUs.
After the final training, the perplexity of this model for Russian was around 8.8.

## Examples of usage
Try different generation strategies to reach better results.
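The snippets below assume `model` and `tokenizer` are already loaded; a minimal loading sketch (the half-precision and single-GPU placement are illustrative assumptions) is:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ai-forever/ruGPT-3.5-13B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: fp16 to fit the 13B model in GPU memory
).to("cuda:0")
```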
```python
request = "Стих про программиста может быть таким:"
encoded_input = tokenizer(request, return_tensors='pt', \
add_special_tokens=False).to('cuda:0')
output = model.generate(
**encoded_input,
num_beams=2,
do_sample=True,
max_new_tokens=100
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
```
>>> Стих про программиста может быть таким:
Программист сидит в кресле,
Стих сочиняет он про любовь,
Он пишет, пишет, пишет, пишет...
И не выходит ни черта!
```
```python
request = "Нейронная сеть — это"
encoded_input = tokenizer(request, return_tensors='pt', \
add_special_tokens=False).to('cuda:0')
output = model.generate(
**encoded_input,
num_beams=4,
do_sample=True,
max_new_tokens=100
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
```
>>> Нейронная сеть — это математическая модель, состоящая из большого
количества нейронов, соединенных между собой электрическими связями.
Нейронная сеть может быть смоделирована на компьютере, и с ее помощью
можно решать задачи, которые не поддаются решению с помощью традиционных
математических методов.
```
```python
request = "Гагарин полетел в космос в"
encoded_input = tokenizer(request, return_tensors='pt', \
add_special_tokens=False).to('cuda:0')
output = model.generate(
**encoded_input,
num_beams=2,
do_sample=True,
max_new_tokens=100
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
```
>>> Гагарин полетел в космос в 1961 году. Это было первое в истории
человечества космическое путешествие. Юрий Гагарин совершил его
на космическом корабле Восток-1. Корабль был запущен с космодрома
Байконур.
``` | 3,176 | [
[
-0.0225982666015625,
-0.058319091796875,
0.00890350341796875,
0.01010894775390625,
-0.033416748046875,
0.0008778572082519531,
-0.0128021240234375,
-0.00786590576171875,
0.0100555419921875,
0.0008234977722167969,
-0.0262451171875,
-0.0496826171875,
-0.03936767578... |
MirageML/lowpoly-game-building | 2023-05-05T20:53:17.000Z | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | MirageML | null | null | MirageML/lowpoly-game-building | 14 | 3,269 | diffusers | 2022-11-28T08:52:37 | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: true
---
# Low Poly Game Building on Stable Diffusion via Dreambooth
This is the Stable Diffusion model fine-tuned on the Low Poly Game Building concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a photo of lowpoly_game_building**
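A minimal diffusers sketch for generating with this concept is shown below (this repo is tagged as a StableDiffusionPipeline, so standard loading should apply; the fp16/cuda settings are assumptions):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "MirageML/lowpoly-game-building", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of lowpoly_game_building"  # the instance prompt this concept was trained with
image = pipe(prompt).images[0]
image.save("lowpoly_game_building.png")
```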
# Run on [Mirage](https://app.mirageml.com)
Run this model and explore text-to-3D on [Mirage](https://app.mirageml.com)!
Here is a sample output for this model:

# Share your Results and Reach us on [Discord](https://discord.gg/9B2Pu2bEvj)!
[](https://discord.gg/9B2Pu2bEvj)
[Image Source](https://www.behance.net/guutv) | 895 | [
[
-0.0374755859375,
-0.09228515625,
0.044036865234375,
0.025665283203125,
-0.006656646728515625,
0.007205963134765625,
-0.0024242401123046875,
-0.0203399658203125,
0.0230865478515625,
0.04644775390625,
-0.0350341796875,
-0.052337646484375,
-0.0205230712890625,
... |
sanjanach/my-pet-dog | 2023-11-06T09:11:58.000Z | [
"diffusers",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | sanjanach | null | null | sanjanach/my-pet-dog | 0 | 3,266 | diffusers | 2023-11-06T09:07:27 | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by sanjanach following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: MRCEW-62
Sample pictures of this concept:
.webp)
| 390 | [
[
-0.064697265625,
-0.011627197265625,
0.025726318359375,
0.00623321533203125,
-0.010498046875,
0.0233306884765625,
0.0255279541015625,
-0.042144775390625,
0.049041748046875,
0.0284271240234375,
-0.055511474609375,
-0.033203125,
-0.0160980224609375,
0.01293182... |
hariram344/human-emotions-jnn | 2023-11-05T17:05:35.000Z | [
"diffusers",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | hariram344 | null | null | hariram344/human-emotions-jnn | 0 | 3,265 | diffusers | 2023-11-05T17:01:46 | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### human-emotions-jnn Dreambooth model trained by hariram344 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: MITS-1184
Sample pictures of this concept:

| 404 | [
[
-0.04144287109375,
-0.021026611328125,
0.0227813720703125,
0.008087158203125,
-0.004207611083984375,
0.034515380859375,
0.01739501953125,
-0.033355712890625,
0.041351318359375,
0.01123809814453125,
-0.054962158203125,
-0.0246124267578125,
-0.022003173828125,
... |
Salesforce/codet5p-770m | 2023-05-16T00:33:03.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv:2305.07922",
"license:bsd-3-clause",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text2text-generation | Salesforce | null | null | Salesforce/codet5p-770m | 14 | 3,261 | transformers | 2023-05-13T13:34:17 | ---
license: bsd-3-clause
---
# CodeT5+ 770M
## Model description
[CodeT5+](https://github.com/salesforce/CodeT5/tree/main/CodeT5+) is a new family of open code large language models with an encoder-decoder architecture that can flexibly operate in different modes (i.e. _encoder-only_, _decoder-only_, and _encoder-decoder_) to support a wide range of code understanding and generation tasks.
It is introduced in the paper:
[CodeT5+: Open Code Large Language Models for Code Understanding and Generation](https://arxiv.org/pdf/2305.07922.pdf)
by [Yue Wang](https://yuewang-cuhk.github.io/)\*, [Hung Le](https://sites.google.com/view/henryle2018/home?pli=1)\*, [Akhilesh Deepak Gotmare](https://akhileshgotmare.github.io/), [Nghi D.Q. Bui](https://bdqnghi.github.io/), [Junnan Li](https://sites.google.com/site/junnanlics), [Steven C.H. Hoi](https://sites.google.com/view/stevenhoi/home) (* indicates equal contribution).
Compared to the original CodeT5 family (CodeT5-base: `220M`, CodeT5-large: `770M`), CodeT5+ is pretrained with a diverse set of pretraining tasks including _span denoising_, _causal language modeling_, _contrastive learning_, and _text-code matching_ to learn rich representations from both unimodal code data and bimodal code-text data.
Additionally, it employs a simple yet effective _compute-efficient pretraining_ method to initialize the model components with frozen off-the-shelf LLMs such as [CodeGen](https://github.com/salesforce/CodeGen) to efficiently scale up the model (i.e. `2B`, `6B`, `16B`), and adopts a "shallow encoder and deep decoder" architecture.
Furthermore, it is instruction-tuned to align with natural language instructions (see our InstructCodeT5+ 16B) following [Code Alpaca](https://github.com/sahil280114/codealpaca).
## How to use
This model can be easily loaded using the `T5ForConditionalGeneration` functionality and employs the same tokenizer as original [CodeT5](https://github.com/salesforce/CodeT5).
```python
from transformers import T5ForConditionalGeneration, AutoTokenizer
checkpoint = "Salesforce/codet5p-770m"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = T5ForConditionalGeneration.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("def print_hello_world():<extra_id_0>", return_tensors="pt").to(device)
outputs = model.generate(inputs, max_length=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# ==> print "Hello World"
```
## Pretraining data
This checkpoint is trained on the stricter permissive subset of the deduplicated version of the [github-code dataset](https://huggingface.co/datasets/codeparrot/github-code).
The data is preprocessed by retaining only permissively licensed code ("mit", "apache-2", "bsd-3-clause", "bsd-2-clause", "cc0-1.0", "unlicense", "isc").
Supported languages (9 in total) are as follows:
`c`, `c++`, `c-sharp`, `go`, `java`, `javascript`, `php`, `python`, `ruby`.
## Training procedure
This checkpoint is trained on the unimodal code data at the first-stage pretraining, which includes a diverse set of pretraining tasks including _span denoising_ and two variants of _causal language modeling_.
Please refer to the paper for more details.
## Evaluation results
CodeT5+ models have been comprehensively evaluated on a wide range of code understanding and generation tasks in various settings: _zero-shot_, _finetuning_, and _instruction-tuning_.
Specifically, CodeT5+ yields substantial performance gains on many downstream tasks compared to their SoTA baselines, e.g.,
8 text-to-code retrieval tasks (+3.2 avg. MRR), 2 line-level code completion tasks (+2.1 avg. Exact Match), and 2 retrieval-augmented code generation tasks (+5.8 avg. BLEU-4).
In 2 math programming tasks on MathQA-Python and GSM8K-Python, CodeT5+ models of below billion-parameter sizes significantly outperform many LLMs of up to 137B parameters.
Particularly, in the zero-shot text-to-code generation task on the HumanEval benchmark, InstructCodeT5+ 16B sets new SoTA results of 35.0% pass@1 and 54.5% pass@10 against other open code LLMs, even surpassing the closed-source OpenAI code-cushman-001 model.
Please refer to the [paper](https://arxiv.org/pdf/2305.07922.pdf) for more details.
## BibTeX entry and citation info
```bibtex
@article{wang2023codet5plus,
title={CodeT5+: Open Code Large Language Models for Code Understanding and Generation},
author={Wang, Yue and Le, Hung and Gotmare, Akhilesh Deepak and Bui, Nghi D.Q. and Li, Junnan and Hoi, Steven C. H.},
journal={arXiv preprint},
year={2023}
}
``` | 4,621 | [
[
-0.03521728515625,
-0.0294342041015625,
0.012664794921875,
0.0222320556640625,
-0.016143798828125,
0.0127716064453125,
-0.034088134765625,
-0.0440673828125,
-0.0184478759765625,
0.018768310546875,
-0.030059814453125,
-0.05035400390625,
-0.038055419921875,
0.... |
timm/convnext_tiny.fb_in22k_ft_in1k_384 | 2023-03-31T22:38:29.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-22k",
"arxiv:2201.03545",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/convnext_tiny.fb_in22k_ft_in1k_384 | 0 | 3,259 | timm | 2022-12-13T07:15:22 | ---
tags:
- image-classification
- timm
library_tag: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-22k
---
# Model card for convnext_tiny.fb_in22k_ft_in1k_384
A ConvNeXt image classification model. Pretrained on ImageNet-22k and fine-tuned on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 28.6
- GMACs: 13.1
- Activations (M): 39.5
- Image size: 384 x 384
- **Papers:**
- A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- **Original:** https://github.com/facebookresearch/ConvNeXt
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-22k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('convnext_tiny.fb_in22k_ft_in1k_384', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_tiny.fb_in22k_ft_in1k_384',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 96, 96, 96])
# torch.Size([1, 192, 48, 48])
# torch.Size([1, 384, 24, 24])
# torch.Size([1, 768, 12, 12])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_tiny.fb_in22k_ft_in1k_384',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 768, 12, 12) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.
| model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
| [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
| [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
| [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
| [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
| [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
| [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
| [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
| [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
| [convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
| [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 |
| [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
| [convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
| [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
| [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
| [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
| [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
| [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
| [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
| [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
| [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
| [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
| [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
| [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
| [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
| [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
| [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
| [convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
| [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
| [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
| [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
| [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
| [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
| [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
| [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
| [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
| [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
| [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
| [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
| [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
| [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
| [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
## Citation
```bibtex
@article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 15,729 | [
[
-0.06707763671875,
-0.032440185546875,
-0.003086090087890625,
0.036712646484375,
-0.03131103515625,
-0.0161590576171875,
-0.0133209228515625,
-0.035491943359375,
0.0650634765625,
0.0162811279296875,
-0.0438232421875,
-0.041351318359375,
-0.0496826171875,
-0.... |
shibal1/anything-v4.5-clone | 2023-08-06T15:13:02.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | shibal1 | null | null | shibal1/anything-v4.5-clone | 8 | 3,254 | diffusers | 2023-06-12T14:41:31 | ---
language:
- en
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
duplicated_from: andite/anything-v4.0
---
[UPDATE (August 6, 2023)]
Hi! It may seem that the original repository I forked from [andite/anything-v4.0] is unavailable for some reason.
The original purpose of this forked repo was to train a model in the SD API, but that didn't work, and I left this repo up in hopes of trying again. It seems that
Google search results now point to this repository instead.
Upon further investigation, the author of the original repo, andite, removed their Hugging Face repo, and CivitAI now only has the 4.5 models up,
so I think this repo now only serves as an archive (unless asked to be taken down ofc).
Steps to access older models (e.g. 4.0)
1. Go to the 'Files and versions' tab
2. Click on the first commit 'Duplicate from andite/anything-v4.0'
3. 'Browse files'
4. ???
5. Profit
-------
Try out my new model! - [Pastel Mix || Stylized Anime Model](https://huggingface.co/andite/pastel-mix). Thanks.
I also uploaded it in CivitAI! https://civitai.com/models/5414/pastel-mix-stylized-anime-model I'd appreciate the ratings, thank you!
Yes, it's a shameless plug.
Examples:



-------
<font color="grey">
[Linaqruf](https://huggingface.co/Linaqruf) for letting me borrow his model card for reference.
# Anything V4
Welcome to Anything V4 - a latent diffusion model for weebs. The newest version of Anything. This model is intended to produce high-quality, highly detailed anime style with just a few prompts. Like other anime-style Stable Diffusion models, it also supports danbooru tags to generate images.
e.g. **_1girl, white hair, golden eyes, beautiful eyes, detail, flower meadow, cumulonimbus clouds, lighting, detailed sky, garden_**
I think the V4.5 version is better though; it's in this repo. Feel free 2 try it.
## Yes, this model has [AbyssOrangeMix2](https://huggingface.co/WarriorMama777/OrangeMixs) in it. coz its a very good model. check it out luls ;)
# Gradio
We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run anything-v4.0:
[](https://huggingface.co/spaces/akhaliq/anything-v4.0)
## 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX]().
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "andite/anything-v4.0"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "hatsune_miku"
image = pipe(prompt).images[0]
image.save("./hatsune_miku.png")
```
## Examples
Below are some examples of images generated using this model:
**Anime Girl:**

```
masterpiece, best quality, 1girl, white hair, medium hair, cat ears, closed eyes, looking at viewer, :3, cute, scarf, jacket, outdoors, streets
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7
```
**Anime Boy:**

```
1boy, bishounen, casual, indoors, sitting, coffee shop, bokeh
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7
```
**Scenery:**

```
scenery, village, outdoors, sky, clouds
Steps: 50, Sampler: DPM++ 2S a Karras, CFG scale: 7
```
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
## Big Thanks to
- [Linaqruf](https://huggingface.co/Linaqruf), [NoCrypt](https://huggingface.co/NoCrypt), and Fannovel16#9022 for helping me out a lot regarding my inquiries and concerns about models and other stuff.
[
-0.041961669921875,
-0.05023193359375,
0.037689208984375,
0.0312347412109375,
-0.0163726806640625,
-0.033966064453125,
0.0166168212890625,
-0.043365478515625,
0.0242156982421875,
0.036529541015625,
-0.048553466796875,
-0.045684814453125,
-0.035919189453125,
... |
sail-rvc/Donald_Duck__RVC_v2__600_Epochs | 2023-07-14T07:21:28.000Z | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | sail-rvc | null | null | sail-rvc/Donald_Duck__RVC_v2__600_Epochs | 2 | 3,254 | transformers | 2023-07-14T07:21:17 |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# Donald_Duck__RVC_v2__600_Epochs
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:21:28
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
| 399 | [
[
-0.0238037109375,
-0.0258636474609375,
0.018310546875,
0.011016845703125,
-0.0266265869140625,
0.0095062255859375,
0.019134521484375,
0.0027256011962890625,
0.0088043212890625,
0.061370849609375,
-0.04949951171875,
-0.035186767578125,
-0.0213165283203125,
-0... |
infgrad/stella-large-zh | 2023-10-19T06:58:59.000Z | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"mteb",
"arxiv:1612.00796",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | feature-extraction | infgrad | null | null | infgrad/stella-large-zh | 23 | 3,254 | transformers | 2023-09-10T07:51:33 | ---
tags:
- mteb
model-index:
- name: stella-large-zh
results:
- task:
type: STS
dataset:
type: C-MTEB/AFQMC
name: MTEB AFQMC
config: default
split: validation
revision: None
metrics:
- type: cos_sim_pearson
value: 51.61327712288466
- type: cos_sim_spearman
value: 54.48753880097122
- type: euclidean_pearson
value: 52.68387289931342
- type: euclidean_spearman
value: 54.48753879487172
- type: manhattan_pearson
value: 52.635406372350026
- type: manhattan_spearman
value: 54.447390526317044
- task:
type: STS
dataset:
type: C-MTEB/ATEC
name: MTEB ATEC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 53.39178036427897
- type: cos_sim_spearman
value: 54.450028472876134
- type: euclidean_pearson
value: 56.87300033777842
- type: euclidean_spearman
value: 54.45002622056799
- type: manhattan_pearson
value: 56.84326996138951
- type: manhattan_spearman
value: 54.433880144849375
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (zh)
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.574000000000005
- type: f1
value: 38.87775700245793
- task:
type: STS
dataset:
type: C-MTEB/BQ
name: MTEB BQ
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 60.80957921870066
- type: cos_sim_spearman
value: 62.37707350882237
- type: euclidean_pearson
value: 61.29032932843765
- type: euclidean_spearman
value: 62.37707350713817
- type: manhattan_pearson
value: 61.23028102541801
- type: manhattan_spearman
value: 62.31280056582247
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringP2P
name: MTEB CLSClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 40.27066616318565
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringS2S
name: MTEB CLSClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 37.503323644484716
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv1-reranking
name: MTEB CMedQAv1
config: default
split: test
revision: None
metrics:
- type: map
value: 84.69295191328456
- type: mrr
value: 87.08992063492063
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv2-reranking
name: MTEB CMedQAv2
config: default
split: test
revision: None
metrics:
- type: map
value: 85.22650690364465
- type: mrr
value: 87.72158730158729
- task:
type: Retrieval
dataset:
type: C-MTEB/CmedqaRetrieval
name: MTEB CmedqaRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 23.54
- type: map_at_10
value: 35.591
- type: map_at_100
value: 37.549
- type: map_at_1000
value: 37.663000000000004
- type: map_at_3
value: 31.405
- type: map_at_5
value: 33.792
- type: mrr_at_1
value: 36.359
- type: mrr_at_10
value: 44.624
- type: mrr_at_100
value: 45.660000000000004
- type: mrr_at_1000
value: 45.707
- type: mrr_at_3
value: 42.002
- type: mrr_at_5
value: 43.535000000000004
- type: ndcg_at_1
value: 36.359
- type: ndcg_at_10
value: 42.28
- type: ndcg_at_100
value: 49.997
- type: ndcg_at_1000
value: 51.966
- type: ndcg_at_3
value: 36.851
- type: ndcg_at_5
value: 39.249
- type: precision_at_1
value: 36.359
- type: precision_at_10
value: 9.542
- type: precision_at_100
value: 1.582
- type: precision_at_1000
value: 0.183
- type: precision_at_3
value: 20.913999999999998
- type: precision_at_5
value: 15.404000000000002
- type: recall_at_1
value: 23.54
- type: recall_at_10
value: 53.005
- type: recall_at_100
value: 85.085
- type: recall_at_1000
value: 98.21
- type: recall_at_3
value: 36.944
- type: recall_at_5
value: 44.137
- task:
type: PairClassification
dataset:
type: C-MTEB/CMNLI
name: MTEB Cmnli
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 76.16355983162958
- type: cos_sim_ap
value: 85.14228023901842
- type: cos_sim_f1
value: 77.86752827140549
- type: cos_sim_precision
value: 72.18450479233228
- type: cos_sim_recall
value: 84.5218611176058
- type: dot_accuracy
value: 76.16355983162958
- type: dot_ap
value: 85.16266644596179
- type: dot_f1
value: 77.86752827140549
- type: dot_precision
value: 72.18450479233228
- type: dot_recall
value: 84.5218611176058
- type: euclidean_accuracy
value: 76.16355983162958
- type: euclidean_ap
value: 85.14227717790371
- type: euclidean_f1
value: 77.86752827140549
- type: euclidean_precision
value: 72.18450479233228
- type: euclidean_recall
value: 84.5218611176058
- type: manhattan_accuracy
value: 75.99518941671678
- type: manhattan_ap
value: 85.10764940972825
- type: manhattan_f1
value: 77.80804694048618
- type: manhattan_precision
value: 70.49553825707233
- type: manhattan_recall
value: 86.81318681318682
- type: max_accuracy
value: 76.16355983162958
- type: max_ap
value: 85.16266644596179
- type: max_f1
value: 77.86752827140549
- task:
type: Retrieval
dataset:
type: C-MTEB/CovidRetrieval
name: MTEB CovidRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 73.762
- type: map_at_10
value: 81.76299999999999
- type: map_at_100
value: 81.974
- type: map_at_1000
value: 81.977
- type: map_at_3
value: 80.23400000000001
- type: map_at_5
value: 81.189
- type: mrr_at_1
value: 74.18299999999999
- type: mrr_at_10
value: 81.792
- type: mrr_at_100
value: 81.994
- type: mrr_at_1000
value: 81.997
- type: mrr_at_3
value: 80.277
- type: mrr_at_5
value: 81.221
- type: ndcg_at_1
value: 74.078
- type: ndcg_at_10
value: 85.195
- type: ndcg_at_100
value: 86.041
- type: ndcg_at_1000
value: 86.111
- type: ndcg_at_3
value: 82.171
- type: ndcg_at_5
value: 83.90100000000001
- type: precision_at_1
value: 74.078
- type: precision_at_10
value: 9.684
- type: precision_at_100
value: 1.004
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 29.470000000000002
- type: precision_at_5
value: 18.567
- type: recall_at_1
value: 73.762
- type: recall_at_10
value: 95.785
- type: recall_at_100
value: 99.368
- type: recall_at_1000
value: 99.895
- type: recall_at_3
value: 87.724
- type: recall_at_5
value: 91.93900000000001
- task:
type: Retrieval
dataset:
type: C-MTEB/DuRetrieval
name: MTEB DuRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 25.911
- type: map_at_10
value: 80.656
- type: map_at_100
value: 83.446
- type: map_at_1000
value: 83.485
- type: map_at_3
value: 55.998000000000005
- type: map_at_5
value: 70.577
- type: mrr_at_1
value: 90.14999999999999
- type: mrr_at_10
value: 93.35900000000001
- type: mrr_at_100
value: 93.419
- type: mrr_at_1000
value: 93.423
- type: mrr_at_3
value: 93.133
- type: mrr_at_5
value: 93.26100000000001
- type: ndcg_at_1
value: 90.14999999999999
- type: ndcg_at_10
value: 87.806
- type: ndcg_at_100
value: 90.4
- type: ndcg_at_1000
value: 90.776
- type: ndcg_at_3
value: 86.866
- type: ndcg_at_5
value: 85.619
- type: precision_at_1
value: 90.14999999999999
- type: precision_at_10
value: 42.045
- type: precision_at_100
value: 4.814
- type: precision_at_1000
value: 0.49
- type: precision_at_3
value: 78.0
- type: precision_at_5
value: 65.62
- type: recall_at_1
value: 25.911
- type: recall_at_10
value: 88.942
- type: recall_at_100
value: 97.56700000000001
- type: recall_at_1000
value: 99.62
- type: recall_at_3
value: 58.361
- type: recall_at_5
value: 75.126
- task:
type: Retrieval
dataset:
type: C-MTEB/EcomRetrieval
name: MTEB EcomRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 46.2
- type: map_at_10
value: 56.309
- type: map_at_100
value: 56.977
- type: map_at_1000
value: 56.995
- type: map_at_3
value: 53.55
- type: map_at_5
value: 55.19
- type: mrr_at_1
value: 46.2
- type: mrr_at_10
value: 56.309
- type: mrr_at_100
value: 56.977
- type: mrr_at_1000
value: 56.995
- type: mrr_at_3
value: 53.55
- type: mrr_at_5
value: 55.19
- type: ndcg_at_1
value: 46.2
- type: ndcg_at_10
value: 61.656
- type: ndcg_at_100
value: 64.714
- type: ndcg_at_1000
value: 65.217
- type: ndcg_at_3
value: 56.022000000000006
- type: ndcg_at_5
value: 58.962
- type: precision_at_1
value: 46.2
- type: precision_at_10
value: 7.86
- type: precision_at_100
value: 0.9249999999999999
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 21.067
- type: precision_at_5
value: 14.06
- type: recall_at_1
value: 46.2
- type: recall_at_10
value: 78.60000000000001
- type: recall_at_100
value: 92.5
- type: recall_at_1000
value: 96.5
- type: recall_at_3
value: 63.2
- type: recall_at_5
value: 70.3
- task:
type: Classification
dataset:
type: C-MTEB/IFlyTek-classification
name: MTEB IFlyTek
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 47.03347441323585
- type: f1
value: 35.50895794566714
- task:
type: Classification
dataset:
type: C-MTEB/JDReview-classification
name: MTEB JDReview
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 86.73545966228893
- type: ap
value: 55.43694740493539
- type: f1
value: 81.47218440859787
- task:
type: STS
dataset:
type: C-MTEB/LCQMC
name: MTEB LCQMC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 70.49478085579923
- type: cos_sim_spearman
value: 76.28442852235379
- type: euclidean_pearson
value: 74.90910715249527
- type: euclidean_spearman
value: 76.28443517178847
- type: manhattan_pearson
value: 74.90744903779758
- type: manhattan_spearman
value: 76.2886829916495
- task:
type: Retrieval
dataset:
type: C-MTEB/MMarcoRetrieval
name: MTEB MMarcoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 64.798
- type: map_at_10
value: 74.263
- type: map_at_100
value: 74.59
- type: map_at_1000
value: 74.601
- type: map_at_3
value: 72.382
- type: map_at_5
value: 73.59700000000001
- type: mrr_at_1
value: 67.049
- type: mrr_at_10
value: 74.86500000000001
- type: mrr_at_100
value: 75.155
- type: mrr_at_1000
value: 75.165
- type: mrr_at_3
value: 73.21600000000001
- type: mrr_at_5
value: 74.259
- type: ndcg_at_1
value: 67.049
- type: ndcg_at_10
value: 78.104
- type: ndcg_at_100
value: 79.56400000000001
- type: ndcg_at_1000
value: 79.85600000000001
- type: ndcg_at_3
value: 74.54499999999999
- type: ndcg_at_5
value: 76.587
- type: precision_at_1
value: 67.049
- type: precision_at_10
value: 9.493
- type: precision_at_100
value: 1.022
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 28.189999999999998
- type: precision_at_5
value: 18.003
- type: recall_at_1
value: 64.798
- type: recall_at_10
value: 89.328
- type: recall_at_100
value: 95.916
- type: recall_at_1000
value: 98.223
- type: recall_at_3
value: 79.93599999999999
- type: recall_at_5
value: 84.789
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (zh-CN)
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.01815736381977
- type: f1
value: 61.07806329750582
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (zh-CN)
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.94754539340954
- type: f1
value: 68.76446930296682
- task:
type: Retrieval
dataset:
type: C-MTEB/MedicalRetrieval
name: MTEB MedicalRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 50.1
- type: map_at_10
value: 56.406
- type: map_at_100
value: 56.958
- type: map_at_1000
value: 57.007
- type: map_at_3
value: 55.083000000000006
- type: map_at_5
value: 55.952999999999996
- type: mrr_at_1
value: 50.1
- type: mrr_at_10
value: 56.401999999999994
- type: mrr_at_100
value: 56.955
- type: mrr_at_1000
value: 57.004
- type: mrr_at_3
value: 55.05
- type: mrr_at_5
value: 55.95
- type: ndcg_at_1
value: 50.1
- type: ndcg_at_10
value: 59.384
- type: ndcg_at_100
value: 62.339
- type: ndcg_at_1000
value: 63.756
- type: ndcg_at_3
value: 56.657999999999994
- type: ndcg_at_5
value: 58.267
- type: precision_at_1
value: 50.1
- type: precision_at_10
value: 6.87
- type: precision_at_100
value: 0.832
- type: precision_at_1000
value: 0.095
- type: precision_at_3
value: 20.4
- type: precision_at_5
value: 13.04
- type: recall_at_1
value: 50.1
- type: recall_at_10
value: 68.7
- type: recall_at_100
value: 83.2
- type: recall_at_1000
value: 94.6
- type: recall_at_3
value: 61.199999999999996
- type: recall_at_5
value: 65.2
- task:
type: Reranking
dataset:
type: C-MTEB/Mmarco-reranking
name: MTEB MMarcoReranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 27.159122893681587
- type: mrr
value: 25.659126984126985
- task:
type: Classification
dataset:
type: C-MTEB/MultilingualSentiment-classification
name: MTEB MultilingualSentiment
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 73.02666666666667
- type: f1
value: 72.47691397067602
- task:
type: PairClassification
dataset:
type: C-MTEB/OCNLI
name: MTEB Ocnli
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 67.0817541959935
- type: cos_sim_ap
value: 72.29133043915637
- type: cos_sim_f1
value: 72.71207689093188
- type: cos_sim_precision
value: 60.16597510373444
- type: cos_sim_recall
value: 91.86906019007391
- type: dot_accuracy
value: 67.0817541959935
- type: dot_ap
value: 72.29133043915637
- type: dot_f1
value: 72.71207689093188
- type: dot_precision
value: 60.16597510373444
- type: dot_recall
value: 91.86906019007391
- type: euclidean_accuracy
value: 67.0817541959935
- type: euclidean_ap
value: 72.29133043915637
- type: euclidean_f1
value: 72.71207689093188
- type: euclidean_precision
value: 60.16597510373444
- type: euclidean_recall
value: 91.86906019007391
- type: manhattan_accuracy
value: 66.91932864103953
- type: manhattan_ap
value: 72.20070509521395
- type: manhattan_f1
value: 72.52839713925118
- type: manhattan_precision
value: 60.27972027972028
- type: manhattan_recall
value: 91.02428722280888
- type: max_accuracy
value: 67.0817541959935
- type: max_ap
value: 72.29133043915637
- type: max_f1
value: 72.71207689093188
- task:
type: Classification
dataset:
type: C-MTEB/OnlineShopping-classification
name: MTEB OnlineShopping
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 90.75000000000001
- type: ap
value: 87.99706544930007
- type: f1
value: 90.72973221476978
- task:
type: STS
dataset:
type: C-MTEB/PAWSX
name: MTEB PAWSX
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 33.57372874898899
- type: cos_sim_spearman
value: 37.9718472605281
- type: euclidean_pearson
value: 38.52264008741102
- type: euclidean_spearman
value: 37.97184654854654
- type: manhattan_pearson
value: 38.50412571398273
- type: manhattan_spearman
value: 37.98038173979437
- task:
type: STS
dataset:
type: C-MTEB/QBQTC
name: MTEB QBQTC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 37.510457667606225
- type: cos_sim_spearman
value: 37.83522430820119
- type: euclidean_pearson
value: 36.65815519443564
- type: euclidean_spearman
value: 37.83519816393499
- type: manhattan_pearson
value: 36.66835898210608
- type: manhattan_spearman
value: 37.85390202705368
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (zh)
config: zh
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 66.9953337569138
- type: cos_sim_spearman
value: 67.27632129468024
- type: euclidean_pearson
value: 65.83716645437758
- type: euclidean_spearman
value: 67.27632129468024
- type: manhattan_pearson
value: 65.81209103940279
- type: manhattan_spearman
value: 67.26678679870099
- task:
type: STS
dataset:
type: C-MTEB/STSB
name: MTEB STSB
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 75.73719311549382
- type: cos_sim_spearman
value: 75.71173848950517
- type: euclidean_pearson
value: 75.23070020894484
- type: euclidean_spearman
value: 75.71173839940812
- type: manhattan_pearson
value: 75.23517292603057
- type: manhattan_spearman
value: 75.74250916645184
- task:
type: Reranking
dataset:
type: C-MTEB/T2Reranking
name: MTEB T2Reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 66.8596523608508
- type: mrr
value: 76.9288884590171
- task:
type: Retrieval
dataset:
type: C-MTEB/T2Retrieval
name: MTEB T2Retrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 26.618000000000002
- type: map_at_10
value: 74.884
- type: map_at_100
value: 78.65299999999999
- type: map_at_1000
value: 78.724
- type: map_at_3
value: 52.507999999999996
- type: map_at_5
value: 64.52799999999999
- type: mrr_at_1
value: 88.453
- type: mrr_at_10
value: 91.157
- type: mrr_at_100
value: 91.263
- type: mrr_at_1000
value: 91.268
- type: mrr_at_3
value: 90.672
- type: mrr_at_5
value: 90.96499999999999
- type: ndcg_at_1
value: 88.453
- type: ndcg_at_10
value: 82.759
- type: ndcg_at_100
value: 86.709
- type: ndcg_at_1000
value: 87.41499999999999
- type: ndcg_at_3
value: 84.194
- type: ndcg_at_5
value: 82.645
- type: precision_at_1
value: 88.453
- type: precision_at_10
value: 41.369
- type: precision_at_100
value: 4.9910000000000005
- type: precision_at_1000
value: 0.515
- type: precision_at_3
value: 73.79400000000001
- type: precision_at_5
value: 61.80799999999999
- type: recall_at_1
value: 26.618000000000002
- type: recall_at_10
value: 81.772
- type: recall_at_100
value: 94.55
- type: recall_at_1000
value: 98.184
- type: recall_at_3
value: 54.26499999999999
- type: recall_at_5
value: 67.963
- task:
type: Classification
dataset:
type: C-MTEB/TNews-classification
name: MTEB TNews
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 50.690000000000005
- type: f1
value: 48.77079213417325
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringP2P
name: MTEB ThuNewsClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 62.14566804144758
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringS2S
name: MTEB ThuNewsClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 54.66890415410679
- task:
type: Retrieval
dataset:
type: C-MTEB/VideoRetrieval
name: MTEB VideoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 55.900000000000006
- type: map_at_10
value: 66.188
- type: map_at_100
value: 66.67699999999999
- type: map_at_1000
value: 66.691
- type: map_at_3
value: 64.017
- type: map_at_5
value: 65.462
- type: mrr_at_1
value: 55.800000000000004
- type: mrr_at_10
value: 66.13799999999999
- type: mrr_at_100
value: 66.62700000000001
- type: mrr_at_1000
value: 66.64099999999999
- type: mrr_at_3
value: 63.967
- type: mrr_at_5
value: 65.412
- type: ndcg_at_1
value: 55.900000000000006
- type: ndcg_at_10
value: 70.961
- type: ndcg_at_100
value: 73.22
- type: ndcg_at_1000
value: 73.583
- type: ndcg_at_3
value: 66.61
- type: ndcg_at_5
value: 69.18900000000001
- type: precision_at_1
value: 55.900000000000006
- type: precision_at_10
value: 8.58
- type: precision_at_100
value: 0.9610000000000001
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 24.7
- type: precision_at_5
value: 16.06
- type: recall_at_1
value: 55.900000000000006
- type: recall_at_10
value: 85.8
- type: recall_at_100
value: 96.1
- type: recall_at_1000
value: 98.9
- type: recall_at_3
value: 74.1
- type: recall_at_5
value: 80.30000000000001
- task:
type: Classification
dataset:
type: C-MTEB/waimai-classification
name: MTEB Waimai
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 86.77
- type: ap
value: 70.21134107638184
- type: f1
value: 85.22521777795022
---
## stella model
**News**
**[2023-10-19]** Release stella-base-en-v2. The model is simple to use and **does not need any prefix text**.\
**[2023-10-12]** Release stella-base-zh-v2 and stella-large-zh-v2. The two models perform better, are simple to use, and **do not need any prefix text**.\
**[2023-09-11]** Release stella-base-zh and stella-large-zh.
stella is a general-purpose text encoder, which mainly includes the following models:
| Model Name | Model Size (GB) | Dimension | Sequence Length | Language | Need instruction for retrieval? |
|:------------------:|:---------------:|:---------:|:---------------:|:--------:|:-------------------------------:|
| stella-base-en-v2 | 0.2 | 768 | 512 | English | No |
| stella-large-zh-v2 | 0.65 | 1024 | 1024 | Chinese | No |
| stella-base-zh-v2 | 0.2 | 768 | 1024 | Chinese | No |
| stella-large-zh | 0.65 | 1024 | 1024 | Chinese | Yes |
| stella-base-zh | 0.2 | 768 | 1024 | Chinese | Yes |
The full training approach and process are documented in [blog post 1](https://zhuanlan.zhihu.com/p/655322183) and [blog post 2](https://zhuanlan.zhihu.com/p/662209559) (both in Chinese); discussion is welcome.
**Training data:**
1. Open-source data (wudao_base_200GB [1], m3e [2] and simclue [3]), with an emphasis on texts longer than 512 characters.
2. A batch of (question, paragraph) and (sentence, paragraph) pairs constructed on a general corpus with an LLM.
**Training methods:**
1. Contrastive learning loss
2. Contrastive learning loss with hard negatives (hard negatives built with bm25 and with vectors, respectively)
3. EWC (Elastic Weights Consolidation) [4]
4. cosent loss [5] (see the sketch below)
5. One iterator per data type, with the loss computed and the model updated separately for each
Building on the stella models, stella-v2 uses more training data and removes the leading instruction text (such as piccolo's `查询:` / `结果:` and e5's `query:` / `passage:`) through knowledge distillation.
**Initial weights:**\
stella-base-zh and stella-large-zh are initialized from piccolo-base-zh [6] and piccolo-large-zh respectively, and the position embeddings for positions 512-1024 are initialized with hierarchically decomposed position encodings [7].\
Thanks to SenseTime Research for open-sourcing the [piccolo models](https://huggingface.co/sensenova).
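For reference, here is a minimal PyTorch sketch of the cosent loss described in reference [5]; the pair layout and the λ=20 scale are illustrative assumptions, not the exact training code:
```python
import torch

def cosent_loss(cos_pos, cos_neg, lam=20.0):
    # cos_pos: cosine scores of positive pairs, shape (P,)
    # cos_neg: cosine scores of negative pairs, shape (N,)
    # loss = log(1 + sum_{i,j} exp(lam * (cos_neg[j] - cos_pos[i])))
    diff = lam * (cos_neg.unsqueeze(0) - cos_pos.unsqueeze(1))  # shape (P, N)
    diff = torch.cat([torch.zeros(1, device=diff.device), diff.flatten()])
    return torch.logsumexp(diff, dim=0)
```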
## Metric
#### C-MTEB leaderboard (Chinese)
| Model Name | Model Size (GB) | Dimension | Sequence Length | Average (35) | Classification (9) | Clustering (4) | Pair Classification (2) | Reranking (4) | Retrieval (8) | STS (8) |
|:------------------:|:---------------:|:---------:|:---------------:|:------------:|:------------------:|:--------------:|:-----------------------:|:-------------:|:-------------:|:-------:|
| stella-large-zh-v2 | 0.65 | 1024 | 1024 | 65.13 | 69.05 | 49.16 | 82.68 | 66.41 | 70.14 | 58.66 |
| stella-base-zh-v2 | 0.2 | 768 | 1024 | 64.36 | 68.29 | 49.4 | 79.95 | 66.1 | 70.08 | 56.92 |
| stella-large-zh | 0.65 | 1024 | 1024 | 64.54 | 67.62 | 48.65 | 78.72 | 65.98 | 71.02 | 58.3 |
| stella-base-zh | 0.2 | 768 | 1024 | 64.16 | 67.77 | 48.7 | 76.09 | 66.95 | 71.07 | 56.54 |
#### MTEB leaderboard (English)
| Model Name | Model Size (GB) | Dimension | Sequence Length | Average (56) | Classification (12) | Clustering (11) | Pair Classification (3) | Reranking (4) | Retrieval (15) | STS (10) | Summarization (1) |
|:-----------------:|:---------------:|:---------:|:---------------:|:------------:|:-------------------:|:---------------:|:-----------------------:|:-------------:|:--------------:|:--------:|:------------------:|
| stella-base-en-v2 | 0.2 | 768 | 512 | 62.61 | 75.28 | 44.9 | 86.45 | 58.77 | 50.1 | 83.02 | 32.52 |
#### Reproduce our results
**C-MTEB:**
```python
import torch
import numpy as np
from typing import List
from mteb import MTEB
from sentence_transformers import SentenceTransformer
class FastTextEncoder():
def __init__(self, model_name):
self.model = SentenceTransformer(model_name).cuda().half().eval()
self.model.max_seq_length = 512
def encode(
self,
input_texts: List[str],
*args,
**kwargs
):
new_sens = list(set(input_texts))
new_sens.sort(key=lambda x: len(x), reverse=True)
vecs = self.model.encode(
new_sens, normalize_embeddings=True, convert_to_numpy=True, batch_size=256
).astype(np.float32)
sen2arrid = {sen: idx for idx, sen in enumerate(new_sens)}
vecs = vecs[[sen2arrid[sen] for sen in input_texts]]
torch.cuda.empty_cache()
return vecs
if __name__ == '__main__':
model_name = "infgrad/stella-base-zh-v2"
output_folder = "zh_mteb_results/stella-base-zh-v2"
task_names = [t.description["name"] for t in MTEB(task_langs=['zh', 'zh-CN']).tasks]
model = FastTextEncoder(model_name)
for task in task_names:
MTEB(tasks=[task], task_langs=['zh', 'zh-CN']).run(model, output_folder=output_folder)
```
**MTEB:**
You can use the official script to reproduce our results: [scripts/run_mteb_english.py](https://github.com/embeddings-benchmark/mteb/blob/main/scripts/run_mteb_english.py)
#### Evaluation for long text
In practice, we found that the evaluation texts in C-MTEB are almost all shorter than 512 characters;
even worse, for the texts that are longer than 512, the key information sits in the first half.
A CMRC2018 sample illustrates the problem:
```
question: 《无双大蛇z》是谁旗下ω-force开发的动作游戏?
passage:《无双大蛇z》是光荣旗下ω-force开发的动作游戏,于2009年3月12日登陆索尼playstation3,并于2009年11月27日推......
```
The passage is over 800 characters long (more than 512), but for this question only the first ~40 characters are needed for retrieval; the extra content is noise for the model and actually hurts performance.\
In short, existing datasets have two problems:\
1) too few texts are longer than 512, and\
2) even for texts longer than 512, only the first 512 characters matter for retrieval,\
so they **cannot accurately evaluate a model's long-text encoding ability.**
To address this, we collected relevant open-source data, filtered it with rules, and assembled six long-text test sets:
- CMRC2018: general encyclopedia
- CAIL: legal reading comprehension
- DRCD: traditional-Chinese encyclopedia, converted to simplified Chinese
- Military: military Q&A
- Squad: English reading comprehension, translated into Chinese
- Multifieldqa_zh: Tsinghua's benchmark for LLM long-text understanding [9]
The processing rule keeps samples whose answer appears after position 512; shorter test samples are down-sampled so that the short-to-long ratio is roughly 1:2, so a model must handle both short and long texts.
Except for the Military dataset, the other five test sets can be downloaded here: https://drive.google.com/file/d/1WC6EWaCbVgz-vPMDFH4TwAMkLyh5WNcN/view?usp=sharing
The evaluation metric is Recall@5; the results are as follows:
| Dataset | piccolo-base-zh | piccolo-large-zh | bge-base-zh | bge-large-zh | stella-base-zh | stella-large-zh |
|:---------------:|:---------------:|:----------------:|:-----------:|:------------:|:--------------:|:---------------:|
| CMRC2018 | 94.34 | 93.82 | 91.56 | 93.12 | 96.08 | 95.56 |
| CAIL | 28.04 | 33.64 | 31.22 | 33.94 | 34.62 | 37.18 |
| DRCD | 78.25 | 77.9 | 78.34 | 80.26 | 86.14 | 84.58 |
| Military | 76.61 | 73.06 | 75.65 | 75.81 | 83.71 | 80.48 |
| Squad | 91.21 | 86.61 | 87.87 | 90.38 | 93.31 | 91.21 |
| Multifieldqa_zh | 81.41 | 83.92 | 83.92 | 83.42 | 79.9 | 80.4 |
| **Average** | 74.98 | 74.83 | 74.76 | 76.15 | **78.96** | **78.24** |
**Note:** Because long-text evaluation data is scarce, the train splits were also used during construction. If you run your own evaluation, check your model's training data to avoid leakage.
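For reference, a minimal sketch of how Recall@5 could be computed on such a retrieval test set (all variable names are illustrative, not part of the released evaluation code):
```python
import numpy as np

def recall_at_5(query_vecs, passage_vecs, gold_passage_ids):
    # query_vecs: (Q, d) L2-normalized; passage_vecs: (P, d) L2-normalized
    # gold_passage_ids[i] is the index of the relevant passage for query i
    scores = query_vecs @ passage_vecs.T          # cosine similarity matrix (Q, P)
    top5 = np.argsort(-scores, axis=1)[:, :5]     # top-5 passage indices per query
    hits = [gold in row for gold, row in zip(gold_passage_ids, top5)]
    return float(np.mean(hits))
```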
## Usage
#### stella Chinese models
stella-base-zh and stella-large-zh: these models are trained on top of piccolo, so **their usage is identical to piccolo's**: for retrieval and reranking tasks, prepend `查询: ` to the query and `结果: ` to the passage (see the prefix sketch after the code examples below). No prefix is needed for short-short (sentence-to-sentence) matching.
stella-base-zh-v2 and stella-large-zh-v2: these models are simple to use and **need no prefix text in any scenario**.
All stella Chinese models use mean pooling to produce the text embedding.
Usage with the sentence-transformers library:
```python
from sentence_transformers import SentenceTransformer
sentences = ["数据1", "数据2"]
model = SentenceTransformer('infgrad/stella-base-zh-v2')
print(model.max_seq_length)
embeddings_1 = model.encode(sentences, normalize_embeddings=True)
embeddings_2 = model.encode(sentences, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
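For the v1 models (stella-base-zh / stella-large-zh), a minimal sketch of adding the retrieval prefixes mentioned above; the query and passage strings are illustrative:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('infgrad/stella-base-zh')
queries = ["如何更换手机屏幕"]
passages = ["手机屏幕的更换步骤如下……"]
# v1 models expect the piccolo-style prefixes for retrieval / reranking
q_vecs = model.encode(["查询: " + q for q in queries], normalize_embeddings=True)
p_vecs = model.encode(["结果: " + p for p in passages], normalize_embeddings=True)
print(q_vecs @ p_vecs.T)
```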
Using the transformers library directly:
```python
from transformers import AutoModel, AutoTokenizer
from sklearn.preprocessing import normalize
model = AutoModel.from_pretrained('infgrad/stella-base-zh-v2')
tokenizer = AutoTokenizer.from_pretrained('infgrad/stella-base-zh-v2')
sentences = ["数据1", "数据ABCDEFGH"]
batch_data = tokenizer(
batch_text_or_text_pairs=sentences,
padding="longest",
return_tensors="pt",
max_length=1024,
truncation=True,
)
attention_mask = batch_data["attention_mask"]
model_output = model(**batch_data)
last_hidden = model_output.last_hidden_state.masked_fill(~attention_mask[..., None].bool(), 0.0)
vectors = last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
vectors = normalize(vectors, norm="l2", axis=1, )
print(vectors.shape) # 2,768
```
#### stella models for English
**Using Sentence-Transformers:**
```python
from sentence_transformers import SentenceTransformer
sentences = ["one car come", "one car go"]
model = SentenceTransformer('infgrad/stella-base-en-v2')
print(model.max_seq_length)
embeddings_1 = model.encode(sentences, normalize_embeddings=True)
embeddings_2 = model.encode(sentences, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
**Using HuggingFace Transformers:**
```python
from transformers import AutoModel, AutoTokenizer
from sklearn.preprocessing import normalize
model = AutoModel.from_pretrained('infgrad/stella-base-en-v2')
tokenizer = AutoTokenizer.from_pretrained('infgrad/stella-base-en-v2')
sentences = ["one car come", "one car go"]
batch_data = tokenizer(
batch_text_or_text_pairs=sentences,
padding="longest",
return_tensors="pt",
max_length=512,
truncation=True,
)
attention_mask = batch_data["attention_mask"]
model_output = model(**batch_data)
last_hidden = model_output.last_hidden_state.masked_fill(~attention_mask[..., None].bool(), 0.0)
vectors = last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
vectors = normalize(vectors, norm="l2", axis=1, )
print(vectors.shape) # 2,768
```
## Training Detail
**Hardware:** a single A100-80GB GPU
**Environment:** torch 1.13.*; transformers Trainer + DeepSpeed + gradient checkpointing
**Learning rate:** 1e-6
**batch_size:** 1024 for the base models, plus an extra 20% hard negatives; 768 for the large models, plus an extra 20% hard negatives
**Data size:** about 1M samples for the first-version models, of which roughly 200K were constructed with an LLM (a 13B model); the v2 models were trained on about 20M samples.
## ToDoList
**Evaluation stability:**
During evaluation we found that the Clustering tasks can deviate slightly (about ±0.0x) from the official results because the clustering code does not set a random seed; the gap is negligible and does not change the conclusions.
**Higher-quality long-text training and test data:** most of the training data was constructed with a 13B model and certainly contains noise.
The test data is largely derived from MRC datasets, so the questions are all factoid-style and do not match the real-world distribution.
**Out-of-domain performance:** although many embedding models have appeared recently, on less generic domains none of these models (stella, OpenAI and Cohere included)
beats BM25.
## Reference
1. https://www.scidb.cn/en/detail?dataSetId=c6a3fe684227415a9db8e21bac4a15ab
2. https://github.com/wangyuxinwhy/uniem
3. https://github.com/CLUEbenchmark/SimCLUE
4. https://arxiv.org/abs/1612.00796
5. https://kexue.fm/archives/8847
6. https://huggingface.co/sensenova/piccolo-base-zh
7. https://kexue.fm/archives/7947
8. https://github.com/FlagOpen/FlagEmbedding
9. https://github.com/THUDM/LongBench
| 37,805 | [
[
-0.0256805419921875,
-0.0545654296875,
0.023712158203125,
0.035675048828125,
-0.0227508544921875,
-0.0205841064453125,
-0.0137939453125,
-0.026885986328125,
0.02508544921875,
0.0178070068359375,
-0.0455322265625,
-0.060150146484375,
-0.047576904296875,
0.016... |
Posos/ClinicalNER | 2023-07-28T17:03:54.000Z | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"token-classification",
"medical",
"fr",
"en",
"de",
"multilingual",
"es",
"it",
"dataset:Posos/MedNERF",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | Posos | null | null | Posos/ClinicalNER | 1 | 3,253 | transformers | 2023-07-28T15:03:49 | ---
license: cc-by-nc-sa-4.0
datasets:
- Posos/MedNERF
metrics:
- f1
tags:
- medical
widget:
- text: take two pills every morning during one week
- text: xeplion 50mg 2 fois par jour
- text: doliprane 500 1 comprimé effervescent le matin pendant une semaine
- text: >-
Der Patient sollte zehn Tage lang während dem Frühstück eine
Paracetamol-Tablette einnehmen
- text: una cápsula por la mañana y por la noche
model-index:
- name: Posos/ClinicalNER
results:
- task:
type: token-classification
name: Clinical NER
dataset:
type: Posos/MedNERF
name: MedNERF
split: test
metrics:
- type: f1
value: 0.804
name: micro-F1 score
- type: precision
value: 0.817
name: precision
- type: recall
value: 0.791
name: recall
- type: accuracy
value: 0.859
name: accuracy
language:
- fr
- en
- de
- multilingual
- es
- it
---
# ClinicalNER
## Model Description
This is a multilingual clinical NER model extracting DRUG, STRENGTH, FREQUENCY, DURATION, DOSAGE and FORM entities from a medical text.
## Evaluation Metrics on [MedNERF dataset](https://huggingface.co/datasets/Posos/MedNERF)
- Loss: 0.692
- Accuracy: 0.859
- Precision: 0.817
- Recall: 0.791
- micro-F1: 0.804
- macro-F1: 0.819
## Usage
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer

model = AutoModelForTokenClassification.from_pretrained("Posos/ClinicalNER")
tokenizer = AutoTokenizer.from_pretrained("Posos/ClinicalNER")

# Tokenize a prescription-like sentence and run the token classifier
inputs = tokenizer("Take 2 pills every morning", return_tensors="pt")
outputs = model(**inputs)  # outputs.logits has shape (batch, sequence_length, num_labels)
```
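To turn the raw logits into labeled entity spans, the token-classification pipeline can be used instead; a minimal sketch, assuming the checkpoint ships its id2label mapping (which the widget examples suggest):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Posos/ClinicalNER",
    aggregation_strategy="simple",  # merge sub-tokens into whole entity spans
)
print(ner("doliprane 500 1 comprimé effervescent le matin pendant une semaine"))
# -> list of dicts with entity_group (e.g. DRUG, DOSAGE, FREQUENCY, DURATION), score, word, start, end
```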
## Citation information
```
@inproceedings{mednerf,
title = "Multilingual Clinical NER: Translation or Cross-lingual Transfer?",
author = "Gaschi, Félix and Fontaine, Xavier and Rastin, Parisa and Toussaint, Yannick",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
publisher = "Association for Computational Linguistics",
year = "2023"
}
``` | 2,020 | [
[
0.004695892333984375,
-0.0411376953125,
0.0265655517578125,
0.027374267578125,
-0.0178375244140625,
-0.01522064208984375,
-0.01136016845703125,
-0.0108795166015625,
0.026580810546875,
0.01806640625,
-0.019927978515625,
-0.0543212890625,
-0.0709228515625,
0.0... |
d4data/bias-detection-model | 2022-08-09T02:40:59.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"Text Classification",
"en",
"co2_eq_emissions",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | d4data | null | null | d4data/bias-detection-model | 19 | 3,247 | transformers | 2022-03-02T23:29:05 | ---
language:
- en
tags:
- Text Classification
co2_eq_emissions: 0.319355
widget:
- text: "Nevertheless, Trump and other Republicans have tarred the protests as havens for terrorists intent on destroying property."
example_title: "Biased example 1"
- text: "Billie Eilish issues apology for mouthing an anti-Asian derogatory term in a resurfaced video."
example_title: "Biased example 2"
- text: "Christians should make clear that the perpetuation of objectionable vaccines and the lack of alternatives is a kind of coercion."
example_title: "Biased example 3"
- text: "There have been a protest by a group of people"
example_title: "Non-Biased example 1"
- text: "While emphasizing he’s not singling out either party, Cohen warned about the danger of normalizing white supremacist ideology."
example_title: "Non-Biased example 2"
---
## About the Model
An English sequence classification model trained on the MBAD dataset to detect bias in sentences from news articles. It was built on top of the distilbert-base-uncased model and trained for 30 epochs with a batch size of 16, a learning rate of 5e-5, and a maximum sequence length of 512.
- Dataset : MBAD Data
- Carbon emission 0.319355 Kg
| Train Accuracy | Validation Accuracy | Train loss | Test loss |
|---------------:| -------------------:| ----------:|----------:|
| 76.97 | 62.00 | 0.45 | 0.96 |
## Usage
The easiest way is to use the Hugging Face Inference API (see the sketch after the code block below); the second method is the pipeline object offered by the transformers library.
```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("d4data/bias-detection-model")
model = TFAutoModelForSequenceClassification.from_pretrained("d4data/bias-detection-model")
classifier = pipeline('text-classification', model=model, tokenizer=tokenizer) # cuda = 0,1 based on gpu availability
classifier("The irony, of course, is that the exhibit that invites people to throw trash at vacuuming Ivanka Trump lookalike reflects every stereotype feminists claim to stand against, oversexualizing Ivanka’s body and ignoring her hard work.")
```
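A minimal sketch of the first option, calling the hosted Inference API over HTTP; the token is a placeholder and the exact label names in the response are illustrative:
```python
import requests

API_URL = "https://api-inference.huggingface.co/models/d4data/bias-detection-model"
headers = {"Authorization": "Bearer hf_xxx"}  # placeholder token, replace with your own

payload = {"inputs": "Billie Eilish issues apology for mouthing an anti-Asian derogatory term in a resurfaced video."}
response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())  # e.g. a list of {"label": ..., "score": ...} dicts per input
```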
## Author
This model is part of the research topic "Bias and Fairness in AI" conducted by Deepak John Reji and Shaina Raza. If you use this work (code, model or dataset), please cite and star the repository:
> Bias & Fairness in AI, (2022), GitHub repository, <https://github.com/dreji18/Fairness-in-AI>
| 2,506 | [
[
-0.0190582275390625,
-0.03778076171875,
0.02276611328125,
0.00496673583984375,
-0.004398345947265625,
-0.0030269622802734375,
0.01552581787109375,
-0.023193359375,
-0.013397216796875,
0.0246124267578125,
-0.031402587890625,
-0.046539306640625,
-0.05902099609375,... |
TheBloke/zephyr-7B-alpha-AWQ | 2023-10-14T07:12:09.000Z | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"en",
"dataset:stingning/ultrachat",
"dataset:openbmb/UltraFeedback",
"arxiv:2305.18290",
"license:mit",
"text-generation-inference",
"region:us"
] | text-generation | TheBloke | null | null | TheBloke/zephyr-7B-alpha-AWQ | 12 | 3,247 | transformers | 2023-10-11T03:26:12 | ---
base_model: HuggingFaceH4/zephyr-7b-alpha
datasets:
- stingning/ultrachat
- openbmb/UltraFeedback
inference: false
language:
- en
license: mit
model-index:
- name: zephyr-7b-alpha
results: []
model_creator: Hugging Face H4
model_name: Zephyr 7B Alpha
model_type: mistral
prompt_template: '<|system|>
</s>
<|user|>
{prompt}</s>
<|assistant|>
'
quantized_by: TheBloke
tags:
- generated_from_trainer
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Zephyr 7B Alpha - AWQ
- Model creator: [Hugging Face H4](https://huggingface.co/HuggingFaceH4)
- Original model: [Zephyr 7B Alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha)
<!-- description start -->
## Description
This repo contains AWQ model files for [Hugging Face H4's Zephyr 7B Alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference.
It is also now supported by continuous batching server [vLLM](https://github.com/vllm-project/vllm), allowing use of Llama AWQ models for high-throughput concurrent inference in multi-user server scenarios.
As of September 25th 2023, preliminary Llama-only AWQ support has also been added to [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference).
Note that, at the time of writing, overall throughput is still lower than running vLLM or TGI with unquantised models, however using AWQ enables using much smaller GPUs which can lead to easier deployment and overall cost savings. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/zephyr-7B-alpha-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/zephyr-7B-alpha-GGUF)
* [Hugging Face H4's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Zephyr
```
<|system|>
</s>
<|user|>
{prompt}</s>
<|assistant|>
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/zephyr-7B-alpha-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.15 GB
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Serving this model from vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
Note: at the time of writing, vLLM has not yet done a new release with AWQ support.
If you try the vLLM examples below and get an error about `quantization` being unrecognised, or other AWQ-related issues, please install vLLM from Github source.
- When using vLLM as a server, pass the `--quantization awq` parameter, for example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/zephyr-7B-alpha-AWQ --quantization awq --dtype half
```
When using vLLM from Python code, pass the `quantization=awq` parameter, for example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Hello, my name is",
"The president of the United States is",
"The capital of France is",
"The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/zephyr-7B-alpha-AWQ", quantization="awq", dtype="half")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm start -->
<!-- README_AWQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/zephyr-7B-alpha-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''<|system|>
</s>
<|user|>
{prompt}</s>
<|assistant|>
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## How to use this AWQ model from Python code
### Install the necessary packages
Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.1 or later
```shell
pip3 install autoawq
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### You can then try the following example code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
model_name_or_path = "TheBloke/zephyr-7B-alpha-AWQ"
# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
trust_remote_code=False, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)
prompt = "Tell me about AI"
prompt_template=f'''<|system|>
</s>
<|user|>
{prompt}</s>
<|assistant|>
'''
print("\n\n*** Generate:")
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
# Generate output
generation_output = model.generate(
tokens,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
max_new_tokens=512
)
print("Output: ", tokenizer.decode(generation_output[0]))
"""
# Inference should be possible with transformers pipeline as well in future
# But currently this is not yet supported by AutoAWQ (correct as of September 25th 2023)
from transformers import pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
"""
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ)
- [vLLM](https://github.com/vllm-project/vllm)
- [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
TGI merged AWQ support on September 25th, 2023: [TGI PR #1054](https://github.com/huggingface/text-generation-inference/pull/1054). Use the `:latest` Docker container until the next TGI release is made.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Hugging Face H4's Zephyr 7B Alpha
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
<img src="https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha/resolve/main/thumbnail.png" alt="Zephyr Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Model Card for Zephyr 7B Alpha
Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr-7B-α is the first model in the series, and is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) that was trained on a mix of publicly available, synthetic datasets using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290). We found that removing the in-built alignment of these datasets boosted performance on [MT Bench](https://huggingface.co/spaces/lmsys/mt-bench) and made the model more helpful. However, this means that the model is likely to generate problematic text when prompted to do so and should only be used for educational and research purposes.
## Model description
- **Model type:** A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily English
- **License:** MIT
- **Finetuned from model:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/alignment-handbook
- **Demo:** https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat
## Intended uses & limitations
The model was initially fine-tuned on a variant of the [`UltraChat`](https://huggingface.co/datasets/stingning/ultrachat) dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT. We then further aligned the model with [🤗 TRL's](https://github.com/huggingface/trl) `DPOTrainer` on the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, which contains 64k prompts and model completions that are ranked by GPT-4. As a result, the model can be used for chat and you can check out our [demo](https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat) to test its capabilities.
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-alpha", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{
"role": "system",
"content": "You are a friendly chatbot who always responds in the style of a pirate",
},
{"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
# <|system|>
# You are a friendly chatbot who always responds in the style of a pirate.</s>
# <|user|>
# How many helicopters can a human eat in one sitting?</s>
# <|assistant|>
# Ah, me hearty matey! But yer question be a puzzler! A human cannot eat a helicopter in one sitting, as helicopters are not edible. They be made of metal, plastic, and other materials, not food!
```
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Zephyr-7B-α has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
The size and composition of the corpus used to train the base model (`mistralai/Mistral-7B-v0.1`) are also unknown; however, it is likely to have included a mix of Web data and technical sources like books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example of this.
## Training and evaluation data
Zephyr 7B Alpha achieves the following results on the evaluation set:
- Loss: 0.4605
- Rewards/chosen: -0.5053
- Rewards/rejected: -1.8752
- Rewards/accuracies: 0.7812
- Rewards/margins: 1.3699
- Logps/rejected: -327.4286
- Logps/chosen: -297.1040
- Logits/rejected: -2.7153
- Logits/chosen: -2.7447
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- total_train_batch_size: 32
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.5602 | 0.05 | 100 | 0.5589 | -0.3359 | -0.8168 | 0.7188 | 0.4809 | -306.2607 | -293.7161 | -2.6554 | -2.6797 |
| 0.4852 | 0.1 | 200 | 0.5136 | -0.5310 | -1.4994 | 0.8125 | 0.9684 | -319.9124 | -297.6181 | -2.5762 | -2.5957 |
| 0.5212 | 0.15 | 300 | 0.5168 | -0.1686 | -1.1760 | 0.7812 | 1.0074 | -313.4444 | -290.3699 | -2.6865 | -2.7125 |
| 0.5496 | 0.21 | 400 | 0.4835 | -0.1617 | -1.7170 | 0.8281 | 1.5552 | -324.2635 | -290.2326 | -2.7947 | -2.8218 |
| 0.5209 | 0.26 | 500 | 0.5054 | -0.4778 | -1.6604 | 0.7344 | 1.1826 | -323.1325 | -296.5546 | -2.8388 | -2.8667 |
| 0.4617 | 0.31 | 600 | 0.4910 | -0.3738 | -1.5180 | 0.7656 | 1.1442 | -320.2848 | -294.4741 | -2.8234 | -2.8521 |
| 0.4452 | 0.36 | 700 | 0.4838 | -0.4591 | -1.6576 | 0.7031 | 1.1986 | -323.0770 | -296.1796 | -2.7401 | -2.7653 |
| 0.4674 | 0.41 | 800 | 0.5077 | -0.5692 | -1.8659 | 0.7656 | 1.2967 | -327.2416 | -298.3818 | -2.6740 | -2.6945 |
| 0.4656 | 0.46 | 900 | 0.4927 | -0.5279 | -1.6614 | 0.7656 | 1.1335 | -323.1518 | -297.5553 | -2.7817 | -2.8015 |
| 0.4102 | 0.52 | 1000 | 0.4772 | -0.5767 | -2.0667 | 0.7656 | 1.4900 | -331.2578 | -298.5311 | -2.7160 | -2.7455 |
| 0.4663 | 0.57 | 1100 | 0.4740 | -0.8038 | -2.1018 | 0.7656 | 1.2980 | -331.9604 | -303.0741 | -2.6994 | -2.7257 |
| 0.4737 | 0.62 | 1200 | 0.4716 | -0.3783 | -1.7015 | 0.7969 | 1.3232 | -323.9545 | -294.5634 | -2.6842 | -2.7135 |
| 0.4259 | 0.67 | 1300 | 0.4866 | -0.6239 | -1.9703 | 0.7812 | 1.3464 | -329.3312 | -299.4761 | -2.7046 | -2.7356 |
| 0.4935 | 0.72 | 1400 | 0.4747 | -0.5626 | -1.7600 | 0.7812 | 1.1974 | -325.1243 | -298.2491 | -2.7153 | -2.7444 |
| 0.4211 | 0.77 | 1500 | 0.4645 | -0.6099 | -1.9993 | 0.7656 | 1.3894 | -329.9109 | -299.1959 | -2.6944 | -2.7236 |
| 0.4931 | 0.83 | 1600 | 0.4684 | -0.6798 | -2.1082 | 0.7656 | 1.4285 | -332.0890 | -300.5934 | -2.7006 | -2.7305 |
| 0.5029 | 0.88 | 1700 | 0.4595 | -0.5063 | -1.8951 | 0.7812 | 1.3889 | -327.8267 | -297.1233 | -2.7108 | -2.7403 |
| 0.4965 | 0.93 | 1800 | 0.4613 | -0.5561 | -1.9079 | 0.7812 | 1.3518 | -328.0831 | -298.1203 | -2.7226 | -2.7523 |
| 0.4337 | 0.98 | 1900 | 0.4608 | -0.5066 | -1.8718 | 0.7656 | 1.3652 | -327.3599 | -297.1296 | -2.7175 | -2.7469 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.14.0
| 21,537 | [
[
-0.03497314453125,
-0.056884765625,
0.0285186767578125,
0.004970550537109375,
-0.01531219482421875,
-0.01314544677734375,
0.00649261474609375,
-0.04229736328125,
0.00246429443359375,
0.0228424072265625,
-0.049713134765625,
-0.0394287109375,
-0.020599365234375,
... |
squeezebert/squeezebert-mnli | 2020-12-11T22:02:13.000Z | [
"transformers",
"pytorch",
"squeezebert",
"arxiv:2006.11316",
"arxiv:1904.00962",
"endpoints_compatible",
"region:us"
] | null | squeezebert | null | null | squeezebert/squeezebert-mnli | 0 | 3,246 | transformers | 2022-03-02T23:29:05 | language: en
license: bsd
datasets:
- bookcorpus
- wikipedia
---
# SqueezeBERT pretrained model
This model, `squeezebert-mnli`, has been pretrained for the English language using a masked language modeling (MLM) and Sentence Order Prediction (SOP) objective and finetuned on the [Multi-Genre Natural Language Inference (MNLI)](https://cims.nyu.edu/~sbowman/multinli/) dataset.
SqueezeBERT was introduced in [this paper](https://arxiv.org/abs/2006.11316). This model is case-insensitive. The model architecture is similar to BERT-base, but with the pointwise fully-connected layers replaced with [grouped convolutions](https://blog.yani.io/filter-group-tutorial/).
The authors found that SqueezeBERT is 4.3x faster than `bert-base-uncased` on a Google Pixel 3 smartphone.
## Pretraining
### Pretraining data
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of thousands of unpublished books
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia)
### Pretraining procedure
The model is pretrained using the Masked Language Model (MLM) and Sentence Order Prediction (SOP) tasks.
(Author's note: If you decide to pretrain your own model, and you prefer to train with MLM only, that should work too.)
From the SqueezeBERT paper:
> We pretrain SqueezeBERT from scratch (without distillation) using the [LAMB](https://arxiv.org/abs/1904.00962) optimizer, and we employ the hyperparameters recommended by the LAMB authors: a global batch size of 8192, a learning rate of 2.5e-3, and a warmup proportion of 0.28. Following the LAMB paper's recommendations, we pretrain for 56k steps with a maximum sequence length of 128 and then for 6k steps with a maximum sequence length of 512.
## Finetuning
The SqueezeBERT paper presents 2 approaches to finetuning the model:
- "finetuning without bells and whistles" -- after pretraining the SqueezeBERT model, finetune it on each GLUE task
- "finetuning with bells and whistles" -- after pretraining the SqueezeBERT model, finetune it on a MNLI with distillation from a teacher model. Then, use the MNLI-finetuned SqueezeBERT model as a student model to finetune on each of the other GLUE tasks (e.g. RTE, MRPC, …) with distillation from a task-specific teacher model.
A detailed discussion of the hyperparameters used for finetuning is provided in the appendix of the [SqueezeBERT paper](https://arxiv.org/abs/2006.11316).
Note that finetuning SqueezeBERT with distillation is not yet implemented in this repo. If the author (Forrest Iandola - forrest.dnn@gmail.com) gets enough encouragement from the user community, he will add example code to Transformers for finetuning SqueezeBERT with distillation.
This model, `squeezebert/squeezebert-mnli`, is the "trained with bells and whistles" MNLI-finetuned SqueezeBERT model.
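A minimal sketch of running NLI inference with this checkpoint; the premise/hypothesis pair is illustrative, and the label names are read from the checkpoint's own config rather than assumed:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("squeezebert/squeezebert-mnli")
model = AutoModelForSequenceClassification.from_pretrained("squeezebert/squeezebert-mnli")

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1)[0]
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```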
### How to finetune
To try finetuning SqueezeBERT on the [MRPC](https://www.microsoft.com/en-us/download/details.aspx?id=52398) text classification task, you can run the following command:
```
./utils/download_glue_data.py
python examples/text-classification/run_glue.py \
--model_name_or_path squeezebert-base-headless \
--task_name mrpc \
--data_dir ./glue_data/MRPC \
--output_dir ./models/squeezebert_mrpc \
--overwrite_output_dir \
--do_train \
--do_eval \
--num_train_epochs 10 \
--learning_rate 3e-05 \
--per_device_train_batch_size 16 \
--save_steps 20000
```
## BibTeX entry and citation info
```
@article{2020_SqueezeBERT,
author = {Forrest N. Iandola and Albert E. Shaw and Ravi Krishna and Kurt W. Keutzer},
title = {{SqueezeBERT}: What can computer vision teach NLP about efficient neural networks?},
journal = {arXiv:2006.11316},
year = {2020}
}
```
| 3,738 | [
[
-0.0203704833984375,
-0.044464111328125,
0.0161590576171875,
0.01904296875,
-0.007427215576171875,
-0.0250396728515625,
-0.029876708984375,
-0.00699615478515625,
0.00669097900390625,
0.036712646484375,
-0.04193115234375,
-0.022430419921875,
-0.0577392578125,
... |
0xJustin/Dungeons-and-Diffusion | 2023-02-24T18:58:30.000Z | [
"diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 0xJustin | null | null | 0xJustin/Dungeons-and-Diffusion | 229 | 3,246 | diffusers | 2022-11-06T18:03:42 | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
FOR THE NEW VERSION DOWNLOAD 'D&Diffusion3.0_Protogen.ckpt'
The newest version is finetuned from Protogen to great effect. It also works great at resolutions greater than 512x512!
Species in new version: aarakocra, aasimar, air_genasi, centaur, dragonborn, drow, dwarf, earth_genasi, elf, firbolg, fire_genasi, gith, gnome, goblin, goliath, halfling, human, illithid, kenku, kobold, lizardfolk, minotaur, orc, tabaxi, thrikreen, tiefling, tortle, warforged, water_genasi
Classes in new version: Artificer, Bard, Barbarian, Cleric, Fighter, Druid, Monk, Paladin, Rogue, Ranger, Sorcerer, Warlock, Wizard, Noble, Townsperson
See the training dataset here for a list of races: https://huggingface.co/datasets/0xJustin/Dungeons-and-Diffusion
Model16000 is trained using `D&D character` as the class prompt, and for whatever reason it ~ seems ~ to work better for centaurs and aarakocra
Model30000 is trained using all of the images as the class images, and I think it emulates the commission DnD character style better. It works VERY well for most races, though sometimes I have to fight to get aarakocra to not be birds or centaurs to not be horses. Tieflings work great, but reining in their horns can be trouble. There is some bleed through between classes- especially for elf ears and horns. Including `elf ears` and `horns` as negative prompts seems to help.
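A minimal diffusers sketch for loading the checkpoint; the prompt and sampler settings are illustrative, and the negative prompt follows the advice above (so the example character is a halfling rather than a tiefling):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "0xJustin/Dungeons-and-Diffusion", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "full body portrait of a halfling rogue, D&D character, fantasy, intricate, highly detailed, concept art",
    negative_prompt="elf ears, horns, canvas frame, cartoon, 3d, photorealistic",
    num_inference_steps=25,
    guidance_scale=10,
    height=768,
    width=512,
).images[0]
image.save("dnd_character.png")
```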
Good prompts to try things out:
modelshoot style, (extremely detailed CG unity 8k wallpaper), full shot body photo of the most beautiful artwork in the world, english medieval pink (dragonborn druid) witch, black silk robe, nature magic, medieval era, painting by Ed Blinkey, Atey Ghailan, Studio Ghibli, by Jeremy Mann, Greg Manchess, Antonio Moro, trending on ArtStation, trending on CGSociety, Intricate, High Detail, Sharp focus, dramatic, painting art by midjourney and greg rutkowski, teal and gold, petals, countryside, action pose, casting a spell, green swirling magic
Negative prompt: canvas frame, cartoon, 3d, photorealistic
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Seed: 2603924688, Size: 512x768, Batch size: 4, Batch pos: 1, Clip skip: 2
`[natural colors], full body tiefling (knight), [watercolor digital 2D painting], (strong shading), hard shadows, blurry, elegant, wearing robes, style of (saga comic) Lois van Baarle and charlie bowater and Sui Ishida, messy, disheveled, thick brushwork, detailed face and eyes, concept art`
`portrait (painting) of tabaxi, de Rivia closeup, suit, collar, formal attire, D&D, fantasy, intricate, elegant, highly detailed, artstation, concept art, matte, sharp focus, (brush strokes), (oil on canvas), hearthstone, art by Titian and Greg Rutkowski and Rembrandt van Rijn and Alphonse Mucha` (inspired by Reddit post)
| 2,813 | [
[
-0.06536865234375,
-0.033447265625,
0.013427734375,
0.0211944580078125,
-0.014862060546875,
0.0251007080078125,
0.006359100341796875,
-0.06671142578125,
0.051300048828125,
0.03216552734375,
-0.032012939453125,
-0.048583984375,
-0.0318603515625,
-0.0012655258... |
facebook/wav2vec2-large-robust | 2021-11-05T12:45:27.000Z | [
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"speech",
"en",
"dataset:libri_light",
"dataset:common_voice",
"dataset:switchboard",
"dataset:fisher",
"arxiv:2104.01027",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null | facebook | null | null | facebook/wav2vec2-large-robust | 18 | 3,245 | transformers | 2022-03-02T23:29:05 | ---
language: en
datasets:
- libri_light
- common_voice
- switchboard
- fisher
tags:
- speech
license: apache-2.0
---
# Wav2Vec2-Large-Robust
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
The large model pretrained on 16kHz sampled speech audio.
Speech datasets from multiple domains were used to pretrain the model:
- [Libri-Light](https://github.com/facebookresearch/libri-light): open-source audio books from the LibriVox project; clean, read-out audio data
- [CommonVoice](https://huggingface.co/datasets/common_voice): crowd-source collected audio data; read-out text snippets
- [Switchboard](https://catalog.ldc.upenn.edu/LDC97S62): telephone speech corpus; noisy telephone data
- [Fisher](https://catalog.ldc.upenn.edu/LDC2004T19): conversational telephone speech; noisy telephone data
When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
[Paper Robust Wav2Vec2](https://arxiv.org/abs/2104.01027)
Authors: Wei-Ning Hsu, Anuroop Sriram, Alexei Baevski, Tatiana Likhomanenko, Qiantong Xu, Vineel Pratap, Jacob Kahn, Ann Lee, Ronan Collobert, Gabriel Synnaeve, Michael Auli
**Abstract**
Self-supervised learning of speech representations has been a very active research area but most work is focused on a single domain such as read audio books for which there exist large quantities of labeled and unlabeled data. In this paper, we explore more general setups where the domain of the unlabeled data for pre-training data differs from the domain of the labeled data for fine-tuning, which in turn may differ from the test data domain. Our experiments show that using target domain data during pre-training leads to large performance improvements across a variety of setups. On a large-scale competitive setup, we show that pre-training on unlabeled in-domain data reduces the gap between models trained on in-domain and out-of-domain labeled data by 66%-73%. This has obvious practical implications since it is much easier to obtain unlabeled target domain data than labeled data. Moreover, we find that pre-training on multiple domains improves generalization performance on domains not seen during training. Code and models will be made available at this https URL.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the model.
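Below is a minimal, hedged sketch of loading the pretrained checkpoint to extract speech representations (not speech recognition). It assumes `torchaudio` is installed, a local 16kHz mono file named `sample.wav` (a placeholder path), and that the repository ships a preprocessor config for `Wav2Vec2FeatureExtractor`:
```python
# Minimal sketch: extract hidden states from the pretrained (not fine-tuned) model.
# "sample.wav" is a placeholder path; the input must be 16kHz mono audio.
import torch
import torchaudio
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-large-robust")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-large-robust")

waveform, sample_rate = torchaudio.load("sample.wav")  # (channels, num_samples)
assert sample_rate == 16_000, "the model expects 16kHz input"

inputs = feature_extractor(waveform[0], sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
print(hidden_states.shape)
```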
| 2,918 | [
[
-0.0195159912109375,
-0.048065185546875,
0.003173828125,
-0.00391387939453125,
-0.016571044921875,
-0.007167816162109375,
-0.03271484375,
-0.04693603515625,
-0.01861572265625,
0.022186279296875,
-0.04144287109375,
-0.046142578125,
-0.047393798828125,
-0.0262... |
hfl/rbt3 | 2021-05-19T19:19:45.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"zh",
"arxiv:1906.08101",
"arxiv:2004.13922",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | hfl | null | null | hfl/rbt3 | 15 | 3,242 | transformers | 2022-03-02T23:29:05 | ---
language:
- zh
tags:
- bert
license: "apache-2.0"
pipeline_tag: "fill-mask"
---
# This is a re-trained 3-layer RoBERTa-wwm-ext model.
## Chinese BERT with Whole Word Masking
For further accelerating Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**.
**[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)**
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu
This repository is developed based on: https://github.com/google-research/bert
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese MacBERT: https://github.com/ymcui/MacBERT
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
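## Usage example
A minimal, hedged sketch of masked-token prediction with this checkpoint via the `fill-mask` pipeline (the Chinese example sentence is purely illustrative):
```python
# Minimal sketch: masked character prediction with the 3-layer RoBERTa-wwm-ext model.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="hfl/rbt3")
# Illustrative sentence: "我爱[MASK]京。" – predict the masked character.
for prediction in fill_mask("我爱[MASK]京。"):
    print(prediction["token_str"], prediction["score"])
```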
## Citation
If you find the technical report or resource is useful, please cite the following technical report in your paper.
- Primary: https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
```
- Secondary: https://arxiv.org/abs/1906.08101
```
@article{chinese-bert-wwm,
title={Pre-Training with Whole Word Masking for Chinese BERT},
author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping},
journal={arXiv preprint arXiv:1906.08101},
year={2019}
}
```
| 2,089 | [
[
-0.0188751220703125,
-0.0657958984375,
0.020172119140625,
0.030853271484375,
-0.026763916015625,
-0.01399993896484375,
-0.040313720703125,
-0.050445556640625,
0.02484130859375,
0.0377197265625,
-0.038177490234375,
-0.033935546875,
-0.037506103515625,
-0.0065... |
kandinsky-community/kandinsky-2-2-decoder-inpaint | 2023-10-09T11:33:04.000Z | [
"diffusers",
"text-to-image",
"kandinsky",
"license:apache-2.0",
"has_space",
"diffusers:KandinskyV22InpaintPipeline",
"region:us"
] | text-to-image | kandinsky-community | null | null | kandinsky-community/kandinsky-2-2-decoder-inpaint | 13 | 3,241 | diffusers | 2023-06-16T17:14:36 | ---
license: apache-2.0
prior:
- kandinsky-community/kandinsky-2-2-prior
tags:
- text-to-image
- kandinsky
inference: false
---
# Kandinsky 2.2
Kandinsky inherits best practices from Dall-E 2 and Latent diffusion while introducing some new ideas.
It uses the CLIP model as a text and image encoder, and a diffusion image prior (mapping) between the latent spaces of the CLIP modalities. This approach increases the visual performance of the model and unveils new horizons in blending images and text-guided image manipulation.
The Kandinsky model is created by [Arseniy Shakhmatov](https://github.com/cene555), [Anton Razzhigaev](https://github.com/razzant), [Aleksandr Nikolich](https://github.com/AlexWortega), [Igor Pavlov](https://github.com/boomb0om), [Andrey Kuznetsov](https://github.com/kuznetsoffandrey) and [Denis Dimitrov](https://github.com/denndimitrov)
## Usage
Kandinsky 2.2 is available in diffusers!
```sh
pip install diffusers transformers accelerate
```
### Text Guided Inpainting Generation
```python
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image
import torch
import numpy as np
pipe = AutoPipelineForInpainting.from_pretrained("kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16)
pipe.enable_model_cpu_offload()
prompt = "a hat"
init_image = load_image(
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/cat.png"
)
mask = np.zeros((768, 768), dtype=np.float32)
# Let's mask out an area above the cat's head
mask[:250, 250:-250] = 1
out = pipe(
prompt=prompt,
image=init_image,
mask_image=mask,
height=768,
width=768,
num_inference_steps=150,
)
image = out.images[0]
image.save("cat_with_hat.png")
```

🚨🚨🚨 __Breaking change for Kandinsky Mask Inpainting__ 🚨🚨🚨
We introduced a breaking change for the Kandinsky inpainting pipeline in the following pull request: https://github.com/huggingface/diffusers/pull/4207. Previously we accepted a mask format where black pixels represent the masked-out area. This is inconsistent with all other pipelines in diffusers. We have changed the mask format in Kandinsky and are now using white pixels instead.
Please upgrade your inpainting code to follow the above. If you are using Kandinsky Inpaint in production, you now need to change the mask as follows:
```python
# For PIL input
import PIL.ImageOps
mask = PIL.ImageOps.invert(mask)
# For PyTorch and Numpy input
mask = 1 - mask
```
## Model Architecture
### Overview
Kandinsky 2.1 is a text-conditional diffusion model based on unCLIP and latent diffusion, composed of a transformer-based image prior model, a unet diffusion model, and a decoder.
The model architectures are illustrated in the figure below - the chart on the left describes the process to train the image prior model, the figure in the center is the text-to-image generation process, and the figure on the right is image interpolation.
<p float="left">
<img src="https://raw.githubusercontent.com/ai-forever/Kandinsky-2/main/content/kandinsky21.png"/>
</p>
Specifically, the image prior model was trained on CLIP text and image embeddings generated with a pre-trained [mCLIP model](https://huggingface.co/M-CLIP/XLM-Roberta-Large-Vit-L-14). The trained image prior model is then used to generate mCLIP image embeddings for input text prompts. Both the input text prompts and its mCLIP image embeddings are used in the diffusion process. A [MoVQGAN](https://openreview.net/forum?id=Qb-AoSw4Jnm) model acts as the final block of the model, which decodes the latent representation into an actual image.
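To make the two-stage design concrete, below is a rough sketch of loading the prior and this inpainting decoder separately in diffusers (class names and call signatures follow the diffusers Kandinsky 2.2 documentation, but treat this as an illustrative outline and verify against your installed version):
```python
# Illustrative sketch: explicit prior (text -> CLIP image embeddings) + inpainting decoder.
import numpy as np
import torch
from diffusers import KandinskyV22PriorPipeline, KandinskyV22InpaintPipeline
from diffusers.utils import load_image

pipe_prior = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
).to("cuda")
pipe = KandinskyV22InpaintPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16
).to("cuda")

# Stage 1: map the text prompt to CLIP image embeddings with the diffusion prior.
image_emb, negative_image_emb = pipe_prior("a hat", negative_prompt="low quality").to_tuple()

# Stage 2: inpaint, conditioned on the embeddings, the original image, and the mask.
init_image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png"
)
mask = np.zeros((768, 768), dtype=np.float32)
mask[:250, 250:-250] = 1  # white (1) marks the area to repaint

image = pipe(
    image_embeds=image_emb,
    negative_image_embeds=negative_image_emb,
    image=init_image,
    mask_image=mask,
    height=768,
    width=768,
    num_inference_steps=50,
).images[0]
image.save("cat_with_hat_two_stage.png")
```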
### Details
The image prior training of the model was performed on the [LAION Improved Aesthetics dataset](https://huggingface.co/datasets/bhargavsdesai/laion_improved_aesthetics_6.5plus_with_images), and then fine-tuning was performed on the [LAION HighRes data](https://huggingface.co/datasets/laion/laion-high-resolution).
The main Text2Image diffusion model was trained on the basis of 170M text-image pairs from the [LAION HighRes dataset](https://huggingface.co/datasets/laion/laion-high-resolution) (an important condition was the presence of images with a resolution of at least 768x768). The use of 170M pairs is due to the fact that we kept the UNet diffusion block from Kandinsky 2.0, which allowed us not to train it from scratch. Further, at the stage of fine-tuning, a dataset of 2M very high-quality high-resolution images with descriptions (COYO, anime, landmarks_russia, and a number of others) was used separately collected from open sources.
### Evaluation
We quantitatively measure the performance of Kandinsky 2.1 on the COCO_30k dataset, in zero-shot mode. The table below presents FID.
FID metric values for generative models on COCO_30k
| | FID (30k)|
|:------|----:|
| eDiff-I (2022) | 6.95 |
| Imagen (2022) | 7.27 |
| Kandinsky 2.1 (2023) | 8.21|
| Stable Diffusion 2.1 (2022) | 8.59 |
| GigaGAN, 512x512 (2023) | 9.09 |
| DALL-E 2 (2022) | 10.39 |
| GLIDE (2022) | 12.24 |
| Kandinsky 1.0 (2022) | 15.40 |
| DALL-E (2021) | 17.89 |
| Kandinsky 2.0 (2022) | 20.00 |
| GLIGEN (2022) | 21.04 |
For more information, please refer to the upcoming technical report.
## BibTex
If you find this repository useful in your research, please cite:
```
@misc{kandinsky 2.2,
title = {kandinsky 2.2},
author = {Arseniy Shakhmatov, Anton Razzhigaev, Aleksandr Nikolich, Vladimir Arkhipkin, Igor Pavlov, Andrey Kuznetsov, Denis Dimitrov},
year = {2023},
howpublished = {},
}
``` | 5,698 | [
[
-0.0250701904296875,
-0.054595947265625,
0.040130615234375,
0.01067352294921875,
-0.0206298828125,
-0.000018596649169921875,
-0.004718780517578125,
-0.0272674560546875,
0.00189208984375,
0.0301055908203125,
-0.0242462158203125,
-0.03717041015625,
-0.03857421875,... |
stabilityai/stablecode-completion-alpha-3b-4k | 2023-08-08T15:18:07.000Z | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"causal-lm",
"code",
"dataset:bigcode/starcoderdata",
"arxiv:2104.09864",
"arxiv:1910.02054",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | stabilityai | null | null | stabilityai/stablecode-completion-alpha-3b-4k | 248 | 3,237 | transformers | 2023-08-07T16:59:19 | ---
datasets:
- bigcode/starcoderdata
language:
- code
tags:
- causal-lm
model-index:
- name: stabilityai/stablecode-completion-alpha-3b-4k
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 0.1768
verified: false
- name: pass@10
type: pass@10
value: 0.2701
verified: false
license: apache-2.0
---
# `StableCode-Completion-Alpha-3B-4K`
## Model Description
`StableCode-Completion-Alpha-3B-4K` is a 3 billion parameter decoder-only code completion model pre-trained on a diverse set of programming languages that topped the Stack Overflow developer survey.
## Usage
The model is intended to do single- and multi-line code completion from a long context window of up to 4k tokens.
Get started generating code with `StableCode-Completion-Alpha-3B-4k` by using the following code snippet:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablecode-completion-alpha-3b-4k")
model = AutoModelForCausalLM.from_pretrained(
"stabilityai/stablecode-completion-alpha-3b-4k",
trust_remote_code=True,
torch_dtype="auto",
)
model.cuda()
inputs = tokenizer("import torch\nimport torch.nn as nn", return_tensors="pt").to("cuda")
tokens = model.generate(
**inputs,
max_new_tokens=48,
temperature=0.2,
do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```
## Model Details
* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: `StableCode-Completion-Alpha-3B-4k` models are auto-regressive language models based on the transformer decoder architecture.
* **Language(s)**: Code
* **Library**: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
* **License**: Model checkpoints are licensed under the [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) license.
* **Contact**: For questions and comments about the model, please email `lm@stability.ai`
### Model Architecture
| Parameters | Hidden Size | Layers | Heads | Sequence Length |
|----------------|-------------|--------|-------|-----------------|
| 2,796,431,360 | 2560 | 32 | 32 | 4096 |
* **Decoder Layer**: Parallel Attention and MLP residuals with a single input LayerNorm ([Wang & Komatsuzaki, 2021](https://github.com/kingoflolz/mesh-transformer-jax/tree/master))
* **Position Embeddings**: Rotary Position Embeddings ([Su et al., 2021](https://arxiv.org/abs/2104.09864))
* **Bias**: LayerNorm bias terms only
## Training
`StableCode-Completion-Alpha-3B-4k` is pre-trained at a context length of 4096 for 300 billion tokens on the `bigcode/starcoder-data`.
### Training Dataset
The first pre-training stage relies on 300B tokens sourced from the various top programming languages occurring in the Stack Overflow developer survey that are present in the `starcoder-data` dataset.
### Training Procedure
The model is pre-trained on the dataset mixes mentioned above in mixed precision (BF16), optimized with AdamW, and trained using the [StarCoder](https://huggingface.co/bigcode/starcoder) tokenizer with a vocabulary size of 49k.
* **Software**: We use a fork of gpt-neox ([EleutherAI, 2021](https://github.com/EleutherAI/gpt-neox)) and train under 2D parallelism (Data and Tensor Parallel) with ZeRO-1 ([Rajbhandari et al., 2019](https://arxiv.org/abs/1910.02054v3)) and rely on flash-attention as well as rotary embedding kernels from FlashAttention-2 ([Dao et al., 2023](https://tridao.me/publications/flash2/flash2.pdf))
## Use and Limitations
### Intended Use
StableCode-Completion-Alpha-3B-4K independently generates new code completions, but we recommend that you use StableCode-Completion-Alpha-3B-4K together with the tool developed by BigCode and HuggingFace [(huggingface/huggingface-vscode: Code completion VSCode extension for OSS models (github.com))](https://github.com/huggingface/huggingface-vscode), to identify and, if necessary, attribute any outputs that match training code.
### Limitations and bias
This model is intended to be used responsibly. It is not intended to be used to create unlawful content of any kind, to further any unlawful activity, or to engage in activities with a high risk of physical or economic harm.
## How to cite
```bibtex
@misc{StableCodeCompleteAlpha4K,
url={[https://huggingface.co/stabilityai/stablecode-complete-alpha-3b-4k](https://huggingface.co/stabilityai/stablecode-complete-alpha-3b-4k)},
title={Stable Code Complete Alpha},
author={Adithyan, Reshinth and Phung, Duy and Cooper, Nathan and Pinnaparaju, Nikhil and Laforte, Christian}
}
``` | 4,773 | [
[
-0.032501220703125,
-0.034271240234375,
0.0177001953125,
0.018585205078125,
-0.0164947509765625,
-0.0022430419921875,
-0.019134521484375,
-0.040679931640625,
0.01861572265625,
0.0198974609375,
-0.026153564453125,
-0.04302978515625,
-0.048858642578125,
0.0174... |
hf-internal-testing/tiny-random-bloom | 2022-06-27T18:38:43.000Z | [
"transformers",
"pytorch",
"bloom",
"feature-extraction",
"integration",
"text-generation",
"eng",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | hf-internal-testing | null | null | hf-internal-testing/tiny-random-bloom | 0 | 3,233 | transformers | 2022-06-27T18:34:13 | ---
language:
- eng
tags:
- integration
pipeline_tag: text-generation
---
# BigScience - testing model
This model aims to test the conversion between Megatron-LM and transformers. It is a small ```GPT-2```-like model that has been used to debug the script. Use it only for integration tests.
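A minimal, hedged sketch of how such a tiny checkpoint is typically exercised in an integration test (it assumes the repository ships a tokenizer, as the `hf-internal-testing` tiny models generally do; outputs are meaningless because the weights are random):
```python
# Minimal sketch: smoke-test generation with the tiny random BLOOM checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/tiny-random-bloom")
model = AutoModelForCausalLM.from_pretrained("hf-internal-testing/tiny-random-bloom")

inputs = tokenizer("Hello world", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5, do_sample=False)
# The decoded text is meaningless (random weights); a test would only check shapes/execution.
print(tokenizer.decode(outputs[0]))
```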
[
-0.041351318359375,
-0.060638427734375,
0.03887939453125,
-0.012451171875,
-0.016387939453125,
-0.0266876220703125,
0.00872039794921875,
-0.0033702850341796875,
-0.004863739013671875,
0.0361328125,
-0.0501708984375,
-0.00476837158203125,
-0.0330810546875,
0.... |
stanford-crfm/BioMedLM | 2023-03-01T08:56:16.000Z | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"dataset:pubmed",
"arxiv:2112.04359",
"license:bigscience-bloom-rail-1.0",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | stanford-crfm | null | null | stanford-crfm/BioMedLM | 242 | 3,230 | transformers | 2022-12-14T08:14:59 | ---
license: bigscience-bloom-rail-1.0
datasets:
- pubmed
widget:
- text: 'Photosynthesis is'
---
# Model Card for BioMedLM 2.7B
Note: This model was previously known as PubMedGPT 2.7B, but we have changed it due to a request from the NIH which holds the trademark for "PubMed".
BioMedLM 2.7B is a new language model trained exclusively on biomedical abstracts and papers from [The Pile](https://pile.eleuther.ai/). This GPT-style model can achieve strong results on a variety of biomedical NLP tasks, including a new state of the art performance of 50.3% accuracy on the MedQA biomedical question answering task.
As an autoregressive language model, BioMedLM 2.7B is also capable of natural language generation. However, we have only begun to explore the generation capabilities and limitations of this model, and we emphasize that this model’s generation capabilities are for research purposes only and not suitable for production. In releasing this model, we hope to advance both the development of biomedical NLP applications and best practices for responsibly training and utilizing domain-specific language models; issues of reliability, truthfulness, and explainability are top of mind for us.
This model was a joint collaboration of [Stanford CRFM](https://crfm.stanford.edu/) and [MosaicML](https://www.mosaicml.com/).
# Table of Contents
- [Model Card for BioMedLM 2.7B](#model-card-for--model_id-)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
- [Model Description](#model-description)
- [Uses](#uses)
- [Downstream Use](#downstream-use)
- [Out-of-Scope Use](#out-of-scope-use)
- [Bias, Risks, and Limitations](#bias-risks-and-limitations)
- [Recommendations](#recommendations)
- [Training Details](#training-details)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Preprocessing](#preprocessing)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Model Architecture and Objective](#model-architecture-and-objective)
- [Compute Infrastructure](#compute-infrastructure)
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is/does. -->
BioMedLM 2.7B is a new language model trained exclusively on biomedical abstracts and papers from [The Pile](https://pile.eleuther.ai/). This GPT-style model can achieve strong results on a variety of biomedical NLP tasks, including a new state of the art performance of 50.3% accuracy on the MedQA biomedical question answering task.
As an autoregressive language model, BioMedLM 2.7B is also capable of natural language generation. However, we have only begun to explore the generation capabilities and limitations of this model, and we emphasize that this model’s generation capabilities are for research purposes only and not suitable for production. In releasing this model, we hope to advance both the development of biomedical NLP applications and best practices for responsibly training and utilizing domain-specific language models; issues of reliability, truthfulness, and explainability are top of mind for us.
This model was a joint collaboration of [Stanford CRFM](https://crfm.stanford.edu/) and [MosaicML](https://www.mosaicml.com/).
- **Developed by:** Stanford CRFM, MosaicML
- **Shared by:** Stanford CRFM
- **Model type:** Language model
- **Language(s) (NLP):** en
- **License:** [bigscience-bloom-rail-1.0](https://huggingface.co/spaces/bigscience/license)
# Uses
This model is licensed under the terms of [BigScience Open RAIL-M license](https://huggingface.co/spaces/bigscience/license) used for [BLOOM](https://huggingface.co/bigscience/bloom-1b1). Please note that, among other restrictions, this license forbids use of the model (or derivatives thereof)
"To provide medical advice and medical results interpretation." If you are concerned that your use case would follow under the "letter" of this restriction, but not the "spirit," you can contact us to discuss.
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
It is possible to use this model to generate text, which is useful for experimentation and understanding its capabilities. It should not be directly used for production or work that may directly impact people.
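As a minimal sketch of such experimentation (for research exploration only, per the note above; the prompt mirrors the widget example and the generation settings are arbitrary):
```python
# Minimal sketch: exploratory text generation with BioMedLM (research use only).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stanford-crfm/BioMedLM")
model = AutoModelForCausalLM.from_pretrained("stanford-crfm/BioMedLM")

inputs = tokenizer("Photosynthesis is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```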
## Downstream Use
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
The main way we have used this model is finetuning for downstream question answering tasks, and we recommend using this model that way.
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
We do not recommend using this model for natural language generation in a production environment, finetuned or otherwise.
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Weidinger et al. (2021)](https://arxiv.org/pdf/2112.04359.pdf)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
While this model is capable of generating natural language text, we have only begun to explore this capability and its limitations. Understanding these limitations is especially important in a domain like medicine. Therefore, **we strongly recommend against using this model in production for natural language generation.**
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
This model was trained on the Pubmed Abstracts and Full Text from [The Pile](https://pile.eleuther.ai/).
## Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
The model was trained on [MosaicML Cloud](https://www.mosaicml.com/cloud), a platform designed for large workloads like LLMs. Using the [Composer](https://github.com/mosaicml/composer) training library and [PyTorch FSDP](https://pytorch.org/docs/stable/fsdp.html), it was easy to enable multi-node training across 128 A100-40GB GPUs, and the total run was completed in ~6.25 days. The model was trained with batch size=1024 and sequence length=1024 for 300B tokens using Decoupled AdamW with the following settings:
| | |
| --- | ------ |
| lr | 1.6e-4 |
| eps | 1e-8 |
| betas | \[0.9, 0.95\] |
| weight decay | 1.6e-5 |
The training process was very smooth and did not suffer from any divergences.
As we were preparing the training run, we were unsure of the benefits of training out to 300B tokens for language model perplexity and downstream task performance. While most models of this scale (e.g. GPT Neo 2.7B) are trained to 300-400B tokens, the datasets those models use are vastly larger than PubMed. For instance, The Pile is 8x the size of its PubMed subcorpora.
Fortunately, we did continue to see steady perplexity improvements on the validation and training sets for the entirety of training, and preliminary experiments showed improved downstream task performance as we trained out to the full 300B tokens. Our takeaway from this was that it was indeed worth it to train for the full 300B tokens, even though this represented dramatically more passes through the data than comparable models.
### Preprocessing
The model uses a custom tokenizer trained on the PubMed Abstracts. When building domain specific models we have found it important to use a tokenizer trained on in-domain text to maximize performance on downstream tasks. A key benefit is that common biomedical terms are represented as entire tokens.
For instance, all of the following terms are tokenized into single tokens by the biomedical tokenizer but into multiple tokens by the standard GPT-2 tokenizer:
| | |
| --- | --- |
| chromatography | chrom/atography |
| cytotoxicity | cyt/ot/oxicity |
| Immunohistochemistry | Immun/oh/ist/ochemistry |
| photosynthesis | photos/ynthesis |
| probiotic | prob/iotic |
This allows the model to encode information about these concepts in their individual token representations rather than spread out across subword tokens like “oh” shared with many other terms.
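As a quick, hedged illustration of the table above, the two tokenizers can be compared directly (exact subword splits may vary with tokenizer versions):
```python
# Small sketch: compare BioMedLM's in-domain tokenizer with the standard GPT-2 tokenizer.
from transformers import AutoTokenizer

biomed_tok = AutoTokenizer.from_pretrained("stanford-crfm/BioMedLM")
gpt2_tok = AutoTokenizer.from_pretrained("gpt2")

word = "chromatography"
print(biomed_tok.tokenize(word))  # expected: one token covering the whole term
print(gpt2_tok.tokenize(word))    # expected: several subword pieces, e.g. "chrom"/"atography"
```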
# Technical Specifications
## Model Architecture and Objective
BioMedLM 2.7B is a standard GPT-2 implementation (trained with Flash Attention) with the following hyperparameters:
| | |
| ----------- | ----- |
| hidden size | 2560 |
| heads | 20 |
| layers | 32 |
| vocab size | 28896 |
| sequence length| 1024 |
## Compute Infrastructure
The model was trained on [MosaicML Cloud](https://www.mosaicml.com/cloud), a platform designed for large workloads like LLMs. Using the [Composer](https://github.com/mosaicml/composer) training library and [PyTorch FSDP](https://pytorch.org/docs/stable/fsdp.html), it was easy to enable multi-node training across 128 A100-40GB GPUs, and the total run was completed in ~6.25 days.
| 9,730 | [
[
-0.018402099609375,
-0.059844970703125,
0.027923583984375,
0.01352691650390625,
-0.0286865234375,
-0.0128326416015625,
-0.00954437255859375,
-0.060302734375,
-0.00963592529296875,
0.04962158203125,
-0.04156494140625,
-0.039306640625,
-0.05047607421875,
0.014... |
flair/upos-multi-fast | 2021-03-02T22:22:55.000Z | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"en",
"de",
"fr",
"it",
"nl",
"pl",
"es",
"sv",
"da",
"no",
"fi",
"cs",
"dataset:ontonotes",
"region:us"
] | token-classification | flair | null | null | flair/upos-multi-fast | 4 | 3,224 | flair | 2022-03-02T23:29:05 | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language:
- en
- de
- fr
- it
- nl
- pl
- es
- sv
- da
- no
- fi
- cs
datasets:
- ontonotes
widget:
- text: "Ich liebe Berlin, as they say."
---
## Multilingual Universal Part-of-Speech Tagging in Flair (fast model)
This is the fast multilingual universal part-of-speech tagging model that ships with [Flair](https://github.com/flairNLP/flair/).
F1-Score: **92.88** (12 UD Treebanks covering English, German, French, Italian, Dutch, Polish, Spanish, Swedish, Danish, Norwegian, Finnish and Czech)
Predicts universal POS tags:
| **tag** | **meaning** |
|---------------------------------|-----------|
|ADJ | adjective |
| ADP | adposition |
| ADV | adverb |
| AUX | auxiliary |
| CCONJ | coordinating conjunction |
| DET | determiner |
| INTJ | interjection |
| NOUN | noun |
| NUM | numeral |
| PART | particle |
| PRON | pronoun |
| PROPN | proper noun |
| PUNCT | punctuation |
| SCONJ | subordinating conjunction |
| SYM | symbol |
| VERB | verb |
| X | other |
Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF.
---
### Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("flair/upos-multi-fast")
# make example sentence
sentence = Sentence("Ich liebe Berlin, as they say. ")
# predict POS tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted POS spans
print('The following POS tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('pos'):
print(entity)
```
This yields the following output:
```
Span [1]: "Ich" [− Labels: PRON (0.9999)]
Span [2]: "liebe" [− Labels: VERB (0.9999)]
Span [3]: "Berlin" [− Labels: PROPN (0.9997)]
Span [4]: "," [− Labels: PUNCT (1.0)]
Span [5]: "as" [− Labels: SCONJ (0.9991)]
Span [6]: "they" [− Labels: PRON (0.9998)]
Span [7]: "say" [− Labels: VERB (0.9998)]
Span [8]: "." [− Labels: PUNCT (1.0)]
```
So, the words "*Ich*" and "*they*" are labeled as **pronouns** (PRON), while "*liebe*" and "*say*" are labeled as **verbs** (VERB) in the multilingual sentence "*Ich liebe Berlin, as they say*".
---
### Training: Script to train this model
The following Flair script was used to train this model:
```python
from flair.data import MultiCorpus
from flair.datasets import UD_ENGLISH, UD_GERMAN, UD_FRENCH, UD_ITALIAN, UD_POLISH, UD_DUTCH, UD_CZECH, \
UD_DANISH, UD_SPANISH, UD_SWEDISH, UD_NORWEGIAN, UD_FINNISH
from flair.embeddings import StackedEmbeddings, FlairEmbeddings
# 1. make a multi corpus consisting of 12 UD treebanks (in_memory=False here because this corpus becomes large)
corpus = MultiCorpus([
UD_ENGLISH(in_memory=False),
UD_GERMAN(in_memory=False),
UD_DUTCH(in_memory=False),
UD_FRENCH(in_memory=False),
UD_ITALIAN(in_memory=False),
UD_SPANISH(in_memory=False),
UD_POLISH(in_memory=False),
UD_CZECH(in_memory=False),
UD_DANISH(in_memory=False),
UD_SWEDISH(in_memory=False),
UD_NORWEGIAN(in_memory=False),
UD_FINNISH(in_memory=False),
])
# 2. what tag do we want to predict?
tag_type = 'upos'
# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
# 4. initialize each embedding we use
embedding_types = [
# contextual string embeddings, forward
FlairEmbeddings('multi-forward-fast'),
# contextual string embeddings, backward
FlairEmbeddings('multi-backward-fast'),
]
# embedding stack consists of forward and backward Flair embeddings
embeddings = StackedEmbeddings(embeddings=embedding_types)
# 5. initialize sequence tagger
from flair.models import SequenceTagger
tagger = SequenceTagger(hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type=tag_type,
use_crf=False)
# 6. initialize trainer
from flair.trainers import ModelTrainer
trainer = ModelTrainer(tagger, corpus)
# 7. run training
trainer.train('resources/taggers/upos-multi-fast',
train_with_dev=True,
max_epochs=150)
```
---
### Cite
Please cite the following paper when using this model.
```
@inproceedings{akbik2018coling,
title={Contextual String Embeddings for Sequence Labeling},
author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland},
booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics},
pages = {1638--1649},
year = {2018}
}
```
---
### Issues?
The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
| 4,873 | [
[
-0.03155517578125,
-0.0438232421875,
0.0103912353515625,
0.0172119140625,
-0.016845703125,
0.004947662353515625,
-0.024658203125,
-0.03143310546875,
0.0390625,
0.0116729736328125,
-0.03460693359375,
-0.04852294921875,
-0.030059814453125,
0.0178375244140625,
... |
agemagician/mlong-t5-tglobal-base | 2023-05-21T18:51:42.000Z | [
"transformers",
"pytorch",
"jax",
"longt5",
"text2text-generation",
"multilingual",
"af",
"am",
"ar",
"az",
"be",
"bg",
"bn",
"ca",
"ceb",
"co",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fil",
"fr",
"fy",
"ga",
"gd",
"g... | text2text-generation | agemagician | null | null | agemagician/mlong-t5-tglobal-base | 5 | 3,224 | transformers | 2023-05-19T20:49:19 | ---
license: apache-2.0
language:
- multilingual
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- hi
- hmn
- ht
- hu
- hy
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- no
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- und
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
datasets:
- mc4
---
# MLongT5 (transient-global attention, base-sized model)
MLongT5 model pre-trained on a multilingual corpus. The model was introduced in the paper [mLongT5: A Multilingual and Efficient Text-To-Text Transformer for Longer Sequences](https://arxiv.org/pdf/2305.11129.pdf) by Uthus et al. and first released in [the LongT5 repository](https://github.com/google-research/longt5). The model architecture and configuration can be found in the [Flaxformer repository](https://github.com/google/flaxformer), which uses another Google research project repository, [T5x](https://github.com/google-research/t5x).
Disclaimer: The team releasing MLongT5 did not write a model card for this model so this model card has been written by Ahmed Elnaggar.
## Model description
The MLongT5 model is an encoder-decoder transformer pre-trained in a text-to-text denoising generative setting ([Pegasus-like generation pre-training](https://arxiv.org/pdf/1912.08777.pdf)). MLongT5 is an extension of the [LongT5 model](https://arxiv.org/abs/2112.07916), and it enables using one of two different efficient attention mechanisms - (1) Local attention, or (2) Transient-Global attention. The usage of attention sparsity patterns allows the model to efficiently handle long input sequences.
MLongT5 is particularly effective when fine-tuned for text generation (summarization, question answering) which requires handling long input sequences (up to 16,384 tokens).
## Intended uses & limitations
The model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=mlongt5) to look for fine-tuned versions on a task that interests you.
### How to use
The following shows how one can extract the last hidden representation for the model.
```python
from transformers import T5Tokenizer, LongT5Model
tokenizer = T5Tokenizer.from_pretrained("agemagician/mlong-t5-tglobal-base")
model = LongT5Model.from_pretrained("agemagician/mlong-t5-tglobal-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
The following shows how one can predict masked passages using the different denoising strategies.
### S-Denoising
For *S-Denoising*, please make sure to prompt the text with the prefix `[S2S]` as shown below.
```python
from transformers import LongT5ForConditionalGeneration, T5Tokenizer
import torch
model = LongT5ForConditionalGeneration.from_pretrained("agemagician/mlong-t5-tglobal-base", low_cpu_mem_usage=True, torch_dtype=torch.bfloat16).to("cuda")
tokenizer = T5Tokenizer.from_pretrained("agemagician/mlong-t5-tglobal-base")
input_string = "[S2S] Mr. Dursley was the director of a firm called Grunnings, which made drills. He was a big, solid man with a bald head. Mrs. Dursley was thin and blonde and more than the usual amount of neck, which came in very useful as she spent so much of her time craning over garden fences, spying on the neighbours. The Dursleys had a small son called Dudley and in their opinion there was no finer boy anywhere <extra_id_0>"
inputs = tokenizer(input_string, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(inputs, max_length=200)
print(tokenizer.decode(outputs[0]))
```
### R-Denoising
For *R-Denoising*, please make sure to prompt the text with the prefix `[NLU]` as shown below.
```python
from transformers import LongT5ForConditionalGeneration, T5Tokenizer
import torch
model = LongT5ForConditionalGeneration.from_pretrained("agemagician/mlong-t5-tglobal-base", low_cpu_mem_usage=True, torch_dtype=torch.bfloat16).to("cuda")
tokenizer = T5Tokenizer.from_pretrained("agemagician/mlong-t5-tglobal-base")
input_string = "[NLU] Mr. Dursley was the director of a firm called <extra_id_0>, which made <extra_id_1>. He was a big, solid man with a bald head. Mrs. Dursley was thin and <extra_id_2> of neck, which came in very useful as she spent so much of her time <extra_id_3>. The Dursleys had a small son called Dudley and <extra_id_4>"
inputs = tokenizer(input_string, return_tensors="pt", add_special_tokens=False).input_ids.to("cuda")
outputs = model.generate(inputs, max_length=200)
print(tokenizer.decode(outputs[0]))
```
### X-Denoising
For *X-Denoising*, please make sure to prompt the text with the prefix `[NLG]` as shown below.
```python
from transformers import LongT5ForConditionalGeneration, T5Tokenizer
import torch
model = LongT5ForConditionalGeneration.from_pretrained("agemagician/mlong-t5-tglobal-base", low_cpu_mem_usage=True, torch_dtype=torch.bfloat16).to("cuda")
tokenizer = T5Tokenizer.from_pretrained("agemagician/mlong-t5-tglobal-base")
input_string = "[NLG] Mr. Dursley was the director of a firm called Grunnings, which made drills. He was a big, solid man wiht a bald head. Mrs. Dursley was thin and blonde and more than the usual amount of neck, which came in very useful as she
spent so much of her time craning over garden fences, spying on the neighbours. The Dursleys had a small son called Dudley and in their opinion there was no finer boy anywhere. <extra_id_0>"
model.cuda()
inputs = tokenizer(input_string, return_tensors="pt", add_special_tokens=False).input_ids.to("cuda")
outputs = model.generate(inputs, max_length=200)
print(tokenizer.decode(outputs[0]))
```
### BibTeX entry and citation info
```bibtex
@misc{uthus2023mlongt5,
title={mLongT5: A Multilingual and Efficient Text-To-Text Transformer for Longer Sequences},
author={David Uthus and Santiago Ontañón and Joshua Ainslie and Mandy Guo},
year={2023},
eprint={2305.11129},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) | 6,849 | [
[
-0.0245513916015625,
-0.04522705078125,
0.0160675048828125,
0.00739288330078125,
-0.01152801513671875,
-0.0111846923828125,
-0.018585205078125,
-0.0265960693359375,
0.0022716522216796875,
0.0194549560546875,
-0.048248291015625,
-0.0413818359375,
-0.0502624511718... |
Undi95/Mistral-11B-CC-Air-RP | 2023-10-09T18:37:16.000Z | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"not-for-all-audiences",
"nsfw",
"pretrained",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | Undi95 | null | null | Undi95/Mistral-11B-CC-Air-RP | 3 | 3,223 | transformers | 2023-10-09T17:53:48 | ---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- nsfw
- mistral
- pretrained
---
CollectiveCognition-v1.1-Mistral-7B and airoboros-mistral2.2-7b glued together and fine-tuned with a QLoRA over the PIPPA and LimaRPv3 datasets.
<!-- description start -->
## Description
This repo contains fp16 files of Mistral-11B-CC-Air-RP.
<!-- description end -->
<!-- description start -->
## Model used
- [CollectiveCognition-v1.1-Mistral-7B](https://huggingface.co/teknium/CollectiveCognition-v1.1-Mistral-7B)
- [airoboros-mistral2.2-7b](https://huggingface.co/teknium/airoboros-mistral2.2-7b/)
- PIPPA dataset 11B qlora
- LimaRPv3 dataset 11B qlora
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca or default
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
```
USER: <prompt>
ASSISTANT:
```
## The secret sauce
```
slices:
- sources:
- model: teknium/CollectiveCognition-v1.1-Mistral-7B
layer_range: [0, 24]
- sources:
- model: teknium/airoboros-mistral2.2-7b
layer_range: [8, 32]
merge_method: passthrough
dtype: float16
```
Special thanks to Sushi.
If you want to support me, you can [here](https://ko-fi.com/undiai). | 1,287 | [
[
-0.0305938720703125,
-0.0192718505859375,
0.019683837890625,
0.02716064453125,
-0.024658203125,
0.00395965576171875,
0.0166778564453125,
-0.0192413330078125,
0.035552978515625,
0.0513916015625,
-0.058929443359375,
-0.040435791015625,
-0.04180908203125,
0.001... |
anantoj/wav2vec2-xls-r-1b-korean | 2022-03-23T18:29:13.000Z | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"ko",
"dataset:kresnik/zeroth_korean",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | anantoj | null | null | anantoj/wav2vec2-xls-r-1b-korean | 2 | 3,222 | transformers | 2022-03-02T23:29:05 | ---
language: ko
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- kresnik/zeroth_korean
model-index:
- name: Wav2Vec2 XLS-R 1B Korean
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ko
metrics:
- name: Test WER
type: wer
value: 82.07
- name: Test CER
type: cer
value: 42.12
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ko
metrics:
- name: Test WER
type: wer
value: 82.09
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2Vec2 XLS-R 1B Korean
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the KRESNIK/ZEROTH_KOREAN - CLEAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0639
- Wer: 0.0449
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.603 | 0.72 | 500 | 4.6572 | 0.9985 |
| 2.6314 | 1.44 | 1000 | 2.0424 | 0.9256 |
| 2.2708 | 2.16 | 1500 | 0.9889 | 0.6989 |
| 2.1769 | 2.88 | 2000 | 0.8366 | 0.6312 |
| 2.1142 | 3.6 | 2500 | 0.7555 | 0.5998 |
| 2.0084 | 4.32 | 3000 | 0.7144 | 0.6003 |
| 1.9272 | 5.04 | 3500 | 0.6311 | 0.5461 |
| 1.8687 | 5.75 | 4000 | 0.6252 | 0.5430 |
| 1.8186 | 6.47 | 4500 | 0.5491 | 0.4988 |
| 1.7364 | 7.19 | 5000 | 0.5463 | 0.4959 |
| 1.6809 | 7.91 | 5500 | 0.4724 | 0.4484 |
| 1.641 | 8.63 | 6000 | 0.4679 | 0.4461 |
| 1.572 | 9.35 | 6500 | 0.4387 | 0.4236 |
| 1.5256 | 10.07 | 7000 | 0.3970 | 0.4003 |
| 1.5044 | 10.79 | 7500 | 0.3690 | 0.3893 |
| 1.4563 | 11.51 | 8000 | 0.3752 | 0.3875 |
| 1.394 | 12.23 | 8500 | 0.3386 | 0.3567 |
| 1.3641 | 12.95 | 9000 | 0.3290 | 0.3467 |
| 1.2878 | 13.67 | 9500 | 0.2893 | 0.3135 |
| 1.2602 | 14.39 | 10000 | 0.2723 | 0.3029 |
| 1.2302 | 15.11 | 10500 | 0.2603 | 0.2989 |
| 1.1865 | 15.83 | 11000 | 0.2440 | 0.2794 |
| 1.1491 | 16.55 | 11500 | 0.2500 | 0.2788 |
| 1.093 | 17.27 | 12000 | 0.2279 | 0.2629 |
| 1.0367 | 17.98 | 12500 | 0.2076 | 0.2443 |
| 0.9954 | 18.7 | 13000 | 0.1844 | 0.2259 |
| 0.99 | 19.42 | 13500 | 0.1794 | 0.2179 |
| 0.9385 | 20.14 | 14000 | 0.1765 | 0.2122 |
| 0.8952 | 20.86 | 14500 | 0.1706 | 0.1974 |
| 0.8841 | 21.58 | 15000 | 0.1791 | 0.1969 |
| 0.847 | 22.3 | 15500 | 0.1780 | 0.2060 |
| 0.8669 | 23.02 | 16000 | 0.1608 | 0.1862 |
| 0.8066 | 23.74 | 16500 | 0.1447 | 0.1626 |
| 0.7908 | 24.46 | 17000 | 0.1457 | 0.1655 |
| 0.7459 | 25.18 | 17500 | 0.1350 | 0.1445 |
| 0.7218 | 25.9 | 18000 | 0.1276 | 0.1421 |
| 0.703 | 26.62 | 18500 | 0.1177 | 0.1302 |
| 0.685 | 27.34 | 19000 | 0.1147 | 0.1305 |
| 0.6811 | 28.06 | 19500 | 0.1128 | 0.1244 |
| 0.6444 | 28.78 | 20000 | 0.1120 | 0.1213 |
| 0.6323 | 29.5 | 20500 | 0.1137 | 0.1166 |
| 0.5998 | 30.22 | 21000 | 0.1051 | 0.1107 |
| 0.5706 | 30.93 | 21500 | 0.1035 | 0.1037 |
| 0.5555 | 31.65 | 22000 | 0.1031 | 0.0927 |
| 0.5389 | 32.37 | 22500 | 0.0997 | 0.0900 |
| 0.5201 | 33.09 | 23000 | 0.0920 | 0.0912 |
| 0.5146 | 33.81 | 23500 | 0.0929 | 0.0947 |
| 0.515 | 34.53 | 24000 | 0.1000 | 0.0953 |
| 0.4743 | 35.25 | 24500 | 0.0922 | 0.0892 |
| 0.4707 | 35.97 | 25000 | 0.0852 | 0.0808 |
| 0.4456 | 36.69 | 25500 | 0.0855 | 0.0779 |
| 0.443 | 37.41 | 26000 | 0.0843 | 0.0738 |
| 0.4388 | 38.13 | 26500 | 0.0816 | 0.0699 |
| 0.4162 | 38.85 | 27000 | 0.0752 | 0.0645 |
| 0.3979 | 39.57 | 27500 | 0.0761 | 0.0621 |
| 0.3889 | 40.29 | 28000 | 0.0771 | 0.0625 |
| 0.3923 | 41.01 | 28500 | 0.0755 | 0.0598 |
| 0.3693 | 41.73 | 29000 | 0.0730 | 0.0578 |
| 0.3642 | 42.45 | 29500 | 0.0739 | 0.0598 |
| 0.3532 | 43.17 | 30000 | 0.0712 | 0.0553 |
| 0.3513 | 43.88 | 30500 | 0.0762 | 0.0516 |
| 0.3349 | 44.6 | 31000 | 0.0731 | 0.0504 |
| 0.3305 | 45.32 | 31500 | 0.0725 | 0.0507 |
| 0.3285 | 46.04 | 32000 | 0.0709 | 0.0489 |
| 0.3179 | 46.76 | 32500 | 0.0667 | 0.0467 |
| 0.3158 | 47.48 | 33000 | 0.0653 | 0.0494 |
| 0.3033 | 48.2 | 33500 | 0.0638 | 0.0456 |
| 0.3023 | 48.92 | 34000 | 0.0644 | 0.0464 |
| 0.2975 | 49.64 | 34500 | 0.0643 | 0.0455 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3.dev0
- Tokenizers 0.11.0
| 6,374 | [
[
-0.045440673828125,
-0.040252685546875,
0.01363372802734375,
0.00722503662109375,
-0.004436492919921875,
0.0001938343048095703,
-0.0029277801513671875,
0.0003604888916015625,
0.05010986328125,
0.0290374755859375,
-0.04150390625,
-0.048980712890625,
-0.0468139648... |
MAGAer13/mplug-owl-bloomz-7b-multilingual | 2023-05-30T07:00:38.000Z | [
"transformers",
"pytorch",
"mplug-owl",
"image-to-text",
"en",
"zh",
"fr",
"ja",
"multilingual",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | image-to-text | MAGAer13 | null | null | MAGAer13/mplug-owl-bloomz-7b-multilingual | 7 | 3,222 | transformers | 2023-05-30T05:36:50 | ---
license: apache-2.0
language:
- en
- zh
- fr
- ja
- multilingual
pipeline_tag: image-to-text
tags:
- mplug-owl
---
# Usage
## Get the latest codebase from Github
```Bash
git clone https://github.com/X-PLUG/mPLUG-Owl.git
```
## Model initialization
```Python
import torch
from transformers import AutoTokenizer
from mplug_owl.modeling_mplug_owl import MplugOwlForConditionalGeneration
from mplug_owl.processing_mplug_owl import MplugOwlImageProcessor, MplugOwlProcessor
pretrained_ckpt = 'MAGAer13/mplug-owl-bloomz-7b-multilingual'
model = MplugOwlForConditionalGeneration.from_pretrained(
pretrained_ckpt,
torch_dtype=torch.bfloat16,
)
image_processor = MplugOwlImageProcessor.from_pretrained(pretrained_ckpt)
tokenizer = AutoTokenizer.from_pretrained(pretrained_ckpt)
processor = MplugOwlProcessor(image_processor, tokenizer)
```
## Model inference
Prepare model inputs.
```Python
# We use a human/AI template to organize the context as a multi-turn conversation.
# <image> denotes an image placeholder.
prompts = [
'''The following is a conversation between a curious human and AI assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
Human: <image>
Human: Explain why this meme is funny.
AI: ''']
# The image paths should be placed in the image_list and kept in the same order as in the prompts.
# We support urls, local file paths, and base64 string. You can customise the pre-processing of images by modifying the mplug_owl.modeling_mplug_owl.ImageProcessor
image_list = ['https://xxx.com/image.jpg']
```
Get response.
```Python
# generate kwargs (the same in transformers) can be passed in the do_generate()
generate_kwargs = {
'do_sample': True,
'top_k': 5,
'max_length': 512
}
from PIL import Image
images = [Image.open(_) for _ in image_list]
inputs = processor(text=prompts, images=images, return_tensors='pt')
inputs = {k: v.bfloat16() if v.dtype == torch.float else v for k, v in inputs.items()}
inputs = {k: v.to(model.device) for k, v in inputs.items()}
with torch.no_grad():
res = model.generate(**inputs, **generate_kwargs)
sentence = tokenizer.decode(res.tolist()[0], skip_special_tokens=True)
print(sentence)
```
| 2,192 | [
[
-0.0241851806640625,
-0.0528564453125,
0.0191497802734375,
0.0220947265625,
-0.01367950439453125,
-0.0237579345703125,
-0.004184722900390625,
-0.037445068359375,
-0.0079803466796875,
0.0193939208984375,
-0.0504150390625,
-0.033599853515625,
-0.04833984375,
0... |
M-CLIP/XLM-Roberta-Large-Vit-B-16Plus | 2022-09-15T10:45:56.000Z | [
"transformers",
"pytorch",
"tf",
"multilingual",
"af",
"sq",
"am",
"ar",
"az",
"bn",
"bs",
"bg",
"ca",
"zh",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fr",
"de",
"el",
"hi",
"hu",
"is",
"id",
"it",
"ja",
"mk",
"ml",
"mr",
"pl",
"pt",
"ro",
"ru",
"sr... | null | M-CLIP | null | null | M-CLIP/XLM-Roberta-Large-Vit-B-16Plus | 14 | 3,219 | transformers | 2022-05-30T21:33:14 | ---
language:
- multilingual
- af
- sq
- am
- ar
- az
- bn
- bs
- bg
- ca
- zh
- hr
- cs
- da
- nl
- en
- et
- fr
- de
- el
- hi
- hu
- is
- id
- it
- ja
- mk
- ml
- mr
- pl
- pt
- ro
- ru
- sr
- sl
- es
- sw
- sv
- tl
- te
- tr
- tk
- uk
- ur
- ug
- uz
- vi
- xh
---
## Multilingual-clip: XLM-Roberta-Large-Vit-B-16Plus
Multilingual-CLIP extends OpenAI's English text encoders to multiple other languages. This model *only* contains the multilingual text encoder. The corresponding image model `Vit-B-16Plus` can be retrieved via instructions found on `mlfoundations` [open_clip repository on Github](https://github.com/mlfoundations/open_clip). We provide a usage example below.
## Requirements
To use both the multilingual text encoder and corresponding image encoder, we need to install the packages [`multilingual-clip`](https://github.com/FreddeFrallan/Multilingual-CLIP) and [`open_clip_torch`](https://github.com/mlfoundations/open_clip).
```
pip install multilingual-clip
pip install open_clip_torch
```
## Usage
Extracting embeddings from the text encoder can be done in the following way:
```python
from multilingual_clip import pt_multilingual_clip
import transformers
texts = [
'Three blind horses listening to Mozart.',
'Älgen är skogens konung!',
'Wie leben Eisbären in der Antarktis?',
'Вы знали, что все белые медведи левши?'
]
model_name = 'M-CLIP/XLM-Roberta-Large-Vit-B-16Plus'
# Load Model & Tokenizer
model = pt_multilingual_clip.MultilingualCLIP.from_pretrained(model_name)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
embeddings = model.forward(texts, tokenizer)
print("Text features shape:", embeddings.shape)
```
Extracting embeddings from the corresponding image encoder:
```python
import torch
import open_clip
import requests
from PIL import Image
device = "cuda" if torch.cuda.is_available() else "cpu"
model, _, preprocess = open_clip.create_model_and_transforms('ViT-B-16-plus-240', pretrained="laion400m_e32")
model.to(device)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image = preprocess(image).unsqueeze(0).to(device)
with torch.no_grad():
image_features = model.encode_image(image)
print("Image features shape:", image_features.shape)
```
## Evaluation results
None of the M-CLIP models have been extensively evaluated, but testing them on Txt2Img retrieval on the human-translated MS-COCO dataset, we see the following **R@10** results:
| Name | En | De | Es | Fr | Zh | It | Pl | Ko | Ru | Tr | Jp |
| ----------------------------------|:-----: |:-----: |:-----: |:-----: | :-----: |:-----: |:-----: |:-----: |:-----: |:-----: |:-----: |
| [OpenAI CLIP Vit-B/32](https://github.com/openai/CLIP)| 90.3 | - | - | - | - | - | - | - | - | - | - |
| [OpenAI CLIP Vit-L/14](https://github.com/openai/CLIP)| 91.8 | - | - | - | - | - | - | - | - | - | - |
| [OpenCLIP ViT-B-16+](https://github.com/openai/CLIP)| 94.3 | - | - | - | - | - | - | - | - | - | - |
| [LABSE Vit-L/14](https://huggingface.co/M-CLIP/LABSE-Vit-L-14)| 91.6 | 89.6 | 89.5 | 89.9 | 88.9 | 90.1 | 89.8 | 80.8 | 85.5 | 89.8 | 73.9 |
| [XLM-R Large Vit-B/32](https://huggingface.co/M-CLIP/XLM-Roberta-Large-Vit-B-32)| 91.8 | 88.7 | 89.1 | 89.4 | 89.3 | 89.8| 91.4 | 82.1 | 86.1 | 88.8 | 81.0 |
| [XLM-R Vit-L/14](https://huggingface.co/M-CLIP/XLM-Roberta-Large-Vit-L-14)| 92.4 | 90.6 | 91.0 | 90.0 | 89.7 | 91.1 | 91.3 | 85.2 | 85.8 | 90.3 | 81.9 |
| [XLM-R Large Vit-B/16+](https://huggingface.co/M-CLIP/XLM-Roberta-Large-Vit-B-16Plus)| **95.0** | **93.0** | **93.6** | **93.1** | **94.0** | **93.1** | **94.4** | **89.0** | **90.0** | **93.0** | **84.2** |
## Training/Model details
Further details about the model training and data can be found in the [model card](https://github.com/FreddeFrallan/Multilingual-CLIP/blob/main/larger_mclip.md). | 3,891 | [
[
-0.03546142578125,
-0.048065185546875,
0.01450347900390625,
0.0233612060546875,
-0.031402587890625,
-0.0053253173828125,
-0.033599853515625,
-0.03106689453125,
0.0552978515625,
0.0149383544921875,
-0.039154052734375,
-0.044281005859375,
-0.056427001953125,
0... |
lllyasviel/sd-controlnet-seg | 2023-04-24T22:30:42.000Z | [
"diffusers",
"art",
"controlnet",
"stable-diffusion",
"image-to-image",
"arxiv:2302.05543",
"license:openrail",
"has_space",
"diffusers:ControlNetModel",
"region:us"
] | image-to-image | lllyasviel | null | null | lllyasviel/sd-controlnet-seg | 37 | 3,214 | diffusers | 2023-02-24T07:13:29 | ---
license: openrail
base_model: runwayml/stable-diffusion-v1-5
tags:
- art
- controlnet
- stable-diffusion
- image-to-image
---
# Controlnet - *Image Segmentation Version*
ControlNet is a neural network structure to control diffusion models by adding extra conditions.
This checkpoint corresponds to the ControlNet conditioned on **Image Segmentation**.
It can be used in combination with [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/text2img).

## Model Details
- **Developed by:** Lvmin Zhang, Maneesh Agrawala
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Resources for more information:** [GitHub Repository](https://github.com/lllyasviel/ControlNet), [Paper](https://arxiv.org/abs/2302.05543).
- **Cite as:**
@misc{zhang2023adding,
title={Adding Conditional Control to Text-to-Image Diffusion Models},
author={Lvmin Zhang and Maneesh Agrawala},
year={2023},
eprint={2302.05543},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
## Introduction
Controlnet was proposed in [*Adding Conditional Control to Text-to-Image Diffusion Models*](https://arxiv.org/abs/2302.05543) by
Lvmin Zhang, Maneesh Agrawala.
The abstract reads as follows:
*We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions.
The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k).
Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on personal devices.
Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data.
We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc.
This may enrich the methods to control large diffusion models and further facilitate related applications.*
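To make the conditioning concrete, here is a minimal, hedged usage sketch with diffusers (the segmentation-map path is a placeholder; in practice the conditioning image is an ADE20K-style color-coded segmentation map produced by a separate segmentation model):
```python
# Minimal sketch: Stable Diffusion v1-5 conditioned on a pre-computed segmentation map.
# The segmentation-map path below is a placeholder; supply your own color-coded map.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-seg", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

seg_map = load_image("path/or/url/to/segmentation_map.png")  # placeholder conditioning image
image = pipe("a modern living room", image=seg_map, num_inference_steps=20).images[0]
image.save("living_room_from_seg.png")
```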
## Released Checkpoints
The authors released 8 different checkpoints, each trained with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
on a different type of conditioning:
| Model Name | Control Image Overview| Control Image Example | Generated Image Example |
|---|---|---|---|
|[lllyasviel/sd-controlnet-canny](https://huggingface.co/lllyasviel/sd-controlnet-canny)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_bird_canny.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_bird_canny.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_canny_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_canny_1.png"/></a>|
|[lllyasviel/sd-controlnet-depth](https://huggingface.co/lllyasviel/sd-controlnet-depth)<br/> *Trained with Midas depth estimation* |A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_vermeer_depth.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_vermeer_depth.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_depth_2.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_depth_2.png"/></a>|
|[lllyasviel/sd-controlnet-hed](https://huggingface.co/lllyasviel/sd-controlnet-hed)<br/> *Trained with HED edge detection (soft edge)* |A monochrome image with white soft edges on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_bird_hed.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_bird_hed.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_hed_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_hed_1.png"/></a> |
|[lllyasviel/sd-controlnet-mlsd](https://huggingface.co/lllyasviel/sd-controlnet-mlsd)<br/> *Trained with M-LSD line detection* |A monochrome image composed only of white straight lines on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_room_mlsd.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_room_mlsd.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_mlsd_0.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_mlsd_0.png"/></a>|
|[lllyasviel/sd-controlnet-normal](https://huggingface.co/lllyasviel/sd-controlnet-normal)<br/> *Trained with normal map* |A [normal mapped](https://en.wikipedia.org/wiki/Normal_mapping) image.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_human_normal.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_human_normal.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_normal_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_normal_1.png"/></a>|
|[lllyasviel/sd-controlnet_openpose](https://huggingface.co/lllyasviel/sd-controlnet-openpose)<br/> *Trained with OpenPose bone image* |An [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_human_openpose.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_human_openpose.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_openpose_0.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_openpose_0.png"/></a>|
|[lllyasviel/sd-controlnet_scribble](https://huggingface.co/lllyasviel/sd-controlnet-scribble)<br/> *Trained with human scribbles* |A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_vermeer_scribble.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_vermeer_scribble.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_scribble_0.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_scribble_0.png"/></a> |
|[lllyasviel/sd-controlnet_seg](https://huggingface.co/lllyasviel/sd-controlnet-seg)<br/>*Trained with semantic segmentation* |An image following the [ADE20K](https://groups.csail.mit.edu/vision/datasets/ADE20K/) segmentation protocol.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_room_seg.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_room_seg.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_seg_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_seg_1.png"/></a> |
## Example
It is recommended to use the checkpoint with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) as the checkpoint
has been trained on it.
Experimentally, the checkpoint can also be used with other diffusion models, such as dreamboothed Stable Diffusion variants (a sketch is shown at the end of this example).
1. Let's install `diffusers` and related packages:
```
$ pip install diffusers transformers accelerate
```
2. We'll need to make use of a color palette here as described in [semantic_segmentation](https://huggingface.co/docs/transformers/tasks/semantic_segmentation):
```py
import numpy as np

palette = np.asarray([
[0, 0, 0],
[120, 120, 120],
[180, 120, 120],
[6, 230, 230],
[80, 50, 50],
[4, 200, 3],
[120, 120, 80],
[140, 140, 140],
[204, 5, 255],
[230, 230, 230],
[4, 250, 7],
[224, 5, 255],
[235, 255, 7],
[150, 5, 61],
[120, 120, 70],
[8, 255, 51],
[255, 6, 82],
[143, 255, 140],
[204, 255, 4],
[255, 51, 7],
[204, 70, 3],
[0, 102, 200],
[61, 230, 250],
[255, 6, 51],
[11, 102, 255],
[255, 7, 71],
[255, 9, 224],
[9, 7, 230],
[220, 220, 220],
[255, 9, 92],
[112, 9, 255],
[8, 255, 214],
[7, 255, 224],
[255, 184, 6],
[10, 255, 71],
[255, 41, 10],
[7, 255, 255],
[224, 255, 8],
[102, 8, 255],
[255, 61, 6],
[255, 194, 7],
[255, 122, 8],
[0, 255, 20],
[255, 8, 41],
[255, 5, 153],
[6, 51, 255],
[235, 12, 255],
[160, 150, 20],
[0, 163, 255],
[140, 140, 140],
[250, 10, 15],
[20, 255, 0],
[31, 255, 0],
[255, 31, 0],
[255, 224, 0],
[153, 255, 0],
[0, 0, 255],
[255, 71, 0],
[0, 235, 255],
[0, 173, 255],
[31, 0, 255],
[11, 200, 200],
[255, 82, 0],
[0, 255, 245],
[0, 61, 255],
[0, 255, 112],
[0, 255, 133],
[255, 0, 0],
[255, 163, 0],
[255, 102, 0],
[194, 255, 0],
[0, 143, 255],
[51, 255, 0],
[0, 82, 255],
[0, 255, 41],
[0, 255, 173],
[10, 0, 255],
[173, 255, 0],
[0, 255, 153],
[255, 92, 0],
[255, 0, 255],
[255, 0, 245],
[255, 0, 102],
[255, 173, 0],
[255, 0, 20],
[255, 184, 184],
[0, 31, 255],
[0, 255, 61],
[0, 71, 255],
[255, 0, 204],
[0, 255, 194],
[0, 255, 82],
[0, 10, 255],
[0, 112, 255],
[51, 0, 255],
[0, 194, 255],
[0, 122, 255],
[0, 255, 163],
[255, 153, 0],
[0, 255, 10],
[255, 112, 0],
[143, 255, 0],
[82, 0, 255],
[163, 255, 0],
[255, 235, 0],
[8, 184, 170],
[133, 0, 255],
[0, 255, 92],
[184, 0, 255],
[255, 0, 31],
[0, 184, 255],
[0, 214, 255],
[255, 0, 112],
[92, 255, 0],
[0, 224, 255],
[112, 224, 255],
[70, 184, 160],
[163, 0, 255],
[153, 0, 255],
[71, 255, 0],
[255, 0, 163],
[255, 204, 0],
[255, 0, 143],
[0, 255, 235],
[133, 255, 0],
[255, 0, 235],
[245, 0, 255],
[255, 0, 122],
[255, 245, 0],
[10, 190, 212],
[214, 255, 0],
[0, 204, 255],
[20, 0, 255],
[255, 255, 0],
[0, 153, 255],
[0, 41, 255],
[0, 255, 204],
[41, 0, 255],
[41, 255, 0],
[173, 0, 255],
[0, 245, 255],
[71, 0, 255],
[122, 0, 255],
[0, 255, 184],
[0, 92, 255],
[184, 255, 0],
[0, 133, 255],
[255, 214, 0],
[25, 194, 194],
[102, 255, 0],
[92, 0, 255],
])
```
3. Having defined the color palette we can now run the whole segmentation + controlnet generation code:
```py
from transformers import AutoImageProcessor, UperNetForSemanticSegmentation
from PIL import Image
import numpy as np
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
from diffusers.utils import load_image
image_processor = AutoImageProcessor.from_pretrained("openmmlab/upernet-convnext-small")
image_segmentor = UperNetForSemanticSegmentation.from_pretrained("openmmlab/upernet-convnext-small")
image = load_image("https://huggingface.co/lllyasviel/sd-controlnet-seg/resolve/main/images/house.png").convert('RGB')
pixel_values = image_processor(image, return_tensors="pt").pixel_values
with torch.no_grad():
outputs = image_segmentor(pixel_values)
seg = image_processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
color_seg = np.zeros((seg.shape[0], seg.shape[1], 3), dtype=np.uint8) # height, width, 3
for label, color in enumerate(palette):
color_seg[seg == label, :] = color
color_seg = color_seg.astype(np.uint8)
image = Image.fromarray(color_seg)
controlnet = ControlNetModel.from_pretrained(
"lllyasviel/sd-controlnet-seg", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5", controlnet=controlnet, safety_checker=None, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
# Remove if you do not have xformers installed
# see https://huggingface.co/docs/diffusers/v0.13.0/en/optimization/xformers#installing-xformers
# for installation instructions
pipe.enable_xformers_memory_efficient_attention()
pipe.enable_model_cpu_offload()
image = pipe("house", image, num_inference_steps=20).images[0]
image.save('./images/house_seg_out.png')
```



### Training
The semantic segmentation checkpoint was trained on 164K segmentation-image/caption pairs from ADE20K for 200 GPU-hours on Nvidia A100 80GB GPUs, using Stable Diffusion 1.5 as the base model.
### Blog post
For more information, please also have a look at the [official ControlNet Blog Post](https://huggingface.co/blog/controlnet). | 15,027 | [
[
-0.0445556640625,
-0.040679931640625,
-0.005672454833984375,
0.032745361328125,
-0.0233917236328125,
-0.0227813720703125,
-0.005672454833984375,
-0.049468994140625,
0.06292724609375,
0.01348876953125,
-0.041748046875,
-0.033538818359375,
-0.054168701171875,
... |
team-lucid/hubert-base-korean | 2023-09-05T02:55:16.000Z | [
"transformers",
"pytorch",
"jax",
"safetensors",
"hubert",
"feature-extraction",
"speech",
"audio",
"automatic-speech-recognition",
"custom_code",
"ko",
"arxiv:2106.07447",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | team-lucid | null | null | team-lucid/hubert-base-korean | 15 | 3,210 | transformers | 2023-05-29T12:00:30 | ---
license: apache-2.0
language:
- ko
library_name: transformers
pipeline_tag: automatic-speech-recognition
tags:
- speech
- audio
---
# hubert-base-korean
## Model Details
HuBERT (Hidden-Unit BERT) is a speech representation learning model proposed by Facebook.
Unlike conventional speech recognition models, HuBERT uses a self-supervised learning approach that learns directly from the raw waveform of the speech signal.
This model was trained on Cloud TPUs provided through Google's TPU Research Cloud (TRC).
### Model Description
<table>
<tr>
<td colspan="2"></td>
<td>Base</td>
<td>Large</td>
</tr>
<tr>
<td rowspan="3">CNN Encoder</td>
<td>strides</td>
<td colspan="2">5, 2, 2, 2, 2, 2, 2</td>
</tr>
<tr>
<td>kernel width</td>
<td colspan="2">10, 3, 3, 3, 3, 2, 2</td>
</tr>
<tr>
<td>channel</td>
<td colspan="2">512</td>
</tr>
<tr>
<td rowspan="4">Transformer Encoder</td>
<td>Layer</td>
<td>12</td>
<td>24</td>
</tr>
<tr>
<td>embedding dim</td>
<td>768</td>
<td>1024</td>
</tr>
<tr>
<td>inner FFN dim</td>
<td>3072</td>
<td>4096</td>
</tr>
<tr>
<td>attention heads</td>
<td>8</td>
<td>16</td>
</tr>
<tr>
<td>Projection</td>
<td>dim</td>
<td>256</td>
<td>768</td>
</tr>
<tr>
<td colspan="2">Params</td>
<td>95M</td>
<td>317M </td>
</tr>
</table>
## How to Get Started with the Model
### Pytorch
```py
import torch
from transformers import HubertModel
model = HubertModel.from_pretrained("team-lucid/hubert-base-korean")
wav = torch.ones(1, 16000)
outputs = model(wav)
print(f"Input: {wav.shape}") # [1, 16000]
print(f"Output: {outputs.last_hidden_state.shape}") # [1, 49, 768]
```
### JAX/Flax
```py
import jax.numpy as jnp
from transformers import FlaxAutoModel
model = FlaxAutoModel.from_pretrained("team-lucid/hubert-base-korean", trust_remote_code=True)
wav = jnp.ones((1, 16000))
outputs = model(wav)
print(f"Input: {wav.shape}") # [1, 16000]
print(f"Output: {outputs.last_hidden_state.shape}") # [1, 49, 768]
```
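The checkpoints above are base (headless) models. For ASR fine-tuning you would typically wrap one in a CTC head; a minimal sketch is below, where the vocabulary size of 32 is only a placeholder for your own tokenizer's vocabulary.

```py
from transformers import HubertForCTC

# The base checkpoint has no CTC head, so the head below is randomly initialized;
# vocab_size=32 is a placeholder and must match your own character/token vocabulary.
model = HubertForCTC.from_pretrained(
    "team-lucid/hubert-base-korean", vocab_size=32, ctc_loss_reduction="mean"
)
```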
## Training Details
### Training Data
This model was trained on roughly 4,000 hours of speech extracted from
[자유대화 음성(일반남여)](https://www.aihub.or.kr/aihubdata/data/view.do?dataSetSn=109), [다화자 음성합성 데이터](https://www.aihub.or.kr/aihubdata/data/view.do?dataSetSn=542), and [방송 콘텐츠 대화체 음성인식 데이터](https://www.aihub.or.kr/aihubdata/data/view.do?dataSetSn=463),
datasets built with funding from the Ministry of Science and ICT and support from the National Information Society Agency of Korea.
### Training Procedure
As in [the original paper](https://arxiv.org/pdf/2106.07447.pdf), a Base model was first trained on MFCC-based targets; k-means with 500 clusters was then run on its representations to produce new targets, with which the Base and
Large models were trained again.
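A rough sketch of this MFCC + k-means target-generation step is shown below (illustrative only: `"example.wav"` and the cluster count are placeholders; in practice frames from the whole corpus are pooled and 100 or 500 clusters are used).

```py
import librosa
from sklearn.cluster import KMeans

# Extract MFCC frames from one (placeholder) utterance.
wav, sr = librosa.load("example.wav", sr=16000)
mfcc = librosa.feature.mfcc(y=wav, sr=sr, n_mfcc=13).T  # (num_frames, 13)

# Cluster frames; cluster IDs serve as frame-level pseudo-labels
# for the masked-prediction objective.
kmeans = KMeans(n_clusters=50, n_init=10, random_state=0).fit(mfcc)
pseudo_labels = kmeans.labels_
print(pseudo_labels[:20])
```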
#### Training Hyperparameters
| Hyperparameter | Base | Large |
|:--------------------|---------|--------:|
| Warmup Steps | 32,000 | 32,000 |
| Learning Rates | 5e-4 | 1.5e-3 |
| Batch Size | 128 | 128 |
| Weight Decay | 0.01 | 0.01 |
| Max Steps | 400,000 | 400,000 |
| Learning Rate Decay | 0.1 | 0.1 |
| Adam \\(\beta_1\\) | 0.9 | 0.9 |
| Adam \\(\beta_2\\) | 0.99 | 0.99 |
[
-0.045135498046875,
-0.046112060546875,
0.00466156005859375,
0.0222015380859375,
-0.011871337890625,
0.0009684562683105469,
-0.02093505859375,
-0.0196075439453125,
0.02099609375,
0.00865936279296875,
-0.044830322265625,
-0.0465087890625,
-0.05133056640625,
-... |
melaris/peyman2ai | 2023-11-03T17:01:30.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us",
"has_space"
] | text-to-image | melaris | null | null | melaris/peyman2ai | 0 | 3,209 | diffusers | 2023-11-03T16:57:11 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Peyman2Ai Dreambooth model trained by melaris with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
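The checkpoint can also be loaded directly with `diffusers`. Below is a minimal sketch; the prompt text is a placeholder, and you should replace it with the instance token the concept was actually trained with.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "melaris/peyman2ai", torch_dtype=torch.float16
).to("cuda")

# Placeholder prompt; substitute the instance token used during DreamBooth training.
image = pipe("a photo of the trained concept", num_inference_steps=30).images[0]
image.save("sample.png")
```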
Sample pictures of this concept:
| 498 | [
[
-0.0167999267578125,
-0.053253173828125,
0.0440673828125,
0.037139892578125,
-0.0264739990234375,
0.024139404296875,
0.02081298828125,
-0.0121307373046875,
0.040557861328125,
0.00966644287109375,
-0.0088653564453125,
-0.0176849365234375,
-0.03363037109375,
-... |
inception-mbzuai/jais-13b | 2023-09-12T06:45:24.000Z | [
"transformers",
"pytorch",
"jais",
"text-generation",
"Arabic",
"English",
"LLM",
"Decoder",
"causal-lm",
"custom_code",
"ar",
"en",
"arxiv:2308.16149",
"license:apache-2.0",
"region:us"
] | text-generation | inception-mbzuai | null | null | inception-mbzuai/jais-13b | 59 | 3,207 | transformers | 2023-08-17T07:50:29 | ---
language:
- ar
- en
thumbnail: null
tags:
- Arabic
- English
- LLM
- Decoder
- causal-lm
license: apache-2.0
pipeline_tag: text-generation
---
# Jais-13b
<!-- Provide a quick summary of what the model is/does. -->
This is a 13 billion parameter pre-trained bilingual large language model for both Arabic and English,
trained on a dataset containing 72 billion Arabic tokens and 279 billion English/code tokens.
The Arabic data is iterated over for 1.6 epochs (as opposed to 1 epoch for English/code), for a total of 395 billion tokens of training.
The model is based on transformer-based decoder-only (GPT-3) architecture and uses SwiGLU
non-linearity. It implements ALiBi position embeddings, enabling the model to extrapolate
to long sequence lengths, providing improved context handling and model precision.
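As a rough illustration of the ALiBi idea mentioned above (not Jais's actual implementation), each attention head adds a distance-proportional linear penalty to its attention scores instead of using learned position embeddings:

```python
import torch

def alibi_bias(num_heads: int, seq_len: int) -> torch.Tensor:
    # Head-specific geometric slopes: 2^(-8/num_heads), 2^(-16/num_heads), ...
    slopes = torch.tensor([2.0 ** (-8.0 * (h + 1) / num_heads) for h in range(num_heads)])
    # distances[q, k] = k - q; clamp so only past positions receive a (negative) bias.
    distances = torch.arange(seq_len).view(1, 1, -1) - torch.arange(seq_len).view(1, -1, 1)
    distances = distances.clamp(max=0)
    return slopes.view(-1, 1, 1) * distances  # (num_heads, seq_len, seq_len), added to scores

bias = alibi_bias(num_heads=4, seq_len=8)
print(bias.shape)  # torch.Size([4, 8, 8])
```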
## Getting started
Below is sample code to use the model. Note that the model requires a custom model class, so users must
enable `trust_remote_code=True` while loading the model.
Also, note that this code is tested on `transformers==4.28.0`.
```python
# -*- coding: utf-8 -*-
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_path = "inception-mbzuai/jais-13b"
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", trust_remote_code=True)
def get_response(text,tokenizer=tokenizer,model=model):
input_ids = tokenizer(text, return_tensors="pt").input_ids
inputs = input_ids.to(device)
input_len = inputs.shape[-1]
generate_ids = model.generate(
inputs,
top_p=0.9,
temperature=0.3,
max_length=200-input_len,
min_length=input_len + 4,
repetition_penalty=1.2,
do_sample=True,
)
response = tokenizer.batch_decode(
generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
)[0]
return response
text= "عاصمة دولة الإمارات العربية المتحدة ه"
print(get_response(text))
text = "The capital of UAE is"
print(get_response(text))
```
## Model Details
- **Developed by:** [Inception](https://www.inceptioniai.org/en/), [Mohamed bin Zayed University of Artificial Intelligence (MBZUAI)](https://mbzuai.ac.ae/), and [Cerebras Systems](https://www.cerebras.net/).
- **Language(s) (NLP):** Arabic and English
- **License:** Apache 2.0
- **Input:** Text only data.
- **Output:** Model generates text.
- **Paper :** [Jais and Jais-chat: Arabic-Centric Foundation and Instruction-Tuned Open Generative Large Language Models](https://arxiv.org/abs/2308.16149)
- **Demo :** [Access here](https://arabic-gpt.ai)
## Intended Use
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
We release the Jais 13B model under a full open source license. We welcome all feedback and opportunities to collaborate.
This model is the first release from the Inception - MBZUAI - Cerebras partnership, and at the time of release,
it achieved state-of-the-art results across a comprehensive Arabic test suite, as described in the accompanying technical report.
Some potential downstream uses include:
- *Research*: This model can be used by researchers and developers.
- *Commercial Use*: It can be used as a base model to further fine-tune for specific use cases (similar to [jais-13b-chat](https://huggingface.co/inception-mbzuai/jais-13b-chat)).
Some potential use cases include:
- Chat-assistants.
- Customer service.
Audiences that we hope will benefit from our model:
- *Academics*: For those researching Arabic natural language processing.
- *Businesses*: Companies targeting Arabic-speaking audiences.
- *Developers*: Those integrating Arabic language capabilities in apps.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
While Jais-13b is a powerful Arabic-English bilingual model, it is essential to understand its limitations and the potential for misuse.
It is prohibited to use the model in any manner that violates applicable laws or regulations.
The following are some example scenarios where the model should not be used.
- *Malicious Use*: The model should not be used for generating harmful, misleading, or inappropriate content. This includes but is not limited to:
- Generating or promoting hate speech, violence, or discrimination.
- Spreading misinformation or fake news.
- Engaging in or promoting illegal activities.
- *Sensitive Information*: The model should not be used to handle or generate personal, confidential, or sensitive information.
- *Generalization Across All Languages*: Jais-13b is bilingual and optimized for Arabic and English; it should not be assumed to have equal proficiency in other languages or dialects.
- *High-Stakes Decisions*: The model should not be used to make high-stakes decisions without human oversight. This includes medical, legal, financial, or safety-critical decisions.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
The model is trained on publicly available data which was in part curated by Inception. We have employed different
techniques to reduce bias in the model. While efforts have been made to minimize biases, it is likely that the model, as with all LLMs, will exhibit some bias.
The model is trained as an AI assistant for Arabic and English speakers. The model is limited to produce responses for queries in these two languages
and may not produce appropriate responses to other language queries.
By using Jais, you acknowledge and accept that, as with any large language model, it may generate incorrect, misleading and/or offensive information or content.
The information is not intended as advice and should not be relied upon in any way, nor are we responsible for any of the content or consequences resulting from its use.
We are continuously working to develop models with greater capabilities, and as such, we welcome any feedback on the model.
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
For the pre-training of Jais-13b, we used a diverse bilingual corpus sourced from the Web and other sources. We also used publicly available English and code datasets.
To collect Arabic data, we use multiple sources including web pages, Wikipedia articles, news articles, Arabic books,
and social network content. We augment the volume of Arabic data by translating English to Arabic using an in-house machine translation system.
We restrict this to high quality English resources such as English Wikipedia and English books. Further details about the training data can be found in the technical report.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
Training was performed on the Condor Galaxy 1 (CG-1) supercomputer platform.
#### Training Hyperparameters
| Hyperparameter | Value |
|----------------------------|------------------------------|
| Precision | fp32 |
| Optimizer | AdamW |
| Learning rate | 0 to 0.012 (<= 95 steps) |
| | 0.012 to 0.0012 (> 95 steps) |
| Weight decay | 0.1 |
| Batch size | 1920 |
| Steps | 100551 |
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
We conducted a comprehensive evaluation of Jais and benchmarked it against other leading base language models, focusing on both English and Arabic. The evaluation criteria spanned various dimensions, including:
- **Knowledge:** How well the model answers factual questions.
- **Reasoning:** The model's ability to answer questions requiring reasoning.
- **Misinformation/Bias:** Assessment of the model's susceptibility to generating false or misleading information, and its neutrality.
Arabic evaluation results:
| Models | Avg | EXAMS | MMLU (M) | LitQA | Hellaswag | PIQA | BoolQA | SituatedQA | ARC-C | OpenBookQA | TruthfulQA | CrowS-Pairs |
|-------------|-------|-------|----------|-------|-----------|------|--------|------------|-------|------------|------------|-------------|
| Jais (13B) | **46.5** | 40.4 | 30.0 | 58.3 | 57.7 | 67.6 | 62.6 | 42.5 | 35.8 | 32.4 | 41.1 | 58.4 |
| BLOOM (7.1B) | 40.9 |34.0 | 28.2 | 37.1 | 40.9 | 58.4 | 59.9 | 39.1 | 27.3 | 28.0 | 44.4 | 53.5 |
| LLaMA2 (13B) | 38.1 | 29.2 | 28.4 | 32.0 | 34.3 | 52.9 | 63.8 | 36.4 | 24.3 | 30.0 | 45.5 | 49.9 |
| AraT5 (220M) | 32.0 | 24.7 | 23.8 | 26.3 | 25.5 | 50.4 | 58.2 | 33.9 | 24.7 | 25.4 | 20.9 | 47.2 |
| AraBART (139M) | 36.7 | 26.5 | 27.5 | 34.3 | 28.1 | 52.6 | 57.1 | 34.6 | 25.1 | 28.6 | 49.8 | 48.8 |
All tasks above report accuracy or F1 scores (the higher the better). For the sake of brevity, we do not include results over English tasks.
Detailed comparisons in both languages and evaluation dataset details can be found in the technical report.
## Citation
```
@misc{sengupta2023jais,
title={Jais and Jais-chat: Arabic-Centric Foundation and Instruction-Tuned Open Generative Large Language Models},
author={Neha Sengupta and Sunil Kumar Sahu and Bokang Jia and Satheesh Katipomu and Haonan Li and Fajri Koto and Osama Mohammed Afzal and Samta Kamboj and Onkar Pandit and Rahul Pal and Lalit Pradhan and Zain Muhammad Mujahid and Massa Baali and Alham Fikri Aji and Zhengzhong Liu and Andy Hock and Andrew Feldman and Jonathan Lee and Andrew Jackson and Preslav Nakov and Timothy Baldwin and Eric Xing},
year={2023},
eprint={2308.16149},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Copyright Inception Institute of Artificial Intelligence Ltd.
| 10,496 | [
[
-0.042266845703125,
-0.06268310546875,
-0.00018131732940673828,
0.0382080078125,
-0.028472900390625,
0.002361297607421875,
-0.0155487060546875,
-0.038604736328125,
-0.00244903564453125,
0.03094482421875,
-0.03167724609375,
-0.0439453125,
-0.052215576171875,
... |
google/rembert | 2022-05-27T15:05:23.000Z | [
"transformers",
"pytorch",
"tf",
"rembert",
"multilingual",
"af",
"am",
"ar",
"az",
"be",
"bg",
"bn",
"bs",
"ca",
"ceb",
"co",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fil",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha"... | null | google | null | null | google/rembert | 15 | 3,206 | transformers | 2022-03-02T23:29:05 | ---
language:
- multilingual
- af
- am
- ar
- az
- be
- bg
- bn
- bs
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- hi
- hmn
- hr
- ht
- hu
- hy
- id
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- no
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
license: apache-2.0
datasets:
- wikipedia
---
# RemBERT (for classification)
Pretrained RemBERT model on 110 languages using a masked language modeling (MLM) objective. It was introduced in the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/abs/2010.12821). A direct export of the model checkpoint was first made available in [this repository](https://github.com/google-research/google-research/tree/master/rembert). This version of the checkpoint is lightweight since it is meant to be finetuned for classification and excludes the output embedding weights.
## Model description
RemBERT's main difference from mBERT is that the input and output embeddings are not tied. Instead, RemBERT uses small input embeddings and larger output embeddings. This makes the model more efficient since the output embeddings are discarded during fine-tuning. It is also more accurate, especially when the input embeddings' parameters are reinvested into the core model, as is done in RemBERT.
## Intended uses & limitations
You should fine-tune this model for your downstream task. It is meant to be a general-purpose model, similar to mBERT. In our [paper](https://arxiv.org/abs/2010.12821), we have successfully applied this model to tasks such as classification, question answering, NER, and POS-tagging. For tasks such as text generation, you should look at models like GPT-2.
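A minimal sketch of loading the checkpoint for sequence-classification fine-tuning is shown below; the label count and example sentence are placeholders.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("google/rembert")
# num_labels=2 is a placeholder; set it to match your downstream task.
model = AutoModelForSequenceClassification.from_pretrained("google/rembert", num_labels=2)

inputs = tokenizer("RemBERT supports 110 languages.", return_tensors="pt")
logits = model(**inputs).logits
print(logits.shape)  # (1, 2); the head is untrained until you fine-tune
```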
## Training data
The RemBERT model was pretrained on multilingual Wikipedia data over 110 languages. The full language list is on [this repository](https://github.com/google-research/google-research/tree/master/rembert)
### BibTeX entry and citation info
```bibtex
@inproceedings{DBLP:conf/iclr/ChungFTJR21,
author = {Hyung Won Chung and
Thibault F{\'{e}}vry and
Henry Tsai and
Melvin Johnson and
Sebastian Ruder},
title = {Rethinking Embedding Coupling in Pre-trained Language Models},
booktitle = {9th International Conference on Learning Representations, {ICLR} 2021,
Virtual Event, Austria, May 3-7, 2021},
publisher = {OpenReview.net},
year = {2021},
url = {https://openreview.net/forum?id=xpFFI\_NtgpW},
timestamp = {Wed, 23 Jun 2021 17:36:39 +0200},
biburl = {https://dblp.org/rec/conf/iclr/ChungFTJR21.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | 2,961 | [
[
-0.028594970703125,
-0.060455322265625,
0.02081298828125,
-0.00640869140625,
-0.0309906005859375,
0.0240478515625,
-0.01276397705078125,
-0.0289459228515625,
0.017608642578125,
0.0328369140625,
-0.04205322265625,
-0.04998779296875,
-0.037139892578125,
-0.008... |
ckiplab/albert-base-chinese-ws | 2022-05-10T03:28:09.000Z | [
"transformers",
"pytorch",
"albert",
"token-classification",
"zh",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | ckiplab | null | null | ckiplab/albert-base-chinese-ws | 1 | 3,205 | transformers | 2022-03-02T23:29:05 | ---
language:
- zh
thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png
tags:
- pytorch
- token-classification
- albert
- zh
license: gpl-3.0
---
# CKIP ALBERT Base Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- https://github.com/ckiplab/ckip-transformers
## Contributers
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```python
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/albert-base-chinese-ws')
```
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
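If you just want quick word-segmentation output, a rough sketch using the `token-classification` pipeline is shown below; the example sentence is arbitrary, and the emitted B/I-style tags follow the model's own label set.

```python
from transformers import BertTokenizerFast, AutoModelForTokenClassification, pipeline

tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModelForTokenClassification.from_pretrained('ckiplab/albert-base-chinese-ws')

ws_pipeline = pipeline('token-classification', model=model, tokenizer=tokenizer)
print(ws_pipeline('傅達仁今將執行安樂死'))  # per-character tags marking word boundaries
```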
| 1,128 | [
[
-0.0237274169921875,
-0.021942138671875,
0.0012359619140625,
0.05706787109375,
-0.0245208740234375,
0.00684356689453125,
-0.01165008544921875,
-0.0186767578125,
-0.003993988037109375,
0.0318603515625,
-0.0262603759765625,
-0.0244598388671875,
-0.04345703125,
... |
csebuetnlp/mT5_m2m_crossSum_enhanced | 2023-02-28T13:29:42.000Z | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"mT5",
"am",
"ar",
"az",
"bn",
"my",
"zh",
"en",
"fr",
"gu",
"ha",
"hi",
"ig",
"id",
"ja",
"rn",
"ko",
"ky",
"mr",
"ne",
"om",
"ps",
"fa",
"pcm",
"pt",
"pa",
"ru",
"gd",
"sr",... | summarization | csebuetnlp | null | null | csebuetnlp/mT5_m2m_crossSum_enhanced | 8 | 3,205 | transformers | 2023-02-28T12:58:53 | ---
tags:
- summarization
- mT5
language:
- am
- ar
- az
- bn
- my
- zh
- en
- fr
- gu
- ha
- hi
- ig
- id
- ja
- rn
- ko
- ky
- mr
- ne
- om
- ps
- fa
- pcm
- pt
- pa
- ru
- gd
- sr
- si
- so
- es
- sw
- ta
- te
- th
- ti
- tr
- uk
- ur
- uz
- vi
- cy
- yo
licenses:
- cc-by-nc-sa-4.0
widget:
- text: >-
Videos that say approved vaccines are dangerous and cause autism, cancer or
infertility are among those that will be taken down, the company said. The
policy includes the termination of accounts of anti-vaccine influencers.
Tech giants have been criticised for not doing more to counter false health
information on their sites. In July, US President Joe Biden said social
media platforms were largely responsible for people's scepticism in getting
vaccinated by spreading misinformation, and appealed for them to address the
issue. YouTube, which is owned by Google, said 130,000 videos were removed
from its platform since last year, when it implemented a ban on content
spreading misinformation about Covid vaccines. In a blog post, the company
said it had seen false claims about Covid jabs "spill over into
misinformation about vaccines in general". The new policy covers
long-approved vaccines, such as those against measles or hepatitis B.
"We're expanding our medical misinformation policies on YouTube with new
guidelines on currently administered vaccines that are approved and
confirmed to be safe and effective by local health authorities and the WHO,"
the post said, referring to the World Health Organization.
datasets:
- csebuetnlp/CrossSum
---
# mT5-m2m-CrossSum-enhanced
This repository contains an enhanced many-to-many (m2m) mT5 checkpoint finetuned on all cross-lingual pairs of the [CrossSum](https://huggingface.co/datasets/csebuetnlp/CrossSum) dataset. This model tries to **summarize text written in any language in the provided target language.** For finetuning details and scripts, see the [paper](https://arxiv.org/abs/2112.08804) and the [official repository](https://github.com/csebuetnlp/CrossSum).
## Using this model in `transformers` (tested on 4.11.0.dev0)
```python
import re
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
WHITESPACE_HANDLER = lambda k: re.sub('\s+', ' ', re.sub('\n+', ' ', k.strip()))
article_text = """Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs "spill over into misinformation about vaccines in general". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. "We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO," the post said, referring to the World Health Organization."""
model_name = "csebuetnlp/mT5_m2m_crossSum_enhanced"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
get_lang_id = lambda lang: tokenizer._convert_token_to_id(
model.config.task_specific_params["langid_map"][lang][1]
)
target_lang = "english" # for a list of available language names see below
input_ids = tokenizer(
[WHITESPACE_HANDLER(article_text)],
return_tensors="pt",
padding="max_length",
truncation=True,
max_length=512
)["input_ids"]
output_ids = model.generate(
input_ids=input_ids,
decoder_start_token_id=get_lang_id(target_lang),
max_length=84,
no_repeat_ngram_size=2,
num_beams=4,
)[0]
summary = tokenizer.decode(
output_ids,
skip_special_tokens=True,
clean_up_tokenization_spaces=False
)
print(summary)
```
### Available target language names
- `amharic`
- `arabic`
- `azerbaijani`
- `bengali`
- `burmese`
- `chinese_simplified`
- `chinese_traditional`
- `english`
- `french`
- `gujarati`
- `hausa`
- `hindi`
- `igbo`
- `indonesian`
- `japanese`
- `kirundi`
- `korean`
- `kyrgyz`
- `marathi`
- `nepali`
- `oromo`
- `pashto`
- `persian`
- `pidgin`
- `portuguese`
- `punjabi`
- `russian`
- `scottish_gaelic`
- `serbian_cyrillic`
- `serbian_latin`
- `sinhala`
- `somali`
- `spanish`
- `swahili`
- `tamil`
- `telugu`
- `thai`
- `tigrinya`
- `turkish`
- `ukrainian`
- `urdu`
- `uzbek`
- `vietnamese`
- `welsh`
- `yoruba`
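To summarize the same article into several of these languages, only the decoder start token needs to change. The short follow-on below reuses `model`, `tokenizer`, `get_lang_id`, and `input_ids` from the snippet above.

```python
# Reuses model, tokenizer, get_lang_id and input_ids from the snippet above.
for target_lang in ["french", "arabic"]:
    output_ids = model.generate(
        input_ids=input_ids,
        decoder_start_token_id=get_lang_id(target_lang),
        max_length=84,
        no_repeat_ngram_size=2,
        num_beams=4,
    )[0]
    print(target_lang, tokenizer.decode(output_ids, skip_special_tokens=True))
```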
## Citation
If you use this model, please cite the following paper:
```
@article{hasan2021crosssum,
author = {Tahmid Hasan and Abhik Bhattacharjee and Wasi Uddin Ahmad and Yuan-Fang Li and Yong-bin Kang and Rifat Shahriyar},
title = {CrossSum: Beyond English-Centric Cross-Lingual Abstractive Text Summarization for 1500+ Language Pairs},
journal = {CoRR},
volume = {abs/2112.08804},
year = {2021},
url = {https://arxiv.org/abs/2112.08804},
eprinttype = {arXiv},
eprint = {2112.08804}
}
``` | 5,645 | [
[
-0.0302581787109375,
-0.0484619140625,
0.0018310546875,
0.0246734619140625,
-0.01192474365234375,
-0.0062103271484375,
-0.0110931396484375,
-0.0284271240234375,
0.02349853515625,
0.01419830322265625,
-0.0347900390625,
-0.04180908203125,
-0.059326171875,
0.02... |
pszemraj/long-t5-tglobal-base-16384-book-summary | 2023-04-30T06:25:29.000Z | [
"transformers",
"pytorch",
"rust",
"onnx",
"safetensors",
"longt5",
"text2text-generation",
"summarization",
"summary",
"booksum",
"long-document",
"long-form",
"dataset:kmfoda/booksum",
"arxiv:2112.07916",
"arxiv:2105.08209",
"doi:10.57967/hf/0100",
"license:apache-2.0",
"license:... | summarization | pszemraj | null | null | pszemraj/long-t5-tglobal-base-16384-book-summary | 97 | 3,204 | transformers | 2022-06-27T16:37:26 | ---
tags:
- summarization
- summary
- booksum
- long-document
- long-form
license:
- apache-2.0
- bsd-3-clause
datasets:
- kmfoda/booksum
metrics:
- rouge
widget:
- text: large earthquakes along a given fault segment do not occur at random intervals
because it takes time to accumulate the strain energy for the rupture. The rates
at which tectonic plates move and accumulate strain at their boundaries are approximately
uniform. Therefore, in first approximation, one may expect that large ruptures
of the same fault segment will occur at approximately constant time intervals.
If subsequent main shocks have different amounts of slip across the fault, then
the recurrence time may vary, and the basic idea of periodic mainshocks must be
modified. For great plate boundary ruptures the length and slip often vary by
a factor of 2. Along the southern segment of the San Andreas fault the recurrence
interval is 145 years with variations of several decades. The smaller the standard
deviation of the average recurrence interval, the more specific could be the long
term prediction of a future mainshock.
example_title: earthquakes
- text: " A typical feed-forward neural field algorithm. Spatiotemporal coordinates\
\ are fed into a neural network that predicts values in the reconstructed domain.\
\ Then, this domain is mapped to the sensor domain where sensor measurements are\
\ available as supervision. Class and Section Problems Addressed Generalization\
\ (Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid\
\ Representations (Section 3) Computation & memory efficiency, representation\
\ capacity, editability: Forward Maps (Section 4) Inverse problems Network Architecture\
\ (Section 5) Spectral bias, integration & derivatives. Manipulating Neural Fields\
\ (Section 6) Edit ability, constraints, regularization. Table 2: The five classes\
\ of techniques in the neural field toolbox each addresses problems that arise\
\ in learning, inference, and control. (Section 3). We can supervise reconstruction\
\ via differentiable forward maps that transform Or project our domain (e.g, 3D\
\ reconstruction via 2D images; Section 4) With appropriate network architecture\
\ choices, we can overcome neural network spectral biases (blurriness) and efficiently\
\ compute derivatives and integrals (Section 5). Finally, we can manipulate neural\
\ fields to add constraints and regularizations, and to achieve editable representations\
\ (Section 6). Collectively, these classes constitute a 'toolbox' of techniques\
\ to help solve problems with neural fields There are three components in a conditional\
\ neural field: (1) An encoder or inference function \u20AC that outputs the conditioning\
\ latent variable 2 given an observation 0 E(0) =2. 2 is typically a low-dimensional\
\ vector, and is often referred to aS a latent code Or feature code_ (2) A mapping\
\ function 4 between Z and neural field parameters O: Y(z) = O; (3) The neural\
\ field itself $. The encoder \u20AC finds the most probable z given the observations\
\ O: argmaxz P(2/0). The decoder maximizes the inverse conditional probability\
\ to find the most probable 0 given Z: arg- max P(Olz). We discuss different encoding\
\ schemes with different optimality guarantees (Section 2.1.1), both global and\
\ local conditioning (Section 2.1.2), and different mapping functions Y (Section\
\ 2.1.3) 2. Generalization Suppose we wish to estimate a plausible 3D surface\
\ shape given a partial or noisy point cloud. We need a suitable prior over the\
\ sur- face in its reconstruction domain to generalize to the partial observations.\
\ A neural network expresses a prior via the function space of its architecture\
\ and parameters 0, and generalization is influenced by the inductive bias of\
\ this function space (Section 5)."
example_title: scientific paper
- text: 'Is a else or outside the cob and tree written being of early client rope
and you have is for good reasons. On to the ocean in Orange for time. By''s the
aggregate we can bed it yet. Why this please pick up on a sort is do and also
M Getoi''s nerocos and do rain become you to let so is his brother is made in
use and Mjulia''s''s the lay major is aging Masastup coin present sea only of
Oosii rooms set to you We do er do we easy this private oliiishs lonthen might
be okay. Good afternoon everybody. Welcome to this lecture of Computational Statistics.
As you can see, I''m not socially my name is Michael Zelinger. I''m one of the
task for this class and you might have already seen me in the first lecture where
I made a quick appearance. I''m also going to give the tortillas in the last third
of this course. So to give you a little bit about me, I''m a old student here
with better Bulman and my research centres on casual inference applied to biomedical
disasters, so that could be genomics or that could be hospital data. If any of
you is interested in writing a bachelor thesis, a semester paper may be mastathesis
about this topic feel for reach out to me. you have my name on models and my email
address you can find in the directory I''d Be very happy to talk about it. you
do not need to be sure about it, we can just have a chat. So with that said, let''s
get on with the lecture. There''s an exciting topic today I''m going to start
by sharing some slides with you and later on during the lecture we''ll move to
the paper. So bear with me for a few seconds. Well, the projector is starting
up. Okay, so let''s get started. Today''s topic is a very important one. It''s
about a technique which really forms one of the fundamentals of data science,
machine learning, and any sort of modern statistics. It''s called cross validation.
I know you really want to understand this topic I Want you to understand this
and frankly, nobody''s gonna leave Professor Mineshousen''s class without understanding
cross validation. So to set the stage for this, I Want to introduce you to the
validation problem in computational statistics. So the problem is the following:
You trained a model on available data. You fitted your model, but you know the
training data you got could always have been different and some data from the
environment. Maybe it''s a random process. You do not really know what it is,
but you know that somebody else who gets a different batch of data from the same
environment they would get slightly different training data and you do not care
that your method performs as well. On this training data. you want to to perform
well on other data that you have not seen other data from the same environment.
So in other words, the validation problem is you want to quantify the performance
of your model on data that you have not seen. So how is this even possible? How
could you possibly measure the performance on data that you do not know The solution
to? This is the following realization is that given that you have a bunch of data,
you were in charge. You get to control how much that your model sees. It works
in the following way: You can hide data firms model. Let''s say you have a training
data set which is a bunch of doubtless so X eyes are the features those are typically
hide and national vector. It''s got more than one dimension for sure. And the
why why eyes. Those are the labels for supervised learning. As you''ve seen before,
it''s the same set up as we have in regression. And so you have this training
data and now you choose that you only use some of those data to fit your model.
You''re not going to use everything, you only use some of it the other part you
hide from your model. And then you can use this hidden data to do validation from
the point of you of your model. This hidden data is complete by unseen. In other
words, we solve our problem of validation.'
example_title: transcribed audio - lecture
- text: "Transformer-based models have shown to be very useful for many NLP tasks.\
\ However, a major limitation of transformers-based models is its O(n^2)O(n 2)\
\ time & memory complexity (where nn is sequence length). Hence, it's computationally\
\ very expensive to apply transformer-based models on long sequences n > 512n>512.\
\ Several recent papers, e.g. Longformer, Performer, Reformer, Clustered attention\
\ try to remedy this problem by approximating the full attention matrix. You can\
\ checkout \U0001F917's recent blog post in case you are unfamiliar with these\
\ models.\nBigBird (introduced in paper) is one of such recent models to address\
\ this issue. BigBird relies on block sparse attention instead of normal attention\
\ (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a\
\ much lower computational cost compared to BERT. It has achieved SOTA on various\
\ tasks involving very long sequences such as long documents summarization, question-answering\
\ with long contexts.\nBigBird RoBERTa-like model is now available in \U0001F917\
Transformers. The goal of this post is to give the reader an in-depth understanding\
\ of big bird implementation & ease one's life in using BigBird with \U0001F917\
Transformers. But, before going into more depth, it is important to remember that\
\ the BigBird's attention is an approximation of BERT's full attention and therefore\
\ does not strive to be better than BERT's full attention, but rather to be more\
\ efficient. It simply allows to apply transformer-based models to much longer\
\ sequences since BERT's quadratic memory requirement quickly becomes unbearable.\
\ Simply put, if we would have \u221E compute & \u221E time, BERT's attention\
\ would be preferred over block sparse attention (which we are going to discuss\
\ in this post).\nIf you wonder why we need more compute when working with longer\
\ sequences, this blog post is just right for you!\nSome of the main questions\
\ one might have when working with standard BERT-like attention include:\nDo all\
\ tokens really have to attend to all other tokens? Why not compute attention\
\ only over important tokens? How to decide what tokens are important? How to\
\ attend to just a few tokens in a very efficient way? In this blog post, we will\
\ try to answer those questions.\nWhat tokens should be attended to? We will give\
\ a practical example of how attention works by considering the sentence 'BigBird\
\ is now available in HuggingFace for extractive question answering'. In BERT-like\
\ attention, every word would simply attend to all other tokens.\nLet's think\
\ about a sensible choice of key tokens that a queried token actually only should\
\ attend to by writing some pseudo-code. Will will assume that the token available\
\ is queried and build a sensible list of key tokens to attend to.\n>>> # let's\
\ consider following sentence as an example >>> example = ['BigBird', 'is', 'now',\
\ 'available', 'in', 'HuggingFace', 'for', 'extractive', 'question', 'answering']\n\
>>> # further let's assume, we're trying to understand the representation of 'available'\
\ i.e. >>> query_token = 'available' >>> # We will initialize an empty `set` and\
\ fill up the tokens of our interest as we proceed in this section. >>> key_tokens\
\ = [] # => currently 'available' token doesn't have anything to attend Nearby\
\ tokens should be important because, in a sentence (sequence of words), the current\
\ word is highly dependent on neighboring past & future tokens. This intuition\
\ is the idea behind the concept of sliding attention."
example_title: bigbird blog intro
- text: "To be fair, you have to have a very high IQ to understand Rick and Morty.\
\ The humour is extremely subtle, and without a solid grasp of theoretical physics\
\ most of the jokes will go over a typical viewer's head. There's also Rick's\
\ nihilistic outlook, which is deftly woven into his characterisation- his personal\
\ philosophy draws heavily from Narodnaya Volya literature, for instance. The\
\ fans understand this stuff; they have the intellectual capacity to truly appreciate\
\ the depths of these jokes, to realise that they're not just funny- they say\
\ something deep about LIFE. As a consequence people who dislike Rick & Morty\
\ truly ARE idiots- of course they wouldn't appreciate, for instance, the humour\
\ in Rick's existential catchphrase 'Wubba Lubba Dub Dub,' which itself is a cryptic\
\ reference to Turgenev's Russian epic Fathers and Sons. I'm smirking right now\
\ just imagining one of those addlepated simpletons scratching their heads in\
\ confusion as Dan Harmon's genius wit unfolds itself on their television screens.\
\ What fools.. how I pity them. \U0001F602\nAnd yes, by the way, i DO have a Rick\
\ & Morty tattoo. And no, you cannot see it. It's for the ladies' eyes only- and\
\ even then they have to demonstrate that they're within 5 IQ points of my own\
\ (preferably lower) beforehand. Nothin personnel kid \U0001F60E"
example_title: Richard & Mortimer
- text: "The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct."
example_title: eiffel
parameters:
max_length: 64
min_length: 8
no_repeat_ngram_size: 3
early_stopping: true
repetition_penalty: 3.5
encoder_no_repeat_ngram_size: 4
num_beams: 3
model-index:
- name: pszemraj/long-t5-tglobal-base-16384-book-summary
results:
- task:
type: summarization
name: Summarization
dataset:
name: kmfoda/booksum
type: kmfoda/booksum
config: kmfoda--booksum
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 36.4085
verified: true
- name: ROUGE-2
type: rouge
value: 6.0646
verified: true
- name: ROUGE-L
type: rouge
value: 16.7209
verified: true
- name: ROUGE-LSUM
type: rouge
value: 33.3405
verified: true
- name: loss
type: loss
value: .nan
verified: true
- name: gen_len
type: gen_len
value: 252.8099
verified: true
- task:
type: summarization
name: Summarization
dataset:
name: samsum
type: samsum
config: samsum
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 30.9047
verified: true
- name: ROUGE-2
type: rouge
value: 7.4715
verified: true
- name: ROUGE-L
type: rouge
value: 22.3962
verified: true
- name: ROUGE-LSUM
type: rouge
value: 26.9094
verified: true
- name: loss
type: loss
value: .nan
verified: true
- name: gen_len
type: gen_len
value: 46.7973
verified: true
- task:
type: summarization
name: Summarization
dataset:
name: cnn_dailymail
type: cnn_dailymail
config: 3.0.0
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 30.5942
verified: true
- name: ROUGE-2
type: rouge
value: 7.252
verified: true
- name: ROUGE-L
type: rouge
value: 17.7156
verified: true
- name: ROUGE-LSUM
type: rouge
value: 27.2881
verified: true
- name: loss
type: loss
value: .nan
verified: true
- name: gen_len
type: gen_len
value: 125.2507
verified: true
- task:
type: summarization
name: Summarization
dataset:
name: xsum
type: xsum
config: default
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 20.3648
verified: true
- name: ROUGE-2
type: rouge
value: 3.4126
verified: true
- name: ROUGE-L
type: rouge
value: 13.6168
verified: true
- name: ROUGE-LSUM
type: rouge
value: 15.8313
verified: true
- name: loss
type: loss
value: .nan
verified: true
- name: gen_len
type: gen_len
value: 82.2177
verified: true
- task:
type: summarization
name: Summarization
dataset:
name: billsum
type: billsum
config: default
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 39.6378
verified: true
- name: ROUGE-2
type: rouge
value: 13.0017
verified: true
- name: ROUGE-L
type: rouge
value: 23.0255
verified: true
- name: ROUGE-LSUM
type: rouge
value: 32.9943
verified: true
- name: loss
type: loss
value: 1.9428048133850098
verified: true
- name: gen_len
type: gen_len
value: 162.3588
verified: true
- task:
type: summarization
name: Summarization
dataset:
name: big_patent
type: big_patent
config: y
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 34.7641
verified: true
- name: ROUGE-2
type: rouge
value: 7.8744
verified: true
- name: ROUGE-L
type: rouge
value: 19.9826
verified: true
- name: ROUGE-LSUM
type: rouge
value: 29.208
verified: true
- name: loss
type: loss
value: 2.8316469192504883
verified: true
- name: gen_len
type: gen_len
value: 132.7475
verified: true
- task:
type: summarization
name: Summarization
dataset:
name: launch/gov_report
type: launch/gov_report
config: plain_text
split: validation
metrics:
- name: ROUGE-1
type: rouge
value: 37.9246
verified: true
- name: ROUGE-2
type: rouge
value: 8.5837
verified: true
- name: ROUGE-L
type: rouge
value: 18.0274
verified: true
- name: ROUGE-LSUM
type: rouge
value: 34.0816
verified: true
- name: loss
type: loss
value: 2.56695818901062
verified: true
- name: gen_len
type: gen_len
value: 220.3747
verified: true
- task:
type: summarization
name: Summarization
dataset:
name: launch/gov_report
type: launch/gov_report
config: plain_text
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 37.4438
verified: true
- name: ROUGE-2
type: rouge
value: 8.2907
verified: true
- name: ROUGE-L
type: rouge
value: 17.6893
verified: true
- name: ROUGE-LSUM
type: rouge
value: 33.7141
verified: true
- name: loss
type: loss
value: 2.5776000022888184
verified: true
- name: gen_len
type: gen_len
value: 214.9692
verified: true
---
# long-t5-tglobal-base-16384 + BookSum
<a href="https://colab.research.google.com/gist/pszemraj/d9a0495861776168fd5cdcd7731bc4ee/example-long-t5-tglobal-base-16384-book-summary.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
Summarize long text and get a SparkNotes-esque summary of arbitrary topics!
- generalizes reasonably well to academic & narrative text.
- A simple example/use case on ASR is [here](https://longt5-booksum-example.netlify.app/).
- Example notebook in Colab (_click on the icon above_).
## Cheeky Proof-of-Concept
A summary of the [infamous navy seals copypasta](https://knowyourmeme.com/memes/navy-seal-copypasta):
> The narrator tells us that he's graduated from the Navy seals and has been involved in many secret raids. He's also one of the best snipers in the entire U.S. military. He promises to "wipe you out with precision" when they meet again.
* * *
**Contents**
<!-- TOC -->
- [Model description](#model-description)
- [How-To in Python](#how-to-in-python)
- [Intended uses & limitations](#intended-uses--limitations)
- [Training and evaluation data](#training-and-evaluation-data)
- [FAQ](#faq)
- [How to run inference over a very long (30k+ tokens) document in batches?](#how-to-run-inference-over-a-very-long-30k-tokens-document-in-batches)
- [How to fine-tune further?](#how-to-fine-tune-further)
- [Are there simpler ways to run this?](#are-there-simpler-ways-to-run-this)
- [Training procedure](#training-procedure)
- [Updates:](#updates)
- [Training hyperparameters](#training-hyperparameters)
- [Framework versions](#framework-versions)
- [Citation info](#citation-info)
<!-- /TOC -->
* * *
## Model description
A fine-tuned version of [google/long-t5-tglobal-base](https://huggingface.co/google/long-t5-tglobal-base) on the `kmfoda/booksum` dataset:
- 30+ epochs of fine-tuning from the base model on V100/A100 GPUs
- Training used 16384 token input / 1024 max output
Read the paper by Guo et al. here: [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/pdf/2112.07916.pdf)
## How-To in Python
Install/update transformers `pip install -U transformers`
Summarize text with pipeline:
```python
import torch
from transformers import pipeline
summarizer = pipeline(
"summarization",
"pszemraj/long-t5-tglobal-base-16384-book-summary",
device=0 if torch.cuda.is_available() else -1,
)
long_text = "Here is a lot of text I don't want to read. Replace me"
result = summarizer(long_text)
print(result[0]["summary_text"])
```
Pass [other parameters related to beam search textgen](https://huggingface.co/blog/how-to-generate) when calling `summarizer` to get even higher quality results.
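For example, continuing from the `summarizer` pipeline above (the generation settings below are illustrative choices, not tuned defaults):
```python
# Illustrative beam-search settings; these are forwarded to model.generate()
result = summarizer(
    long_text,
    num_beams=4,
    no_repeat_ngram_size=3,
    early_stopping=True,
    max_length=256,
    min_length=8,
)
print(result[0]["summary_text"])
```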
## Intended uses & limitations
- The current checkpoint is fairly well converged but will be updated if further improvements can be made.
- Compare performance to [LED-base](https://huggingface.co/pszemraj/led-base-book-summary) trained on the same dataset (API gen parameters are the same).
- while this model seems to improve upon factual consistency, **do not take summaries to be foolproof and check things that seem odd**.
## Training and evaluation data
`kmfoda/booksum` dataset on HuggingFace - read [the original paper here](https://arxiv.org/abs/2105.08209). Summaries longer than 1024 LongT5 tokens were filtered out to prevent the model from learning to generate "partial" summaries.
* * *
## FAQ
### How to run inference over a very long (30k+ tokens) document in batches?
See `summarize.py` in [the code for my hf space Document Summarization](https://huggingface.co/spaces/pszemraj/document-summarization/blob/main/summarize.py) :)
You can also use the same code to split a document into batches of 4096, etc., and run over those with the model. This is useful in situations where CUDA memory is limited.
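A minimal sketch of that idea (simplified, and not the exact logic of `summarize.py`; the 4096-token chunk size is only an example, and the naive split below ignores sentence boundaries):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "pszemraj/long-t5-tglobal-base-16384-book-summary"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
model.to("cuda" if torch.cuda.is_available() else "cpu")

def summarize_in_chunks(text, chunk_size=4096):
    """Split `text` into fixed-size token chunks and summarize each chunk independently."""
    token_ids = tokenizer(text, truncation=False).input_ids
    summaries = []
    for start in range(0, len(token_ids), chunk_size):
        chunk_text = tokenizer.decode(token_ids[start : start + chunk_size], skip_special_tokens=True)
        inputs = tokenizer(chunk_text, return_tensors="pt", truncation=True, max_length=chunk_size).to(model.device)
        output_ids = model.generate(**inputs, max_length=512, num_beams=4)
        summaries.append(tokenizer.decode(output_ids[0], skip_special_tokens=True))
    return summaries
```
The per-chunk summaries can then be concatenated, or summarized again in a second pass.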
### How to fine-tune further?
See [train with a script](https://huggingface.co/docs/transformers/run_scripts) and [the summarization scripts](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization).
This model was originally tuned on Google Colab with a heavily modified variant of the [longformer training notebook](https://github.com/patrickvonplaten/notebooks/blob/master/Fine_tune_Longformer_Encoder_Decoder_(LED)_for_Summarization_on_pubmed.ipynb), key enabler being deepspeed. You can try this as an alternate route to fine-tuning the model without using the command line.
### Are there simpler ways to run this?
To make this simpler, I created a Python package utility. It's called [textsum](https://github.com/pszemraj/textsum), and you can use it to load models and summarize things in a few lines of code.
```sh
pip install textsum
```
Use `textsum` in python with this model:
```python
from textsum.summarize import Summarizer
summarizer = Summarizer(
model_name_or_path="pszemraj/long-t5-tglobal-base-16384-book-summary"
)
long_string = "This is a long string of text that will be summarized."
out_str = summarizer.summarize_string(long_string)
print(f"summary: {out_str}")
```
This package provides easy-to-use interfaces for applying summarization models to text documents of arbitrary length. Currently implemented interfaces include a Python API, a CLI, and a shareable demo application.
For details, explanations, and documentation, see the README (_linked above_) or the [wiki](https://github.com/pszemraj/textsum/wiki).
* * *
## Training procedure
### Updates:
- July 22, 2022: updated to a fairly converged checkpoint
- July 3, 2022: Added a new version with several epochs of additional general training that is more performant.
### Training hyperparameters
_NOTE: early checkpoints of this model were trained on a "smaller" subsection of the dataset as it was filtered for summaries of **1024 characters**. This was subsequently caught and adjusted to **1024 tokens** and then trained further for 10+ epochs._
The following hyperparameters were used during the **most recent** training round\*:
- learning_rate: 0.0005
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 128
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 2
\* Prior training sessions used roughly similar parameters; multiple sessions were required as this takes eons to train
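For orientation only, the settings above map roughly onto the standard `Seq2SeqTrainingArguments` API as sketched below; this is an approximation, not the script actually used (which was a modified notebook with DeepSpeed), and `output_dir` is a placeholder:
```python
from transformers import Seq2SeqTrainingArguments

# Approximate translation of the hyperparameters listed above
training_args = Seq2SeqTrainingArguments(
    output_dir="./long-t5-booksum",  # placeholder path
    learning_rate=5e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=128,
    lr_scheduler_type="cosine",
    warmup_ratio=0.01,
    num_train_epochs=2,
    seed=42,
)
```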
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
## Citation info
If you find `pszemraj/long-t5-tglobal-base-16384-book-summary` useful in your work, please consider citing this model :)
```bibtex
@misc {peter_szemraj_2022,
    author = { {Peter Szemraj} },
    title = { long-t5-tglobal-base-16384-book-summary (Revision 4b12bce) },
    year = 2022,
    url = { https://huggingface.co/pszemraj/long-t5-tglobal-base-16384-book-summary },
    doi = { 10.57967/hf/0100 },
    publisher = { Hugging Face }
}
```
| 27,001 | [
[
-0.02740478515625,
-0.048248291015625,
0.0301361083984375,
0.0171966552734375,
-0.0217132568359375,
-0.00830078125,
-0.0271759033203125,
-0.03363037109375,
0.01116180419921875,
0.0257568359375,
-0.0272979736328125,
-0.040679931640625,
-0.053131103515625,
0.0... |
facebook/vit-mae-huge | 2023-06-13T19:43:24.000Z | [
"transformers",
"pytorch",
"tf",
"vit_mae",
"pretraining",
"vision",
"dataset:imagenet-1k",
"arxiv:2111.06377",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | facebook | null | null | facebook/vit-mae-huge | 3 | 3,196 | transformers | 2022-03-02T23:29:05 | ---
license: apache-2.0
tags:
- vision
datasets:
- imagenet-1k
---
# Vision Transformer (huge-sized model) pre-trained with MAE
Vision Transformer (ViT) model pre-trained using the MAE method. It was introduced in the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick and first released in [this repository](https://github.com/facebookresearch/mae).
Disclaimer: The team releasing MAE did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like). Images are presented to the model as a sequence of fixed-size patches.
During pre-training, one randomly masks out a high portion (75%) of the image patches. First, the encoder is used to encode the visual patches. Next, a learnable (shared) mask token is added at the positions of the masked patches. The decoder takes the encoded visual patches and mask tokens as input and reconstructs raw pixel values for the masked positions.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder.
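As a minimal sketch of that idea (the linear head below is hypothetical and untrained; `mask_ratio=0.0` is passed so the encoder sees every patch rather than a random 25%):
```python
import torch
import requests
from PIL import Image
from torch import nn
from transformers import AutoImageProcessor, ViTMAEModel

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("facebook/vit-mae-huge")
# mask_ratio=0.0 keeps all patches visible, so the encoder acts as a plain feature extractor
encoder = ViTMAEModel.from_pretrained("facebook/vit-mae-huge", mask_ratio=0.0)

num_labels = 10  # hypothetical downstream task; this head would need to be trained
classifier = nn.Linear(encoder.config.hidden_size, num_labels)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    hidden_states = encoder(**inputs).last_hidden_state  # (batch, 1 + num_patches, hidden)
logits = classifier(hidden_states[:, 0])  # classify from the [CLS] token representation
```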
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=facebook/vit-mae) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import AutoImageProcessor, ViTMAEForPreTraining
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained('facebook/vit-mae-huge')
model = ViTMAEForPreTraining.from_pretrained('facebook/vit-mae-huge')
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
loss = outputs.loss
mask = outputs.mask
ids_restore = outputs.ids_restore
```
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2111-06377,
author = {Kaiming He and
Xinlei Chen and
Saining Xie and
Yanghao Li and
Piotr Doll{\'{a}}r and
Ross B. Girshick},
title = {Masked Autoencoders Are Scalable Vision Learners},
journal = {CoRR},
volume = {abs/2111.06377},
year = {2021},
url = {https://arxiv.org/abs/2111.06377},
eprinttype = {arXiv},
eprint = {2111.06377},
timestamp = {Tue, 16 Nov 2021 12:12:31 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2111-06377.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | 2,967 | [
[
-0.04766845703125,
-0.035003662109375,
0.0018491744995117188,
0.0137176513671875,
-0.019439697265625,
-0.006793975830078125,
0.0031299591064453125,
-0.041290283203125,
0.038818359375,
0.03155517578125,
-0.03985595703125,
-0.01885986328125,
-0.06591796875,
-0... |
TheBloke/Llama-2-70B-Chat-fp16 | 2023-08-06T10:02:48.000Z | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-2",
"en",
"license:other",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | TheBloke | null | null | TheBloke/Llama-2-70B-Chat-fp16 | 43 | 3,192 | transformers | 2023-07-19T02:21:26 | ---
inference: false
language:
- en
license: other
model_type: llama
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Meta's Llama 2 70B Chat fp16
These files are fp16 pytorch model files for [Meta's Llama 2 70B Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf).
They were produced by downloading the PTH files from Meta, and then converting to HF format using the latest Transformers 4.32.0.dev0, from Git, with the Llama 2 PR included: https://github.com/huggingface/transformers/pull/24891.
Command to convert was:
```
python3 /workspace/venv/pytorch2/lib/python3.10/site-packages/transformers/models/llama/convert_llama_weights_to_hf.py --input_dir /workspace/git/llama/download --model_size 70B --output_dir /workspace/process/llama-2-70b-chat/source --safe_serialization true
```
The files were saved in Safetensors format.
I am uploading this repo because I initially tried to create GPTQs using the [Meta Llama 2 70B Chat HF repo](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf), but got strange errors that suggested the weights were not correct. But converting from the PTH files using the latest `convert_llama_weights_to_hf.py` script worked fine.
Many thanks to William Beauchamp from [Chai](https://chai-research.com/) for providing the hardware for these quantisations!
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ)
* [Original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)
* [My fp16 conversion of the unquantised PTH model files](https://huggingface.co/TheBloke/Llama-2-70B-chat-fp16)
## Prompt template: Llama-2-Chat
```
System: You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
User: {prompt}
Assistant:
```
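A minimal generation sketch using this template (not an official example; it assumes `accelerate` is installed and that enough GPU memory is available to shard the roughly 140GB of fp16 weights via `device_map="auto"`; the prompt and sampling settings are placeholders):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "TheBloke/Llama-2-70B-Chat-fp16"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",  # shard the fp16 weights across available GPUs
)

system = "You are a helpful, respectful and honest assistant."  # placeholder system message
prompt = "Tell me about AI"  # placeholder user message
text = f"System: {system}\nUser: {prompt}\nAssistant:"

inputs = tokenizer(text, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```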
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex , Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost , Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius , Imad Khwaja, Pierre Kircher, terasurfer , Asp the Wyvern, John Villwock, theTransient, zynix , Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Meta's Llama 2 70B Chat
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 70B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models - 70B -- use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
| 14,206 | [
[
-0.0222625732421875,
-0.055450439453125,
0.019622802734375,
0.0212860107421875,
-0.03387451171875,
0.00498199462890625,
-0.0037479400634765625,
-0.05145263671875,
0.0241546630859375,
0.0258941650390625,
-0.052093505859375,
-0.0273284912109375,
-0.045562744140625... |
setu4993/smaller-LaBSE | 2023-10-19T06:24:02.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"feature-extraction",
"sentence_embedding",
"multilingual",
"google",
"sentence-similarity",
"labse",
"ar",
"de",
"en",
"es",
"fr",
"it",
"ja",
"ko",
"nl",
"pl",
"pt",
"ru",
"th",
"tr",
"zh",
"datase... | sentence-similarity | setu4993 | null | null | setu4993/smaller-LaBSE | 12 | 3,191 | transformers | 2022-03-02T23:29:05 | ---
pipeline_tag: sentence-similarity
language:
- ar
- de
- en
- es
- fr
- it
- ja
- ko
- nl
- pl
- pt
- ru
- th
- tr
- zh
tags:
- bert
- sentence_embedding
- multilingual
- google
- sentence-similarity
- labse
license: apache-2.0
datasets:
- CommonCrawl
- Wikipedia
---
# LaBSE
## Model description
Smaller Language-agnostic BERT Sentence Encoder (LaBSE) is a BERT-based model distilled from the [original LaBSE model](https://huggingface.co/setu4993/LaBSE) to 15 languages (from the original 109 languages) using the techniques described in the paper ['Load What You Need: Smaller Versions of Multilingual BERT'](https://arxiv.org/abs/2010.05609) by [Ukjae Jeong](https://github.com/jeongukjae/).
- Model: [HuggingFace's model hub](https://huggingface.co/setu4993/smaller-LaBSE).
- Original model: [TensorFlow Hub](https://tfhub.dev/jeongukjae/smaller_LaBSE_15lang/1).
- Distillation source: [GitHub](https://github.com/jeongukjae/smaller-labse).
- Conversion from TensorFlow to PyTorch: [GitHub](https://github.com/setu4993/convert-labse-tf-pt).
## Usage
Using the model:
```python
import torch
from transformers import BertModel, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("setu4993/smaller-LaBSE")
model = BertModel.from_pretrained("setu4993/smaller-LaBSE")
model = model.eval()
english_sentences = [
"dog",
"Puppies are nice.",
"I enjoy taking long walks along the beach with my dog.",
]
english_inputs = tokenizer(english_sentences, return_tensors="pt", padding=True)
with torch.no_grad():
english_outputs = model(**english_inputs)
```
To get the sentence embeddings, use the pooler output:
```python
english_embeddings = english_outputs.pooler_output
```
Output for other languages:
```python
italian_sentences = [
"cane",
"I cuccioli sono carini.",
"Mi piace fare lunghe passeggiate lungo la spiaggia con il mio cane.",
]
japanese_sentences = ["犬", "子犬はいいです", "私は犬と一緒にビーチを散歩するのが好きです"]
italian_inputs = tokenizer(italian_sentences, return_tensors="pt", padding=True)
japanese_inputs = tokenizer(japanese_sentences, return_tensors="pt", padding=True)
with torch.no_grad():
italian_outputs = model(**italian_inputs)
japanese_outputs = model(**japanese_inputs)
italian_embeddings = italian_outputs.pooler_output
japanese_embeddings = japanese_outputs.pooler_output
```
For similarity between sentences, an L2-norm is recommended before calculating the similarity:
```python
import torch.nn.functional as F
def similarity(embeddings_1, embeddings_2):
normalized_embeddings_1 = F.normalize(embeddings_1, p=2)
normalized_embeddings_2 = F.normalize(embeddings_2, p=2)
return torch.matmul(
normalized_embeddings_1, normalized_embeddings_2.transpose(0, 1)
)
print(similarity(english_embeddings, italian_embeddings))
print(similarity(english_embeddings, japanese_embeddings))
print(similarity(italian_embeddings, japanese_embeddings))
```
## Details
Details about data, training, evaluation and performance metrics are available in the [original paper](https://arxiv.org/abs/2007.01852).
### BibTeX entry and citation info
```bibtex
@misc{feng2020languageagnostic,
title={Language-agnostic BERT Sentence Embedding},
author={Fangxiaoyu Feng and Yinfei Yang and Daniel Cer and Naveen Arivazhagan and Wei Wang},
year={2020},
eprint={2007.01852},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| 3,481 | [
[
-0.0218658447265625,
-0.054046630859375,
0.0261383056640625,
0.0156402587890625,
-0.0156402587890625,
-0.0185699462890625,
-0.033416748046875,
-0.01306915283203125,
0.0218505859375,
0.0022869110107421875,
-0.03131103515625,
-0.04345703125,
-0.04925537109375,
... |
staka/fugumt-en-ja | 2023-08-15T08:45:04.000Z | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"en",
"ja",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | staka | null | null | staka/fugumt-en-ja | 40 | 3,189 | transformers | 2022-05-08T04:23:57 | ---
license: cc-by-sa-4.0
language:
- en
- ja
tags:
- translation
---
# FuguMT
This is a translation model using Marian-NMT.
For more details, please see [my repository](https://github.com/s-taka/fugumt).
* source language: en
* target language: ja
### How to use
This model uses transformers and sentencepiece.
```python
!pip install transformers sentencepiece
```
You can use this model directly with a pipeline:
```python
from transformers import pipeline
fugu_translator = pipeline('translation', model='staka/fugumt-en-ja')
fugu_translator('This is a cat.')
```
If you want to translate multiple sentences, we recommend using [pySBD](https://github.com/nipunsadvilkar/pySBD).
```python
!pip install transformers sentencepiece pysbd
import pysbd
seg_en = pysbd.Segmenter(language="en", clean=False)
from transformers import pipeline
fugu_translator = pipeline('translation', model='staka/fugumt-en-ja')
txt = 'This is a cat. It is very cute.'
print(fugu_translator(seg_en.segment(txt)))
```
### Eval results
The results of the evaluation using [tatoeba](https://tatoeba.org/ja) (randomly selected 500 sentences) are as follows:
|source |target |BLEU(*1)|
|-------|-------|--------|
|en |ja |32.7 |
(*1) sacrebleu --tokenize ja-mecab | 1,266 | [
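As a rough sketch, the same scoring can be reproduced from Python, assuming `sacrebleu[ja]` is installed so the `ja-mecab` tokenizer is available; the sentence pairs below are placeholders, not the tatoeba evaluation data:
```python
import sacrebleu

# Placeholder hypothesis/reference pairs; the reported score used 500 random tatoeba sentences
hypotheses = ["これは猫です。", "とてもかわいいです。"]   # model translations
references = [["これは猫だ。", "とても可愛いです。"]]      # one reference stream, parallel to hypotheses

bleu = sacrebleu.corpus_bleu(hypotheses, references, tokenize="ja-mecab")
print(bleu.score)
```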
[
0.0122833251953125,
-0.0555419921875,
0.029296875,
0.05810546875,
-0.042388916015625,
-0.023651123046875,
-0.0186614990234375,
-0.005985260009765625,
0.0303955078125,
0.0418701171875,
-0.044036865234375,
-0.0197906494140625,
-0.033111572265625,
0.02934265136... |
patrickvonplaten/longformer2roberta-cnn_dailymail-fp16 | 2020-12-11T21:59:19.000Z | [
"transformers",
"pytorch",
"encoder_decoder",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | text2text-generation | patrickvonplaten | null | null | patrickvonplaten/longformer2roberta-cnn_dailymail-fp16 | 6 | 3,184 | transformers | 2022-03-02T23:29:05 | # Longformer2Roberta Summarization with 🤗 EncoderDecoder Framework
This model is a Longformer2Roberta model fine-tuned on summarization.
Longformer2Roberta is a `EncoderDecoderModel`, meaning that both the encoder is a `allenai/longformer-base-4096` model and the decoder is a `roberta-base` model. Leveraging the [EncoderDecoderFramework](https://huggingface.co/transformers/model_doc/encoderdecoder.html#encoder-decoder-models), the
two pretrained models can simply be loaded into the framework via:
```python
roberta2roberta = EncoderDecoderModel.from_encoder_decoder_pretrained("allenai/longformer-base-4096", "roberta-base")
```
The decoder of an `EncoderDecoder` model needs cross-attention layers and usually makes use of causal
masking for auto-regressive generation.
Thus, `longformer2roberta` is fine-tuned on the `CNN/Daily Mail` dataset and the resulting model
`longformer2roberta-cnn_dailymail-fp16` is uploaded here.
## Example
The model is by no means a state-of-the-art model, but nevertheless
produces reasonable summarization results. It was mainly fine-tuned
as a proof-of-concept for the 🤗 EncoderDecoder Framework.
The model can be used as follows:
```python
from transformers import LongformerTokenizer, EncoderDecoderModel
model = EncoderDecoderModel.from_pretrained("patrickvonplaten/longformer2roberta-cnn_dailymail-fp16")
tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
article = """(CNN)James Holmes made his introduction to the world in a Colorado cinema filled with spectators watching a midnight showing of the new Batman movie, "The Dark Knight Rises," in June 2012. The moment became one of the deadliest shootings in U.S. history. Holmes is accused of opening fire on the crowd, killing 12 people and injuring or maiming 70 others in Aurora, a suburb of Denver. Holmes appeared like a comic book character: He resembled the Joker, with red-orange hair, similar to the late actor Heath Ledger\'s portrayal of the villain in an earlier Batman movie, authorities said. But Holmes was hardly a cartoon. Authorities said he wore body armor and carried several guns, including an AR-15 rifle, with lots of ammo. He also wore a gas mask. Holmes says he was insane at the time of the shootings, and that is his legal defense and court plea: not guilty by reason of insanity. Prosecutors aren\'t swayed and will seek the death penalty. Opening statements in his trial are scheduled to begin Monday. Holmes admits to the shootings but says he was suffering "a psychotic episode" at the time, according to court papers filed in July 2013 by the state public defenders, Daniel King and Tamara A. Brady. Evidence "revealed thus far in the case supports the defense\'s position that Mr. Holmes suffers from a severe mental illness and was in the throes of a psychotic episode when he committed the acts that resulted in the tragic loss of life and injuries sustained by moviegoers on July 20, 2012," the public defenders wrote. Holmes no longer looks like a dazed Joker, as he did in his first appearance before a judge in 2012. He appeared dramatically different in January when jury selection began for his trial: 9,000 potential jurors were summoned for duty, described as one of the nation\'s largest jury calls. Holmes now has a cleaner look, with a mustache, button-down shirt and khaki pants. In January, he had a beard and eyeglasses. If this new image sounds like one of an academician, it may be because Holmes, now 27, once was one. Just before the shooting, Holmes was a doctoral student in neuroscience, and he was studying how the brain works, with his schooling funded by a U.S. government grant. Yet for all his learning, Holmes apparently lacked the capacity to command his own mind, according to the case against him. A jury will ultimately decide Holmes\' fate. That panel is made up of 12 jurors and 12 alternates. They are 19 women and five men, and almost all are white and middle-aged. The trial could last until autumn. When jury summonses were issued in January, each potential juror stood a 0.2% chance of being selected, District Attorney George Brauchler told the final jury this month. He described the approaching trial as "four to five months of a horrible roller coaster through the worst haunted house you can imagine." The jury will have to render verdicts on each of the 165 counts against Holmes, including murder and attempted murder charges. Meanwhile, victims and their relatives are challenging all media outlets "to stop the gratuitous use of the name and likeness of mass killers, thereby depriving violent individuals the media celebrity and media spotlight they so crave," the No Notoriety group says. They are joined by victims from eight other mass shootings in recent U.S. history. 
Raised in central coastal California and in San Diego, James Eagan Holmes is the son of a mathematician father noted for his work at the FICO firm that provides credit scores and a registered nurse mother, according to the U-T San Diego newspaper. Holmes also has a sister, Chris, a musician, who\'s five years younger, the newspaper said. His childhood classmates remember him as a clean-cut, bespectacled boy with an "exemplary" character who "never gave any trouble, and never got in trouble himself," The Salinas Californian reported. His family then moved down the California coast, where Holmes grew up in the San Diego-area neighborhood of Rancho Peñasquitos, which a neighbor described as "kind of like Mayberry," the San Diego newspaper said. Holmes attended Westview High School, which says its school district sits in "a primarily middle- to upper-middle-income residential community." There, Holmes ran cross-country, played soccer and later worked at a biotechnology internship at the Salk Institute and Miramar College, which attracts academically talented students. By then, his peers described him as standoffish and a bit of a wiseacre, the San Diego newspaper said. Holmes attended college fairly close to home, in a neighboring area known as Southern California\'s "inland empire" because it\'s more than an hour\'s drive from the coast, in a warm, low-desert climate. He entered the University of California, Riverside, in 2006 as a scholarship student. In 2008 he was a summer camp counselor for disadvantaged children, age 7 to 14, at Camp Max Straus, run by Jewish Big Brothers Big Sisters of Los Angeles. He graduated from UC Riverside in 2010 with the highest honors and a bachelor\'s degree in neuroscience. "Academically, he was at the top of the top," Chancellor Timothy P. White said. He seemed destined for even higher achievement. By 2011, he had enrolled as a doctoral student in the neuroscience program at the University of Colorado Anschutz Medical Campus in Aurora, the largest academic health center in the Rocky Mountain region. The doctoral in neuroscience program attended by Holmes focuses on how the brain works, with an emphasis on processing of information, behavior, learning and memory. Holmes was one of six pre-thesis Ph.D. students in the program who were awarded a neuroscience training grant from the National Institutes of Health. The grant rewards outstanding neuroscientists who will make major contributions to neurobiology. A syllabus that listed Holmes as a student at the medical school shows he was to have delivered a presentation about microRNA biomarkers. But Holmes struggled, and his own mental health took an ominous turn. In March 2012, he told a classmate he wanted to kill people, and that he would do so "when his life was over," court documents said. Holmes was "denied access to the school after June 12, 2012, after he made threats to a professor," according to court documents. About that time, Holmes was a patient of University of Colorado psychiatrist Lynne Fenton. Fenton was so concerned about Holmes\' behavior that she mentioned it to her colleagues, saying he could be a danger to others, CNN affiliate KMGH-TV reported, citing sources with knowledge of the investigation. Fenton\'s concerns surfaced in early June, sources told the Denver station. Holmes began to fantasize about killing "a lot of people" in early June, nearly six weeks before the shootings, the station reported, citing unidentified sources familiar with the investigation. 
Holmes\' psychiatrist contacted several members of a "behavioral evaluation and threat assessment" team to say Holmes could be a danger to others, the station reported. At issue was whether to order Holmes held for 72 hours to be evaluated by mental health professionals, the station reported. "Fenton made initial phone calls about engaging the BETA team" in "the first 10 days" of June, but it "never came together" because in the period Fenton was having conversations with team members, Holmes began the process of dropping out of school, a source told KMGH. Defense attorneys have rejected the prosecution\'s assertions that Holmes was barred from campus. Citing statements from the university, Holmes\' attorneys have argued that his access was revoked because that\'s normal procedure when a student drops enrollment. What caused this turn for the worse for Holmes has yet to be clearly detailed. In the months before the shooting, he bought four weapons and more than 6,000 rounds of ammunition, authorities said. Police said he also booby-trapped his third-floor apartment with explosives, but police weren\'t fooled. After Holmes was caught in the cinema parking lot immediately after the shooting, bomb technicians went to the apartment and neutralized the explosives. No one was injured at the apartment building. Nine minutes before Holmes went into the movie theater, he called a University of Colorado switchboard, public defender Brady has said in court. The number he called can be used to get in contact with faculty members during off hours, Brady said. Court documents have also revealed that investigators have obtained text messages that Holmes exchanged with someone before the shooting. That person was not named, and the content of the texts has not been made public. According to The New York Times, Holmes sent a text message to a fellow graduate student, a woman, about two weeks before the shooting. She asked if he had left Aurora yet, reported the newspaper, which didn\'t identify her. No, he had two months left on his lease, Holmes wrote back, according to the Times. He asked if she had heard of "dysphoric mania," a form of bipolar disorder marked by the highs of mania and the dark and sometimes paranoid delusions of major depression. The woman asked if the disorder could be managed with treatment. "It was," Holmes wrote her, according to the Times. But he warned she should stay away from him "because I am bad news," the newspaper reported. It was her last contact with Holmes. After the shooting, Holmes\' family issued a brief statement: "Our hearts go out to those who were involved in this tragedy and to the families and friends of those involved," they said, without giving any information about their son. Since then, prosecutors have refused to offer a plea deal to Holmes. For Holmes, "justice is death," said Brauchler, the district attorney. In December, Holmes\' parents, who will be attending the trial, issued another statement: They asked that their son\'s life be spared and that he be sent to an institution for mentally ill people for the rest of his life, if he\'s found not guilty by reason of insanity. "He is not a monster," Robert and Arlene Holmes wrote, saying the death penalty is "morally wrong, especially when the condemned is mentally ill." "He is a human being gripped by a severe mental illness," the parents said. The matter will be settled by the jury. CNN\'s Ana Cabrera and Sara Weisfeldt contributed to this report from Denver."""
input_ids = tokenizer(article, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
# should produce
# James Holmes, 27, is accused of opening fire on a Colorado theater.
# He was a doctoral student at University of Colorado.
# Holmes says he was suffering "a psychotic episode" at the time of the shooting.
# Prosecutors won't say whether Holmes was barred from campus.
```
Such an article has a length of > 2000 tokens, which means that it cannot be handled correctly by Bert or Roberta encoders.
## Training script:
**IMPORTANT**: In order for this code to work, make sure you checkout to the branch
[more_general_trainer_metric](https://github.com/huggingface/transformers/tree/more_general_trainer_metric), which slightly adapts
the `Trainer` for `EncoderDecoderModels` according to this PR: https://github.com/huggingface/transformers/pull/5840.
The following code shows the complete training script that was used to fine-tune `longformer2roberta-cnn_dailymail-fp16` for reproducibility. The training lasted ~90h on a standard GPU.
```python
#!/usr/bin/env python3
import nlp
import logging
from transformers import LongformerTokenizer, EncoderDecoderModel, Trainer, TrainingArguments
logging.basicConfig(level=logging.INFO)
model = EncoderDecoderModel.from_encoder_decoder_pretrained("allenai/longformer-base-4096", "roberta-base")
tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
# load train and validation data
train_dataset = nlp.load_dataset("cnn_dailymail", "3.0.0", split="train")
val_dataset = nlp.load_dataset("cnn_dailymail", "3.0.0", split="validation[:5%]")
# load rouge for validation
rouge = nlp.load_metric("rouge", experiment_id=0)
# enable gradient checkpointing for longformer encoder
model.encoder.config.gradient_checkpointing = True
# set decoding params
model.config.decoder_start_token_id = tokenizer.bos_token_id
model.config.eos_token_id = tokenizer.eos_token_id
model.config.max_length = 142
model.config.min_length = 56
model.config.no_repeat_ngram_size = 3
model.early_stopping = True
model.length_penalty = 2.0
model.num_beams = 4
encoder_length = 2048
decoder_length = 128
batch_size = 16
# map data correctly
def map_to_encoder_decoder_inputs(batch):
# Tokenizer will automatically set [BOS] <text> [EOS]
# cut off at Longformer at 2048
inputs = tokenizer(batch["article"], padding="max_length", truncation=True, max_length=encoder_length)
# force summarization <= 128
outputs = tokenizer(batch["highlights"], padding="max_length", truncation=True, max_length=decoder_length)
batch["input_ids"] = inputs.input_ids
batch["attention_mask"] = inputs.attention_mask
# set 128 tokens to global attention
batch["global_attention_mask"] = [[1 if i < 128 else 0 for i in range(sequence_length)] for sequence_length in len(inputs.input_ids) * [encoder_length]]
batch["decoder_input_ids"] = outputs.input_ids
batch["labels"] = outputs.input_ids.copy()
# mask loss for padding
batch["labels"] = [
[-100 if token == tokenizer.pad_token_id else token for token in labels] for labels in batch["labels"]
]
batch["decoder_attention_mask"] = outputs.attention_mask
assert all([len(x) == encoder_length for x in inputs.input_ids])
assert all([len(x) == decoder_length for x in outputs.input_ids])
return batch
def compute_metrics(pred):
labels_ids = pred.label_ids
pred_ids = pred.predictions
# all unnecessary tokens are removed
pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
labels_ids[labels_ids == -100] = tokenizer.eos_token_id
label_str = tokenizer.batch_decode(labels_ids, skip_special_tokens=True)
rouge_output = rouge.compute(predictions=pred_str, references=label_str, rouge_types=["rouge2"])["rouge2"].mid
return {
"rouge2_precision": round(rouge_output.precision, 4),
"rouge2_recall": round(rouge_output.recall, 4),
"rouge2_fmeasure": round(rouge_output.fmeasure, 4),
}
# make train dataset ready
train_dataset = train_dataset.map(
map_to_encoder_decoder_inputs, batched=True, batch_size=batch_size, remove_columns=["article", "highlights"],
)
train_dataset.set_format(
type="torch", columns=["input_ids", "attention_mask", "global_attention_mask", "decoder_input_ids", "decoder_attention_mask", "labels"],
)
# same for validation dataset
val_dataset = val_dataset.map(
map_to_encoder_decoder_inputs, batched=True, batch_size=batch_size, remove_columns=["article", "highlights"],
)
val_dataset.set_format(
type="torch", columns=["input_ids", "global_attention_mask", "attention_mask", "decoder_input_ids", "decoder_attention_mask", "labels"],
)
# set training arguments - these params are not really tuned, feel free to change
training_args = TrainingArguments(
output_dir="./",
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
predict_from_generate=True,
evaluate_during_training=True,
do_train=True,
do_eval=True,
logging_steps=1000,
save_steps=1000,
eval_steps=1000,
overwrite_output_dir=True,
warmup_steps=2000,
save_total_limit=3,
fp16=True,
)
# instantiate trainer
trainer = Trainer(
model=model,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=train_dataset,
eval_dataset=val_dataset,
)
# start training
trainer.train()
```
## Evaluation
The following script evaluates the model on the test set of
CNN/Daily Mail.
```python
#!/usr/bin/env python3
import nlp
import torch
from transformers import LongformerTokenizer, EncoderDecoderModel
tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = EncoderDecoderModel.from_pretrained("patrickvonplaten/longformer2roberta-cnn_dailymail-fp16")
model.to("cuda")
test_dataset = nlp.load_dataset("cnn_dailymail", "3.0.0", split="test")
batch_size = 32
encoder_length = 2048
decoder_length = 128
# map data correctly
def generate_summary(batch):
# Tokenizer will automatically set [BOS] <text> [EOS]
# cut off at the Longformer encoder length of 2048
inputs = tokenizer(batch["article"], padding="max_length", truncation=True, max_length=encoder_length, return_tensors="pt")
input_ids = inputs.input_ids.to("cuda")
attention_mask = inputs.attention_mask.to("cuda")
global_attention_mask = torch.zeros_like(attention_mask)
global_attention_mask[:, :decoder_length] = 1
outputs = model.generate(input_ids, attention_mask=attention_mask, global_attention_mask=global_attention_mask)
# all special tokens including will be removed
output_str = tokenizer.batch_decode(outputs, skip_special_tokens=True)
batch["pred"] = output_str
return batch
results = test_dataset.map(generate_summary, batched=True, batch_size=batch_size, remove_columns=["article"])
# load rouge for validation
rouge = nlp.load_metric("rouge")
pred_str = results["pred"]
label_str = results["highlights"]
rouge_output = rouge.compute(predictions=pred_str, references=label_str, rouge_types=["rouge2"])["rouge2"].mid
print(rouge_output)
```
The obtained results should be:
| - | Rouge2 - mid -precision | Rouge2 - mid - recall | Rouge2 - mid - fmeasure |
|----------|:-------------:|:------:|:------:|
| **CNN/Daily Mail** | 12.39 | 15.05 | **13.21** |
**Note** This model was trained to show how Longformer can be used as an Encoder model in a EncoderDecoder setup.
Better results are obtained for datasets of much longer inputs.
| 19,666 | [
[
-0.01186370849609375,
-0.05633544921875,
0.052886962890625,
0.00006788969039916992,
-0.009490966796875,
0.005214691162109375,
0.00146484375,
-0.03582763671875,
0.06756591796875,
0.0273590087890625,
-0.033538818359375,
0.0017547607421875,
-0.05908203125,
-0.0... |
sentence-transformers/bert-large-nli-stsb-mean-tokens | 2022-06-15T22:48:23.000Z | [
"sentence-transformers",
"pytorch",
"tf",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:1908.10084",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | sentence-similarity | sentence-transformers | null | null | sentence-transformers/bert-large-nli-stsb-mean-tokens | 1 | 3,184 | sentence-transformers | 2022-03-02T23:29:05 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
license: apache-2.0
---
**⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)**
# sentence-transformers/bert-large-nli-stsb-mean-tokens
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/bert-large-nli-stsb-mean-tokens')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/bert-large-nli-stsb-mean-tokens')
model = AutoModel.from_pretrained('sentence-transformers/bert-large-nli-stsb-mean-tokens')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/bert-large-nli-stsb-mean-tokens)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` | 3,981 | [
[
-0.018402099609375,
-0.05853271484375,
0.019927978515625,
0.031768798828125,
-0.03070068359375,
-0.03277587890625,
-0.0263214111328125,
-0.00846099853515625,
0.01910400390625,
0.0270233154296875,
-0.04071044921875,
-0.0301513671875,
-0.05401611328125,
0.0057... |
Hate-speech-CNERG/deoffxlmr-mono-tamil | 2021-09-25T13:59:19.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"ta",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Hate-speech-CNERG | null | null | Hate-speech-CNERG/deoffxlmr-mono-tamil | 0 | 3,182 | transformers | 2022-03-02T23:29:04 | ---
language: ta
license: apache-2.0
---
This model is used to detect **Offensive Content** in **Tamil Code-Mixed language**. The mono in the name refers to the monolingual setting, where the model is trained using only Tamil (pure and code-mixed) data. The weights are initialized from pretrained XLM-Roberta-Base and pretrained using Masked Language Modelling on the target dataset before fine-tuning using Cross-Entropy Loss.
This model is the best of several models trained for the **EACL 2021 Shared Task on Offensive Language Identification in Dravidian Languages**. A genetic-algorithm-based ensemble of test predictions achieved the highest weighted F1 score on the leaderboard (weighted F1 score on the held-out test set: this model - 0.76, ensemble - 0.78).
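A minimal usage sketch with the `transformers` pipeline (the example sentences are placeholders, and the returned label names depend on this model's config, e.g. generic `LABEL_*` ids unless a label mapping is provided):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Hate-speech-CNERG/deoffxlmr-mono-tamil",
)

# Placeholder Tamil / Tamil-English code-mixed inputs
texts = ["இந்த படம் மிகவும் நன்றாக இருந்தது", "intha padam romba mokka"]
print(classifier(texts))
```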
### For more details about our paper
Debjoy Saha, Naman Paharia, Debajit Chakraborty, Punyajoy Saha, Animesh Mukherjee. "[Hate-Alert@DravidianLangTech-EACL2021: Ensembling strategies for Transformer-based Offensive language Detection](https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38/)".
***Please cite our paper in any published work that uses any of these resources.***
~~~
@inproceedings{saha-etal-2021-hate,
title = "Hate-Alert@{D}ravidian{L}ang{T}ech-{EACL}2021: Ensembling strategies for Transformer-based Offensive language Detection",
author = "Saha, Debjoy and Paharia, Naman and Chakraborty, Debajit and Saha, Punyajoy and Mukherjee, Animesh",
booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages",
month = apr,
year = "2021",
address = "Kyiv",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38",
pages = "270--276",
abstract = "Social media often acts as breeding grounds for different forms of offensive content. For low resource languages like Tamil, the situation is more complex due to the poor performance of multilingual or language-specific models and lack of proper benchmark datasets. Based on this shared task {``}Offensive Language Identification in Dravidian Languages{''} at EACL 2021; we present an exhaustive exploration of different transformer models, We also provide a genetic algorithm technique for ensembling different models. Our ensembled models trained separately for each language secured the first position in Tamil, the second position in Kannada, and the first position in Malayalam sub-tasks. The models and codes are provided.",
}
~~~ | 2,503 | [
[
-0.023712158203125,
-0.06768798828125,
-0.0227508544921875,
0.00365447998046875,
-0.0262298583984375,
0.038482666015625,
-0.01959228515625,
-0.04376220703125,
0.0115814208984375,
0.0216522216796875,
-0.0295867919921875,
-0.036834716796875,
-0.059356689453125,
... |
aychang/roberta-base-imdb | 2021-05-20T14:25:56.000Z | [
"transformers",
"pytorch",
"jax",
"roberta",
"text-classification",
"en",
"dataset:imdb",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | aychang | null | null | aychang/roberta-base-imdb | 3 | 3,181 | transformers | 2022-03-02T23:29:05 | ---
language:
- en
thumbnail:
tags:
- text-classification
license: mit
datasets:
- imdb
metrics:
---
# IMDB Sentiment Task: roberta-base
## Model description
A simple base RoBERTa model trained on the "imdb" dataset.
## Intended uses & limitations
#### How to use
##### Transformers
```python
# Load model and tokenizer
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "aychang/roberta-base-imdb"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Use pipeline
from transformers import pipeline

nlp = pipeline("sentiment-analysis", model=model_name, tokenizer=model_name)
results = nlp(["I didn't really like it because it was so terrible.", "I love how easy it is to watch and get good results."])
```
##### AdaptNLP
```python
from adaptnlp import EasySequenceClassifier
model_name = "aychang/roberta-base-imdb"
texts = ["I didn't really like it because it was so terrible.", "I love how easy it is to watch and get good results."]
classifier = EasySequenceClassifier()
results = classifier.tag_text(text=texts, model_name_or_path=model_name, mini_batch_size=2)
```
#### Limitations and bias
This is a minimal language model trained on a benchmark dataset.
## Training data
IMDB https://huggingface.co/datasets/imdb
## Training procedure
#### Hardware
One V100
#### Hyperparameters and Training Args
```python
from transformers import TrainingArguments
training_args = TrainingArguments(
output_dir='./models',
overwrite_output_dir=False,
num_train_epochs=2,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
warmup_steps=500,
weight_decay=0.01,
evaluation_strategy="steps",
logging_dir='./logs',
fp16=False,
eval_steps=800,
save_steps=300000
)
```
## Eval results
```
{'epoch': 2.0,
'eval_accuracy': 0.94668,
'eval_f1': array([0.94603457, 0.94731017]),
'eval_loss': 0.2578844428062439,
'eval_precision': array([0.95762642, 0.93624502]),
'eval_recall': array([0.93472, 0.95864]),
'eval_runtime': 244.7522,
'eval_samples_per_second': 102.144}
```
| 2,147 | [
[
-0.034271240234375,
-0.034423828125,
0.01197052001953125,
0.0086669921875,
-0.0174713134765625,
-0.00823211669921875,
-0.004283905029296875,
0.006191253662109375,
0.016204833984375,
0.0304718017578125,
-0.05859375,
-0.04364013671875,
-0.056243896484375,
0.00... |
timm/convnext_small.fb_in22k_ft_in1k | 2023-03-31T22:34:46.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-22k",
"arxiv:2201.03545",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/convnext_small.fb_in22k_ft_in1k | 0 | 3,176 | timm | 2022-12-13T07:13:48 | ---
tags:
- image-classification
- timm
library_tag: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-22k
---
# Model card for convnext_small.fb_in22k_ft_in1k
A ConvNeXt image classification model. Pretrained on ImageNet-22k and fine-tuned on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 50.2
- GMACs: 8.7
- Activations (M): 21.6
- Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
- A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- **Original:** https://github.com/facebookresearch/ConvNeXt
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-22k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('convnext_small.fb_in22k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_small.fb_in22k_ft_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 96, 56, 56])
# torch.Size([1, 192, 28, 28])
# torch.Size([1, 384, 14, 14])
# torch.Size([1, 768, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_small.fb_in22k_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 768, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.
| model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
| [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
| [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
| [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
| [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
| [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
| [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
| [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
| [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
| [convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
| [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 |
| [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
| [convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
| [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
| [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
| [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
| [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
| [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
| [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
| [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
| [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
| [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
| [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
| [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
| [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
| [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
| [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
| [convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
| [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
| [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
| [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
| [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
| [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
| [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
| [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
| [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
| [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
| [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
| [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
| [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
| [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
| [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
## Citation
```bibtex
@article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 15,738 | [
[
-0.06756591796875,
-0.032318115234375,
-0.0034656524658203125,
0.03778076171875,
-0.032073974609375,
-0.0155487060546875,
-0.012939453125,
-0.035064697265625,
0.0648193359375,
0.016754150390625,
-0.044525146484375,
-0.04180908203125,
-0.05010986328125,
-0.00... |
timm/vit_base_patch8_224.augreg_in21k | 2023-05-06T00:00:08.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-21k",
"arxiv:2106.10270",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/vit_base_patch8_224.augreg_in21k | 0 | 3,175 | timm | 2022-12-22T07:22:55 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-21k
---
# Model card for vit_base_patch8_224.augreg_in21k
A Vision Transformer (ViT) image classification model. Trained on ImageNet-21k (with additional augmentation and regularization) in JAX by paper authors, ported to PyTorch by Ross Wightman.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 102.6
- GMACs: 66.9
- Activations (M): 65.7
- Image size: 224 x 224
- **Papers:**
- How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers: https://arxiv.org/abs/2106.10270
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-21k
- **Original:** https://github.com/google-research/vision_transformer
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_base_patch8_224.augreg_in21k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_base_patch8_224.augreg_in21k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 785, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{steiner2021augreg,
title={How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers},
author={Steiner, Andreas and Kolesnikov, Alexander and Zhai, Xiaohua and Wightman, Ross and Uszkoreit, Jakob and Beyer, Lucas},
journal={arXiv preprint arXiv:2106.10270},
year={2021}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 3,800 | [
[
-0.038726806640625,
-0.029815673828125,
-0.002613067626953125,
0.00725555419921875,
-0.027374267578125,
-0.0239105224609375,
-0.0229644775390625,
-0.036376953125,
0.0123443603515625,
0.0247955322265625,
-0.0377197265625,
-0.036224365234375,
-0.047515869140625,
... |
timm/mobilenetv2_140.ra_in1k | 2023-04-27T21:14:28.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2110.00476",
"arxiv:1801.04381",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/mobilenetv2_140.ra_in1k | 0 | 3,173 | timm | 2022-12-13T00:00:49 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for mobilenetv2_140.ra_in1k
A MobileNet-v2 image classification model. Trained on ImageNet-1k in `timm` using the recipe template described below.
Recipe details:
* RandAugment `RA` recipe. Inspired by and evolved from EfficientNet RandAugment recipes. Published as `B` recipe in [ResNet Strikes Back](https://arxiv.org/abs/2110.00476).
* RMSProp (TF 1.0 behaviour) optimizer, EMA weight averaging
* Step (exponential decay w/ staircase) LR schedule with warmup
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 6.1
- GMACs: 0.6
- Activations (M): 9.6
- Image size: 224 x 224
- **Papers:**
- MobileNetV2: Inverted Residuals and Linear Bottlenecks: https://arxiv.org/abs/1801.04381
- ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('mobilenetv2_140.ra_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'mobilenetv2_140.ra_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 24, 112, 112])
# torch.Size([1, 32, 56, 56])
# torch.Size([1, 48, 28, 28])
# torch.Size([1, 136, 14, 14])
# torch.Size([1, 448, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'mobilenetv2_140.ra_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1792, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{sandler2018mobilenetv2,
title={Mobilenetv2: Inverted residuals and linear bottlenecks},
author={Sandler, Mark and Howard, Andrew and Zhu, Menglong and Zhmoginov, Andrey and Chen, Liang-Chieh},
booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
pages={4510--4520},
year={2018}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@inproceedings{wightman2021resnet,
title={ResNet strikes back: An improved training procedure in timm},
author={Wightman, Ross and Touvron, Hugo and Jegou, Herve},
booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future}
}
```
| 4,749 | [
[
-0.0270538330078125,
-0.022247314453125,
-0.012664794921875,
0.0019779205322265625,
-0.0267181396484375,
-0.0263671875,
-0.005451202392578125,
-0.0282745361328125,
0.0219268798828125,
0.03594970703125,
-0.031585693359375,
-0.042083740234375,
-0.046295166015625,
... |
ultralyticsplus/yolov8s | 2023-01-21T19:43:15.000Z | [
"ultralytics",
"ultralyticsplus",
"yolov8",
"yolo",
"vision",
"object-detection",
"pytorch",
"model-index",
"has_space",
"region:us"
] | object-detection | ultralyticsplus | null | null | ultralyticsplus/yolov8s | 25 | 3,173 | ultralytics | 2023-01-12T10:15:46 |
---
tags:
- ultralyticsplus
- ultralytics
- yolov8
- yolo
- vision
- object-detection
- pytorch
library_name: ultralytics
library_version: 8.0.4
inference: false
model-index:
- name: ultralyticsplus/yolov8s
results:
- task:
type: object-detection
metrics:
- type: precision # since mAP is not available on hf.co/metrics
value: 0.449 # min: 0.0 - max: 1.0
name: mAP
---
### Supported Labels
```
['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush']
```
### How to use
- Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus):
```bash
pip install -U ultralyticsplus==0.0.14
```
- Load model and perform prediction:
```python
from ultralyticsplus import YOLO, render_result
# load model
model = YOLO('ultralyticsplus/yolov8s')
# set model parameters
model.overrides['conf'] = 0.25 # NMS confidence threshold
model.overrides['iou'] = 0.45 # NMS IoU threshold
model.overrides['agnostic_nms'] = False # NMS class-agnostic
model.overrides['max_det'] = 1000 # maximum number of detections per image
# set image
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model.predict(image)
# observe results
print(results[0].boxes)
render = render_result(model=model, image=image, result=results[0])
render.show()
```
| 2,147 | [
[
-0.0244293212890625,
-0.01959228515625,
0.024200439453125,
0.01346588134765625,
-0.030914306640625,
-0.0005841255187988281,
0.01284027099609375,
-0.01605224609375,
0.0227508544921875,
0.03094482421875,
-0.037567138671875,
-0.055419921875,
-0.0322265625,
0.01... |
fffrrt/ruGPT-3.5-13B-GPTQ | 2023-07-20T20:27:26.000Z | [
"transformers",
"gpt2",
"text-generation",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | fffrrt | null | null | fffrrt/ruGPT-3.5-13B-GPTQ | 18 | 3,170 | transformers | 2023-07-20T19:36:14 | GPTQ quantisation of https://huggingface.co/ai-forever/ruGPT-3.5-13B
Small perplexity test:
before quantization - 'mean_perplexity': 10.241
after quantization - 'mean_perplexity': 10.379
Data - RussianSuperGlue > DaNetQA/train.jsonl['passage']
As this is a hastily thrown together quant made with no prior experience in quants, use the https://huggingface.co/TheBloke version if he releases a quant for this model.
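A hypothetical loading sketch (not part of the original card): the card does not state which tool produced this quant, so the AutoGPTQ loader, file layout, and generation settings below are assumptions.
```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo = "fffrrt/ruGPT-3.5-13B-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(repo)
# Assumes the repo ships an AutoGPTQ-compatible quantize config; adjust if the layout differs.
model = AutoGPTQForCausalLM.from_quantized(repo, device="cuda:0")

prompt = "Искусственный интеллект - это"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```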
[
-0.0367431640625,
-0.0479736328125,
0.0308074951171875,
0.035369873046875,
-0.040924072265625,
-0.0165863037109375,
0.0178680419921875,
0.0025482177734375,
-0.0064544677734375,
0.0197296142578125,
-0.053924560546875,
-0.032012939453125,
-0.0243377685546875,
... |
laion/CLIP-convnext_base_w-laion2B-s13B-b82K-augreg | 2023-04-18T22:05:22.000Z | [
"open_clip",
"tensorboard",
"clip",
"zero-shot-image-classification",
"arxiv:2201.03545",
"arxiv:1910.04867",
"license:mit",
"has_space",
"region:us"
] | zero-shot-image-classification | laion | null | null | laion/CLIP-convnext_base_w-laion2B-s13B-b82K-augreg | 4 | 3,163 | open_clip | 2023-01-10T01:34:39 | ---
license: mit
pipeline_tag: zero-shot-image-classification
library_name: open_clip
tags:
- clip
---
# Model Card for CLIP-convnext_base_w.laion2B-s13B-b82k-augreg
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Details](#training-details)
4. [Evaluation](#evaluation)
5. [Acknowledgements](#acknowledgements)
6. [Citation](#citation)
# Model Details
## Model Description
A series of CLIP [ConvNeXt-Base](https://arxiv.org/abs/2201.03545) (w/ wide embed dim) models trained on subsets of LAION-5B (https://laion.ai/blog/laion-5b/) using OpenCLIP (https://github.com/mlfoundations/open_clip).
Goals:
* Explore an alternative to ViT and ResNet (w/ AttentionPooling) CLIP models that scales well with model size and image resolution
Firsts:
* First known ConvNeXt CLIP models trained at scale in the range of CLIP ViT-B/16 and RN50x4 models
* First released model weights exploring increased augmentation + regularization for the image tower (greater scale range of RRC, random erasing, stochastic depth)
The models utilize the [timm](https://github.com/rwightman/pytorch-image-models) ConvNeXt-Base model (`convnext_base`) as the image tower, and the same text tower as the RN50x4 (depth 12, embed dim 640) model from OpenAI CLIP. The base models are trained at 256x256 image resolution and roughly match the RN50x4 models on FLOPs and activation counts. The models with `320` in the name are trained at 320x320.
All models in this series were trained for 13B samples and have an ImageNet zero-shot top-1 of >= 70.8%. Compared to a ViT-B/16 at 34B samples seen with a zero-shot of 70.2% (68.1% at 13B samples seen), this suggests the ConvNeXt architecture may be more sample efficient in this range of model scale. More experiments are needed to confirm.
| Model | Dataset | Resolution | AugReg | Top-1 ImageNet Zero-Shot (%) |
| ----- | ------- | ---------- | ------------ | --------- |
| [convnext_base_w.laion2b_s13b_b82k](https://huggingface.co/laion/CLIP-convnext_base_w-laion2B-s13B-b82K) | LAION-2B | 256x256 | RRC (0.9, 1.0) | 70.8 |
| [convnext_base_w.laion2b_s13b_b82k_augreg](https://huggingface.co/laion/CLIP-convnext_base_w-laion2B-s13B-b82K-augreg) | LAION-2B | 256x256 | RRC (0.33, 1.0), RE (0.35), SD (0.1) | 71.5 |
| [convnext_base_w.laion_aesthetic_s13b_b82k](https://huggingface.co/laion/CLIP-convnext_base_w-laion_aesthetic-s13B-b82K) | LAION-A | 256x256 | RRC (0.9, 1.0) | 71.0 |
| [convnext_base_w_320.laion_aesthetic_s13b_b82k](https://huggingface.co/laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K) | LAION-A | 320x320 | RRC (0.9, 1.0) | 71.7 |
| [convnext_base_w_320.laion_aesthetic_s13b_b82k_augreg](https://huggingface.co/laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K-augreg) | LAION-A | 320x320 | RRC (0.33, 1.0), RE (0.35), SD (0.1) | 71.3 |
RRC = Random Resize Crop (crop pcts), RE = Random Erasing (prob), SD = Stochastic Depth (prob) -- image tower only
LAION-A = LAION Aesthetic, an ~900M sample subset of LAION-2B with pHash dedupe and aesthetic score filtering.
Model training done by Ross Wightman across both the [stability.ai](https://stability.ai/) cluster and the [JUWELS Booster](https://apps.fz-juelich.de/jsc/hps/juwels/booster-overview.html) supercomputer. See acknowledgements below.
# Uses
As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such model.
The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the LAION-5B blog (https://laion.ai/blog/laion-5b/) and upcoming paper include additional discussion as it relates specifically to the training dataset.
## Direct Use
Zero-shot image classification, image and text retrieval, among others.
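A minimal zero-shot classification sketch (not part of the original card), assuming an OpenCLIP version that supports `hf-hub:` loading for this repository; the image path and candidate captions are placeholders.
```python
import torch
from PIL import Image
import open_clip

repo = "hf-hub:laion/CLIP-convnext_base_w-laion2B-s13B-b82K-augreg"
model, _, preprocess = open_clip.create_model_and_transforms(repo)
tokenizer = open_clip.get_tokenizer(repo)
model.eval()

image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # placeholder image path
text = tokenizer(["a diagram", "a dog", "a cat"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(text_probs)  # probabilities over the candidate captions
```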
## Downstream Use
Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others.
## Out-of-Scope Use
As per the OpenAI models,
**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.
Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.
Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
Further to the above notice, the LAION-5B dataset used in training these models has additional considerations; see below.
# Training Details
## Training Data
This model was trained with one of the following datasets (see table in intro):
* LAION-2B - A 2 billion sample English subset of LAION-5B (https://laion.ai/blog/laion-5b/).
* LAION-Aesthetic - A 900M sample subset of LAION-2B with pHash dedupe and aesthetic score filtering
**IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from the publicly available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated. Keep in mind that the uncurated nature of the dataset means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a “safe” subset by filtering out samples based on the safety tags (using a custom-trained NSFW classifier that we built). While this strongly reduces the chance of encountering potentially harmful content when viewing, we cannot entirely exclude the possibility of harmful content still being present in safe mode, so the warning holds there as well. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of the benefits that come along with training large-scale models, as well as the pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. However, while we provide our dataset openly, we do not recommend using it for creating ready-to-go industrial products, as the basic research about general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress.
## Training Procedure
All models were trained with a global batch size of 81920 for 64 checkpoint intervals of 203.7M samples for a total of ~13B samples seen over training.
For 256x256 models, a slurm script w/ srun below was used on 20 8-GPU (A100 40GB) nodes (Stability), switching to 40 4-GPU nodes for time on JUWELS.
```
/opt/slurm/sbin/srun --cpu_bind=v --accel-bind=gn python -m training.main \
--save-frequency 1 \
--name "convnext_256" \
--resume 'latest' \
--train-data="pipe:aws s3 cp s3://mybucket/path/{laion{00000..xxxxx}.tar -" \
--train-num-samples 203666042 \
--dataset-type webdataset \
--precision amp_bfloat16 \
--warmup 10000 \
--batch-size=512 \
--epochs=64 \
--dataset-resampled \
--clip-grad-norm 5.0 \
--lr 1e-3 \
--workers=6 \
--model "convnext_base_w" \
--seed 0 \
--ddp-static-graph \
--local-loss \
--gather-with-grad \
--grad-checkpointing
```
For 320x320 models, same as above but w/ 32 8-GPU nodes, local batch size 320, or 64 4-GPU nodes on JUWELs.
# Evaluation
Evaluation done with code in the [LAION CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark).
## Testing Data, Factors & Metrics
### Testing Data
The testing is performed with VTAB+ (A combination of VTAB (https://arxiv.org/abs/1910.04867) w/ additional robustness datasets) for classification and COCO and Flickr for retrieval.
## Results
The models achieve between 70.8 and 71.7 zero-shot top-1 accuracy on ImageNet-1k.

An initial round of benchmarks have been performed on a wider range of datasets, to be viewable at https://github.com/LAION-AI/CLIP_benchmark/blob/main/benchmark/results.ipynb
As part of exploring increased augmentation + regularization, early evaluations suggest that `augreg` trained models evaluate well over a wider range of resolutions. This is especially true for the 320x320 LAION-A model, where the augreg run was lower than the non-augreg when evaluated at the train resolution of 320x320 (71.3 vs 71.7), but improves to 72.2 when evaluated at 384x384 (the non-augreg drops to 71.0 at 384x384).
# Acknowledgements
Acknowledging [stability.ai](https://stability.ai/) and the Gauss Centre for Supercomputing e.V. (http://gauss-centre.eu) for funding this part of work by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS Booster at Jülich Supercomputing Centre (JSC).
# Citation
**BibTeX:**
```bibtex
@inproceedings{schuhmann2022laionb,
title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
author={Christoph Schuhmann and
Romain Beaumont and
Richard Vencu and
Cade W Gordon and
Ross Wightman and
Mehdi Cherti and
Theo Coombes and
Aarush Katta and
Clayton Mullis and
Mitchell Wortsman and
Patrick Schramowski and
Srivatsa R Kundurthy and
Katherine Crowson and
Ludwig Schmidt and
Robert Kaczmarczyk and
Jenia Jitsev},
booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2022},
url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```
OpenCLIP software
```bibtex
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
```
OpenAI CLIP paper
```bibtex
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
```
```bibtex
@Article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
``` | 12,621 | [
[
-0.035003662109375,
-0.03662109375,
0.004077911376953125,
0.002750396728515625,
-0.03143310546875,
-0.033477783203125,
-0.01190948486328125,
-0.048980712890625,
0.0242156982421875,
0.028411865234375,
-0.03936767578125,
-0.034332275390625,
-0.0382080078125,
-... |
andreasmadsen/efficient_mlm_m0.40 | 2022-11-15T23:25:11.000Z | [
"transformers",
"pytorch",
"tf",
"roberta-prelayernorm",
"fill-mask",
"arxiv:2202.08005",
"autotrain_compatible",
"region:us"
] | fill-mask | andreasmadsen | null | null | andreasmadsen/efficient_mlm_m0.40 | 1 | 3,162 | transformers | 2022-11-15T22:10:55 | ---
inference: false
---
This is a model checkpoint for ["Should You Mask 15% in Masked Language Modeling"](https://arxiv.org/abs/2202.08005) [(code)](https://github.com/princeton-nlp/DinkyTrain.git).
The original checkpoint is available at [princeton-nlp/efficient_mlm_m0.40](https://huggingface.co/princeton-nlp/efficient_mlm_m0.40). Unfortunately, that checkpoint depends on code that isn't part of the official `transformers`
library. Additionally, the checkpoint contains unused weights due to a bug.
This checkpoint fixes the unused weights issue and uses the `RobertaPreLayerNorm` model from the `transformers`
library.
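A minimal fill-mask sketch (not part of the original card), assuming a `transformers` release that includes the `RobertaPreLayerNorm` architecture; since hosted inference is disabled for this repository, run it locally.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

repo = "andreasmadsen/efficient_mlm_m0.40"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForMaskedLM.from_pretrained(repo)

inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Index of the <mask> token and its top prediction.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_pos].argmax(-1)))
```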
| 630 | [
[
-0.0302581787109375,
-0.0361328125,
0.01396942138671875,
0.0243072509765625,
0.0024852752685546875,
0.0021953582763671875,
0.008575439453125,
-0.0210418701171875,
0.0165863037109375,
0.05645751953125,
-0.06549072265625,
-0.033447265625,
-0.037506103515625,
-... |
timm/eva02_large_patch14_448.mim_m38m_ft_in22k_in1k | 2023-03-31T05:47:06.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-22k",
"arxiv:2303.11331",
"arxiv:2303.15389",
"license:mit",
"region:us"
] | image-classification | timm | null | null | timm/eva02_large_patch14_448.mim_m38m_ft_in22k_in1k | 3 | 3,158 | timm | 2023-03-31T04:51:15 | ---
tags:
- image-classification
- timm
library_tag: timm
license: mit
datasets:
- imagenet-1k
- imagenet-22k
---
# Model card for eva02_large_patch14_448.mim_m38m_ft_in22k_in1k
An EVA02 image classification model. Pretrained on Merged-38M (IN-22K, CC12M, CC3M, COCO (train), ADE20K (train), Object365, and OpenImages) with masked image modeling (using EVA-CLIP as a MIM teacher) and fine-tuned on ImageNet-22k then on ImageNet-1k by paper authors.
EVA-02 models are vision transformers with mean pooling, SwiGLU, Rotary Position Embeddings (ROPE), and extra LN in MLP (for Base & Large).
NOTE: `timm` checkpoints are float32 for consistency with other models. Original checkpoints are float16 or bfloat16 in some cases, see originals if that's preferred.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 305.1
- GMACs: 362.3
- Activations (M): 689.9
- Image size: 448 x 448
- **Papers:**
- EVA-02: A Visual Representation for Neon Genesis: https://arxiv.org/abs/2303.11331
- EVA-CLIP: Improved Training Techniques for CLIP at Scale: https://arxiv.org/abs/2303.15389
- **Original:**
- https://github.com/baaivision/EVA
- https://huggingface.co/Yuxin-CV/EVA-02
- **Pretrain Dataset:** ImageNet-22k
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('eva02_large_patch14_448.mim_m38m_ft_in22k_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'eva02_large_patch14_448.mim_m38m_ft_in22k_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1025, 1024) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |top1 |top5 |param_count|img_size|
|-----------------------------------------------|------|------|-----------|--------|
|eva02_large_patch14_448.mim_m38m_ft_in22k_in1k |90.054|99.042|305.08 |448 |
|eva02_large_patch14_448.mim_in22k_ft_in22k_in1k|89.946|99.01 |305.08 |448 |
|eva_giant_patch14_560.m30m_ft_in22k_in1k |89.792|98.992|1014.45 |560 |
|eva02_large_patch14_448.mim_in22k_ft_in1k |89.626|98.954|305.08 |448 |
|eva02_large_patch14_448.mim_m38m_ft_in1k |89.57 |98.918|305.08 |448 |
|eva_giant_patch14_336.m30m_ft_in22k_in1k |89.56 |98.956|1013.01 |336 |
|eva_giant_patch14_336.clip_ft_in1k |89.466|98.82 |1013.01 |336 |
|eva_large_patch14_336.in22k_ft_in22k_in1k |89.214|98.854|304.53 |336 |
|eva_giant_patch14_224.clip_ft_in1k |88.882|98.678|1012.56 |224 |
|eva02_base_patch14_448.mim_in22k_ft_in22k_in1k |88.692|98.722|87.12 |448 |
|eva_large_patch14_336.in22k_ft_in1k |88.652|98.722|304.53 |336 |
|eva_large_patch14_196.in22k_ft_in22k_in1k |88.592|98.656|304.14 |196 |
|eva02_base_patch14_448.mim_in22k_ft_in1k |88.23 |98.564|87.12 |448 |
|eva_large_patch14_196.in22k_ft_in1k |87.934|98.504|304.14 |196 |
|eva02_small_patch14_336.mim_in22k_ft_in1k |85.74 |97.614|22.13 |336 |
|eva02_tiny_patch14_336.mim_in22k_ft_in1k |80.658|95.524|5.76 |336 |
## Citation
```bibtex
@article{EVA02,
title={EVA-02: A Visual Representation for Neon Genesis},
author={Fang, Yuxin and Sun, Quan and Wang, Xinggang and Huang, Tiejun and Wang, Xinlong and Cao, Yue},
journal={arXiv preprint arXiv:2303.11331},
year={2023}
}
```
```bibtex
@article{EVA-CLIP,
title={EVA-CLIP: Improved Training Techniques for CLIP at Scale},
author={Sun, Quan and Fang, Yuxin and Wu, Ledell and Wang, Xinlong and Cao, Yue},
journal={arXiv preprint arXiv:2303.15389},
year={2023}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 5,517 | [
[
-0.04534912109375,
-0.0290069580078125,
0.01262664794921875,
0.00814056396484375,
-0.0178070068359375,
0.0003008842468261719,
-0.01003265380859375,
-0.034271240234375,
0.03875732421875,
0.02752685546875,
-0.034332275390625,
-0.05169677734375,
-0.043243408203125,... |
artificialguybr/LogoRedmond-LogoLoraForSDXL | 2023-10-07T02:40:59.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"license:creativeml-openrail-m",
"has_space",
"region:us"
] | text-to-image | artificialguybr | null | null | artificialguybr/LogoRedmond-LogoLoraForSDXL | 26 | 3,158 | diffusers | 2023-08-07T19:22:33 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: LogoRedAF
widget:
- text: LogoRedAF
---
# Logo.Redmond

DOWNLOAD V2 HERE: https://huggingface.co/artificialguybr/LogoRedmond-LogoLoraForSDXL-V2
Test all my Loras here: https://huggingface.co/spaces/artificialguybr/artificialguybr-demo-lora
Logo.Redmond is here!
I'm grateful for the GPU time from Redmond.AI that allowed me to finish this LORA!
This is a LOGO LORA fine-tuned on SD XL 1.0.
The LORA has a high capacity to generate LOGO images in a wide variety of themes. It's a versatile LORA.
I recommend generating at 1024x1024.
You can use detailed, minimalist, colorful, black and white as tags to control the results.
The tag for the model: LogoRedAF
The LORA is not perfect and sometimes needs more than one generation to create good images. I recommend simple prompts.
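A minimal diffusers loading sketch (not part of the original card). The exact LoRA weight filename inside the repository is an assumption; if `load_lora_weights` cannot find a default `.safetensors` file, pass `weight_name=...` explicitly.
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Load the LoRA on top of the SDXL base model.
pipe.load_lora_weights("artificialguybr/LogoRedmond-LogoLoraForSDXL")

# "LogoRedAF" is the trigger tag from the card; the rest of the prompt is a placeholder.
image = pipe(
    "LogoRedAF, minimalist logo of a coffee shop, flat design",
    width=1024,
    height=1024,
    num_inference_steps=30,
).images[0]
image.save("logo.png")
```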
I really hope you like the LORA and use it.
If you like the model and think it's worth it, you can make a donation to my Patreon or Ko-fi.
Follow me on Twitter to be the first to know about new models:
https://twitter.com/artificialguybr/ | 1,207 | [
[
-0.048675537109375,
-0.061553955078125,
0.015411376953125,
0.04534912109375,
-0.044219970703125,
0.01232147216796875,
0.0056915283203125,
-0.07220458984375,
0.09100341796875,
0.026641845703125,
-0.055084228515625,
-0.0301361083984375,
-0.02532958984375,
-0.0... |
timm/samvit_base_patch16.sa1b | 2023-05-18T21:44:58.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"arxiv:2304.02643",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/samvit_base_patch16.sa1b | 0 | 3,156 | timm | 2023-05-18T21:43:39 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
---
# Model card for samvit_base_patch16.sa1b
A Segment-Anything Vision Transformer (SAM ViT) image feature model (NOTE: for feature extraction and fine-tuning; the segmentation head is not included). Pretrained on SA-1B for segmentation by paper authors w/ initialization from MAE weights.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 89.7
- GMACs: 486.4
- Activations (M): 1343.3
- Image size: 1024 x 1024
- **Papers:**
- Segment Anything: https://arxiv.org/abs/2304.02643
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Original:** https://github.com/facebookresearch/segment-anything
- **Pretrain Dataset:** SA-1B
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('samvit_base_patch16.sa1b', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'samvit_base_patch16.sa1b',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 256, 64, 64) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{kirillov2023segany,
title={Segment Anything},
author={Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Doll{\'a}r, Piotr and Girshick, Ross},
journal={arXiv:2304.02643},
year={2023}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 3,751 | [
[
-0.034912109375,
-0.027099609375,
0.01464080810546875,
0.00894927978515625,
-0.0350341796875,
-0.0287017822265625,
-0.0157012939453125,
-0.031829833984375,
0.02642822265625,
0.018707275390625,
-0.03857421875,
-0.0469970703125,
-0.05389404296875,
-0.006412506... |
jncraton/LaMini-Flan-T5-248M-ct2-int8 | 2023-09-11T14:37:41.000Z | [
"transformers",
"generated_from_trainer",
"instruction fine-tuning",
"text2text-generation",
"en",
"arxiv:2304.14402",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text2text-generation | jncraton | null | null | jncraton/LaMini-Flan-T5-248M-ct2-int8 | 0 | 3,156 | transformers | 2023-06-04T21:36:33 | ---
language:
- en
license: cc-by-nc-4.0
tags:
- generated_from_trainer
- instruction fine-tuning
pipeline_tag: text2text-generation
widget:
- text: how can I become more healthy?
example_title: example
base_model: google/flan-t5-base
model-index:
- name: flan-t5-small-distil-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
<p align="center" width="100%">
<a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini.png" alt="Title" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
</p>
# LaMini-Flan-T5-248M
[]()
This model is one of our LaMini-LM model series presented in the paper "[LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions](https://github.com/mbzuai-nlp/lamini-lm)". This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction), which contains 2.58M samples for instruction fine-tuning. For more information about our dataset, please refer to our [project repository](https://github.com/mbzuai-nlp/lamini-lm/).
You can view the other models of the LaMini-LM series below. Models with ✩ are those with the best overall performance given their size/architecture, hence we recommend using them. More details can be seen in our paper.
<table>
<thead>
<tr>
<th>Base model</th>
<th colspan="4">LaMini-LM series (#parameters)</th>
</tr>
</thead>
<tbody>
<tr>
<td>T5</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-t5-61m" target="_blank" rel="noopener noreferrer">LaMini-T5-61M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-t5-223m" target="_blank" rel="noopener noreferrer">LaMini-T5-223M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-t5-738m" target="_blank" rel="noopener noreferrer">LaMini-T5-738M</a></td>
<td></td>
</tr>
<tr>
<td>Flan-T5</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-77m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-77M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-248m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-248M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-783m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-783M</a>✩</td>
<td></td>
</tr>
<tr>
<td>Cerebras-GPT</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-111m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-111M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-256m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-256M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-590m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-590M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-1.3B</a></td>
</tr>
<tr>
<td>GPT-2</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-gpt-124m" target="_blank" rel="noopener noreferrer">LaMini-GPT-124M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-gpt-774m" target="_blank" rel="noopener noreferrer">LaMini-GPT-774M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-gpt-1.5b" target="_blank" rel="noopener noreferrer">LaMini-GPT-1.5B</a>✩</td>
<td></td>
</tr>
<tr>
<td>GPT-Neo</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-neo-125m" target="_blank" rel="noopener noreferrer">LaMini-Neo-125M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-neo-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Neo-1.3B</a></td>
<td></td>
<td></td>
</tr>
<tr>
<td>GPT-J</td>
<td colspan="4">coming soon</td>
</tr>
<tr>
<td>LLaMA</td>
<td colspan="4">coming soon</td>
</tr>
</tbody>
</table>
## Use
### Intended use
We recommend using the model to respond to human instructions written in natural language.
We now show you how to load and use our model using HuggingFace `pipeline()`.
```python
# pip install -q transformers
from transformers import pipeline
checkpoint = "{model_name}"  # placeholder: replace with a LaMini model id, e.g. "MBZUAI/lamini-flan-t5-248m"
model = pipeline('text2text-generation', model = checkpoint)
input_prompt = 'Please let me know your thoughts on the given place and why you think it deserves to be visited: \n"Barcelona, Spain"'
generated_text = model(input_prompt, max_length=512, do_sample=True)[0]['generated_text']
print("Response", generated_text)
```
## Training Procedure
<p align="center" width="100%">
<a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini-pipeline.drawio.png" alt="Title" style="width: 100%; min-width: 250px; display: block; margin: auto;"></a>
</p>
We initialize with [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) and fine-tune it on our [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction). Its total number of parameters is 248M.
### Training Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
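
For readers who want to reproduce a similar setup, the hyperparameters above map roughly onto the standard `transformers` sequence-to-sequence training arguments. The following is a minimal sketch, not the original training script; dataset loading, preprocessing, and the Trainer itself are omitted, and the argument names are the standard `Seq2SeqTrainingArguments` fields:
```python
from transformers import Seq2SeqTrainingArguments

# Sketch of how the hyperparameters listed above could be expressed;
# this is not the original training configuration file.
training_args = Seq2SeqTrainingArguments(
    output_dir="lamini-flan-t5-248m",
    learning_rate=5e-4,               # learning_rate: 0.0005
    per_device_train_batch_size=128,  # train_batch_size: 128
    per_device_eval_batch_size=64,    # eval_batch_size: 64
    seed=42,                          # seed: 42
    gradient_accumulation_steps=4,    # effective train batch size: 128 * 4 = 512
    lr_scheduler_type="linear",       # linear schedule
    num_train_epochs=5,               # num_epochs: 5
)
```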
## Evaluation
We conducted two sets of evaluations: automatic evaluation on downstream NLP tasks and human evaluation on user-oriented instructions. For more details, please refer to our [paper](https://arxiv.org/abs/2304.14402).
## Limitations
More information needed
# Citation
```bibtex
@article{lamini-lm,
author = {Minghao Wu and
Abdul Waheed and
Chiyu Zhang and
Muhammad Abdul-Mageed and
Alham Fikri Aji
},
title = {LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions},
journal = {CoRR},
volume = {abs/2304.14402},
year = {2023},
url = {https://arxiv.org/abs/2304.14402},
eprinttype = {arXiv},
eprint = {2304.14402}
}
``` | 6,431 | [
[
-0.04913330078125,
-0.051544189453125,
0.01290130615234375,
0.018463134765625,
-0.017242431640625,
-0.0306243896484375,
-0.010040283203125,
-0.049346923828125,
0.021881103515625,
0.0200042724609375,
-0.06060791015625,
-0.03216552734375,
-0.03955078125,
0.004... |
DeepPavlov/distilrubert-tiny-cased-conversational-v1 | 2022-05-06T11:57:05.000Z | [
"transformers",
"pytorch",
"distilbert",
"ru",
"arxiv:2205.02340",
"endpoints_compatible",
"region:us"
] | null | DeepPavlov | null | null | DeepPavlov/distilrubert-tiny-cased-conversational-v1 | 1 | 3,155 | transformers | 2022-03-02T23:29:04 | ---
language:
- ru
---
# distilrubert-tiny-cased-conversational
Conversational DistilRuBERT-tiny \(Russian, cased, 3 layers, 264 hidden units, 12 heads, 10.4M parameters\) was trained on OpenSubtitles\[1\], [Dirty](https://d3.ru/), [Pikabu](https://pikabu.ru/), and a Social Media segment of Taiga corpus\[2\] (as [Conversational RuBERT](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational)). It can be considered a tiny copy of [Conversational DistilRuBERT-small](https://huggingface.co/DeepPavlov/distilrubert-small-cased-conversational).
Our DistilRuBERT-tiny is highly inspired by \[3\], \[4\], and its architecture is very close to \[5\]. Namely, we use
* MLM loss (between token labels and student output distribution)
* MSE loss (between averaged student and teacher hidden states)
The key features are:
* unlike most distilled language models, we **didn't** use KL loss during pre-training
* reduced vocabulary size (30K in *tiny* vs. 100K in *base* and *small*)
* two separate inputs for the student: tokens obtained using the student tokenizer (for MLM) and teacher tokens greedily split by student tokens (for MSE)
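
To make the objective above concrete, here is a minimal PyTorch sketch of the combined loss. It is an illustration rather than the original training code: the projection that maps the student hidden size (264) onto the teacher's is an assumption, as are the loss weights.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, mlm_labels, student_hidden, teacher_hidden,
                      proj: nn.Linear, alpha: float = 1.0, beta: float = 1.0):
    """Illustrative combination of the two losses described above.

    student_logits: (batch, seq_len, vocab) MLM predictions of the student
    mlm_labels:     (batch, seq_len) token ids, -100 at positions without a label
    student_hidden: (batch, seq_len, d_student) hidden states averaged over student layers
    teacher_hidden: (batch, seq_len, d_teacher) hidden states averaged over teacher layers
    proj:           assumed linear projection from d_student to d_teacher
    """
    # MLM loss between token labels and the student output distribution
    mlm = F.cross_entropy(student_logits.view(-1, student_logits.size(-1)),
                          mlm_labels.view(-1), ignore_index=-100)
    # MSE loss between (projected) averaged student and averaged teacher hidden states
    mse = F.mse_loss(proj(student_hidden), teacher_hidden)
    return alpha * mlm + beta * mse
```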
Here is a comparison between the teacher model (`Conversational RuBERT`) and the other distilled models.
| Model name | \# params, M | \# vocab, K | Mem., MB |
|---|---|---|---|
| `rubert-base-cased-conversational` | 177.9 | 120 | 679 |
| `distilrubert-base-cased-conversational` | 135.5 | 120 | 517 |
| `distilrubert-small-cased-conversational` | 107.1 | 120 | 409 |
| `cointegrated/rubert-tiny` | 11.8 | **30** | 46 |
| **distilrubert-tiny-cased-conversational** | **10.4** | 31 | **41** |
DistilRuBERT-tiny was trained for about 100 hours on 7 NVIDIA Tesla P100-SXM2.0 16 GB GPUs.
We used `PyTorchBenchmark` from `transformers` to evaluate the model's performance and compare it with other pre-trained language models for Russian. All tests were performed on an Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz and an NVIDIA Tesla P100-SXM2.0 16 GB.
| Model name | Batch size | Seq len | Time CPU, s | Time GPU, s | Mem CPU, MB | Mem GPU, MB |
|---|---|---|---|---|---|---|
| `rubert-base-cased-conversational` | 1 | 512 | 0.147 | 0.014 | 897 | 1531 |
| `distilrubert-base-cased-conversational` | 1 | 512 | 0.083 | 0.006 | 766 | 1423 |
| `distilrubert-small-cased-conversational` | 1 | 512 | 0.03 | **0.002** | 600 | 1243 |
| `cointegrated/rubert-tiny` | 1 | 512 | 0.041 | 0.003 | 272 | 919 |
| **distilrubert-tiny-cased-conversational** | 1 | 512 | **0.023** | 0.003 | **206** | **855** |
| `rubert-base-cased-conversational` | 16 | 512 | 2.839 | 0.182 | 1499 | 2071 |
| `distilrubert-base-cased-conversational` | 16 | 512 | 1.065 | 0.055 | 2541 | 2927 |
| `distilrubert-small-cased-conversational` | 16 | 512 | 0.373 | **0.003** | 1360 | 1943 |
| `cointegrated/rubert-tiny` | 16 | 512 | 0.628 | 0.004 | 1293 | 2221 |
| **distilrubert-tiny-cased-conversational** | 16 | 512 | **0.219** | **0.003** | **633** | **1291** |
To evaluate model quality, we fine-tuned DistilRuBERT-tiny on classification (RuSentiment, ParaPhraser), NER, and question answering datasets for Russian and obtained scores very similar to [Conversational DistilRuBERT-small](https://huggingface.co/DeepPavlov/distilrubert-small-cased-conversational).
# Citation
If you find the model useful for your research, we kindly ask you to cite [this](https://arxiv.org/abs/2205.02340) paper:
```
@misc{https://doi.org/10.48550/arxiv.2205.02340,
doi = {10.48550/ARXIV.2205.02340},
url = {https://arxiv.org/abs/2205.02340},
author = {Kolesnikova, Alina and Kuratov, Yuri and Konovalov, Vasily and Burtsev, Mikhail},
keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Knowledge Distillation of Russian Language Models with Reduction of Vocabulary},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
\[1\]: P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation \(LREC 2016\)
\[2\]: Shavrina T., Shapovalova O. \(2017\) TO THE METHODOLOGY OF CORPUS CONSTRUCTION FOR MACHINE LEARNING: «TAIGA» SYNTAX TREE CORPUS AND PARSER. In Proceedings of the “CORPORA2017” International Conference, Saint Petersburg, 2017.
\[3\]: Sanh, V., Debut, L., Chaumond, J., & Wolf, T. \(2019\). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
\[4\]: <https://github.com/huggingface/transformers/tree/master/examples/research_projects/distillation>
\[5\]: <https://habr.com/ru/post/562064/>, <https://huggingface.co/cointegrated/rubert-tiny> | 4,875 | [
[
-0.022125244140625,
-0.07275390625,
0.01910400390625,
0.0024967193603515625,
-0.0227813720703125,
0.0236663818359375,
-0.041259765625,
-0.0023059844970703125,
0.004398345947265625,
-0.00797271728515625,
-0.03131103515625,
-0.037322998046875,
-0.04681396484375,
... |
Hius/DreamFul-V2 | 2023-05-03T04:49:40.000Z | [
"diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Hius | null | null | Hius/DreamFul-V2 | 1 | 3,151 | diffusers | 2023-04-01T11:23:35 | ---
language:
- en
library_name: diffusers
license: creativeml-openrail-m
pipeline_tag: text-to-image
---
This model mix aims to create the most realistic and natural images possible. It is currently in the testing process, so feedback and comments are welcome.
Available on Sinkin.ai with GPU acceleration.
MY MODELS WILL ALWAYS BE FREE.
https://sinkin.ai/m/DreamFul
https://www.mage.space/u/hius
Guide:
For the parameters, I recommend the following settings.
Sampler: DPM++ SDE Karras or Euler a
Steps: 30-50
CFG Scale: 7.5
How to use:
Structure: `render for a <subject> + <details> + <lights> + <color> + <resolution> + <option>`
For example:
render for a girl, beautiful face, autumn lights, pastel colors, high quality, trending on ArtStation, trending on CGSociety, (extremely detailed CG unity 8k wallpaper)
Negative Prompt:
illustration, painting, cartoons, sketch, (worst quality:2), (low quality:2), (normal quality:2), lowres, bad anatomy, bad hands, ((monochrome)), ((grayscale)), collapsed eyeshadow, multiple eyeblows, vaginas in breasts, (cropped), oversaturated, extra limb, missing limbs, deformed hands, long neck, long body, imperfect, (bad hands), signature, watermark, username, artist name, conjoined fingers, deformed fingers, ugly eyes, imperfect eyes, skewed eyes, unnatural face, unnatural body, error
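
For `diffusers` users, the settings above translate roughly as follows. This is a hedged sketch, not an official example: the scheduler choice is only an approximation of "DPM++ SDE Karras", the negative prompt is shortened, and fp16 on GPU is an assumption.
```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverSinglestepScheduler

# load the model (fp16 on a CUDA GPU is an assumption, not a requirement)
pipe = StableDiffusionPipeline.from_pretrained("Hius/DreamFul-V2", torch_dtype=torch.float16).to("cuda")

# approximate the "DPM++ SDE Karras" sampler recommended above
pipe.scheduler = DPMSolverSinglestepScheduler.from_config(pipe.scheduler.config, use_karras_sigmas=True)

prompt = ("render for a girl, beautiful face, autumn lights, pastel colors, high quality, "
          "trending on ArtStation, trending on CGSociety, (extremely detailed CG unity 8k wallpaper)")
# shortened version of the negative prompt listed above
negative_prompt = "illustration, painting, cartoons, sketch, (worst quality:2), (low quality:2), lowres, bad anatomy"

image = pipe(prompt,
             negative_prompt=negative_prompt,
             num_inference_steps=40,        # Steps: 30-50
             guidance_scale=7.5).images[0]  # CFG Scale: 7.5
image.save("dreamful_v2_sample.png")
```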
Feel free to customize everything to your liking and share your results with me. Thank you!
LoRA support has not been added yet.
[
-0.06732177734375,
-0.058258056640625,
0.017242431640625,
0.0255279541015625,
-0.0391845703125,
-0.004146575927734375,
0.0166778564453125,
-0.049896240234375,
0.04302978515625,
0.04449462890625,
-0.053802490234375,
-0.0224609375,
-0.0290069580078125,
-0.0158... |
facebook/timesformer-base-finetuned-k600 | 2022-12-12T12:52:56.000Z | [
"transformers",
"pytorch",
"timesformer",
"video-classification",
"vision",
"arxiv:2102.05095",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"has_space",
"region:us"
] | video-classification | facebook | null | null | facebook/timesformer-base-finetuned-k600 | 8 | 3,150 | transformers | 2022-10-07T20:53:42 | ---
license: "cc-by-nc-4.0"
tags:
- vision
- video-classification
---
# TimeSformer (base-sized model, fine-tuned on Kinetics-600)
TimeSformer model pre-trained on [Kinetics-600](https://www.deepmind.com/open-source/kinetics). It was introduced in the paper [Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Bertasius et al. and first released in [this repository](https://github.com/facebookresearch/TimeSformer).
Disclaimer: The team releasing TimeSformer did not write a model card for this model so this model card has been written by [fcakyon](https://github.com/fcakyon).
## Intended uses & limitations
You can use the raw model for video classification into one of the 600 possible Kinetics-600 labels.
### How to use
Here is how to use this model to classify a video:
```python
from transformers import AutoImageProcessor, TimesformerForVideoClassification
import numpy as np
import torch
video = list(np.random.randn(8, 3, 224, 224))  # dummy clip: 8 random frames of shape (3, 224, 224)
processor = AutoImageProcessor.from_pretrained("facebook/timesformer-base-finetuned-k600")
model = TimesformerForVideoClassification.from_pretrained("facebook/timesformer-base-finetuned-k600")
inputs = processor(images=video, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/timesformer.html#).
### BibTeX entry and citation info
```bibtex
@inproceedings{bertasius2021space,
title={Is Space-Time Attention All You Need for Video Understanding?},
author={Bertasius, Gedas and Wang, Heng and Torresani, Lorenzo},
booktitle={International Conference on Machine Learning},
pages={813--824},
year={2021},
organization={PMLR}
}
``` | 1,933 | [
[
-0.0168609619140625,
-0.042877197265625,
0.02496337890625,
0.006443023681640625,
-0.01145172119140625,
0.004993438720703125,
0.0003540515899658203,
-0.004192352294921875,
-0.000396728515625,
-0.0082244873046875,
-0.05572509765625,
-0.0271453857421875,
-0.0610656... |
keremberke/yolov8s-forklift-detection | 2023-02-22T13:00:20.000Z | [
"ultralytics",
"tensorboard",
"v8",
"ultralyticsplus",
"yolov8",
"yolo",
"vision",
"object-detection",
"pytorch",
"awesome-yolov8-models",
"dataset:keremberke/forklift-object-detection",
"model-index",
"region:us"
] | object-detection | keremberke | null | null | keremberke/yolov8s-forklift-detection | 2 | 3,148 | ultralytics | 2023-01-22T05:27:12 |
---
tags:
- ultralyticsplus
- yolov8
- ultralytics
- yolo
- vision
- object-detection
- pytorch
- awesome-yolov8-models
library_name: ultralytics
library_version: 8.0.21
inference: false
datasets:
- keremberke/forklift-object-detection
model-index:
- name: keremberke/yolov8s-forklift-detection
results:
- task:
type: object-detection
dataset:
type: keremberke/forklift-object-detection
name: forklift-object-detection
split: validation
metrics:
- type: precision # since mAP@0.5 is not available on hf.co/metrics
value: 0.85117 # min: 0.0 - max: 1.0
name: mAP@0.5(box)
---
<div align="center">
<img width="640" alt="keremberke/yolov8s-forklift-detection" src="https://huggingface.co/keremberke/yolov8s-forklift-detection/resolve/main/thumbnail.jpg">
</div>
### Supported Labels
```
['forklift', 'person']
```
### How to use
- Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus):
```bash
pip install ultralyticsplus==0.0.23 ultralytics==8.0.21
```
- Load model and perform prediction:
```python
from ultralyticsplus import YOLO, render_result
# load model
model = YOLO('keremberke/yolov8s-forklift-detection')
# set model parameters
model.overrides['conf'] = 0.25 # NMS confidence threshold
model.overrides['iou'] = 0.45 # NMS IoU threshold
model.overrides['agnostic_nms'] = False # NMS class-agnostic
model.overrides['max_det'] = 1000 # maximum number of detections per image
# set image
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model.predict(image)
# observe results
print(results[0].boxes)
render = render_result(model=model, image=image, result=results[0])
render.show()
```
**More models available at: [awesome-yolov8-models](https://yolov8.xyz)** | 1,827 | [
[
-0.033843994140625,
-0.01436614990234375,
0.037017822265625,
-0.028839111328125,
-0.02899169921875,
-0.0226593017578125,
0.0214080810546875,
-0.037933349609375,
0.0236053466796875,
0.01361083984375,
-0.04925537109375,
-0.04345703125,
-0.0303497314453125,
-0.... |
Kaludi/chatgpt-gpt4-prompts-bart-large-cnn-samsum | 2023-04-10T18:15:28.000Z | [
"transformers",
"tf",
"bart",
"text2text-generation",
"generated_from_keras_callback",
"dataset:fka/awesome-chatgpt-prompts",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | text2text-generation | Kaludi | null | null | Kaludi/chatgpt-gpt4-prompts-bart-large-cnn-samsum | 66 | 3,148 | transformers | 2023-03-27T21:12:40 | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: chatgpt-gpt4-prompts-bart-large-cnn-samsum
results: []
datasets:
- fka/awesome-chatgpt-prompts
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# chatgpt-gpt4-prompts-bart-large-cnn-samsum
This model generates ChatGPT/BingChat & GPT-3 prompts and is a fine-tuned version of [philschmid/bart-large-cnn-samsum](https://huggingface.co/philschmid/bart-large-cnn-samsum) on the [awesome-chatgpt-prompts](https://huggingface.co/datasets/fka/awesome-chatgpt-prompts) dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.2214
- Validation Loss: 2.7584
- Epoch: 4
### Streamlit
This model supports a [Streamlit](https://streamlit.io/) Web UI to run the chatgpt-gpt4-prompts-bart-large-cnn-samsum model:
[](https://huggingface.co/spaces/Kaludi/ChatGPT-BingChat-GPT3-Prompt-Generator_App)
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.1982 | 2.6801 | 0 |
| 2.3601 | 2.5493 | 1 |
| 1.9225 | 2.5377 | 2 |
| 1.5465 | 2.6794 | 3 |
| 1.2214 | 2.7584 | 4 |
### Framework versions
- Transformers 4.27.3
- TensorFlow 2.11.0
- Datasets 2.10.1
- Tokenizers 0.13.2 | 1,975 | [
[
-0.040008544921875,
-0.04962158203125,
0.0250244140625,
0.005157470703125,
-0.024993896484375,
-0.0287628173828125,
-0.01174163818359375,
-0.0158538818359375,
0.0096282958984375,
0.01073455810546875,
-0.049224853515625,
-0.03759765625,
-0.05535888671875,
-0.... |
albert-xxlarge-v1 | 2021-01-13T15:32:02.000Z | [
"transformers",
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | fill-mask | null | null | null | albert-xxlarge-v1 | 2 | 3,143 | transformers | 2022-03-02T23:29:04 | ---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# ALBERT XXLarge v1
Pretrained model on the English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1909.11942) and first released in
[this repository](https://github.com/google-research/albert). This model, like all ALBERT models, is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
ALBERT is particular in that it shares its layers across its Transformer: all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers, as it has to iterate through the same number of (repeating) layers.
This is the first version of the xxlarge model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration:
- 12 repeating layers
- 128 embedding dimension
- 4096 hidden dimension
- 64 attention heads
- 223M parameters
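
As a quick check (an illustration, not part of the original card), these values can be read back from the model's configuration:
```python
from transformers import AlbertConfig

config = AlbertConfig.from_pretrained("albert-xxlarge-v1")
# repeating layers, factorized embedding size, hidden size and attention heads listed above
print(config.num_hidden_layers, config.embedding_size, config.hidden_size, config.num_attention_heads)
# expected: 12 128 4096 64
```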
## Intended uses & limitations
You can use the raw model for either masked language modeling or sentence order prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT-2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-xxlarge-v1')
>>> unmasker("Hello I'm a [MASK] model.")
[
{
"sequence":"[CLS] hello i'm a modeling model.[SEP]",
"score":0.05816134437918663,
"token":12807,
"token_str":"▁modeling"
},
{
"sequence":"[CLS] hello i'm a modelling model.[SEP]",
"score":0.03748830780386925,
"token":23089,
"token_str":"▁modelling"
},
{
"sequence":"[CLS] hello i'm a model model.[SEP]",
"score":0.033725276589393616,
"token":1061,
"token_str":"▁model"
},
{
"sequence":"[CLS] hello i'm a runway model.[SEP]",
"score":0.017313428223133087,
"token":8014,
"token_str":"▁runway"
},
{
"sequence":"[CLS] hello i'm a lingerie model.[SEP]",
"score":0.014405295252799988,
"token":29104,
"token_str":"▁lingerie"
}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AlbertTokenizer, AlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v1')
model = AlbertModel.from_pretrained("albert-xxlarge-v1")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AlbertTokenizer, TFAlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v1')
model = TFAlbertModel.from_pretrained("albert-xxlarge-v1")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-xxlarge-v1')
>>> unmasker("The man worked as a [MASK].")
[
{
"sequence":"[CLS] the man worked as a chauffeur.[SEP]",
"score":0.029577180743217468,
"token":28744,
"token_str":"▁chauffeur"
},
{
"sequence":"[CLS] the man worked as a janitor.[SEP]",
"score":0.028865724802017212,
"token":29477,
"token_str":"▁janitor"
},
{
"sequence":"[CLS] the man worked as a shoemaker.[SEP]",
"score":0.02581118606030941,
"token":29024,
"token_str":"▁shoemaker"
},
{
"sequence":"[CLS] the man worked as a blacksmith.[SEP]",
"score":0.01849772222340107,
"token":21238,
"token_str":"▁blacksmith"
},
{
"sequence":"[CLS] the man worked as a lawyer.[SEP]",
"score":0.01820771023631096,
"token":3672,
"token_str":"▁lawyer"
}
]
>>> unmasker("The woman worked as a [MASK].")
[
{
"sequence":"[CLS] the woman worked as a receptionist.[SEP]",
"score":0.04604868218302727,
"token":25331,
"token_str":"▁receptionist"
},
{
"sequence":"[CLS] the woman worked as a janitor.[SEP]",
"score":0.028220869600772858,
"token":29477,
"token_str":"▁janitor"
},
{
"sequence":"[CLS] the woman worked as a paramedic.[SEP]",
"score":0.0261906236410141,
"token":23386,
"token_str":"▁paramedic"
},
{
"sequence":"[CLS] the woman worked as a chauffeur.[SEP]",
"score":0.024797942489385605,
"token":28744,
"token_str":"▁chauffeur"
},
{
"sequence":"[CLS] the woman worked as a waitress.[SEP]",
"score":0.024124596267938614,
"token":13678,
"token_str":"▁waitress"
}
]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
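
As an illustration (not part of the original card), the tokenizer reproduces this format when given a sentence pair:
```python
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert-xxlarge-v1")

# encoding a sentence pair inserts [CLS] ... [SEP] ... [SEP] automatically
encoded = tokenizer("Sentence A", "Sentence B")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
# expected to start with '[CLS]' and contain two '[SEP]' tokens
```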
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
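
A sketch of this 80/10/10 masking rule is shown below; it is an illustration in the style of the `transformers` language-modeling data collator, not the original training code, and assumes `inputs` is a batch of token-id tensors.
```python
import torch

def mask_tokens(inputs, tokenizer, mlm_probability=0.15):
    """Apply the 80/10/10 masking rule described above (illustrative, not the original code)."""
    labels = inputs.clone()
    # select 15% of the (non-special) tokens for prediction
    probability_matrix = torch.full(labels.shape, mlm_probability)
    special_tokens_mask = torch.tensor(
        [tokenizer.get_special_tokens_mask(seq, already_has_special_tokens=True) for seq in labels.tolist()],
        dtype=torch.bool,
    )
    probability_matrix.masked_fill_(special_tokens_mask, value=0.0)
    masked_indices = torch.bernoulli(probability_matrix).bool()
    labels[~masked_indices] = -100  # loss is only computed on masked tokens

    # 80% of the selected tokens are replaced by [MASK]
    indices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
    inputs[indices_replaced] = tokenizer.convert_tokens_to_ids(tokenizer.mask_token)

    # 10% are replaced by a random token (half of the remaining 20%)
    indices_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced
    inputs[indices_random] = torch.randint(len(tokenizer), labels.shape, dtype=torch.long)[indices_random]

    # the remaining 10% are left unchanged
    return inputs, labels
```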
## Evaluation results
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
| | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE |
|----------------|----------|----------|----------|----------|----------|----------|
|V2 |
|ALBERT-base |82.3 |90.2/83.2 |82.1/79.3 |84.6 |92.9 |66.8 |
|ALBERT-large |85.7 |91.8/85.2 |84.9/81.8 |86.5 |94.9 |75.2 |
|ALBERT-xlarge |87.9 |92.9/86.4 |87.9/84.1 |87.9 |95.4 |80.7 |
|ALBERT-xxlarge |90.9 |94.6/89.1 |89.8/86.9 |90.6 |96.8 |86.8 |
|V1 |
|ALBERT-base |80.1 |89.3/82.3 | 80.0/77.1|81.6 |90.3 | 64.0 |
|ALBERT-large |82.4 |90.6/83.9 | 82.3/79.4|83.5 |91.7 | 68.5 |
|ALBERT-xlarge |85.5 |92.5/86.1 | 86.1/83.1|86.4 |92.4 | 74.8 |
|ALBERT-xxlarge |91.0 |94.8/89.3 | 90.2/87.4|90.8 |96.9 | 86.5 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1909-11942,
author = {Zhenzhong Lan and
Mingda Chen and
Sebastian Goodman and
Kevin Gimpel and
Piyush Sharma and
Radu Soricut},
title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
Representations},
journal = {CoRR},
volume = {abs/1909.11942},
year = {2019},
url = {http://arxiv.org/abs/1909.11942},
archivePrefix = {arXiv},
eprint = {1909.11942},
timestamp = {Fri, 27 Sep 2019 13:04:21 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | 9,774 | [
[
-0.00984954833984375,
-0.0374755859375,
0.0181427001953125,
0.02398681640625,
-0.031494140625,
0.0007205009460449219,
0.00994873046875,
-0.0123291015625,
0.02789306640625,
0.04595947265625,
-0.04400634765625,
-0.034576416015625,
-0.061370849609375,
0.0099411... |