modelId stringlengths 4 111 | lastModified stringlengths 24 24 | tags list | pipeline_tag stringlengths 5 30 ⌀ | author stringlengths 2 34 ⌀ | config null | securityStatus null | id stringlengths 4 111 | likes int64 0 9.53k | downloads int64 2 73.6M | library_name stringlengths 2 84 ⌀ | created timestamp[us] | card stringlengths 101 901k | card_len int64 101 901k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
AnimeTest/rinotuna-man | 2023-03-07T17:39:23.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | AnimeTest | null | null | AnimeTest/rinotuna-man | 7 | 450 | diffusers | 2023-02-25T17:09:09 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Rinotuna-Man Dreambooth model trained by AnimeTest with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
### Tag Rinotuna ###
[Twitter Rinotuna](https://twitter.com/rinotuna)
[Artstation Rinotuna](https://www.artstation.com/rinotuna)
[other Rinotuna](https://linktr.ee/rinotuna)
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
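Since the repo is tagged `diffusers:StableDiffusionPipeline`, it can presumably also be loaded with the standard diffusers API. A minimal sketch (the prompt wording and the use of `rinotuna` as the trigger token are assumptions, not documented by the author):

```python
def build_prompt(subject: str, trigger: str = "rinotuna") -> str:
    # Hypothetical helper: prepend the assumed DreamBooth trigger token.
    return f"{trigger} style, {subject}"

if __name__ == "__main__":
    # Heavy imports kept inside the guard; requires GPU, diffusers and network access.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "AnimeTest/rinotuna-man", torch_dtype=torch.float16
    ).to("cuda")
    image = pipe(build_prompt("portrait of a young man")).images[0]
    image.save("rinotuna_sample.png")
```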
Sample pictures of this concept:
| 7,975 | [
[
-0.08355712890625,
-0.0244293212890625,
0.005443572998046875,
0.0288543701171875,
-0.0460205078125,
-0.00893402099609375,
-0.0037021636962890625,
-0.05511474609375,
0.0849609375,
0.016693115234375,
-0.0251922607421875,
-0.02874755859375,
-0.049774169921875,
... |
badmonk/mxho | 2023-07-03T09:48:58.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | badmonk | null | null | badmonk/mxho | 1 | 450 | diffusers | 2023-07-03T02:02:37 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
# Model Card for MXHO
## Model Description
- **Developed by:** BADMONK
- **Model type:** Dreambooth Model + Extracted LoRA
- **Language(s) (NLP):** EN
- **License:** Creativeml-Openrail-M
- **Parent Model:** Reliberate
# How to Get Started with the Model
Use the code below to get started with the model.
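The card promises code but none is actually included; below is a hedged sketch with diffusers (the prompt format and the use of `mxho` as the instance token are assumptions -- check the repository files for the exact trigger word):

```python
MODEL_ID = "badmonk/mxho"
TRIGGER = "mxho"  # assumed instance token, not confirmed by the card

def build_prompt(subject: str) -> str:
    # Hypothetical prompt format for a DreamBooth subject model.
    return f"photo of {TRIGGER} {subject}"

if __name__ == "__main__":
    # Heavy imports kept inside the guard; requires GPU, diffusers and network access.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        MODEL_ID, torch_dtype=torch.float16
    ).to("cuda")
    image = pipe(build_prompt("person, studio lighting")).images[0]
    image.save("mxho_sample.png")
```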
### MXHO ### | 418 | [
[
-0.00714874267578125,
-0.03515625,
0.033599853515625,
0.0012416839599609375,
-0.0750732421875,
-0.0006570816040039062,
0.055023193359375,
-0.0280609130859375,
0.043121337890625,
0.0653076171875,
-0.0599365234375,
-0.0582275390625,
-0.027679443359375,
-0.0262... |
timm/fastvit_sa24.apple_dist_in1k | 2023-08-23T21:05:13.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2303.14189",
"license:other",
"region:us"
] | image-classification | timm | null | null | timm/fastvit_sa24.apple_dist_in1k | 0 | 450 | timm | 2023-08-23T21:04:56 | ---
tags:
- image-classification
- timm
library_name: timm
license: other
datasets:
- imagenet-1k
---
# Model card for fastvit_sa24.apple_dist_in1k
A FastViT image classification model. Trained on ImageNet-1k with distillation by paper authors.
Please observe [original license](https://github.com/apple/ml-fastvit/blob/8af5928238cab99c45f64fc3e4e7b1516b8224ba/LICENSE).
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 21.6
- GMACs: 3.8
- Activations (M): 23.9
- Image size: 256 x 256
- **Papers:**
- FastViT: A Fast Hybrid Vision Transformer using Structural Reparameterization: https://arxiv.org/abs/2303.14189
- **Original:** https://github.com/apple/ml-fastvit
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('fastvit_sa24.apple_dist_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'fastvit_sa24.apple_dist_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 64, 64])
# torch.Size([1, 128, 32, 32])
# torch.Size([1, 256, 16, 16])
# torch.Size([1, 512, 8, 8])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'fastvit_sa24.apple_dist_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 512, 8, 8) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@inproceedings{vasufastvit2023,
author = {Pavan Kumar Anasosalu Vasu and James Gabriel and Jeff Zhu and Oncel Tuzel and Anurag Ranjan},
title = {FastViT: A Fast Hybrid Vision Transformer using Structural Reparameterization},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
year = {2023}
}
```
| 3,708 | [
[
-0.0423583984375,
-0.037750244140625,
0.0017957687377929688,
0.017730712890625,
-0.03253173828125,
-0.0157623291015625,
-0.008026123046875,
-0.0186309814453125,
0.0250701904296875,
0.026580810546875,
-0.03887939453125,
-0.04449462890625,
-0.050689697265625,
... |
Helsinki-NLP/opus-mt-mg-en | 2023-08-16T12:01:00.000Z | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"mg",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | Helsinki-NLP | null | null | Helsinki-NLP/opus-mt-mg-en | 0 | 449 | transformers | 2022-03-02T23:29:04 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-mg-en
* source languages: mg
* target languages: en
* OPUS readme: [mg-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mg-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/mg-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mg-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mg-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| GlobalVoices.mg.en | 27.6 | 0.522 |
| Tatoeba.mg.en | 50.2 | 0.607 |
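The card gives no usage snippet; a sketch with the transformers Marian classes (the sample sentence is an assumed Malagasy greeting, used for illustration only):

```python
MODEL_NAME = "Helsinki-NLP/opus-mt-mg-en"

def translate(texts, model, tokenizer):
    # Batch-translate a list of Malagasy sentences into English.
    batch = tokenizer(texts, return_tensors="pt", padding=True)
    generated = model.generate(**batch)
    return [tokenizer.decode(t, skip_special_tokens=True) for t in generated]

if __name__ == "__main__":
    # Heavy imports kept inside the guard; requires transformers and network access.
    from transformers import MarianMTModel, MarianTokenizer

    tokenizer = MarianTokenizer.from_pretrained(MODEL_NAME)
    model = MarianMTModel.from_pretrained(MODEL_NAME)
    print(translate(["Manao ahoana ianao?"], model, tokenizer))
```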
| 858 | [
[
-0.02117919921875,
-0.024017333984375,
0.0193328857421875,
0.026458740234375,
-0.0281982421875,
-0.026580810546875,
-0.030303955078125,
-0.006412506103515625,
0.0016574859619140625,
0.03326416015625,
-0.051422119140625,
-0.047637939453125,
-0.047943115234375,
... |
bolbolzaban/gpt2-persian | 2021-05-21T14:23:14.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"farsi",
"persian",
"fa",
"doi:10.57967/hf/1207",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | bolbolzaban | null | null | bolbolzaban/gpt2-persian | 15 | 449 | transformers | 2022-03-02T23:29:05 | ---
language: fa
license: apache-2.0
tags:
- farsi
- persian
---
# GPT2-Persian
bolbolzaban/gpt2-persian is a GPT-2 language model trained with hyperparameters similar to the standard gpt2-medium, with the following differences:
1. The context size is reduced from 1024 to 256 subwords in order to make the training affordable.
2. Instead of BPE, Google's SentencePiece tokenizer is used for tokenization.
3. The training dataset only includes Persian text. All non-Persian characters are replaced with special tokens (e.g. [LAT], [URL], [NUM]).
Please refer to this [blog post](https://medium.com/@khashei/a-not-so-dangerous-ai-in-the-persian-language-39172a641c84) for further detail.
Also try the model [here](https://huggingface.co/bolbolzaban/gpt2-persian?text=%D8%AF%D8%B1+%DB%8C%DA%A9+%D8%A7%D8%AA%D9%81%D8%A7%D9%82+%D8%B4%DA%AF%D9%81%D8%AA+%D8%A7%D9%86%DA%AF%DB%8C%D8%B2%D8%8C+%D9%BE%DA%98%D9%88%D9%87%D8%B4%DA%AF%D8%B1%D8%A7%D9%86) or on [Bolbolzaban.com](http://www.bolbolzaban.com/text).
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline, AutoTokenizer, GPT2LMHeadModel
tokenizer = AutoTokenizer.from_pretrained('bolbolzaban/gpt2-persian')
model = GPT2LMHeadModel.from_pretrained('bolbolzaban/gpt2-persian')
generator = pipeline('text-generation', model, tokenizer=tokenizer, config={'max_length':256})
sample = generator('در یک اتفاق شگفت انگیز، پژوهشگران')
```
If you are using TensorFlow, import TFGPT2LMHeadModel instead of GPT2LMHeadModel.
## Fine-tuning
Find a basic fine-tuning example on this [Github Repo](https://github.com/khashei/bolbolzaban-gpt2-persian).
## Special Tokens
gpt2-persian is trained for the purpose of research on Persian poetry. Because of that, all English words and numbers are replaced with special tokens, and only the standard Persian alphabet is used as part of the input text. Here is one example:
Original text: اگر آیفون یا آیپد شما دارای سیستم عامل iOS 14.3 یا iPadOS 14.3 یا نسخههای جدیدتر باشد
Text used in training: اگر آیفون یا آیپد شما دارای سیستم عامل [LAT] [NUM] یا [LAT] [NUM] یا نسخههای جدیدتر باشد
Please consider normalizing your input text using [Hazm](https://github.com/sobhe/hazm) or similar libraries and ensure only Persian characters are provided as input.
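The replacement described above can be sketched with a small regex-based helper (an illustration of the masking scheme, not the exact preprocessing used in training):

```python
import re

def mask_non_persian(text: str) -> str:
    # Replace runs of Latin letters with [LAT] and digit runs with [NUM],
    # approximately mirroring the special-token scheme described above.
    text = re.sub(r"[A-Za-z]+(?:\.[A-Za-z]+)*", "[LAT]", text)
    text = re.sub(r"\d+(?:\.\d+)*", "[NUM]", text)
    return text

print(mask_non_persian("iOS 14.3"))  # → [LAT] [NUM]
```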
If you want to use classical Persian poetry as input, use [BOM] (beginning of mesra) at the beginning of each verse (مصرع), followed by [EOS] (end of statement) at the end of each couplet (بیت).
See following links for example:
[[BOM] توانا بود](https://huggingface.co/bolbolzaban/gpt2-persian?text=%5BBOM%5D+%D8%AA%D9%88%D8%A7%D9%86%D8%A7+%D8%A8%D9%88%D8%AF)
[[BOM] توانا بود هر که دانا بود [BOM]](https://huggingface.co/bolbolzaban/gpt2-persian?text=%5BBOM%5D+%D8%AA%D9%88%D8%A7%D9%86%D8%A7+%D8%A8%D9%88%D8%AF+%D9%87%D8%B1+%DA%A9%D9%87+%D8%AF%D8%A7%D9%86%D8%A7+%D8%A8%D9%88%D8%AF+%5BBOM%5D)
[[BOM] توانا بود هر که دانا بود [BOM] ز دانش دل پیر](https://huggingface.co/bolbolzaban/gpt2-persian?text=%5BBOM%5D+%D8%AA%D9%88%D8%A7%D9%86%D8%A7+%D8%A8%D9%88%D8%AF+%D9%87%D8%B1+%DA%A9%D9%87+%D8%AF%D8%A7%D9%86%D8%A7+%D8%A8%D9%88%D8%AF+%5BBOM%5D+%D8%B2+%D8%AF%D8%A7%D9%86%D8%B4+%D8%AF%D9%84+%D9%BE%DB%8C%D8%B1)
[[BOM] توانا بود هر که دانا بود [BOM] ز دانش دل پیربرنا بود [EOS]](https://huggingface.co/bolbolzaban/gpt2-persian?text=%5BBOM%5D+%D8%AA%D9%88%D8%A7%D9%86%D8%A7+%D8%A8%D9%88%D8%AF+%D9%87%D8%B1+%DA%A9%D9%87+%D8%AF%D8%A7%D9%86%D8%A7+%D8%A8%D9%88%D8%AF+%5BBOM%5D+%D8%B2+%D8%AF%D8%A7%D9%86%D8%B4+%D8%AF%D9%84+%D9%BE%DB%8C%D8%B1%D8%A8%D8%B1%D9%86%D8%A7+%D8%A8%D9%88%D8%AF++%5BEOS%5D)
If you like to know about structure of classical Persian poetry refer to these [blog posts](https://medium.com/@khashei).
## Acknowledgment
This project is supported by Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).
## Citation and Reference
Please reference the "bolbolzaban.com" website if you are using gpt2-persian in your research or commercial application.
## Contacts
Please reach out on [Linkedin](https://www.linkedin.com/in/khashei/) or [Telegram](https://t.me/khasheia) if you have any questions or need any help using the model.
Follow [Bolbolzaban](http://bolbolzaban.com/about) on [Twitter](https://twitter.com/bolbol_zaban), [Telegram](https://t.me/bolbol_zaban) or [Instagram](https://www.instagram.com/bolbolzaban/) | 4,326 | [
[
-0.0281219482421875,
-0.0633544921875,
0.021148681640625,
0.019317626953125,
-0.048553466796875,
0.0060272216796875,
-0.035736083984375,
-0.029937744140625,
0.01532745361328125,
0.01068878173828125,
-0.03033447265625,
-0.045440673828125,
-0.0474853515625,
0.... |
cahya/t5-base-indonesian-summarization-cased | 2022-11-19T20:41:24.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"pipeline:summarization",
"summarization",
"id",
"dataset:id_liputan6",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | summarization | cahya | null | null | cahya/t5-base-indonesian-summarization-cased | 4 | 449 | transformers | 2022-03-02T23:29:05 | ---
language: id
tags:
- pipeline:summarization
- summarization
- t5
datasets:
- id_liputan6
---
# Indonesian T5 Summarization Base Model
Finetuned T5 base summarization model for Indonesian.
## Finetuning Corpus
`t5-base-indonesian-summarization-cased` model is based on `t5-base-bahasa-summarization-cased` by [huseinzol05](https://huggingface.co/huseinzol05), finetuned using [id_liputan6](https://huggingface.co/datasets/id_liputan6) dataset.
## Load Finetuned Model
```python
from transformers import T5Tokenizer, T5Model, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("cahya/t5-base-indonesian-summarization-cased")
model = T5ForConditionalGeneration.from_pretrained("cahya/t5-base-indonesian-summarization-cased")
```
## Code Sample
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("cahya/t5-base-indonesian-summarization-cased")
model = T5ForConditionalGeneration.from_pretrained("cahya/t5-base-indonesian-summarization-cased")
#
ARTICLE_TO_SUMMARIZE = ""
# generate summary
input_ids = tokenizer.encode(ARTICLE_TO_SUMMARIZE, return_tensors='pt')
summary_ids = model.generate(input_ids,
min_length=20,
max_length=80,
num_beams=10,
repetition_penalty=2.5,
length_penalty=1.0,
early_stopping=True,
no_repeat_ngram_size=2,
use_cache=True,
do_sample = True,
temperature = 0.8,
top_k = 50,
top_p = 0.95)
summary_text = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(summary_text)
```
Output:
```
```
| 1,671 | [
[
-0.024566650390625,
-0.034454345703125,
-0.0024204254150390625,
0.0386962890625,
-0.0416259765625,
-0.000400543212890625,
-0.0150909423828125,
-0.0013704299926757812,
0.0245819091796875,
0.035736083984375,
-0.030517578125,
-0.06396484375,
-0.054656982421875,
... |
cointegrated/rubert-tiny-bilingual-nli | 2023-10-06T11:57:57.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"rubert",
"russian",
"nli",
"rte",
"zero-shot-classification",
"ru",
"dataset:cointegrated/nli-rus-translated-v2021",
"endpoints_compatible",
"region:us"
] | zero-shot-classification | cointegrated | null | null | cointegrated/rubert-tiny-bilingual-nli | 2 | 449 | transformers | 2022-03-02T23:29:05 | ---
language: ru
pipeline_tag: zero-shot-classification
tags:
- rubert
- russian
- nli
- rte
- zero-shot-classification
widget:
- text: "Сервис отстойный, кормили невкусно"
candidate_labels: "Мне понравилось, Мне не понравилось"
hypothesis_template: "{}."
datasets:
- cointegrated/nli-rus-translated-v2021
---
# RuBERT-tiny for NLI (natural language inference)
This is the [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny) model fine-tuned to predict the logical relationship between two short texts: entailment or not entailment.
For more details, see the card for a related model: https://huggingface.co/cointegrated/rubert-base-cased-nli-threeway
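A usage sketch matching the widget configuration in the card's front matter (the labels and hypothesis template are taken from that metadata):

```python
HYPOTHESIS_TEMPLATE = "{}."  # from the card's widget configuration

if __name__ == "__main__":
    # Heavy imports kept inside the guard; requires transformers and network access.
    from transformers import pipeline

    classifier = pipeline(
        "zero-shot-classification",
        model="cointegrated/rubert-tiny-bilingual-nli",
    )
    result = classifier(
        "Сервис отстойный, кормили невкусно",  # "The service was lousy, the food was bad"
        candidate_labels=["Мне понравилось", "Мне не понравилось"],  # "I liked it" / "I didn't"
        hypothesis_template=HYPOTHESIS_TEMPLATE,
    )
    print(result["labels"][0])  # best-matching label
```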
| 683 | [
[
-0.0252532958984375,
-0.08660888671875,
0.043212890625,
0.0089569091796875,
-0.01560211181640625,
-0.004772186279296875,
-0.0277862548828125,
-0.0322265625,
0.041778564453125,
0.039093017578125,
-0.0450439453125,
-0.0121002197265625,
-0.0264892578125,
0.0066... |
flexudy/t5-base-multi-sentence-doctor | 2020-12-11T23:33:25.000Z | [
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | flexudy | null | null | flexudy/t5-base-multi-sentence-doctor | 39 | 449 | transformers | 2022-03-02T23:29:05 | 
# Sentence-Doctor
Sentence Doctor is a T5 model that attempts to correct errors or mistakes found in sentences. The model works on English, German and French text.
## 1. Problem:
Many NLP models depend on tasks like *Text Extraction Libraries, OCR, Speech to Text libraries* and **Sentence Boundary Detection**
As a consequence, errors caused by these tasks in your NLP pipeline can affect the quality of models in applications, especially since models are often trained on **clean** input.
## 2. Solution:
Here we provide a model that **attempts** to reconstruct sentences based on their context (surrounding text). The task is pretty straightforward:
* `Given an "erroneous" sentence, and its context, reconstruct the "intended" sentence`.
## 3. Use Cases:
* Attempt to repair noisy sentences that were extracted with OCR software or text extractors.
* Attempt to repair sentence boundaries.
* Example (in German): **Input: "und ich bin im**",
* Prefix_Context: "Hallo! Mein Name ist John", Postfix_Context: "Januar 1990 geboren."
* Output: "John und ich bin im Jahr 1990 geboren"
* Possibly sentence level spelling correction -- Although this is not the intended use.
* Input: "I went to church **las yesteday**" => Output: "I went to church last Sunday".
## 4. Disclaimer
Note how we always emphasize the word *attempt*. The current version of the model was only trained on **150K** sentences from the Tatoeba dataset: https://tatoeba.org/eng (50K per language -- En, Fr, De).
Hence, we strongly encourage you to finetune the model on your dataset. We might release a version trained on more data.
## 5. Datasets
We generated synthetic data from the Tatoeba dataset (https://tatoeba.org/eng) by randomly applying different transformations to words and characters based on some probabilities. The datasets are available in the data folder (where **sentence_doctor_dataset_300K** is a larger dataset with 100K sentences for each language).
## 6. Usage
### 6.1 Preprocessing
* Let us assume we have the following text (Note that there are no punctuation marks in the text):
```python
text = "That is my job I am a medical doctor I save lives"
```
* You decided to extract the sentences, and for some obscure reason, you obtained these sentences:
```python
sentences = ["That is my job I a", "m a medical doct", "I save lives"]
```
* You now wish to correct the sentence **"m a medical doct"**.
Here is the single preprocessing step for the model:
```python
input_text = "repair_sentence: " + sentences[1] + " context: {" + sentences[0] + "}{" + sentences[2] + "} </s>"
```
**Explanation**:<br/>
* We are telling the model to repair the sentence with the prefix "repair_sentence: "
* Then append the sentence we want to repair **sentence[1]** which is "m a medical doct"
* Next we give some context to the model. In this case, the context is some text that occurred before the sentence and some text that appeared after the sentence in the original text.
* To do that, we append the keyword "context :"
* Append **{sentence[0]}** "{That is my job I a}". (Note how it is surrounded by curly braces.)
* Append **{sentence[2]}** "{I save lives}".
* Finally, we tell the model this is the end of the input with `</s>`.
```python
print(input_text) # repair_sentence: m a medical doct context: {That is my job I a}{or I save lives} </s>
```
<br/>
**The context is optional**, so the input could also be ```repair_sentence: m a medical doct context: {}{} </s>```
### 6.2 Inference
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("flexudy/t5-base-multi-sentence-doctor")
model = AutoModelWithLMHead.from_pretrained("flexudy/t5-base-multi-sentence-doctor")
input_text = "repair_sentence: m a medical doct context: {That is my job I a}{or I save lives} </s>"
input_ids = tokenizer.encode(input_text, return_tensors="pt")
outputs = model.generate(input_ids, max_length=32, num_beams=1)
sentence = tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)
assert sentence == "I am a medical doctor."
```
## 7. Fine-tuning
We also provide a script `train_any_t5_task.py` that might help you fine-tune any Text2Text task with T5. We added #TODO comments throughout to help you train with ease. For example:
```python
# TODO Set your training epochs
config.TRAIN_EPOCHS = 3
```
If you don't want to read the #TODO comments, just pass in your data like this
```python
# TODO Where is your data ? Enter the path
trainer.start("data/sentence_doctor_dataset_300.csv")
```
and voila!! Please feel free to correct any mistakes in the code and make a pull request.
## 8. Attribution
* [Huggingface](https://huggingface.co/) transformer lib for making this possible
* Abhishek Kumar Mishra's transformer [tutorial](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_summarization_wandb.ipynb) on text summarisation. Our training code is just a modified version of their code. So many thanks.
* We finetuned this model from the huggingface hub: WikinewsSum/t5-base-multi-combine-wiki-news. Thanks to the [authors](https://huggingface.co/WikinewsSum)
* We also read a lot of work from [Suraj Patil](https://github.com/patil-suraj)
* No one has been forgotten, hopefully :)
| 5,344 | [
[
0.0086822509765625,
-0.05999755859375,
0.0457763671875,
0.0079345703125,
-0.007061004638671875,
-0.019683837890625,
-0.01183319091796875,
-0.0211639404296875,
0.017730712890625,
0.036346435546875,
-0.04150390625,
-0.035858154296875,
-0.04656982421875,
0.0369... |
monologg/koelectra-base-v2-discriminator | 2021-10-20T16:54:30.000Z | [
"transformers",
"pytorch",
"electra",
"pretraining",
"korean",
"ko",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | monologg | null | null | monologg/koelectra-base-v2-discriminator | 1 | 449 | transformers | 2022-03-02T23:29:05 | ---
language: ko
license: apache-2.0
tags:
- korean
---
# KoELECTRA v2 (Base Discriminator)
Pretrained ELECTRA Language Model for Korean (`koelectra-base-v2-discriminator`)
For more detail, please see [original repository](https://github.com/monologg/KoELECTRA/blob/master/README_EN.md).
## Usage
### Load model and tokenizer
```python
>>> from transformers import ElectraModel, ElectraTokenizer
>>> model = ElectraModel.from_pretrained("monologg/koelectra-base-v2-discriminator")
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-v2-discriminator")
```
### Tokenizer example
```python
>>> from transformers import ElectraTokenizer
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-v2-discriminator")
>>> tokenizer.tokenize("[CLS] 한국어 ELECTRA를 공유합니다. [SEP]")
['[CLS]', '한국어', 'EL', '##EC', '##TRA', '##를', '공유', '##합니다', '.', '[SEP]']
>>> tokenizer.convert_tokens_to_ids(['[CLS]', '한국어', 'EL', '##EC', '##TRA', '##를', '공유', '##합니다', '.', '[SEP]'])
[2, 5084, 16248, 3770, 19059, 29965, 2259, 10431, 5, 3]
```
## Example using ElectraForPreTraining
```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizer
discriminator = ElectraForPreTraining.from_pretrained("monologg/koelectra-base-v2-discriminator")
tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-v2-discriminator")
sentence = "나는 방금 밥을 먹었다."
fake_sentence = "나는 내일 밥을 먹었다."
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
print(list(zip(fake_tokens, predictions.tolist()[1:-1])))
```
| 1,749 | [
[
-0.0168609619140625,
-0.0233917236328125,
0.003833770751953125,
0.0264892578125,
-0.049957275390625,
0.0193023681640625,
-0.003002166748046875,
0.005931854248046875,
0.0214691162109375,
0.040374755859375,
-0.03076171875,
-0.04150390625,
-0.04217529296875,
0.... |
hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax | 2023-10-27T12:08:24.000Z | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"en",
"de",
"fr",
"fi",
"sv",
"nl",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | hmbyt5-preliminary | null | null | hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax | 0 | 449 | transformers | 2023-04-29T09:13:48 | ---
license: mit
language:
- en
- de
- fr
- fi
- sv
- nl
---
# hmByT5 - Preliminary Language Models
Preliminary Historic Multilingual and Monolingual ByT5 Models. The following languages are currently covered:
* English (British Library Corpus - Books)
* German (Europeana Newspaper)
* French (Europeana Newspaper)
* Finnish (Europeana Newspaper)
* Swedish (Europeana Newspaper)
* Dutch (Delpher Corpus)
More details can be found in [our GitHub repository](https://github.com/stefan-it/hmByT5).
# Pretraining
We use the official JAX/FLAX example in Hugging Face Transformers to pretrain a ByT5 model on a single v3-8 TPU.
Details about the training can be found [here](https://github.com/stefan-it/hmByT5/tree/main/hmbyt5-flax).
This model was trained with `mean_noise_span_length=20` for one epoch.
# Evaluation on Downstream Tasks (NER)
See detailed results at [hmLeaderboard](https://huggingface.co/spaces/stefan-it/hmLeaderboard).
# Acknowledgements
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
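As a ByT5 checkpoint, the model operates directly on UTF-8 bytes rather than a learned subword vocabulary. A sketch of the id scheme plus loading (the byte-to-id helper is illustrative, mirroring how transformers' ByT5 tokenizer assigns ids):

```python
def byt5_token_ids(text: str) -> list[int]:
    # ByT5 maps each UTF-8 byte b to token id b + 3;
    # ids 0, 1 and 2 are reserved for pad, eos and unk.
    return [b + 3 for b in text.encode("utf-8")]

print(byt5_token_ids("Ab"))  # → [68, 101]

if __name__ == "__main__":
    # Heavy import kept inside the guard; requires transformers and network access.
    from transformers import T5ForConditionalGeneration

    model = T5ForConditionalGeneration.from_pretrained(
        "hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax"
    )
    print(model.config.vocab_size)
```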
| 1,128 | [
[
-0.042755126953125,
-0.039215087890625,
0.0262298583984375,
0.0328369140625,
-0.0098114013671875,
0.00212860107421875,
-0.0273590087890625,
-0.058929443359375,
0.0194549560546875,
0.034027099609375,
-0.06793212890625,
-0.03521728515625,
-0.0255126953125,
0.0... |
TheBloke/Xwin-LM-7B-V0.1-GPTQ | 2023-09-27T12:53:52.000Z | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:llama2",
"text-generation-inference",
"region:us"
] | text-generation | TheBloke | null | null | TheBloke/Xwin-LM-7B-V0.1-GPTQ | 7 | 449 | transformers | 2023-09-21T08:31:10 | ---
license: llama2
model_name: Xwin-LM 7B V0.1
base_model: Xwin-LM/Xwin-LM-7b-V0.1
inference: false
model_creator: Xwin-LM
model_type: llama
prompt_template: 'A chat between a curious user and an artificial intelligence assistant.
The assistant gives helpful, detailed, and polite answers to the user''s questions.
USER: {prompt} ASSISTANT:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Xwin-LM 7B V0.1 - GPTQ
- Model creator: [Xwin-LM](https://huggingface.co/Xwin-LM)
- Original model: [Xwin-LM 7B V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-7b-V0.1)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Xwin-LM's Xwin-LM 7B V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-7b-V0.1).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Xwin-LM-7B-V0.1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Xwin-LM-7B-V0.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Xwin-LM-7B-V0.1-GGUF)
* [Xwin-LM's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Xwin-LM/Xwin-LM-7b-V0.1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Vicuna
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Xwin-LM-7B-V0.1-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 3.90 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Xwin-LM-7B-V0.1-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.28 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Xwin-LM-7B-V0.1-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.01 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Xwin-LM-7B-V0.1-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.16 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/Xwin-LM-7B-V0.1-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.62 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
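Each branch also records these parameters in its `quantize_config.json`, which loaders read automatically. Below is a minimal sketch of inspecting such a file — the key names (`bits`, `group_size`, `desc_act`, `damp_percent`) follow AutoGPTQ's convention, and the sample values simply mirror the `main` branch row of the table, so verify against the file in the branch you actually download:

```python
import json

# Illustrative quantize_config.json contents (values mirror the `main`
# branch in the table above; key names follow AutoGPTQ's convention).
example = '{"bits": 4, "group_size": 128, "desc_act": true, "damp_percent": 0.1}'

cfg = json.loads(example)
print(f"{cfg['bits']}-bit, group size {cfg['group_size']}, "
      f"act order: {cfg['desc_act']}, damp: {cfg['damp_percent']}")
```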
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/Xwin-LM-7B-V0.1-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Xwin-LM-7B-V0.1-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `Xwin-LM-7B-V0.1-GPTQ`:
```shell
mkdir Xwin-LM-7B-V0.1-GPTQ
huggingface-cli download TheBloke/Xwin-LM-7B-V0.1-GPTQ --local-dir Xwin-LM-7B-V0.1-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir Xwin-LM-7B-V0.1-GPTQ
huggingface-cli download TheBloke/Xwin-LM-7B-V0.1-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir Xwin-LM-7B-V0.1-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir Xwin-LM-7B-V0.1-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Xwin-LM-7B-V0.1-GPTQ --local-dir Xwin-LM-7B-V0.1-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Xwin-LM-7B-V0.1-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Xwin-LM-7B-V0.1-GPTQ`.
    - To download from a specific branch, enter for example `TheBloke/Xwin-LM-7B-V0.1-GPTQ:gptq-4bit-32g-actorder_True`
    - see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Xwin-LM-7B-V0.1-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
    * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers optimum
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.4.2
pip3 install .
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Xwin-LM-7B-V0.1-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Xwin-LM's Xwin-LM 7B V0.1
<h3 align="center">
Xwin-LM: Powerful, Stable, and Reproducible LLM Alignment
</h3>
<p align="center">
<a href="https://github.com/Xwin-LM/Xwin-LM"><img src="https://img.shields.io/badge/GitHub-yellow.svg?style=social&logo=github"></a><a href="https://huggingface.co/Xwin-LM"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue"></a>
</p>
**Step up your LLM alignment with Xwin-LM!**
Xwin-LM aims to develop and open-source alignment technologies for large language models, including supervised fine-tuning (SFT), reward models (RM), rejection sampling, reinforcement learning from human feedback (RLHF), and more. Our first release, built upon the Llama 2 base models, ranked **TOP-1** on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/). Notably, it's **the first to surpass GPT-4** on this benchmark. The project will be continuously updated.
## News
- 💥 [Sep, 2023] We released [Xwin-LM-70B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1), which has achieved a win-rate of **95.57%** against Davinci-003 on the [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) benchmark, ranking **TOP-1** on AlpacaEval. **It was the FIRST model to surpass GPT-4** on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/). Note also that its win-rate vs. GPT-4 is **60.61%**.
- 🔍 [Sep, 2023] RLHF plays a crucial role in the strong performance of the Xwin-LM-V0.1 release!
- 💥 [Sep, 2023] We released [Xwin-LM-13B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.1), which has achieved **91.76%** win-rate on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/), ranking as **top-1** among all 13B models.
- 💥 [Sep, 2023] We released [Xwin-LM-7B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-7B-V0.1), which has achieved **87.82%** win-rate on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/), ranking as **top-1** among all 7B models.
## Model Card
| Model | Checkpoint | Report | License |
|------------|------------|-------------|------------------|
|Xwin-LM-7B-V0.1| 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-LM-7B-V0.1" target="_blank">HF Link</a> | 📃**Coming soon (Stay tuned)** | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License|
|Xwin-LM-13B-V0.1| 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.1" target="_blank">HF Link</a> | | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License|
|Xwin-LM-70B-V0.1| 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1" target="_blank">HF Link</a> | | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License|
## Benchmarks
### Xwin-LM performance on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/).
The table below displays the performance of Xwin-LM on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/), which evaluates win-rate against Text-Davinci-003 across 805 questions. To provide a comprehensive evaluation, we present, for the first time, the win-rate against ChatGPT and GPT-4 as well. Our Xwin-LM model family establishes new state-of-the-art performance across all metrics. Notably, Xwin-LM-70B-V0.1 has eclipsed GPT-4 for the first time, achieving an impressive win-rate of **95.57%** against Text-Davinci-003 and **60.61%** against GPT-4.
| **Model** | **AlpacaEval (winrate %)** | **AlpacaEval (winrate %)** |**AlpacaEval (winrate %)** |
|----------------------------------|------------|----------|-------------|
| | **v.s. Text-Davinci-003** | **v.s. ChatGPT** | **v.s. GPT4**|
| **Xwin-LM-70B-V0.1** | **95.57** | **87.50** | **60.61** |
| GPT-4 | 95.28 | 84.66 | 50.00 |
| WizardLM-70B-V1.0 | 92.91 | 80.19 | 46.70 |
| Llama-2-70B-Chat | 92.66 | 81.97 | 51.19 |
| **Xwin-LM-13B-V0.1** | **91.76** | **81.79** | **55.30** |
| ChatGPT | 89.37 | 50.00 | 16.60 |
| WizardLM-13B-V1.2 | 89.17 | 75.75 | 41.91 |
| **Xwin-LM-7B-V0.1** | **87.35** | **76.40** | **47.57** |
| Llama-2-13B-Chat | 81.09 | 64.22 | 30.92 |
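For intuition, a win-rate here is just the share of the 805 pairwise comparisons a model wins against the reference. A dependency-free sketch of the arithmetic — counting a tie as half a win is an assumption for illustration, so check AlpacaEval's documentation for the benchmark's exact convention:

```python
def win_rate(outcomes):
    """Pairwise win-rate in percent.

    `outcomes` is a list of 'win' / 'tie' / 'loss' judgements against a
    reference model. Ties count as half a win here -- an assumption for
    illustration; the benchmark's exact convention may differ.
    """
    score = sum(1.0 if o == "win" else 0.5 if o == "tie" else 0.0
                for o in outcomes)
    return 100.0 * score / len(outcomes)

# 3 wins, 1 tie, 1 loss out of 5 comparisons -> 70.0
print(win_rate(["win", "win", "win", "tie", "loss"]))
```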
### Case study on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/)
| Prompts | GPT-4 response | Xwin-LM response |
|----------------|------------|--------|
|I've put a plate on top of an apple, and take the plate from kitchen to dinning room. Where is the apple now|The apple is now in the dining room, as it was underneath the plate when you moved the plate from the kitchen to the dining room.|The apple is still in the kitchen, as you have only moved the plate. The apple is under the plate, so if you want to bring the apple to the dining room, you should either move the apple directly or put the apple on the plate and then move the plate to the dining room.|
| Answer the following question.<br> How do you say "good evening" in French. | In French, "good evening" is said as "bonsoir." | In French, "good evening" is translated to "bonsoir." This greeting is used in the evening and nighttime hours, typically after 6 p.m. It is a combination of "bon," which means "good," and "soir," which means "evening." To pronounce it, say "bone-swahr." |
### Xwin-LM performance on NLP foundation tasks.
The following table provides a comparison of Xwin-LMs with other LLMs on NLP foundation tasks in [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
| Model | MMLU 5-shot | ARC 25-shot | TruthfulQA 0-shot | HellaSwag 10-shot | Average |
|------------------|-------------|-------------|-------------------|-------------------|------------|
| Text-davinci-003 | 56.9 | **85.2** | 59.3 | 82.2 | 70.9 |
|Vicuna-13b 1.1 | 51.3 | 53.0 | 51.8 | 80.1 | 59.1 |
|Guanaco 30B | 57.6 | 63.7 | 50.7 | 85.1 | 64.3 |
| WizardLM-7B 1.0 | 42.7 | 51.6 | 44.7 | 77.7 | 54.2 |
| WizardLM-13B 1.0 | 52.3 | 57.2 | 50.5 | 81.0 | 60.2 |
| WizardLM-30B 1.0 | 58.8 | 62.5 | 52.4 | 83.3 | 64.2|
| Llama-2-7B-Chat | 48.3 | 52.9 | 45.6 | 78.6 | 56.4 |
| Llama-2-13B-Chat | 54.6 | 59.0 | 44.1 | 81.9 | 59.9 |
| Llama-2-70B-Chat | 63.9 | 64.6 | 52.8 | 85.9 | 66.8 |
| **Xwin-LM-7B-V0.1** | 49.7 | 56.2 | 48.1 | 79.5 | 58.4 |
| **Xwin-LM-13B-V0.1** | 56.6 | 62.4 | 45.5 | 83.0 | 61.9 |
| **Xwin-LM-70B-V0.1** | **69.6** | 70.5 | **60.1** | **87.1** | **71.8** |
## Inference
### Conversation templates
To obtain desired results, please strictly follow the conversation templates when utilizing our model for inference. Our model adopts the prompt format established by [Vicuna](https://github.com/lm-sys/FastChat) and is equipped to support **multi-turn** conversations.
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi! ASSISTANT: Hello.</s>USER: Who are you? ASSISTANT: I am Xwin-LM.</s>......
```
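The multi-turn format above can be assembled programmatically. A minimal sketch — the system sentence and the `</s>` separator are copied from the template above, while the helper function itself is illustrative:

```python
SYSTEM = ("A chat between a curious user and an artificial intelligence "
          "assistant. The assistant gives helpful, detailed, and polite "
          "answers to the user's questions.")

def build_conversation(turns):
    """Build a Vicuna-style multi-turn prompt.

    `turns` is a list of (user, assistant) pairs; pass None as the last
    assistant reply to leave the prompt open for generation.
    """
    parts = [SYSTEM, " "]
    for user, assistant in turns:
        parts.append(f"USER: {user} ASSISTANT:")
        if assistant is not None:
            parts.append(f" {assistant}</s>")
    return "".join(parts)

print(build_conversation([("Hi!", "Hello."), ("Who are you?", None)]))
```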
### HuggingFace Example
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("Xwin-LM/Xwin-LM-7B-V0.1")
tokenizer = AutoTokenizer.from_pretrained("Xwin-LM/Xwin-LM-7B-V0.1")
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: Hello, can you help me? "
    "ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt")
samples = model.generate(**inputs, max_new_tokens=4096, temperature=0.7)
output = tokenizer.decode(samples[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(output)
# Of course! I'm here to help. Please feel free to ask your question or describe the issue you're having, and I'll do my best to assist you.
```
### vllm Example
Because Xwin-LM is based on Llama 2, it also supports rapid inference with [vllm](https://github.com/vllm-project/vllm). Please refer to [vllm](https://github.com/vllm-project/vllm) for detailed installation instructions.
```python
from vllm import LLM, SamplingParams
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: Hello, can you help me? "
    "ASSISTANT:"
)
sampling_params = SamplingParams(temperature=0.7, max_tokens=4096)
llm = LLM(model="Xwin-LM/Xwin-LM-7B-V0.1")
outputs = llm.generate([prompt,], sampling_params)
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(generated_text)
```
## TODO
- [ ] Release the source code
- [ ] Release more capabilities, such as math and reasoning.
## Citation
Please consider citing our work if you use the data or code in this repo.
```
@software{xwin-lm,
title = {Xwin-LM},
author = {Xwin-LM Team},
url = {https://github.com/Xwin-LM/Xwin-LM},
version = {pre-release},
year = {2023},
month = {9},
}
```
## Acknowledgements
Thanks to [Llama 2](https://ai.meta.com/llama/), [FastChat](https://github.com/lm-sys/FastChat), [AlpacaFarm](https://github.com/tatsu-lab/alpaca_farm), and [vllm](https://github.com/vllm-project/vllm).
BAAI/JudgeLM-13B-v1.0 | last modified: 2023-10-27T11:57:06 | tags: transformers, pytorch, llama, text-generation, instruction-finetuning, en, arxiv:2310.17631, text-generation-inference | pipeline: text-generation | author: BAAI | likes: 3 | downloads: 449 | library: transformers | created: 2023-10-27T11:00:33
---
inference: false
language:
- en
tags:
- instruction-finetuning
pretty_name: JudgeLM-100K
task_categories:
- text-generation
---
<br>
# JudgeLM Model Card
## Model Details
JudgeLM is a judge model trained by fine-tuning Vicuna on the JudgeLM-100K dataset.
- **Developed by:** [HUST](https://english.hust.edu.cn/), [BAAI](https://www.baai.ac.cn/english.html)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [Vicuna](https://vicuna.lmsys.org).
### Model Sources
- **Repository:** https://github.com/baaivision/JudgeLM
- **Paper:** https://arxiv.org/abs/2310.17631
- **Demo:** http://218.91.113.230:9004/
## Uses
The primary use of JudgeLM is research on evaluating the performance of large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## How to Get Started with the Model
- Judge large language models with this model: https://github.com/baaivision/JudgeLM/tree/main/judgelm/llm_judge.
- Serve this model with the gradio: https://github.com/baaivision/JudgeLM/tree/main/judgelm/serve.
## Training Details
JudgeLM v1.0 is fine-tuned from Vicuna-v1.3 with supervised instruction fine-tuning.
The training data is around 200K judge samples from [JudgeLM-100K dataset](https://huggingface.co/datasets/BAAI/JudgeLM-100K).
See more details in the "Fine-tuning Settings" section in the appendix of this [paper](https://arxiv.org/abs/2310.17631).
## Evaluation
JudgeLM is evaluated on the JudgeLM val set, with judgements produced by a GPT-4 teacher. See more details in this [paper](https://arxiv.org/abs/2310.17631) and try it with the [code](https://github.com/baaivision/JudgeLM/tree/main/judgelm/llm_judge).
## Additional Information
### Citation Information
```
@article{zhu2023judgelm,
title={JudgeLM: Fine-tuned Large Language Models are Scalable Judges},
author={Lianghui Zhu and Xinggang Wang and Xinlong Wang},
year={2023},
eprint={2310.17631},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
YuxinJiang/unsup-promcse-bert-base-uncased | last modified: 2023-04-05T14:03:55 | tags: transformers, pytorch, bert, arxiv:2203.06875, arxiv:1908.10084, license:mit, endpoints_compatible | author: YuxinJiang | likes: 1 | downloads: 448 | library: transformers | created: 2023-01-14T01:18:28
---
license: mit
---
# PromCSE: Improved Universal Sentence Embeddings with Prompt-based Contrastive Learning and Energy-based Learning
[](https://colab.research.google.com/drive/1lanXViJzbmGM1bwm8AflNUKmrvDidg_3?usp=sharing)
arXiv link: https://arxiv.org/abs/2203.06875v2
Published in [**EMNLP 2022**](https://2022.emnlp.org/)
Our code is modified based on [SimCSE](https://github.com/princeton-nlp/SimCSE) and [P-tuning v2](https://github.com/THUDM/P-tuning-v2/). Here we would like to sincerely thank them for their excellent works.
## Model List
We have released our supervised and unsupervised models on Hugging Face, which achieve **Top-1** results on 1 domain-shifted STS task and 4 standard STS tasks:
[](https://paperswithcode.com/sota/semantic-textual-similarity-on-cxc?p=deep-continuous-prompt-for-contrastive-1)
[](https://paperswithcode.com/sota/semantic-textual-similarity-on-sick?p=deep-continuous-prompt-for-contrastive-1)
[](https://paperswithcode.com/sota/semantic-textual-similarity-on-sts12?p=deep-continuous-prompt-for-contrastive-1)
[](https://paperswithcode.com/sota/semantic-textual-similarity-on-sts13?p=deep-continuous-prompt-for-contrastive-1)
[](https://paperswithcode.com/sota/semantic-textual-similarity-on-sts14?p=deep-continuous-prompt-for-contrastive-1)
[](https://paperswithcode.com/sota/semantic-textual-similarity-on-sts16?p=deep-continuous-prompt-for-contrastive-1)
[](https://paperswithcode.com/sota/semantic-textual-similarity-on-sts15?p=deep-continuous-prompt-for-contrastive-1)
<!-- <img src="https://github.com/YJiangcm/DCPCSE/blob/master/figure/leaderboard.png" width="700" height="380"> -->
| Model | STS12 | STS13 | STS14 | STS15 | STS16 | STS-B | SICK-R | Avg. |
|:-----------------------:|:-----:|:----------:|:---------:|:-----:|:-----:|:-----:|:-----:|:-----:|
| [YuxinJiang/unsup-promcse-bert-base-uncased](https://huggingface.co/YuxinJiang/unsup-promcse-bert-base-uncased) | 73.03 |85.18| 76.70| 84.19 |79.69| 80.62| 70.00| 78.49|
| [YuxinJiang/sup-promcse-roberta-base](https://huggingface.co/YuxinJiang/sup-promcse-roberta-base) | 76.75 |85.86| 80.98| 86.51 |83.51| 86.58| 80.41| 82.94|
| [YuxinJiang/sup-promcse-roberta-large](https://huggingface.co/YuxinJiang/sup-promcse-roberta-large) | 79.14 |88.64| 83.73| 87.33 |84.57| 87.84| 82.07| 84.76|
**Naming rules**: `unsup` and `sup` represent "unsupervised" (trained on Wikipedia corpus) and "supervised" (trained on NLI datasets) respectively.
## Usage
[](https://colab.research.google.com/drive/1lanXViJzbmGM1bwm8AflNUKmrvDidg_3?usp=sharing)
We provide an easy-to-use python package `promcse` which contains the following functions:
**(1) encode sentences into embedding vectors;
(2) compute cosine similarities between sentences;
(3) given queries, retrieve the top-k semantically similar sentences for each query.**
To use the tool, first install the `promcse` package from [PyPI](https://pypi.org/project/promcse/)
```bash
pip install promcse
```
After installing the package, you can load our model by two lines of code
```python
from promcse import PromCSE
model = PromCSE("YuxinJiang/unsup-promcse-bert-base-uncased", "cls_before_pooler", 16)
# model = PromCSE("YuxinJiang/sup-promcse-roberta-base")
# model = PromCSE("YuxinJiang/sup-promcse-roberta-large")
```
Then you can use our model for **encoding sentences into embeddings**
```python
embeddings = model.encode("A woman is reading.")
```
**Compute the cosine similarities** between two groups of sentences
```python
sentences_a = ['A woman is reading.', 'A man is playing a guitar.']
sentences_b = ['He plays guitar.', 'A woman is making a photo.']
similarities = model.similarity(sentences_a, sentences_b)
```
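Under the hood, these similarity scores reduce to the cosine of the angle between embedding vectors. A dependency-free sketch on toy vectors (real PromCSE embeddings come from `model.encode` and are much higher-dimensional):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-d "embeddings" for illustration only.
a = [1.0, 0.0, 1.0]
b = [1.0, 0.0, 1.0]
c = [0.0, 1.0, 0.0]
print(cosine(a, b))  # identical direction -> ~1.0 (up to float rounding)
print(cosine(a, c))  # orthogonal vectors -> 0.0
```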
Or build index for a group of sentences and **search** among them
```python
sentences = ['A woman is reading.', 'A man is playing a guitar.']
model.build_index(sentences)
results = model.search("He plays guitar.")
```
## Train PromCSE
In the following section, we describe how to train a PromCSE model by using our code.
### Setups
[](https://www.python.org/downloads/release/python-382/)
[](https://pytorch.org/get-started/previous-versions/)
Run the following script to install the remaining dependencies,
```bash
pip install -r requirements.txt
```
### Evaluation
[](https://colab.research.google.com/drive/1lanXViJzbmGM1bwm8AflNUKmrvDidg_3?usp=sharing)
Our evaluation code for sentence embeddings is based on a modified version of [SentEval](https://github.com/facebookresearch/SentEval). It evaluates sentence embeddings on semantic textual similarity (STS) tasks and downstream transfer tasks. For STS tasks, our evaluation takes the "all" setting and reports Spearman's correlation. The STS tasks include seven standard STS tasks (STS12-16, STSB, SICK-R) and one domain-shifted STS task (CxC).
Before evaluation, please download the evaluation datasets by running
```bash
cd SentEval/data/downstream/
bash download_dataset.sh
```
To evaluate the domain shift robustness of sentence embedding, we need to download [CxC](https://drive.google.com/drive/folders/1ZnRlVlc4kFsKbaWj9cFbb8bQU0fxzz1c?usp=sharing), and put the data into *SentEval/data/downstream/CocoCXC*
Then return to the root directory; you can evaluate the trained models using our evaluation code. For example,
```bash
python evaluation.py \
--model_name_or_path YuxinJiang/sup-promcse-roberta-large \
--pooler_type cls \
--task_set sts \
--mode test \
--pre_seq_len 10
```
which is expected to output the results in a tabular format:
```
------ test ------
+-------+-------+-------+-------+-------+--------------+-----------------+-------+
| STS12 | STS13 | STS14 | STS15 | STS16 | STSBenchmark | SICKRelatedness | Avg. |
+-------+-------+-------+-------+-------+--------------+-----------------+-------+
| 79.14 | 88.64 | 83.73 | 87.33 | 84.57 | 87.84 | 82.07 | 84.76 |
+-------+-------+-------+-------+-------+--------------+-----------------+-------+
```
Arguments for the evaluation script are as follows,
* `--model_name_or_path`: The name or path of a `transformers`-based pre-trained checkpoint.
* `--pooler_type`: Pooling method. Now we support
* `cls` (default): Use the representation of `[CLS]` token. A linear+activation layer is applied after the representation (it's in the standard BERT implementation). If you use **supervised PromCSE**, you should use this option.
* `cls_before_pooler`: Use the representation of `[CLS]` token without the extra linear+activation. If you use **unsupervised PromCSE**, you should take this option.
* `avg`: Average embeddings of the last layer. If you use checkpoints of SBERT/SRoBERTa ([paper](https://arxiv.org/abs/1908.10084)), you should use this option.
* `avg_top2`: Average embeddings of the last two layers.
* `avg_first_last`: Average embeddings of the first and last layers. If you use vanilla BERT or RoBERTa, this works the best.
* `--mode`: Evaluation mode
* `test` (default): The default test mode. To faithfully reproduce our results, you should use this option.
* `dev`: Report the development set results. Note that in STS tasks, only `STS-B` and `SICK-R` have development sets, so we only report their numbers. It also takes a fast mode for transfer tasks, so the running time is much shorter than the `test` mode (though numbers are slightly lower).
* `fasttest`: It is the same as `test`, but with a fast mode so the running time is much shorter, but the reported numbers may be lower (only for transfer tasks).
* `--task_set`: What set of tasks to evaluate on (if set, it will override `--tasks`)
* `sts` (default): Evaluate on STS tasks, including `STS 12~16`, `STS-B` and `SICK-R`. This is the most commonly-used set of tasks to evaluate the quality of sentence embeddings.
* `cococxc`: Evaluate on the domain-shifted CXC task.
* `transfer`: Evaluate on transfer tasks.
* `full`: Evaluate on both STS and transfer tasks.
* `na`: Manually set tasks by `--tasks`.
* `--tasks`: Specify which dataset(s) to evaluate on. Will be overridden if `--task_set` is not `na`. See the code for a full list of tasks.
* `--pre_seq_len`: The length of deep continuous prompt.
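The averaging pooler options above can be illustrated with a small pure-Python sketch. The nested lists and numbers below are toy values (the real code pools `transformers` hidden-state tensors of shape `[batch, seq_len, hidden]`):

```python
# Toy hidden states: layers x tokens x hidden_dim (hypothetical numbers).
hidden_states = [
    [[1.0, 2.0], [3.0, 4.0]],  # first layer
    [[2.0, 2.0], [2.0, 6.0]],  # second-to-last layer
    [[3.0, 3.0], [5.0, 7.0]],  # last layer
]

def mean_tokens(layer):
    """Average the token vectors within one layer."""
    dim = len(layer[0])
    return [sum(tok[d] for tok in layer) / len(layer) for d in range(dim)]

def avg(hs):
    """`avg`: mean over tokens of the last layer."""
    return mean_tokens(hs[-1])

def avg_top2(hs):
    """`avg_top2`: mean of the last two layers' token averages."""
    a, b = mean_tokens(hs[-1]), mean_tokens(hs[-2])
    return [(x + y) / 2 for x, y in zip(a, b)]

def avg_first_last(hs):
    """`avg_first_last`: mean of the first and last layers' token averages."""
    a, b = mean_tokens(hs[0]), mean_tokens(hs[-1])
    return [(x + y) / 2 for x, y in zip(a, b)]

print(avg(hidden_states))             # [4.0, 5.0]
print(avg_top2(hidden_states))        # [3.0, 4.5]
print(avg_first_last(hidden_states))  # [3.0, 4.0]
```

The `cls` and `cls_before_pooler` options instead take the first token of the last layer (`hidden_states[-1][0]`), with or without the extra linear+activation.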
### Training
**Data**
Following SimCSE, we use the same datasets to train our unsupervised models and supervised models. You can run `data/download_wiki.sh` and `data/download_nli.sh` to download the two datasets.
**Training scripts**
(The same as `run_unsup_example.sh`)
```bash
python train.py \
--model_name_or_path bert-base-uncased \
--train_file data/wiki1m_for_simcse.txt \
--output_dir result/my-unsup-promcse-bert-base-uncased \
--num_train_epochs 1 \
--per_device_train_batch_size 256 \
--learning_rate 3e-2 \
--max_seq_length 32 \
--evaluation_strategy steps \
--metric_for_best_model stsb_spearman \
--load_best_model_at_end \
--eval_steps 125 \
--pooler_type cls \
--mlp_only_train \
--pre_seq_len 16 \
--overwrite_output_dir \
--temp 0.05 \
--do_train \
--do_eval \
--fp16
```
We provide example training scripts for both unsupervised and supervised PromCSE. In `run_unsup_example.sh`, we provide a single-GPU (or CPU) example for the unsupervised version, and in `run_sup_example.sh` we give a **multiple-GPU** example for the supervised version. Both scripts call `train.py` for training. We explain the arguments in the following:
* `--train_file`: Training file path. We support "txt" files (one line for one sentence) and "csv" files (2-column: pair data with no hard negative; 3-column: pair data with one corresponding hard negative instance). You can use our provided Wikipedia or NLI data, or you can use your own data with the same format.
* `--model_name_or_path`: Pre-trained checkpoints to start with. For now we support BERT-based models (`bert-base-uncased`, `bert-large-uncased`, etc.) and RoBERTa-based models (`roberta-base`, `roberta-large`, etc.).
* `--temp`: Temperature for the contrastive loss.
* `--pooler_type`: Pooling method. It's the same as the `--pooler_type` in the [evaluation part](#evaluation).
* `--mlp_only_train`: We have found that for unsupervised PromCSE, it works better to train the model with MLP layer but test the model without it. You should use this argument when training unsupervised PromCSE models.
* `--hard_negative_weight`: If using hard negatives (i.e., there are 3 columns in the training file), this is the logarithm of the weight. For example, if the weight is 1, then this argument should be set as 0 (default value).
* `--do_mlm`: Whether to use the MLM auxiliary objective. If True:
* `--mlm_weight`: Weight for the MLM objective.
* `--mlm_probability`: Masking rate for the MLM objective.
* `--pre_seq_len`: The length of deep continuous prompt.
* `--prefix_projection`: Whether to apply a two-layer MLP head over the prompt embeddings.
* `--prefix_hidden_size`: The hidden size of the MLP projection head if `--prefix_projection` is used.
* `--do_eh_loss`: Whether to use Energy-based Hinge loss in supervised models. If True:
* `--eh_loss_margin`: Margin of Energy-based Hinge loss.
* `--eh_loss_weight`: Weight of Energy-based Hinge loss.
All the other arguments are standard Huggingface's `transformers` training arguments. Some of the often-used arguments are: `--output_dir`, `--learning_rate`, `--per_device_train_batch_size`. In our example scripts, we also set to evaluate the model on the STS-B development set (need to download the dataset following the [evaluation](#evaluation) section) and save the best checkpoint.
All our experiments are conducted on Nvidia 3090 GPUs.
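The roles of `--temp`, `--hard_negative_weight`, and the Energy-based Hinge options can be sketched in pure Python. The similarity values are toy numbers, and the hinge form is a generic margin hinge, not necessarily the paper's exact formulation (the actual losses are computed over batched tensors):

```python
import math

def contrastive_loss(pos_sim, neg_sims, temp=0.05, hard_neg_logit_offset=0.0):
    """-log softmax of the positive over similarity logits scaled by 1/temp.

    Adding `hard_neg_logit_offset` (the `--hard_negative_weight` argument) to
    the hard negative's logit multiplies its softmax weight by e^offset --
    which is why the argument is described as the *logarithm* of the weight
    (0 => weight 1).
    """
    logits = [pos_sim / temp] + [s / temp for s in neg_sims]
    logits[-1] += hard_neg_logit_offset  # last column = the hard negative
    log_z = math.log(sum(math.exp(x) for x in logits))
    return log_z - logits[0]

def eh_hinge_loss(pos_sim, hard_neg_sim, margin=0.2):
    """Hinge penalty when a hard negative comes within `margin` of the positive."""
    return max(0.0, margin - pos_sim + hard_neg_sim)

base = contrastive_loss(0.9, [0.2, 0.7])
up = contrastive_loss(0.9, [0.2, 0.7], hard_neg_logit_offset=1.0)
assert up > base  # up-weighting the hard negative raises the loss
assert eh_hinge_loss(0.9, 0.5) == 0.0  # separated by more than the margin
```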
**Hyperparameters**
| **Unsupervised** | BERT-base | BERT-large | RoBERTa-base | RoBERTa-large |
|:--------------|:-----------:|:--------------:|:---------:|:---------:|
| Batch size | 256 | 256 | 64 | 64 |
| Learning rate | 3e-2 | 3e-2 | 3e-2 | 1e-2 |
| Prompt length | 16 | 10 | 14 | 10 |
| do_mlm | False | False | True | True |
| Epoch |1|1|1|1|
| Valid steps | 125 | 125 | 125 | 125 |
| **Supervised** | BERT-base | BERT-large | RoBERTa-base | RoBERTa-large |
|:--------------|:-----------:|:--------------:|:---------:|:---------:|
| Batch size | 256 | 256 | 512 | 512 |
| Learning rate | 1e-2 | 5e-3 | 1e-2 | 5e-3 |
| Prompt length | 12 | 12 | 10 | 10 |
| do_mlm | False | False | False | False |
| Epoch |10|10|10|10|
| Valid steps | 125 | 125 | 125 | 125 |
## Citation
Please cite our paper by:
```bibtex
@inproceedings{jiang-etal-2022-improved,
title = "Improved Universal Sentence Embeddings with Prompt-based Contrastive Learning and Energy-based Learning",
author = "Jiang, Yuxin and
Zhang, Linhan and
Wang, Wei",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-emnlp.220",
pages = "3021--3035",
}
```
| 14,624 | [
[ ... ] |
timm/resnest14d.gluon_in1k | 2023-04-23T23:35:06.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2004.08955",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/resnest14d.gluon_in1k | 0 | 448 | timm | 2023-04-23T23:34:53 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for resnest14d.gluon_in1k
A ResNeSt (ResNet based architecture with Split Attention) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 10.6
- GMACs: 2.8
- Activations (M): 7.3
- Image size: 224 x 224
- **Papers:**
- ResNeSt: Split-Attention Networks: https://arxiv.org/abs/2004.08955
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/zhanghang1989/ResNeSt
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('resnest14d.gluon_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnest14d.gluon_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 256, 56, 56])
# torch.Size([1, 512, 28, 28])
# torch.Size([1, 1024, 14, 14])
# torch.Size([1, 2048, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnest14d.gluon_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{zhang2020resnest,
title={ResNeSt: Split-Attention Networks},
author={Zhang, Hang and Wu, Chongruo and Zhang, Zhongyue and Zhu, Yi and Zhang, Zhi and Lin, Haibin and Sun, Yue and He, Tong and Muller, Jonas and Manmatha, R. and Li, Mu and Smola, Alexander},
journal={arXiv preprint arXiv:2004.08955},
year={2020}
}
```
| 3,756 | [
[ ... ] |
timm/efficientvit_b3.r288_in1k | 2023-08-18T22:48:46.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2205.14756",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/efficientvit_b3.r288_in1k | 0 | 448 | timm | 2023-08-18T22:48:01 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for efficientvit_b3.r288_in1k
An EfficientViT (MIT) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 48.6
- GMACs: 6.6
- Activations (M): 44.2
- Image size: 288 x 288
- **Papers:**
- EfficientViT: Lightweight Multi-Scale Attention for On-Device Semantic Segmentation: https://arxiv.org/abs/2205.14756
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/mit-han-lab/efficientvit
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('efficientvit_b3.r288_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'efficientvit_b3.r288_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 72, 72])
# torch.Size([1, 128, 36, 36])
# torch.Size([1, 256, 18, 18])
# torch.Size([1, 512, 9, 9])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'efficientvit_b3.r288_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 512, 9, 9) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{cai2022efficientvit,
title={Efficientvit: Enhanced linear attention for high-resolution low-computation visual recognition},
author={Cai, Han and Gan, Chuang and Han, Song},
journal={arXiv preprint arXiv:2205.14756},
year={2022}
}
```
| 3,666 | [
[ ... ] |
Edresson/wav2vec2-large-xlsr-coraa-portuguese | 2022-03-31T13:28:43.000Z | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"portuguese-speech-corpus",
"hf-asr-leaderboard",
"PyTorch",
"dataset:CORAA",
"arxiv:2110.15731",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | Edresson | null | null | Edresson/wav2vec2-large-xlsr-coraa-portuguese | 12 | 447 | transformers | 2022-03-02T23:29:04 | ---
language: pt
datasets:
- CORAA
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- hf-asr-leaderboard
- speech
- PyTorch
license: apache-2.0
model-index:
- name: Edresson Casanova XLSR Wav2Vec2 Large 53 Portuguese
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: CORAA
type: CORAA
args: pt
metrics:
- name: Test CORAA WER
type: wer
value: 25.26
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: pt
metrics:
- name: Test WER on Common Voice 7
type: wer
value: 20.08
---
# Wav2vec 2.0 trained with CORAA Portuguese Dataset
This is a demonstration of a Wav2vec2 model fine-tuned for Portuguese on the [CORAA dataset](https://github.com/nilc-nlp/CORAA)
# Use this model
```python
from transformers import AutoTokenizer, Wav2Vec2ForCTC
tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-xlsr-coraa-portuguese")
model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-xlsr-coraa-portuguese")
```
# Results
For the results check the [CORAA article](https://arxiv.org/abs/2110.15731)
# Example test with Common Voice Dataset
```python
import re

import torchaudio
from datasets import load_dataset

# Regex of characters stripped from reference transcriptions (assumed value,
# not shown in the original card; adjust to your normalization).
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"]'

dataset = load_dataset("common_voice", "pt", split="test", data_dir="./cv-corpus-6.1-2020-12-11")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)

def map_to_array(batch):
    speech, _ = torchaudio.load(batch["path"])
    batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
    batch["sampling_rate"] = resampler.new_freq
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
    return batch
```
```python
# `map_to_pred` (batch inference with the model) and `wer` (e.g. the metric
# from `datasets.load_metric("wer")`) are assumed to be defined beforehand;
# they are not shown in this card.
ds = dataset.map(map_to_array)
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
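For reference, word error rate (WER) is the word-level edit distance between hypothesis and reference, divided by the number of reference words. A minimal pure-Python sketch (not the exact implementation of the `wer` metric used above):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / #reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(round(wer("o gato preto", "o gato branco"), 4))  # 0.3333 (one substitution)
```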
| 2,144 | [
[ ... ] |
UWB-AIR/Czert-B-base-cased | 2022-03-16T10:39:50.000Z | [
"transformers",
"pytorch",
"tf",
"bert",
"pretraining",
"cs",
"fill-mask",
"arxiv:2103.13031",
"endpoints_compatible",
"region:us"
] | fill-mask | UWB-AIR | null | null | UWB-AIR/Czert-B-base-cased | 0 | 447 | transformers | 2022-03-02T23:29:05 | ---
tags:
- cs
- fill-mask
---
# CZERT
This repository contains the trained Czert-B model for the paper [Czert – Czech BERT-like Model for Language Representation](https://arxiv.org/abs/2103.13031)
For more information, see the paper
## Available Models
You can download **MLM & NSP only** pretrained models
~~[CZERT-A-v1](https://air.kiv.zcu.cz/public/CZERT-A-czert-albert-base-uncased.zip)
[CZERT-B-v1](https://air.kiv.zcu.cz/public/CZERT-B-czert-bert-base-cased.zip)~~
After some additional experiments, we found that the tokenizer configs were exported incorrectly. In Czert-B-v1, the tokenizer parameter "do_lower_case" was wrongly set to true. In Czert-A-v1, the parameter "strip_accents" was incorrectly set to true.
Both mistakes are repaired in v2.
[CZERT-A-v2](https://air.kiv.zcu.cz/public/CZERT-A-v2-czert-albert-base-uncased.zip)
[CZERT-B-v2](https://air.kiv.zcu.cz/public/CZERT-B-v2-czert-bert-base-cased.zip)
or choose from one of **Finetuned Models**
| | Models |
| - | - |
| Sentiment Classification<br> (Facebook or CSFD) | [CZERT-A-sentiment-FB](https://air.kiv.zcu.cz/public/CZERT-A_fb.zip) <br> [CZERT-B-sentiment-FB](https://air.kiv.zcu.cz/public/CZERT-B_fb.zip) <br> [CZERT-A-sentiment-CSFD](https://air.kiv.zcu.cz/public/CZERT-A_csfd.zip) <br> [CZERT-B-sentiment-CSFD](https://air.kiv.zcu.cz/public/CZERT-B_csfd.zip) |
| Semantic Text Similarity <br> (Czech News Agency) | [CZERT-A-sts-CNA](https://air.kiv.zcu.cz/public/CZERT-A-sts-CNA.zip) <br> [CZERT-B-sts-CNA](https://air.kiv.zcu.cz/public/CZERT-B-sts-CNA.zip) |
| Named Entity Recognition | [CZERT-A-ner-CNEC](https://air.kiv.zcu.cz/public/CZERT-A-ner-CNEC-cased.zip) <br> [CZERT-B-ner-CNEC](https://air.kiv.zcu.cz/public/CZERT-B-ner-CNEC-cased.zip) <br>[PAV-ner-CNEC](https://air.kiv.zcu.cz/public/PAV-ner-CNEC-cased.zip) <br> [CZERT-A-ner-BSNLP](https://air.kiv.zcu.cz/public/CZERT-A-ner-BSNLP-cased.zip)<br>[CZERT-B-ner-BSNLP](https://air.kiv.zcu.cz/public/CZERT-B-ner-BSNLP-cased.zip) <br>[PAV-ner-BSNLP](https://air.kiv.zcu.cz/public/PAV-ner-BSNLP-cased.zip) |
| Morphological Tagging<br> | [CZERT-A-morphtag-126k](https://air.kiv.zcu.cz/public/CZERT-A-morphtag-126k-cased.zip)<br>[CZERT-B-morphtag-126k](https://air.kiv.zcu.cz/public/CZERT-B-morphtag-126k-cased.zip) |
| Semantic Role Labelling |[CZERT-A-srl](https://air.kiv.zcu.cz/public/CZERT-A-srl-cased.zip)<br> [CZERT-B-srl](https://air.kiv.zcu.cz/public/CZERT-B-srl-cased.zip) |
## How to Use CZERT?
### Sentence Level Tasks
We evaluate our model on two sentence level tasks:
* Sentiment Classification,
* Semantic Text Similarity.
<!-- tokenizer = BertTokenizerFast.from_pretrained(CZERT_MODEL_PATH, strip_accents=False)
model = TFAlbertForSequenceClassification.from_pretrained(CZERT_MODEL_PATH, num_labels=1)
or
self.tokenizer = BertTokenizerFast.from_pretrained(CZERT_MODEL_PATH, strip_accents=False)
self.model_encoder = AutoModelForSequenceClassification.from_pretrained(CZERT_MODEL_PATH, from_tf=True)
-->
### Document Level Tasks
We evaluate our model on one document level task
* Multi-label Document Classification.
### Token Level Tasks
We evaluate our model on three token level tasks:
* Named Entity Recognition,
* Morphological Tagging,
* Semantic Role Labelling.
## Downstream Tasks Fine-tuning Results
### Sentiment Classification
| | mBERT | SlavicBERT | ALBERT-r | Czert-A | Czert-B |
|:----:|:------------------------:|:------------------------:|:------------------------:|:-----------------------:|:--------------------------------:|
| FB | 71.72 ± 0.91 | 73.87 ± 0.50 | 59.50 ± 0.47 | 72.47 ± 0.72 | **76.55** ± **0.14** |
| CSFD | 82.80 ± 0.14 | 82.51 ± 0.14 | 75.40 ± 0.18 | 79.58 ± 0.46 | **84.79** ± **0.26** |
Average F1 results for the Sentiment Classification task. For more information, see [the paper](https://arxiv.org/abs/2103.13031).
### Semantic Text Similarity
| | **mBERT** | **Pavlov** | **Albert-random** | **Czert-A** | **Czert-B** |
|:-------------|:--------------:|:--------------:|:-----------------:|:--------------:|:----------------------:|
| STS-CNA | 83.335 ± 0.063 | 83.593 ± 0.050 | 43.184 ± 0.125 | 82.942 ± 0.106 | **84.345** ± **0.028** |
| STS-SVOB-img | 79.367 ± 0.486 | 79.900 ± 0.810 | 15.739 ± 2.992 | 79.444 ± 0.338 | **83.744** ± **0.395** |
| STS-SVOB-hl | 78.833 ± 0.296 | 76.996 ± 0.305 | 33.949 ± 1.807 | 75.089 ± 0.806 | **79.827 ± 0.469** |
Comparison of Pearson correlation achieved using pre-trained CZERT-A, CZERT-B, mBERT, Pavlov and randomly initialised Albert on semantic text similarity. For more information see [the paper](https://arxiv.org/abs/2103.13031).
### Multi-label Document Classification
| | mBERT | SlavicBERT | ALBERT-r | Czert-A | Czert-B |
|:-----:|:------------:|:------------:|:------------:|:------------:|:-------------------:|
| AUROC | 97.62 ± 0.08 | 97.80 ± 0.06 | 94.35 ± 0.13 | 97.49 ± 0.07 | **98.00** ± **0.04** |
| F1 | 83.04 ± 0.16 | 84.08 ± 0.14 | 72.44 ± 0.22 | 82.27 ± 0.17 | **85.06** ± **0.11** |
Comparison of F1 and AUROC score achieved using pre-trained CZERT-A, CZERT-B, mBERT, Pavlov and randomly initialised Albert on multi-label document classification. For more information see [the paper](https://arxiv.org/abs/2103.13031).
### Morphological Tagging
| | mBERT | Pavlov | Albert-random | Czert-A | Czert-B |
|:-----------------------|:---------------|:---------------|:---------------|:---------------|:---------------|
| Universal Dependencies | 99.176 ± 0.006 | 99.211 ± 0.008 | 96.590 ± 0.096 | 98.713 ± 0.008 | **99.300 ± 0.009** |
Comparison of F1 score achieved using pre-trained CZERT-A, CZERT-B, mBERT, Pavlov and randomly initialised Albert on morphological tagging task. For more information see [the paper](https://arxiv.org/abs/2103.13031).
### Semantic Role Labelling
<div id="tab:SRL">
| | mBERT | Pavlov | Albert-random | Czert-A | Czert-B | dep-based | gold-dep |
|:------:|:----------:|:----------:|:-------------:|:----------:|:----------:|:---------:|:--------:|
| span | 78.547 ± 0.110 | 79.333 ± 0.080 | 51.365 ± 0.423 | 72.254 ± 0.172 | **81.861 ± 0.102** | - | - |
| syntax | 90.226 ± 0.224 | 90.492 ± 0.040 | 80.747 ± 0.131 | 80.319 ± 0.054 | **91.462 ± 0.062** | 85.19 | 89.52 |
SRL results – the dep columns are evaluated with labelled F1 from the CoNLL 2009 evaluation script; the other columns are evaluated with the same span F1 score used for the NER evaluation. For more information see [the paper](https://arxiv.org/abs/2103.13031).
</div>
### Named Entity Recognition
| | mBERT | Pavlov | Albert-random | Czert-A | Czert-B |
|:-----------|:---------------|:---------------|:---------------|:---------------|:---------------|
| CNEC | **86.225 ± 0.208** | **86.565 ± 0.198** | 34.635 ± 0.343 | 72.945 ± 0.227 | 86.274 ± 0.116 |
| BSNLP 2019 | 84.006 ± 1.248 | **86.699 ± 0.370** | 19.773 ± 0.938 | 48.859 ± 0.605 | **86.729 ± 0.344** |
Comparison of F1 score achieved using pre-trained CZERT-A, CZERT-B, mBERT, Pavlov and randomly initialised Albert on the named entity recognition task. For more information see [the paper](https://arxiv.org/abs/2103.13031).
## Licence
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. http://creativecommons.org/licenses/by-nc-sa/4.0/
## How should I cite CZERT?
For now, please cite [the Arxiv paper](https://arxiv.org/abs/2103.13031):
```
@article{sido2021czert,
title={Czert -- Czech BERT-like Model for Language Representation},
author={Jakub Sido and Ondřej Pražák and Pavel Přibáň and Jan Pašek and Michal Seják and Miloslav Konopík},
year={2021},
eprint={2103.13031},
archivePrefix={arXiv},
primaryClass={cs.CL},
journal={arXiv preprint arXiv:2103.13031},
}
```
| 9,428 | [
[ ... ] |
Bingsu/clip-vit-large-patch14-ko | 2022-11-18T02:13:00.000Z | [
"transformers",
"pytorch",
"tf",
"safetensors",
"clip",
"zero-shot-image-classification",
"ko",
"arxiv:2004.09813",
"license:mit",
"endpoints_compatible",
"region:us"
] | zero-shot-image-classification | Bingsu | null | null | Bingsu/clip-vit-large-patch14-ko | 4 | 447 | transformers | 2022-10-11T01:55:47 | ---
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
candidate_labels: 기타 치는 고양이, 피아노 치는 강아지
example_title: Guitar, cat and dog
language: ko
license: mit
---
# clip-vit-large-patch14-ko
A Korean CLIP model trained with the method of [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/abs/2004.09813)
Training code: <https://github.com/Bing-su/KoCLIP_training_code>
Training data: all Korean-English parallel data available on AIHUB
## How to Use
#### 1.
```python
import requests
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor
repo = "Bingsu/clip-vit-large-patch14-ko"
model = AutoModel.from_pretrained(repo)
processor = AutoProcessor.from_pretrained(repo)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["고양이 두 마리", "개 두 마리"], images=image, return_tensors="pt", padding=True)
with torch.inference_mode():
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image
probs = logits_per_image.softmax(dim=1)
```
```python
>>> probs
tensor([[0.9974, 0.0026]])
```
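`logits_per_image` holds one similarity logit per candidate text, and `softmax(dim=1)` turns them into the probabilities shown above. As a pure-Python illustration with hypothetical logits (not the model's actual values):

```python
import math

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical per-text logits for one image.
probs = softmax([25.0, 19.0])
print([round(p, 4) for p in probs])  # [0.9975, 0.0025]
```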
#### 2.
```python
from transformers import pipeline
repo = "Bingsu/clip-vit-large-patch14-ko"
pipe = pipeline("zero-shot-image-classification", model=repo)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
result = pipe(images=url, candidate_labels=["고양이 한 마리", "고양이 두 마리", "분홍색 소파에 드러누운 고양이 친구들"], hypothesis_template="{}")
```
```python
>>> result
[{'score': 0.9907576441764832, 'label': '분홍색 소파에 드러누운 고양이 친구들'},
{'score': 0.009206341579556465, 'label': '고양이 두 마리'},
{'score': 3.606083555496298e-05, 'label': '고양이 한 마리'}]
```
| 1,851 | [
[ ... ] |
microsoft/unispeech-sat-large | 2021-12-14T19:17:12.000Z | [
"transformers",
"pytorch",
"unispeech-sat",
"pretraining",
"speech",
"en",
"arxiv:1912.07875",
"arxiv:2106.06909",
"arxiv:2101.00390",
"arxiv:2110.05752",
"endpoints_compatible",
"region:us"
] | null | microsoft | null | null | microsoft/unispeech-sat-large | 1 | 446 | transformers | 2022-03-02T23:29:05 | ---
language:
- en
datasets:
tags:
- speech
---
# UniSpeech-SAT-Large
[Microsoft's UniSpeech](https://www.microsoft.com/en-us/research/publication/unispeech-unified-speech-representation-learning-with-labeled-and-unlabeled-data/)
The large model pretrained on 16kHz sampled speech audio with utterance and speaker contrastive loss. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
The model was pre-trained on:
- 60,000 hours of [Libri-Light](https://arxiv.org/abs/1912.07875)
- 10,000 hours of [GigaSpeech](https://arxiv.org/abs/2106.06909)
- 24,000 hours of [VoxPopuli](https://arxiv.org/abs/2101.00390)
[Paper: UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER
AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752)
Authors: Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu
**Abstract**
*Self-supervised learning (SSL) is a long-standing goal for speech processing, since it utilizes large-scale unlabeled data and avoids extensive human labeling. Recent years witness great successes in applying self-supervised learning in speech recognition, while limited exploration was attempted in applying SSL for modeling speaker characteristics. In this paper, we aim to improve the existing SSL framework for speaker representation learning. Two methods are introduced for enhancing the unsupervised speaker information extraction. First, we apply the multi-task learning to the current SSL framework, where we integrate the utterance-wise contrastive loss with the SSL objective function. Second, for better speaker discrimination, we propose an utterance mixing strategy for data augmentation, where additional overlapped utterances are created unsupervisely and incorporate during training. We integrate the proposed methods into the HuBERT framework. Experiment results on SUPERB benchmark show that the proposed system achieves state-of-the-art performance in universal representation learning, especially for speaker identification oriented tasks. An ablation study is performed verifying the efficacy of each proposed method. Finally, we scale up training dataset to 94 thousand hours public audio data and achieve further performance improvement in all SUPERB tasks..*
The original model can be found under https://github.com/microsoft/UniSpeech/tree/main/UniSpeech-SAT.
# Usage
This is an English pre-trained speech model that has to be fine-tuned on a downstream task like speech recognition or audio classification before it can be used for inference. The model was pre-trained in English and should therefore perform well only in English. It has been shown to work well on tasks such as speaker verification, speaker identification, and speaker diarization.
**Note**: The model was pre-trained on phonemes rather than characters. This means that one should make sure that the input text is converted to a sequence
of phonemes before fine-tuning.
## Speech Recognition
To fine-tune the model for speech recognition, see [the official speech recognition example](https://github.com/huggingface/transformers/tree/master/examples/pytorch/speech-recognition).
## Speech Classification
To fine-tune the model for speech classification, see [the official audio classification example](https://github.com/huggingface/transformers/tree/master/examples/pytorch/audio-classification).
## Speaker Verification
TODO
## Speaker Diarization
TODO
# Contribution
The model was contributed by [cywang](https://huggingface.co/cywang) and [patrickvonplaten](https://huggingface.co/patrickvonplaten).
# License
The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE)
 | 4,224 | [
[ ... ] |
facebook/xm_transformer_unity_hk-en | 2022-10-19T14:28:29.000Z | [
"fairseq",
"audio",
"audio-to-audio",
"speech-to-speech-translation",
"dataset:MuST-C",
"dataset:TAT",
"dataset:Hokkien dramas",
"license:cc-by-nc-4.0",
"has_space",
"region:us"
] | audio-to-audio | facebook | null | null | facebook/xm_transformer_unity_hk-en | 5 | 446 | fairseq | 2022-10-08T00:55:30 | ---
license: cc-by-nc-4.0
library_name: fairseq
task: audio-to-audio
tags:
- fairseq
- audio
- audio-to-audio
- speech-to-speech-translation
datasets:
- MuST-C
- TAT
- Hokkien dramas
---
## xm_transformer_unity_hk-en
Speech-to-speech translation model with two-pass decoder (UnitY) from fairseq:
- Hokkien-English
- Trained with supervised data in the TED, drama, and [TAT](https://sites.google.com/speech.ntut.edu.tw/fsw/home/tat-corpus) domains, and weakly supervised data in the drama domain. See [here](https://research.facebook.com/publications/hokkien-direct-speech-to-speech-translation)
for training details.
- Speech synthesis with [facebook/unit_hifigan_mhubert_vp_en_es_fr_it3_400k_layer11_km1000_lj_dur](https://huggingface.co/facebook/unit_hifigan_mhubert_vp_en_es_fr_it3_400k_layer11_km1000_lj_dur)
- [Project Page](https://github.com/facebookresearch/fairseq/tree/ust/examples/hokkien)
## Usage
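The example below expects 16 kHz mono input (see the comment in the code). In practice you would resample with torchaudio (e.g. `torchaudio.functional.resample`); purely as an illustration of what resampling does, here is a naive linear-interpolation resampler (real resamplers apply band-limited filtering to avoid aliasing):

```python
def resample_linear(samples, src_rate, dst_rate):
    """Naive linear-interpolation resampler over a list of float samples.
    Illustration only: no anti-aliasing filter is applied."""
    if src_rate == dst_rate:
        return list(samples)
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate   # fractional index into the source
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# Upsampling a 2-sample ramp from 8 kHz to 16 kHz doubles its length.
print(resample_linear([0.0, 1.0], 8000, 16000))  # [0.0, 0.5, 1.0, 1.0]
```

With a 16 kHz mono waveform in hand, the full fairseq pipeline is: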
```python
import json
import os
from pathlib import Path
import IPython.display as ipd
from fairseq import hub_utils
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.speech_to_text.hub_interface import S2THubInterface
from fairseq.models.text_to_speech import CodeHiFiGANVocoder
from fairseq.models.text_to_speech.hub_interface import VocoderHubInterface
from huggingface_hub import snapshot_download
import torchaudio
cache_dir = os.getenv("HUGGINGFACE_HUB_CACHE")
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
"facebook/xm_transformer_unity_hk-en",
arg_overrides={"config_yaml": "config.yaml", "task": "speech_to_text"},
cache_dir=cache_dir,
)
model = models[0].cpu()
cfg["task"].cpu = True
generator = task.build_generator([model], cfg)
# requires 16000Hz mono channel audio
audio, _ = torchaudio.load("/path/to/an/audio/file")
sample = S2THubInterface.get_model_input(task, audio)
unit = S2THubInterface.get_prediction(task, model, generator, sample)
# speech synthesis
library_name = "fairseq"
cache_dir = (
cache_dir or (Path.home() / ".cache" / library_name).as_posix()
)
cache_dir = snapshot_download(
f"facebook/unit_hifigan_mhubert_vp_en_es_fr_it3_400k_layer11_km1000_lj_dur", cache_dir=cache_dir, library_name=library_name
)
x = hub_utils.from_pretrained(
cache_dir,
"model.pt",
".",
archive_map=CodeHiFiGANVocoder.hub_models(),
config_yaml="config.json",
fp16=False,
is_vocoder=True,
)
with open(f"{x['args']['data']}/config.json") as f:
vocoder_cfg = json.load(f)
assert (
len(x["args"]["model_path"]) == 1
), "Too many vocoder models in the input"
vocoder = CodeHiFiGANVocoder(x["args"]["model_path"][0], vocoder_cfg)
tts_model = VocoderHubInterface(vocoder_cfg, vocoder)
tts_sample = tts_model.get_model_input(unit)
wav, sr = tts_model.get_prediction(tts_sample)
ipd.Audio(wav, rate=sr)
``` | 2,854 | [
[
-0.03265380859375,
-0.050262451171875,
0.0095367431640625,
0.022979736328125,
-0.0116119384765625,
-0.00550079345703125,
-0.0195770263671875,
-0.0150604248046875,
-0.00836944580078125,
0.03399658203125,
-0.052947998046875,
-0.04156494140625,
-0.04144287109375,
... |
timm/eva02_tiny_patch14_224.mim_in22k | 2023-03-31T05:47:21.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"arxiv:2303.11331",
"arxiv:2303.15389",
"license:mit",
"region:us"
] | image-classification | timm | null | null | timm/eva02_tiny_patch14_224.mim_in22k | 1 | 446 | timm | 2023-03-31T04:56:10 | ---
tags:
- image-classification
- timm
library_tag: timm
license: mit
---
# Model card for eva02_tiny_patch14_224.mim_in22k
An EVA02 feature / representation model. Pretrained on ImageNet-22k with masked image modeling (using EVA-CLIP as a MIM teacher) by paper authors.
EVA-02 models are vision transformers with mean pooling, SwiGLU, Rotary Position Embeddings (ROPE), and extra LN in MLP (for Base & Large).
NOTE: `timm` checkpoints are float32 for consistency with other models. Original checkpoints are float16 or bfloat16 in some cases, see originals if that's preferred.
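As a rough scalar-level sketch of one of those ingredients, rotary position embeddings (ROPE) rotate each consecutive (even, odd) feature pair of a query/key vector by a position-dependent angle (the frequencies and list shapes here are illustrative, not `timm`'s batched implementation):

```python
import math

def rope(vec, position, base=10000.0):
    """Rotate consecutive feature pairs of `vec` by position-dependent angles."""
    d = len(vec)
    out = []
    for i in range(0, d, 2):
        theta = position / (base ** (i / d))  # lower-index pairs rotate faster
        c, s = math.cos(theta), math.sin(theta)
        x, y = vec[i], vec[i + 1]
        out.extend([x * c - y * s, x * s + y * c])
    return out

# Position 0 leaves the vector unchanged; any rotation preserves the norm.
print(rope([1.0, 0.0, 1.0, 0.0], position=0))  # [1.0, 0.0, 1.0, 0.0]
```

Because the rotation angle depends only on position, dot products between rotated queries and keys depend on relative position, which is the property ROPE is after.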
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 5.5
- GMACs: 1.7
- Activations (M): 9.1
- Image size: 224 x 224
- **Papers:**
- EVA-02: A Visual Representation for Neon Genesis: https://arxiv.org/abs/2303.11331
- EVA-CLIP: Improved Training Techniques for CLIP at Scale: https://arxiv.org/abs/2303.15389
- **Original:**
- https://github.com/baaivision/EVA
- https://huggingface.co/Yuxin-CV/EVA-02
- **Pretrain Dataset:** ImageNet-22k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('eva02_tiny_patch14_224.mim_in22k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'eva02_tiny_patch14_224.mim_in22k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 257, 192) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |top1 |top5 |param_count|img_size|
|-----------------------------------------------|------|------|-----------|--------|
|eva02_large_patch14_448.mim_m38m_ft_in22k_in1k |90.054|99.042|305.08 |448 |
|eva02_large_patch14_448.mim_in22k_ft_in22k_in1k|89.946|99.01 |305.08 |448 |
|eva_giant_patch14_560.m30m_ft_in22k_in1k |89.792|98.992|1014.45 |560 |
|eva02_large_patch14_448.mim_in22k_ft_in1k |89.626|98.954|305.08 |448 |
|eva02_large_patch14_448.mim_m38m_ft_in1k |89.57 |98.918|305.08 |448 |
|eva_giant_patch14_336.m30m_ft_in22k_in1k |89.56 |98.956|1013.01 |336 |
|eva_giant_patch14_336.clip_ft_in1k |89.466|98.82 |1013.01 |336 |
|eva_large_patch14_336.in22k_ft_in22k_in1k |89.214|98.854|304.53 |336 |
|eva_giant_patch14_224.clip_ft_in1k |88.882|98.678|1012.56 |224 |
|eva02_base_patch14_448.mim_in22k_ft_in22k_in1k |88.692|98.722|87.12 |448 |
|eva_large_patch14_336.in22k_ft_in1k |88.652|98.722|304.53 |336 |
|eva_large_patch14_196.in22k_ft_in22k_in1k |88.592|98.656|304.14 |196 |
|eva02_base_patch14_448.mim_in22k_ft_in1k |88.23 |98.564|87.12 |448 |
|eva_large_patch14_196.in22k_ft_in1k |87.934|98.504|304.14 |196 |
|eva02_small_patch14_336.mim_in22k_ft_in1k |85.74 |97.614|22.13 |336 |
|eva02_tiny_patch14_336.mim_in22k_ft_in1k |80.658|95.524|5.76 |336 |
## Citation
```bibtex
@article{EVA02,
title={EVA-02: A Visual Representation for Neon Genesis},
author={Fang, Yuxin and Sun, Quan and Wang, Xinggang and Huang, Tiejun and Wang, Xinlong and Cao, Yue},
journal={arXiv preprint arXiv:2303.11331},
year={2023}
}
```
```bibtex
@article{EVA-CLIP,
title={EVA-02: A Visual Representation for Neon Genesis},
author={Sun, Quan and Fang, Yuxin and Wu, Ledell and Wang, Xinlong and Cao, Yue},
journal={arXiv preprint arXiv:2303.15389},
year={2023}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 5,277 | [
[
-0.045440673828125,
-0.0300445556640625,
0.0151214599609375,
0.00673675537109375,
-0.0163116455078125,
0.00048041343688964844,
-0.01024627685546875,
-0.032989501953125,
0.041290283203125,
0.0260009765625,
-0.03369140625,
-0.049835205078125,
-0.0426025390625,
... |
emilianJR/XXMix_9realistic | 2023-05-25T13:00:10.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | emilianJR | null | null | emilianJR/XXMix_9realistic | 13 | 446 | diffusers | 2023-05-25T07:44:46 | ---
language:
- en
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Diffuser model for this SD checkpoint:
https://civitai.com/models/47274/xxmix9realistic
**emilianJR/XXMix_9realistic** is a Hugging Face diffusers checkpoint that you can use with **diffusers.StableDiffusionPipeline()**.
Examples | Examples | Examples
---- | ---- | ----
 | ,%20(masterpiece),(ultra-detailed_1.2),(photorealistic_1.4),(highres),reflection,low%20ang.jpeg) | 
,%20(masterpiece),(ultra-detailed_1.2),(photorealistic_1.4),(highres),reflection,low%20ang.jpeg) | ,%20high%20detailed%20skin,%20outdoor,%20Standing%20in%20the%20middle%20of%20the%20water,%20reflection,%20backli.jpeg) | ,%20(old%20man_1.2),%20,%20solo,%20white%20background,%20balck%20eyes,%20(white%20Beard_1.2),%20Fur%20clothes,%20muscular,.jpeg)
-------
## 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion documentation](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "emilianJR/XXMix_9realistic"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "YOUR PROMPT"
image = pipe(prompt).images[0]
image.save("image.png")
```
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | 2,697 | [
[
-0.046661376953125,
-0.04229736328125,
0.03155517578125,
0.038543701171875,
-0.0261077880859375,
-0.005199432373046875,
0.019775390625,
-0.00817108154296875,
0.033172607421875,
0.03936767578125,
-0.04901123046875,
-0.04547119140625,
-0.043792724609375,
-0.00... |
newsmediabias/UnBIAS-classification-bert | 2023-10-25T19:02:50.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"dataset:newsmediabias/news-bias-full-data",
"license:openrail",
"endpoints_compatible",
"region:us"
] | text-classification | newsmediabias | null | null | newsmediabias/UnBIAS-classification-bert | 0 | 446 | transformers | 2023-10-14T00:01:19 | ---
license: openrail
language:
- en
datasets:
- newsmediabias/news-bias-full-data
---
## Bias Classification Using BERT
# Overview:
This is a BERT-based model designed to detect bias in text data, enabling users to identify whether a given text is biased or non-biased.
## Performance:
The model's performance on unseen data is:
| Class      | Precision | Recall |
|------------|-----------|--------|
| Non-biased | 0.93      | 0.96   |
| Biased     | 0.91      | 0.88   |

**Overall accuracy:** 0.93
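For reference, the per-class figures above are the standard confusion-matrix ratios; a minimal sketch with made-up counts:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical counts for the "Biased" class, chosen for illustration.
p, r = precision_recall(tp=88, fp=9, fn=12)
print(round(p, 2), round(r, 2))  # 0.91 0.88
```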
## Usage
To use the model, you can utilize the transformers library from Hugging Face:
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained("newsmediabias/UnBIAS-classification-bert")
model = AutoModelForSequenceClassification.from_pretrained("newsmediabias/UnBIAS-classification-bert")

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer, device=0 if device.type == "cuda" else -1)
classifier("Anyone can excel at coding.")
```
 | 1,106 | [
[
-0.043060302734375,
-0.04852294921875,
0.0110321044921875,
0.0401611328125,
-0.015106201171875,
-0.0093536376953125,
-0.01192474365234375,
-0.02923583984375,
0.0234222412109375,
0.0200347900390625,
-0.060333251953125,
-0.037750244140625,
-0.05615234375,
-0.0... |
kyujinpy/Kosy-platypus2-13B-v4 | 2023-11-02T01:52:54.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"dataset:kyujinpy/KOpen-platypus",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | kyujinpy | null | null | kyujinpy/Kosy-platypus2-13B-v4 | 0 | 446 | transformers | 2023-10-28T17:25:07 | ---
language:
- ko
datasets:
- kyujinpy/KOpen-platypus
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---
# **Kosy🍵llama**

## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Model Description**
A new version of Ko-platypus2, trained with the [NEFTune](https://github.com/neelsjain/NEFTune) method!
(Noisy + KO + llama = Kosy🍵llama)
**Repo Link**
Github **KoNEFTune**: [Kosy🍵llama](https://github.com/Marker-Inc-Korea/KoNEFTune)
If you visit our github, you can easily apply **Random_noisy_embedding_fine-tuning**!!
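The core of NEFTune is simple: during fine-tuning, add uniform noise to the token embeddings, scaled by alpha / sqrt(L·d) for sequence length L and embedding dimension d. A minimal list-based sketch (the alpha value is illustrative; the real implementation hooks this into the embedding layer's forward pass):

```python
import math
import random

def neftune_noise(embeddings, alpha=5.0, rng=random):
    """Add uniform noise in [-s, s] to each embedding entry,
    with s = alpha / sqrt(seq_len * dim), as in the NEFTune paper."""
    seq_len, dim = len(embeddings), len(embeddings[0])
    s = alpha / math.sqrt(seq_len * dim)
    return [[x + rng.uniform(-s, s) for x in row] for row in embeddings]

# Every perturbed entry stays within the bound s = 5 / sqrt(8 * 4).
noisy = neftune_noise([[0.0] * 4 for _ in range(8)], alpha=5.0)
```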
**Base Model**
[hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b)
**Training Dataset**
Version of combined dataset: [kyujinpy/KOpen-platypus](https://huggingface.co/datasets/kyujinpy/KOpen-platypus)
I used an A100 GPU (40GB) on Colab for training.
# **Model comparisons**
[KO-LLM leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)
# **NEFT comparisons**

| Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| [Ko-Platypus2-13B](https://huggingface.co/kyujinpy/KO-Platypus2-13B) | 45.60 | 44.20 | 54.31 | 42.47 | 44.41 | 42.62 |
| *NEFT(🍵kosy)+MLP-v1 | 43.64 | 43.94 | 53.88 | 42.68 | 43.46 | 34.24 |
| *NEFT(🍵kosy)+MLP-v2 | 45.45 | 44.20 | 54.56 | 42.60 | 42.68 | 42.98 |
| [***NEFT(🍵kosy)+MLP-v3**](https://huggingface.co/kyujinpy/Kosy-platypus2-13B-v3) | 46.31 | 43.34 | 54.54 | 43.38 | 44.11 | 46.16 |
| NEFT(🍵kosy)+Attention | 44.92 |42.92 | 54.48 | 42.99 | 43.00 | 41.20 |
| NEFT(🍵kosy) | 45.08 | 43.09 | 53.61 | 41.06 | 43.47 | 43.21 |
> *Different hyperparameters such as learning_rate, batch_size, epochs, etc.
# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "kyujinpy/Kosy-platypus2-13B-v4"
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
--- | 2,181 | [
[
-0.046539306640625,
-0.0550537109375,
0.0229339599609375,
0.027313232421875,
-0.046783447265625,
0.00016260147094726562,
-0.01514434814453125,
-0.0211334228515625,
0.02093505859375,
0.0248870849609375,
-0.0286712646484375,
-0.049591064453125,
-0.050933837890625,... |
anonymous-german-nlp/german-gpt2 | 2021-05-21T13:20:42.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"de",
"license:mit",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | anonymous-german-nlp | null | null | anonymous-german-nlp/german-gpt2 | 1 | 445 | transformers | 2022-03-02T23:29:05 | ---
language: de
widget:
- text: "Heute ist sehr schönes Wetter in"
license: mit
---
# German GPT-2 model
**Note**: This model was de-anonymized and now lives at:
https://huggingface.co/dbmdz/german-gpt2
Please use the new model name instead! | 248 | [
[
-0.0176849365234375,
-0.05865478515625,
0.0261077880859375,
0.0199127197265625,
-0.0460205078125,
0.000728607177734375,
0.032745361328125,
-0.0250091552734375,
0.021697998046875,
0.018890380859375,
-0.05633544921875,
-0.021026611328125,
-0.0484619140625,
-0.... |
dbmdz/electra-base-turkish-cased-discriminator | 2020-12-11T21:37:26.000Z | [
"transformers",
"pytorch",
"tf",
"electra",
"pretraining",
"tr",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | dbmdz | null | null | dbmdz/electra-base-turkish-cased-discriminator | 0 | 445 | transformers | 2022-03-02T23:29:05 | ---
language: tr
license: mit
---
# 🤗 + 📚 dbmdz Turkish ELECTRA model
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources a cased ELECTRA base model for Turkish 🎉
# Turkish ELECTRA model
We release a base ELEC**TR**A model for Turkish, that was trained on the same data as *BERTurk*.
> ELECTRA is a new method for self-supervised language representation learning. It can be used to
> pre-train transformer networks using relatively little compute. ELECTRA models are trained to
> distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to
> the discriminator of a GAN.
More details about ELECTRA can be found in the [ICLR paper](https://openreview.net/forum?id=r1xMH1BtvB)
or in the [official ELECTRA repository](https://github.com/google-research/electra) on GitHub.
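As a toy illustration of that objective, the discriminator's per-token targets simply mark which positions the generator replaced (the example tokens below are made up):

```python
def rtd_labels(original_tokens, corrupted_tokens):
    """Replaced-token-detection targets: 1 where the generator swapped
    a token in, 0 where the original token survived."""
    return [int(o != c) for o, c in zip(original_tokens, corrupted_tokens)]

orig = ["the", "chef", "cooked", "the", "meal"]
fake = ["the", "chef", "ate", "the", "meal"]
print(rtd_labels(orig, fake))  # [0, 0, 1, 0, 0]
```

Because every input position yields a training signal (not just the masked ones), ELECTRA is markedly more sample-efficient than masked language modeling.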
## Stats
The current version of the model is trained on a filtered and sentence
segmented version of the Turkish [OSCAR corpus](https://traces1.inria.fr/oscar/),
a recent Wikipedia dump, various [OPUS corpora](http://opus.nlpl.eu/) and a
special corpus provided by [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/).
The final training corpus has a size of 35GB and 4,404,976,662 tokens.
Thanks to Google's TensorFlow Research Cloud (TFRC) we could train a cased model
on a TPU v3-8 for 1M steps.
## Model weights
[Transformers](https://github.com/huggingface/transformers)
compatible weights for both PyTorch and TensorFlow are available.
| Model | Downloads
| ------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/electra-base-turkish-cased-discriminator` | [`config.json`](https://cdn.huggingface.co/dbmdz/electra-base-turkish-cased-discriminator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-turkish-cased-discriminator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-turkish-cased-discriminator/vocab.txt)
## Usage
With Transformers >= 2.8 our ELECTRA base cased model can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-base-turkish-cased-discriminator")
model = AutoModel.from_pretrained("dbmdz/electra-base-turkish-cased-discriminator")
```
## Results
For results on PoS tagging or NER tasks, please refer to
[this repository](https://github.com/stefan-it/turkish-bert/electra).
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our ELECTRA models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
| 3,398 | [
[
-0.038726806640625,
-0.04400634765625,
0.00876617431640625,
-0.005344390869140625,
-0.018157958984375,
-0.0037631988525390625,
-0.0086517333984375,
-0.019561767578125,
0.023681640625,
0.04351806640625,
-0.0198516845703125,
-0.046905517578125,
-0.033966064453125,... |
mrm8488/codebert-base-finetuned-stackoverflow-ner | 2022-10-17T18:14:52.000Z | [
"transformers",
"pytorch",
"jax",
"roberta",
"token-classification",
"en",
"dataset:https://aclanthology.org/2020.acl-main.443/",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | mrm8488 | null | null | mrm8488/codebert-base-finetuned-stackoverflow-ner | 13 | 445 | transformers | 2022-03-02T23:29:05 | ---
language: en
datasets:
- https://aclanthology.org/2020.acl-main.443/
widget:
- text: "I want to create a table and ListView or ArrayList for Android or javascript in Windows 10"
license: mit
---
# Codebert (base) fine-tuned this [dataset](https://aclanthology.org/2020.acl-main.443/) for NER
## Eval metrics
- eval_accuracy_score = 0.9430622955139325
- eval_precision = 0.6047440699126092
- eval_recall = 0.6100755667506297
- eval_f1 = 0.607398119122257
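The model emits token-level BIO tags; turning those into entity spans is a small decoding step, sketched here (the tag names are illustrative):

```python
def bio_to_spans(tokens, tags):
    """Group BIO-tagged tokens into (entity_type, text) spans."""
    spans, current = [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append(current)
            current = (tag[2:], [tok])
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(tok)
        else:
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(etype, " ".join(words)) for etype, words in spans]

tokens = ["I", "use", "Array", "List", "in", "Android"]
tags = ["O", "O", "B-Class", "I-Class", "O", "B-OS"]
print(bio_to_spans(tokens, tags))  # [('Class', 'Array List'), ('OS', 'Android')]
```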
| 458 | [
[
-0.021575927734375,
-0.039306640625,
0.01169586181640625,
0.01070404052734375,
0.0155181884765625,
0.019744873046875,
-0.0169525146484375,
-0.0026645660400390625,
0.0247039794921875,
0.0255584716796875,
-0.00576019287109375,
-0.07586669921875,
-0.028564453125,
... |
timm/eca_nfnet_l2.ra3_in1k | 2023-03-24T01:14:30.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2102.06171",
"arxiv:2101.08692",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/eca_nfnet_l2.ra3_in1k | 0 | 445 | timm | 2023-03-24T01:13:40 | ---
tags:
- image-classification
- timm
library_tag: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for eca_nfnet_l2.ra3_in1k
An ECA-NFNet-Lite (Lightweight NFNet w/ ECA attention) image classification model. Trained in `timm` by Ross Wightman.
Normalization Free Networks are (pre-activation) ResNet-like models without any normalization layers. Instead of Batch Normalization or alternatives, they use Scaled Weight Standardization and specifically placed scalar gains in residual path and at non-linearities based on signal propagation analysis.
Lightweight NFNets are `timm`-specific variants that reduce the SE and bottleneck ratio from 0.5 -> 0.25 (reducing widths) and use a smaller group size while maintaining the same depth. SiLU activations are used instead of GELU.
This NFNet variant also uses ECA (Efficient Channel Attention) instead of SE (Squeeze-and-Excitation).
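A minimal scalar sketch of the Scaled Weight Standardization step mentioned above (the real models standardize per output channel of each conv and learn the gain; here it is a plain function over a flat list):

```python
import math

def standardize_weights(w, gain=1.0, eps=1e-5):
    """Standardize a flat weight vector to zero mean / unit variance,
    then apply a scalar gain (illustrative scalar version)."""
    n = len(w)
    mean = sum(w) / n
    var = sum((x - mean) ** 2 for x in w) / n
    return [gain * (x - mean) / math.sqrt(var + eps) for x in w]

# The standardized weights have (near-)zero mean and unit variance,
# keeping activation statistics stable without a BatchNorm layer.
ws = standardize_weights([1.0, 2.0, 3.0, 4.0])
```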
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 56.7
- GMACs: 21.0
- Activations (M): 47.4
- Image size: train = 320 x 320, test = 384 x 384
- **Papers:**
- High-Performance Large-Scale Image Recognition Without Normalization: https://arxiv.org/abs/2102.06171
- Characterizing signal propagation to close the performance gap in unnormalized ResNets: https://arxiv.org/abs/2101.08692
- **Original:** https://github.com/huggingface/pytorch-image-models
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('eca_nfnet_l2.ra3_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'eca_nfnet_l2.ra3_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 160, 160])
# torch.Size([1, 256, 80, 80])
# torch.Size([1, 512, 40, 40])
# torch.Size([1, 1536, 20, 20])
# torch.Size([1, 3072, 10, 10])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'eca_nfnet_l2.ra3_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 3072, 10, 10) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{brock2021high,
author={Andrew Brock and Soham De and Samuel L. Smith and Karen Simonyan},
title={High-Performance Large-Scale Image Recognition Without Normalization},
journal={arXiv preprint arXiv:2102.06171},
year={2021}
}
```
```bibtex
@inproceedings{brock2021characterizing,
author={Andrew Brock and Soham De and Samuel L. Smith},
title={Characterizing signal propagation to close the performance gap in
unnormalized ResNets},
booktitle={9th International Conference on Learning Representations, {ICLR}},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 5,087 | [
[
-0.042205810546875,
-0.03955078125,
-0.00015783309936523438,
0.00923919677734375,
-0.0259552001953125,
-0.028045654296875,
-0.02581787109375,
-0.043365478515625,
0.0254974365234375,
0.034393310546875,
-0.032073974609375,
-0.051727294921875,
-0.0548095703125,
... |
timm/samvit_huge_patch16.sa1b | 2023-05-18T21:56:08.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"arxiv:2304.02643",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/samvit_huge_patch16.sa1b | 0 | 445 | timm | 2023-05-18T21:46:19 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
---
# Model card for samvit_huge_patch16.sa1b
A Segment-Anything Vision Transformer (SAM ViT) image feature model (NOTE: for features and fine-tuning; the segmentation head is not included). Pretrained on SA-1B for segmentation by paper authors, with initialization from MAE weights.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 637.0
- GMACs: 2982.2
- Activations (M): 3428.2
- Image size: 1024 x 1024
- **Papers:**
- Segment Anything: https://arxiv.org/abs/2304.02643
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Original:** https://github.com/facebookresearch/segment-anything
- **Pretrain Dataset:** SA-1B
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('samvit_huge_patch16.sa1b', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'samvit_huge_patch16.sa1b',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 256, 64, 64) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{kirillov2023segany,
title={Segment Anything},
author={Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Doll{'a}r, Piotr and Girshick, Ross},
journal={arXiv:2304.02643},
year={2023}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 3,753 | [
[
-0.03662109375,
-0.0285186767578125,
0.015625,
0.0102996826171875,
-0.032867431640625,
-0.028411865234375,
-0.0176544189453125,
-0.03411865234375,
0.0271148681640625,
0.0197906494140625,
-0.037872314453125,
-0.046417236328125,
-0.05426025390625,
-0.005973815... |
Joemgu/mlong-t5-large-sumstew | 2023-10-18T05:20:47.000Z | [
"transformers",
"pytorch",
"longt5",
"text2text-generation",
"summarization",
"long",
"title generation",
"en",
"de",
"fr",
"it",
"es",
"dataset:Joemgu/sumstew",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | Joemgu | null | null | Joemgu/mlong-t5-large-sumstew | 2 | 445 | transformers | 2023-06-11T19:35:06 | ---
language:
- en
- de
- fr
- it
- es
license: apache-2.0
library_name: transformers
tags:
- summarization
- long
- title generation
datasets:
- Joemgu/sumstew
metrics:
- rouge
pipeline_tag: summarization
model-index:
- name: Joemgu/mlong-t5-large-sumstew
results:
- task:
type: summarization
name: Summarization
dataset:
name: samsum
type: samsum
config: samsum
split: test
metrics:
- type: rouge
value: 29.7108
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjQ5MjY0NzQ1NzllNGQwOThiOTM1YzEyZWVjODFjZDdjZDViMjI3ODJmMzJmMWMxMTM3MzJiMzI1MzVhYTY1NyIsInZlcnNpb24iOjF9.ba2p1M93JoZuytNmxMMEMp8XSxiTESY0fcJLLGSMinXcpNNgou5voTCBYdlSLvEKwEOHClfVdiNUVJMjzYg0BA
- type: rouge
value: 8.2261
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDI3MzAzNGI2MDNkY2Q5ODdiM2ZlOTM0ZDBkMDRlMjBiNzhjYmFhMmIyZTBmMjM3NjEzMDRjZTZjOTQwYTA2YiIsInZlcnNpb24iOjF9.F7ziJPm8M1RueG6vGIbaHCsq7hcp2SIi_CoQfdVSrJZbyo3wNZoWwEj3YS5AmPs7pZUYUj4d5Lyx1OzW5UInBQ
- type: rouge
value: 23.3337
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTA0MzRjM2JmOWJmZmM5NzhlNjIzZTBiYTg2Mjg3MzYwZTUwNDBkNjBkMDMyN2JhZjg1MzVjM2ZmNTFmM2EzOSIsInZlcnNpb24iOjF9.hwi4TH_eMUbKB-9_BxFQgpm5rTvr0-3tZXJWhJAhULXvrDaCM_QQUP15Mpvj8rhkj5RWSyyXwePXzHa2kQ5GDg
- type: rouge
value: 26.2366
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmI1NWZhNGQwNjY1OTAxZmY5YTE0NTkyNGQ0ZTA4N2QyZTk2ZWE5NDc1ZjYwMjBmYzI1OThlN2Q4YTJhNzg0ZiIsInZlcnNpb24iOjF9.IAw2t2gIgUde3-fALzgqdxF0lj0_vkIDn350NZC7wtVa-qRBgbYc8wMOAZDz2n4B7i-vp6vbWYX19ee4gRy5Cg
- type: loss
value: 3.2148165702819824
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGQ2ZDBhMjY4OTJmMTJiNzNkY2YxYzlmNDllY2M3YTRkYzQxOTljM2U4NzEyODUxNDMzN2E4ZWY2NjliYmQ2MCIsInZlcnNpb24iOjF9.lRdscf3-6dyITJQZc3KGIiw_hDhHSbZrN-I_8CPjeyP-x23fHSkH1UbKaYnXdzpaNwKen-FPib25rJN5mOx_CQ
- type: gen_len
value: 19.0
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzRiYTlkOWE3YTBiMDA3MzE1MjAxYzA3N2FkNzJiNGFkYjBkNzhmY2FmMTMzYmQxNzJmMTE5ZTFmNGEwZmQ4OCIsInZlcnNpb24iOjF9.gF5iEgwXuKzCx2acampeKmEQDMd3KhH5GHHFQqVrYauTvruh43xf9fGBao-_iuSJFAnLget2kpaUISxaxUKgBA
---
# mLong-T5-large-sumstew
TL;DR: Multilingual, longform (up to 16k input tokens), abstractive summarization model. Trained on sumstew to generate both title and summary of a given input document.
## Usage:
### Using pipeline (Easy):
```python
from transformers import pipeline
summarizer = pipeline("summarization", "joemgu/mlong-t5-large-sumstew")
text = "Alice was beginning to get very tired of sitting by her sister on the bank, and of having nothing to do: once or twice she had peeped into the book her sister was reading, but it had no pictures or conversations in it, 'and what is the use of a book,' thought Alice 'without pictures or conversations?' So she was considering in her own mind (as well as she could, for the hot day made her feel very sleepy and stupid), whether the pleasure of making a daisy-chain would be worth the trouble of getting up and picking the daisies, when suddenly a White Rabbit with pink eyes ran close by her. There was nothing so very remarkable in that; nor did Alice think it so very much out of the way to hear the Rabbit say to itself, 'Oh dear! Oh dear! I shall be late!' (when she thought it over afterwards, it occurred to her that she ought to have wondered at this, but at the time it all seemed quite natural); but when the Rabbit actually took a watch out of its waistcoat-pocket, and looked at it, and then hurried on, Alice started to her feet, for it flashed across her mind that she had never before seen a rabbit with either a waistcoat-pocket, or a watch to take out of it, and burning with curiosity, she ran across the field after it, and fortunately was just in time to see it pop down a large rabbit-hole under the hedge. In another moment down went Alice after it, never once considering how in the world she was to get out again."
summary = summarizer(text)[0]["summary_text"]
print(summary)
```
Output:
```text
Title: Alice and the White Rabbit Summary: Alice is a bored and curious girl who follows a White Rabbit with a watch into a rabbit-hole. She enters a strange world where she has many adventures and meets many peculiar creatures.
```
### Using .from_pretrained for more control (Advanced):
```python
from transformers import LongT5ForConditionalGeneration, T5Tokenizer
checkpoint = "joemgu/mlong-t5-large-sumstew"
gen_kwargs = {
"max_length": 1024,
"do_sample": False,
"num_beams": 4, # higher = better, but uses more memory
"use_cache": True, # Set to False if running out of memory, but will be MUCH slower
"early_stopping": True,
"num_return_sequences": 1,
"repetition_penalty": 3.5,
"encoder_repetition_penalty": 2.0,
"length_penalty": 1.0, # higher = longer summaries
"encoder_no_repeat_ngram_size": 4,
"no_repeat_ngram_size": 6,
}
model = LongT5ForConditionalGeneration.from_pretrained(checkpoint)
tokenizer = T5Tokenizer.from_pretrained(checkpoint)
prefix = "Write a title and summarize: "
input_document = "Alice was beginning to get very tired of sitting by her sister on the bank, and of having nothing to do: once or twice she had peeped into the book her sister was reading, but it had no pictures or conversations in it, 'and what is the use of a book,' thought Alice 'without pictures or conversations?' So she was considering in her own mind (as well as she could, for the hot day made her feel very sleepy and stupid), whether the pleasure of making a daisy-chain would be worth the trouble of getting up and picking the daisies, when suddenly a White Rabbit with pink eyes ran close by her. There was nothing so very remarkable in that; nor did Alice think it so very much out of the way to hear the Rabbit say to itself, 'Oh dear! Oh dear! I shall be late!' (when she thought it over afterwards, it occurred to her that she ought to have wondered at this, but at the time it all seemed quite natural); but when the Rabbit actually took a watch out of its waistcoat-pocket, and looked at it, and then hurried on, Alice started to her feet, for it flashed across her mind that she had never before seen a rabbit with either a waistcoat-pocket, or a watch to take out of it, and burning with curiosity, she ran across the field after it, and fortunately was just in time to see it pop down a large rabbit-hole under the hedge. In another moment down went Alice after it, never once considering how in the world she was to get out again."
inputs = tokenizer(prefix + input_document, return_tensors="pt", max_length=16384, truncation=True, add_special_tokens=True)
outputs = model.generate(**inputs, **gen_kwargs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Prefix your document of choice with either:
- "Summarize: "+INPUT_TEXT
- "Write a title and summarize: "+INPUT_TEXT
Depending on the prefix, the output will either be:
- "Summary: SUMMARY OF INPUT_TEXT"
- "Title: TITLE OF INPUT_TEXT Summary: SUMMARY OF INPUT_TEXT" | 7,385 | [
[
-0.018310546875,
-0.0552978515625,
0.032684326171875,
0.0211944580078125,
-0.01062774658203125,
-0.01219940185546875,
-0.00812530517578125,
-0.034393310546875,
0.03131103515625,
0.0282745361328125,
-0.0440673828125,
-0.0283355712890625,
-0.0504150390625,
0.0... |
alarcon7a/lora-trained-xl-colab | 2023-08-09T19:47:31.000Z | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"license:openrail++",
"has_space",
"region:us"
] | text-to-image | alarcon7a | null | null | alarcon7a/lora-trained-xl-colab | 1 | 445 | diffusers | 2023-08-09T17:47:03 |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of alarconc person
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - alarcon7a/lora-trained-xl-colab
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of alarconc person using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
| 643 | [
[
-0.0199127197265625,
-0.028564453125,
0.021514892578125,
0.00457000732421875,
-0.034515380859375,
0.0184783935546875,
0.021728515625,
-0.0159759521484375,
0.07122802734375,
0.042144775390625,
-0.042877197265625,
-0.03131103515625,
-0.045074462890625,
-0.0026... |
dbddv01/gpt2-french-small | 2023-05-05T11:57:48.000Z | [
"transformers",
"pytorch",
"jax",
"safetensors",
"gpt2",
"text-generation",
"french",
"model",
"fr",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | dbddv01 | null | null | dbddv01/gpt2-french-small | 6 | 444 | transformers | 2022-03-02T23:29:05 | ---
language: "fr"
tags:
- french
- gpt2
- model
---
A small French language model for French text generation (and possibly more NLP tasks...)
**Introduction**
This French GPT-2 model is based on the OpenAI GPT-2 small model.
It was trained on a <b>very small (190 MB) dataset</b> from French Wikipedia using transfer learning and fine-tuning techniques in just over a day, on one Colab Pro instance with 1 GPU (16 GB).
It was created by applying the recipe of <b>Pierre Guillou</b>.
See https://medium.com/@pierre_guillou/faster-than-training-from-scratch-fine-tuning-the-english-gpt-2-in-any-language-with-hugging-f2ec05c98787
It is a proof of concept showing that it is possible to obtain a language model in any language with limited resources.
It was fine-tuned from the English pre-trained GPT-2 small using the Hugging Face libraries (Transformers and Tokenizers) wrapped into the fastai v2 Deep Learning framework. All the fine-tuning fastai v2 techniques were used.
It is now available on Hugging Face. For further information or requests, please go to "Faster than training from scratch — Fine-tuning the English GPT-2 in any language with Hugging Face and fastai v2 (practical case with Portuguese)".
The model might be improved by using a larger dataset and more powerful training infrastructure. Even so, this one can be used for small fine-tuning experiments (e.g., with aitextgen).
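As a hedged sketch, the model can be tried with the transformers `pipeline` API (the prompt and sampling settings below are illustrative, not taken from the original card):

```python
from transformers import pipeline

# Load the model from the Hugging Face Hub
generator = pipeline("text-generation", model="dbddv01/gpt2-french-small")

# Generate a short French continuation; sampling settings are illustrative
outputs = generator(
    "La France est un pays",
    max_new_tokens=30,
    do_sample=True,
    top_k=50,
)
print(outputs[0]["generated_text"])
```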
PS: I've lost the metrics, but it speaks French with some minor grammar issues; text coherence is somewhat limited. | 1,492 | [
[
-0.036773681640625,
-0.064453125,
0.02874755859375,
0.02801513671875,
-0.0136260986328125,
-0.0283355712890625,
-0.0295257568359375,
-0.0447998046875,
0.01532745361328125,
0.037750244140625,
-0.039642333984375,
0.0060577392578125,
-0.05157470703125,
0.006099... |
stanford-crfm/music-small-800k | 2023-06-16T21:27:08.000Z | [
"transformers",
"pytorch",
"gpt2",
"arxiv:2306.08620",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"has_space"
] | null | stanford-crfm | null | null | stanford-crfm/music-small-800k | 0 | 444 | transformers | 2023-06-04T23:54:35 | ---
license: apache-2.0
---
This is a Small (128M parameter) Transformer trained for 800k steps on arrival-time encoded music from the [Lakh MIDI dataset](https://colinraffel.com/projects/lmd/). This model was trained with anticipation.
# References for the Anticipatory Music Transformer
The Anticipatory Music Transformer paper is available on [ArXiv](http://arxiv.org/abs/2306.08620).
The full model card is available [here](https://johnthickstun.com/assets/pdf/music-modelcard.pdf).
Code for using this model is available on [GitHub](https://github.com/jthickstun/anticipation/).
See the accompanying [blog post](https://crfm.stanford.edu/2023/06/16/anticipatory-music-transformer.html) for additional discussion of this model.
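The anticipation repository linked above defines the proper event tokenization and control codes; purely as a hedged sketch (assuming the checkpoint loads as a standard GPT-2 causal LM, as the `gpt2` tag suggests), raw token continuations could be sampled like this:

```python
import torch
from transformers import AutoModelForCausalLM

# Assumption: the checkpoint is a standard GPT-2 causal LM over arrival-time music tokens.
# The event vocabulary and control tokens are defined by the `anticipation` package,
# not by a text tokenizer, so the seed below is only a placeholder.
model = AutoModelForCausalLM.from_pretrained("stanford-crfm/music-small-800k")

seed = torch.tensor([[0]])  # placeholder start token; real seeds come from the anticipation library
tokens = model.generate(seed, max_new_tokens=64, do_sample=True, top_p=0.95)
print(tokens.shape)
```

For actual MIDI input/output, use the tooling in the GitHub repository rather than this raw sketch.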
| 738 | [
[
-0.045928955078125,
-0.014739990234375,
0.036865234375,
0.005481719970703125,
-0.042816162109375,
-0.0222320556640625,
0.0123443603515625,
-0.0255584716796875,
0.0221405029296875,
0.036956787109375,
-0.0579833984375,
-0.0144805908203125,
-0.04888916015625,
-... |
digiplay/bluePencilRealistic_v01 | 2023-07-22T14:01:26.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | digiplay | null | null | digiplay/bluePencilRealistic_v01 | 2 | 444 | diffusers | 2023-06-12T19:44:17 | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Author's page:
https://huggingface.co/bluepen5805/blue_pencil_realistic
This model can generate realistic images, 2.5D images, and painting-like images;
it is very useful 👍.
Sample images I made:



| 739 | [
[
-0.0279541015625,
-0.05120849609375,
0.025421142578125,
0.02069091796875,
-0.035247802734375,
-0.0102386474609375,
0.00594329833984375,
-0.042633056640625,
0.032135009765625,
0.0253143310546875,
-0.05047607421875,
-0.03143310546875,
-0.0178375244140625,
-0.0... |
badmonk/anjyuneko | 2023-07-15T05:31:47.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | badmonk | null | null | badmonk/anjyuneko | 1 | 444 | diffusers | 2023-07-15T05:27:49 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
# Model Card for ANJYUNEKO
## Model Description
- **Developed by:** BADMONK
- **Model type:** Dreambooth Model + Extracted LoRA
- **Language(s) (NLP):** EN
- **License:** Creativeml-Openrail-M
- **Parent Model:** BRAV5
# How to Get Started with the Model
Use the code below to get started with the model.
### ANJYUNEKO ### | 423 | [
[
-0.0111236572265625,
-0.0307464599609375,
0.0113067626953125,
0.00806427001953125,
-0.06402587890625,
0.005329132080078125,
0.0335693359375,
-0.0212554931640625,
0.041290283203125,
0.056915283203125,
-0.05377197265625,
-0.05474853515625,
-0.041046142578125,
... |
badmonk/sramzno | 2023-07-15T12:31:36.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | badmonk | null | null | badmonk/sramzno | 1 | 444 | diffusers | 2023-07-15T12:29:38 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
# Model Card for SRAMZNO
## Model Description
- **Developed by:** BADMONK
- **Model type:** Dreambooth Model + Extracted LoRA
- **Language(s) (NLP):** EN
- **License:** Creativeml-Openrail-M
- **Parent Model:** ChilloutMix
# How to Get Started with the Model
Use the code below to get started with the model.
### SRAMZNO ### | 425 | [
[
-0.0162506103515625,
-0.02008056640625,
0.01953125,
0.01317596435546875,
-0.0831298828125,
-0.00014078617095947266,
0.0211181640625,
-0.027313232421875,
0.04241943359375,
0.057281494140625,
-0.0667724609375,
-0.05364990234375,
-0.0518798828125,
-0.0357971191... |
stablediffusionapi/deliberate-v3 | 2023-09-06T17:27:32.000Z | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | stablediffusionapi | null | null | stablediffusionapi/deliberate-v3 | 0 | 444 | diffusers | 2023-09-06T17:26:20 | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# deliberate-v3 API Inference

## Get API Key
Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed.
Replace Key in below code, change **model_id** to "deliberate-v3"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try model for free: [Generate Images](https://stablediffusionapi.com/models/deliberate-v3)
Model link: [View model](https://stablediffusionapi.com/models/deliberate-v3)
Credits: [View credits](https://civitai.com/?query=deliberate-v3)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v4/dreambooth"

payload = json.dumps({
  "key": "your_api_key",
  "model_id": "deliberate-v3",
  "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
  "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
  "width": "512",
  "height": "512",
  "samples": "1",
  "num_inference_steps": "30",
  "safety_checker": "no",
  "enhance_prompt": "yes",
  "seed": None,
  "guidance_scale": 7.5,
  "multi_lingual": "no",
  "panorama": "no",
  "self_attention": "no",
  "upscale": "no",
  "embeddings": "embeddings_model_id",
  "lora": "lora_model_id",
  "webhook": None,
  "track_id": None
})

headers = {
  'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** | 2,453 | [
[
-0.03082275390625,
-0.061004638671875,
0.0478515625,
0.0263519287109375,
-0.035675048828125,
0.0027008056640625,
0.0196075439453125,
-0.042694091796875,
0.04095458984375,
0.050445556640625,
-0.0677490234375,
-0.053741455078125,
-0.021514892578125,
0.00254058... |
antoinelouis/crossencoder-mMiniLMv2-L12-mmarcoFR | 2023-10-05T09:32:33.000Z | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"sentence-similarity",
"fr",
"dataset:unicamp-dl/mmarco",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | sentence-similarity | antoinelouis | null | null | antoinelouis/crossencoder-mMiniLMv2-L12-mmarcoFR | 0 | 444 | sentence-transformers | 2023-09-16T21:16:23 | ---
pipeline_tag: sentence-similarity
language: fr
license: apache-2.0
datasets:
- unicamp-dl/mmarco
metrics:
- recall
tags:
- sentence-similarity
library_name: sentence-transformers
---
# crossencoder-mMiniLMv2-L12-mmarcoFR
This is a [sentence-transformers](https://www.SBERT.net) model trained on the **French** portion of the [mMARCO](https://huggingface.co/datasets/unicamp-dl/mmarco) dataset.
It performs cross-attention between a question-passage pair and outputs a relevance score between 0 and 1. The model can be used for tasks like clustering or [semantic search](https://www.sbert.net/examples/applications/retrieve_rerank/README.html): given a query, score it against some candidate passages -- e.g., retrieved with BM25 or a biencoder -- then sort the passages in decreasing order of relevance according to the model's predictions.
## Usage
***
#### Sentence-Transformers
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```bash
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import CrossEncoder
pairs = [('Query', 'Paragraph1'), ('Query', 'Paragraph2') , ('Query', 'Paragraph3')]
model = CrossEncoder('antoinelouis/crossencoder-mMiniLMv2-L12-mmarcoFR')
scores = model.predict(pairs)
print(scores)
```
#### 🤗 Transformers
Without [sentence-transformers](https://www.SBERT.net), you can use the model as follows:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('antoinelouis/crossencoder-mMiniLMv2-L12-mmarcoFR')
tokenizer = AutoTokenizer.from_pretrained('antoinelouis/crossencoder-mMiniLMv2-L12-mmarcoFR')
pairs = [('Query', 'Paragraph1'), ('Query', 'Paragraph2') , ('Query', 'Paragraph3')]
features = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt')
model.eval()
with torch.no_grad():
scores = model(**features).logits
print(scores)
```
## Evaluation
***
We evaluated the model on 500 random queries from the mMARCO-fr train set (which were excluded from training). Each of these queries has at least one relevant and up to 200 irrelevant passages.
Below, we compare the model performance with other cross-encoder models fine-tuned on the same dataset. We report the R-precision (RP), mean reciprocal rank (MRR), and recall at various cut-offs (R@k).
| | model | Vocab. | #Param. | Size | RP | MRR@10 | R@10(↑) | R@20 | R@50 | R@100 |
|---:|:-----------------------------------------------------------------------------------------------------------------------------|:-------|--------:|------:|-------:|---------:|---------:|-------:|-------:|--------:|
| 1 | [crossencoder-camembert-base-mmarcoFR](https://huggingface.co/antoinelouis/crossencoder-camembert-base-mmarcoFR) | fr | 110M | 443MB | 35.65 | 50.44 | 82.95 | 91.50 | 96.80 | 98.80 |
| 2 | **crossencoder-mMiniLMv2-L12-mmarcoFR** | fr,99+ | 118M | 471MB | 34.37 | 51.01 | 82.23 | 90.60 | 96.45 | 98.40 |
| 3 | [crossencoder-mpnet-base-mmarcoFR](https://huggingface.co/antoinelouis/crossencoder-mpnet-base-mmarcoFR) | en | 109M | 438MB | 29.68 | 46.13 | 80.45 | 87.90 | 93.15 | 96.60 |
| 4 | [crossencoder-distilcamembert-mmarcoFR](https://huggingface.co/antoinelouis/crossencoder-distilcamembert-mmarcoFR) | fr | 68M | 272MB | 27.28 | 43.71 | 80.30 | 89.10 | 95.55 | 98.60 |
| 5 | [crossencoder-electra-base-french-mmarcoFR](https://huggingface.co/antoinelouis/crossencoder-electra-base-french-mmarcoFR) | fr | 110M | 443MB | 28.32 | 45.28 | 79.22 | 87.15 | 93.15 | 95.75 |
| 6 | [crossencoder-mMiniLMv2-L6-mmarcoFR](https://huggingface.co/antoinelouis/crossencoder-mMiniLMv2-L6-mmarcoFR) | fr,99+ | 107M | 428MB | 33.92 | 49.33 | 79.00 | 88.35 | 94.80 | 98.20 |
## Training
***
#### Background
We used the [nreimers/mMiniLMv2-L12-H384-distilled-from-XLMR-Large](https://huggingface.co/nreimers/mMiniLMv2-L12-H384-distilled-from-XLMR-Large) model and fine-tuned it with a binary cross-entropy loss function on 1M question-passage pairs in French with a positive-to-negative ratio of 4 (i.e., 25% of the pairs are relevant and 75% are irrelevant).
#### Hyperparameters
We trained the model on a single Tesla V100 GPU with 32GBs of memory during 10 epochs (i.e., 312.4k steps) using a batch size of 32. We used the adamw optimizer with an initial learning rate of 2e-05, weight decay of 0.01, learning rate warmup over the first 500 steps, and linear decay of the learning rate. The sequence length was limited to 512 tokens.
#### Data
We used the French version of the [mMARCO](https://huggingface.co/datasets/unicamp-dl/mmarco) dataset to fine-tune our model. mMARCO is a multi-lingual machine-translated version of the MS MARCO dataset, a popular large-scale IR dataset.
## Citation
***
```bibtex
@online{louis2023,
  author    = {Antoine Louis},
  title     = {crossencoder-mMiniLMv2-L12-mmarcoFR: A Cross-Encoder Model Trained on 1M sentence pairs in French},
  publisher = {Hugging Face},
  month     = {september},
  year      = {2023},
  url       = {https://huggingface.co/antoinelouis/crossencoder-mMiniLMv2-L12-mmarcoFR},
}
``` | 5,597 | [
[
-0.0408935546875,
-0.0355224609375,
0.008514404296875,
0.0212860107421875,
-0.0189208984375,
-0.01541900634765625,
-0.02288818359375,
-0.0266265869140625,
0.015655517578125,
0.02374267578125,
-0.045196533203125,
-0.03729248046875,
-0.06842041015625,
0.020141... |
TransQuest/monotransquest-da-multilingual | 2021-06-03T19:06:25.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"Quality Estimation",
"monotransquest",
"DA",
"multilingual-multilingual",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | TransQuest | null | null | TransQuest/monotransquest-da-multilingual | 1 | 443 | transformers | 2022-03-02T23:29:05 | ---
language: multilingual-multilingual
tags:
- Quality Estimation
- monotransquest
- DA
license: apache-2.0
---
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, since QE systems have numerous potential uses. They can be employed to select the best translation when several translation engines are available or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have open-sourced our research on translation quality estimation, which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation covering both aspects: predicting post-editing effort and direct assessment.
- Word-level translation quality estimation capable of predicting the quality of source words, target words and target gaps.
- Outperforms current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages tested.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest)
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel
model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-da-multilingual", num_labels=1, use_cuda=torch.cuda.is_available())
predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]])
print(predictions)
```
## Documentation
For more details follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Checkout the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bash
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bash
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bash
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
| 5,423 | [
[
-0.023345947265625,
-0.047821044921875,
0.03680419921875,
0.020599365234375,
-0.0142669677734375,
0.0078582763671875,
0.004459381103515625,
-0.046173095703125,
0.01198577880859375,
0.0216827392578125,
-0.046478271484375,
-0.0460205078125,
-0.04656982421875,
... |
burakaytan/roberta-base-turkish-uncased | 2022-09-07T05:44:18.000Z | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"tr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | burakaytan | null | null | burakaytan/roberta-base-turkish-uncased | 8 | 443 | transformers | 2022-04-20T06:08:13 | ---
language: tr
license: mit
---
🇹🇷 RoBERTaTurk
## Model description
This is a Turkish RoBERTa base model pretrained on Turkish Wikipedia, Turkish OSCAR, and some news websites.
The final training corpus has a size of 38 GB and 329.720.508 sentences.
Thanks to Turkcell, we were able to train the model on an Intel(R) Xeon(R) Gold 6230R CPU @ 2.10GHz with 256GB RAM and 2 x GV100GL [Tesla V100 PCIe 32GB] GPUs for 2.5M steps.
# Usage
Load the transformers library with:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("burakaytan/roberta-base-turkish-uncased")
model = AutoModelForMaskedLM.from_pretrained("burakaytan/roberta-base-turkish-uncased")
```
# Fill Mask Usage
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="burakaytan/roberta-base-turkish-uncased",
tokenizer="burakaytan/roberta-base-turkish-uncased"
)
fill_mask("iki ülke arasında <mask> başladı")
[{'sequence': 'iki ülke arasında savaş başladı',
'score': 0.3013845384120941,
'token': 1359,
'token_str': ' savaş'},
{'sequence': 'iki ülke arasında müzakereler başladı',
'score': 0.1058429479598999,
'token': 30439,
'token_str': ' müzakereler'},
{'sequence': 'iki ülke arasında görüşmeler başladı',
'score': 0.07718811184167862,
'token': 4916,
'token_str': ' görüşmeler'},
{'sequence': 'iki ülke arasında kriz başladı',
'score': 0.07174749672412872,
'token': 3908,
'token_str': ' kriz'},
{'sequence': 'iki ülke arasında çatışmalar başladı',
'score': 0.05678590387105942,
'token': 19346,
'token_str': ' çatışmalar'}]
```
## Citation and Related Information
To cite this model:
```bibtex
@inproceedings{aytan2022comparison,
title={Comparison of Transformer-Based Models Trained in Turkish and Different Languages on Turkish Natural Language Processing Problems},
author={Aytan, Burak and Sakar, C Okan},
booktitle={2022 30th Signal Processing and Communications Applications Conference (SIU)},
pages={1--4},
year={2022},
organization={IEEE}
}
``` | 2,058 | [
[
-0.0301666259765625,
-0.0526123046875,
0.00850677490234375,
0.01381683349609375,
-0.025848388671875,
-0.0140380859375,
-0.0279083251953125,
-0.01340484619140625,
-0.0065765380859375,
0.035980224609375,
-0.03643798828125,
-0.0283355712890625,
-0.0675048828125,
... |
caidas/swin2SR-classical-sr-x4-64 | 2023-01-21T12:08:11.000Z | [
"transformers",
"pytorch",
"swin2sr",
"image-to-image",
"vision",
"arxiv:2209.11345",
"license:apache-2.0",
"region:us"
] | image-to-image | caidas | null | null | caidas/swin2SR-classical-sr-x4-64 | 2 | 443 | transformers | 2022-12-16T14:07:21 | ---
license: apache-2.0
tags:
- vision
- image-to-image
inference: false
---
# Swin2SR model (image super-resolution)
Swin2SR model that upscales images x4. It was introduced in the paper [Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration](https://arxiv.org/abs/2209.11345)
by Conde et al. and first released in [this repository](https://github.com/mv-lab/swin2sr).
# Intended use cases
This model is intended for image super-resolution.
# Usage
Refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/swin2sr#transformers.Swin2SRForImageSuperResolution.forward.example). | 642 | [
[
-0.024627685546875,
0.0022716522216796875,
-0.002368927001953125,
0.0002751350402832031,
-0.03070068359375,
-0.019317626953125,
0.0245361328125,
-0.04705810546875,
0.0030078887939453125,
0.0287933349609375,
-0.053070068359375,
0.01739501953125,
-0.03836059570312... |
pysentimiento/bertweet-irony | 2023-02-20T19:02:11.000Z | [
"pysentimiento",
"pytorch",
"roberta",
"twitter",
"irony",
"en",
"arxiv:2106.09462",
"region:us"
] | null | pysentimiento | null | null | pysentimiento/bertweet-irony | 1 | 443 | pysentimiento | 2023-02-17T02:20:05 | ---
language:
- en
library_name: pysentimiento
tags:
- twitter
- irony
---
# Irony detection in English
## bertweet-irony
Repository: [https://github.com/pysentimiento/pysentimiento/](https://github.com/pysentimiento/pysentimiento/)
Model trained on the SemEval 2018 Task 3 dataset (Van Hee et al., 2018) for irony detection. The base model is [BERTweet](https://huggingface.co/vinai/bertweet-base), a RoBERTa model trained on English tweets.
The positive class marks ironic content; the negative class marks non-ironic content.
## Results
Results for the four tasks evaluated in `pysentimiento`. Results are expressed as Macro F1 scores
| Model | sentiment | emotion | hate_speech | irony |
|:-----------|:------------|:------------|:--------------|:------------|
| bert | 69.6 +- 0.4 | 42.7 +- 0.6 | 56.0 +- 0.8 | 68.1 +- 2.2 |
| electra | 70.9 +- 0.4 | 37.2 +- 2.9 | 55.6 +- 0.6 | 71.3 +- 1.8 |
| roberta | 70.4 +- 0.3 | 45.0 +- 0.9 | 55.1 +- 0.4 | 70.4 +- 2.9 |
| robertuito | 69.6 +- 0.5 | 43.0 +- 3.3 | 57.5 +- 0.2 | 73.9 +- 1.4 |
| bertweet | 72.0 +- 0.4 | 43.1 +- 1.8 | 57.7 +- 0.7 | 80.8 +- 0.7 |
Note that for Hate Speech, these are the results for Semeval 2019, Task 5 Subtask B (HS+TR+AG detection)
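As a reminder of what the scores above mean, Macro F1 averages the per-class F1 scores, giving each class equal weight regardless of support. A minimal illustrative sketch (not pysentimiento's actual evaluation code):

```python
# Illustrative Macro F1 computation for a binary irony task.
def f1(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def macro_f1(y_true, y_pred, classes=("ironic", "not ironic")):
    scores = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        scores.append(f1(tp, fp, fn))
    return sum(scores) / len(scores)

y_true = ["ironic", "ironic", "not ironic", "not ironic"]
y_pred = ["ironic", "not ironic", "not ironic", "not ironic"]
print(macro_f1(y_true, y_pred))  # ≈ 0.733
```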
## Citation
If you use this model in your research, please cite pysentimiento, dataset and pre-trained model papers:
```bibtex
@misc{perez2021pysentimiento,
title={pysentimiento: A Python Toolkit for Sentiment Analysis and SocialNLP tasks},
author={Juan Manuel Pérez and Juan Carlos Giudici and Franco Luque},
year={2021},
eprint={2106.09462},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@inproceedings{van2018semeval,
title={Semeval-2018 task 3: Irony detection in english tweets},
author={Van Hee, Cynthia and Lefever, Els and Hoste, V{\'e}ronique},
booktitle={Proceedings of The 12th International Workshop on Semantic Evaluation},
pages={39--50},
year={2018}
}
@inproceedings{nguyen2020bertweet,
title={BERTweet: A pre-trained language model for English Tweets},
author={Nguyen, Dat Quoc and Vu, Thanh and Nguyen, Anh Tuan},
booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
pages={9--14},
year={2020}
}
``` | 2,257 | [
[
-0.017303466796875,
-0.050079345703125,
0.032623291015625,
0.03387451171875,
-0.0286102294921875,
-0.012176513671875,
-0.019805908203125,
-0.0379638671875,
0.0203094482421875,
0.0113067626953125,
-0.028564453125,
-0.06689453125,
-0.06170654296875,
0.00868225... |
Kansallisarkisto/finbert-ner | 2023-09-12T08:40:01.000Z | [
"transformers",
"pytorch",
"bert",
"token-classification",
"fi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | Kansallisarkisto | null | null | Kansallisarkisto/finbert-ner | 0 | 443 | transformers | 2023-06-27T12:38:39 | ---
license: mit
language:
- fi
metrics:
- f1
- precision
- recall
library_name: transformers
pipeline_tag: token-classification
---
## Finnish named entity recognition
The model performs named entity recognition from text input in Finnish.
It was trained by fine-tuning [bert-base-finnish-cased-v1](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1),
using 10 named entity categories. Training data contains for instance the [Turku OntoNotes Entities Corpus](https://github.com/TurkuNLP/turku-one),
the Finnish part of the [NewsEye dataset](https://zenodo.org/record/4573313)
as well as an annotated dataset consisting of Finnish document data from the 1970s onwards, digitized by the National Archives of Finland.
Since the latter dataset contains also sensitive data, it has not been made publicly available.
## Intended uses & limitations
The model has been trained to recognize the following named entities from a text in Finnish:
- PERSON (person names)
- ORG (organizations)
- LOC (locations)
- GPE (geopolitical locations)
- PRODUCT (products)
- EVENT (events)
- DATE (dates)
- JON (Finnish journal numbers (diaarinumero))
- FIBC (Finnish business identity codes (y-tunnus))
- NORP (nationality, religious and political groups)
Some entities, like EVENT and LOC, are less common in the training data than the others, which means that
recognition accuracy for these entities also tends to be lower.
Most of the training data is relatively recent, so the model may struggle when the input
contains, for example, older names or writing styles.
## How to use
The easiest way to use the model is by utilizing the Transformers pipeline for token classification:
```python
from transformers import pipeline
model_checkpoint = "Kansallisarkisto/finbert-ner"
token_classifier = pipeline(
"token-classification", model=model_checkpoint, aggregation_strategy="simple"
)
predictions = token_classifier("Helsingistä tuli Suomen suuriruhtinaskunnan pääkaupunki vuonna 1812.")
print(predictions)
```
## Training data
Some of the entities (for instance WORK_OF_ART, LAW, MONEY) that have been annotated in the [Turku OntoNotes Entities Corpus](https://github.com/TurkuNLP/turku-one)
dataset were filtered out from the dataset used for training the model. On the other hand, entities that were missing from the [NewsEye dataset](https://zenodo.org/record/4573313)
were added during the annotation process. The different data sources used in model training, validation and testing are listed below:
Dataset|Period covered by the texts|Text type|Percentage of the total data
-|-|-|-
[Turku OntoNotes Entities Corpus](https://github.com/TurkuNLP/turku-one)|2000s|Online texts|23%
[NewsEye dataset](https://zenodo.org/record/4573313)|1850-1950|OCR'd digitized newspaper articles|3%
Diverse document data from Finnish public administration|1970s - 2000s|OCR'd digitized documents|69%
Finnish senate documents|1916|Part manually transcribed, part HTR'd digitized documents|3%
Finnish books from [Project Gutenberg](https://www.gutenberg.org)|Early 20th century|OCR'd texts|1%
Theses from Finnish polytechnic universities |2000s|OCR'd texts|1%
The number of entities belonging to the different
entity classes contained in training, validation and test datasets are listed below:
### Number of entity types in the data
Dataset|PERSON|ORG|LOC|GPE|PRODUCT|EVENT|DATE|JON|FIBC|NORP
-|-|-|-|-|-|-|-|-|-|-
Train|20211|45722|1321|19387|9571|1616|23642|2460|2384|2529
Val|2525|5517|130|2512|1217|240|3047|306|247|283
Test|2414|5577|179|2445|1097|183|2838|272|374|356
## Training procedure
This model was trained using a NVIDIA RTX A6000 GPU with the following hyperparameters:
- learning rate: 2e-05
- train batch size: 24
- epochs: 10
- optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08
- scheduler: linear scheduler with num_warmup_steps=round(len(train_dataloader)/5) and num_training_steps=len(train_dataloader)*epochs
- maximum length of data sequence: 512
- patience: 2 epochs
- classifier dropout: 0.3
In the preprocessing stage, the input texts were split into chunks with a maximum length of 300 tokens,
in order to avoid the tokenized chunks exceeding the maximum length of 512. Tokenization was performed
using the tokenizer for the [bert-base-finnish-cased-v1](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1)
model.
The training code with instructions is available in [GitHub](https://github.com/DALAI-project/Train_BERT_NER).
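The chunking step described above can be sketched as follows. This is a minimal illustration that approximates token counts with whitespace splitting; the actual preprocessing uses the BERT tokenizer and lives in the training repository linked above.

```python
# Illustrative sketch: split a text into chunks of at most 300 tokens,
# approximating tokens with whitespace splitting (the real pipeline uses
# the bert-base-finnish-cased-v1 tokenizer).
def split_into_chunks(text: str, max_tokens: int = 300) -> list[str]:
    tokens = text.split()
    return [
        " ".join(tokens[i:i + max_tokens])
        for i in range(0, len(tokens), max_tokens)
    ]

chunks = split_into_chunks("word " * 650)
print([len(c.split()) for c in chunks])  # [300, 300, 50]
```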
## Evaluation results
Evaluation results using the test dataset are listed below:
||Precision|Recall|F1-score
-|-|-|-
PERSON|0.90|0.91|0.90
ORG|0.84|0.87|0.86
LOC|0.84|0.86|0.85
GPE|0.91|0.91|0.91
PRODUCT|0.73|0.77|0.75
EVENT|0.69|0.73|0.71
DATE|0.90|0.92|0.91
JON|0.83|0.95|0.89
FIBC|0.95|0.99|0.97
NORP|0.91|0.95|0.93
The metrics were calculated using the [seqeval](https://github.com/chakki-works/seqeval) library.
## Acknowledgements
The model was developed in an ERDF-funded project "Using Artificial Intelligence to Improve the Quality and Usability of Digital Records"
(Dalai) in 2021-2023. The purpose of the project was to develop the automation of the digitisation of cultural heritage materials and the
automated description of such materials through artificial intelligence. The main target group comprises memory organisations, archives,
museums and libraries that digitise and provide digital materials to their customers, as well as companies that develop services related
to digitisation and the processing of digital materials.
Project partners were the National Archives of Finland, Central Archives for Finnish Business Records (Elka),
South-Eastern Finland University of Applied Sciences Ltd (Xamk) and Disec Ltd.
The selection and definition of the named entity categories, the formulation of the annotation guidelines and the annotation process have been
carried out in cooperation with the [FIN-CLARIAH research infrastructure / University of Jyväskylä](https://jyu.fi/fin-clariah).
| 6,011 | [
[
-0.038299560546875,
-0.0479736328125,
0.022216796875,
-0.01039886474609375,
-0.02545166015625,
-0.01375579833984375,
-0.0207672119140625,
-0.04046630859375,
0.00994110107421875,
0.0352783203125,
-0.03607177734375,
-0.0501708984375,
-0.03790283203125,
0.02523... |
badmonk/rxri | 2023-07-15T10:41:32.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | badmonk | null | null | badmonk/rxri | 1 | 443 | diffusers | 2023-07-15T10:35:26 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
# Model Card for RXRI
## Model Description
- **Developed by:** BADMONK
- **Model type:** Dreambooth Model + Extracted LoRA
- **Language(s) (NLP):** EN
- **License:** Creativeml-Openrail-M
- **Parent Model:** Chilloutmix
# How to Get Started with the Model
Use the trigger token below to get started with the model.
### RXRI ### | 419 | [
[
-0.0255279541015625,
-0.023834228515625,
0.022308349609375,
0.012786865234375,
-0.059783935546875,
0.00812530517578125,
0.0284576416015625,
-0.02178955078125,
0.034881591796875,
0.056793212890625,
-0.047271728515625,
-0.042572021484375,
-0.043426513671875,
-... |
Jinouga/yamanaka-ino-realistic-v1 | 2023-07-15T16:12:30.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Jinouga | null | null | Jinouga/yamanaka-ino-realistic-v1 | 0 | 443 | diffusers | 2023-07-15T16:06:41 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### yamanaka-ino-realistic-v1 Dreambooth model trained by Jinouga with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
| 514 | [
[
-0.0312347412109375,
-0.06414794921875,
0.047271728515625,
0.019683837890625,
-0.035858154296875,
0.0261993408203125,
0.0206298828125,
-0.0310821533203125,
0.05230712890625,
0.0172882080078125,
-0.034515380859375,
-0.016754150390625,
-0.035186767578125,
-0.0... |
KoichiYasuoka/bert-base-japanese-upos | 2022-09-18T10:43:26.000Z | [
"transformers",
"pytorch",
"bert",
"token-classification",
"japanese",
"pos",
"wikipedia",
"dependency-parsing",
"ja",
"dataset:universal_dependencies",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | token-classification | KoichiYasuoka | null | null | KoichiYasuoka/bert-base-japanese-upos | 2 | 442 | transformers | 2022-03-02T23:29:04 | ---
language:
- "ja"
tags:
- "japanese"
- "token-classification"
- "pos"
- "wikipedia"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
widget:
- text: "国境の長いトンネルを抜けると雪国であった。"
---
# bert-base-japanese-upos
## Model Description
This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-base-japanese-char-extended](https://huggingface.co/KoichiYasuoka/bert-base-japanese-char-extended). Every short-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-japanese-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-japanese-upos")
s="国境の長いトンネルを抜けると雪国であった。"
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(s,p)))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/bert-base-japanese-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| 1,366 | [
[
-0.0237274169921875,
-0.0227813720703125,
0.0215911865234375,
0.0203704833984375,
-0.03717041015625,
-0.00927734375,
-0.0189208984375,
-0.01543426513671875,
0.02911376953125,
0.030059814453125,
-0.0333251953125,
-0.03668212890625,
-0.042724609375,
-0.0010118... |
Falah/arabic-amera | 2023-03-05T09:45:15.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Falah | null | null | Falah/arabic-amera | 0 | 442 | diffusers | 2023-03-04T15:43:27 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### arabic-amera model trained by Falah.G.Salieh
## You can visit my blog: https://iraqprogrammer.wordpress.com/
## FB: https://web.facebook.com/falahgs4ai
## Email: falahgs07@gmail.com
With Stable Diffusion, we can now create AI-generated art from a set of training images.
This model creates images of an Arab princess, called "arabic amera" in Arabic (اميرة عربية),
in the style of famous images, or anything else you can think of. Test the concept via the A1111 Colab fast-Colab-A1111.
# Use any prompt and add the arabic-amera style word:
# prompts:
25yo Arabic smiling female looking at the viewer, a detailed face,
attractive, full elegant dress, wavy chestnut hair, ((closeup)), perfect eyes,
(interior home background), (photorealistic), intricate, highly detailed, absurd res,
symmetrical, backlighting, colorful, concept art,
(photography:1.5), sharp focus, illustration, award-winning, 8K.by arabic-amera style
# Sample pictures of this concept:










| 2,088 | [
[
-0.054046630859375,
-0.0538330078125,
0.0029773712158203125,
0.0192718505859375,
-0.027923583984375,
0.007320404052734375,
0.0174407958984375,
-0.05279541015625,
0.056182861328125,
0.0303802490234375,
-0.045379638671875,
-0.040679931640625,
-0.055694580078125,
... |
stablediffusionapi/architecture-tuned-model | 2023-08-29T13:43:55.000Z | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | stablediffusionapi | null | null | stablediffusionapi/architecture-tuned-model | 5 | 442 | diffusers | 2023-06-02T03:12:01 | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# Architecture Tuned Model API Inference

## Get API Key
Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed.
Replace Key in below code, change **model_id** to "architecture-tuned-model"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try model for free: [Generate Images](https://stablediffusionapi.com/models/architecture-tuned-model)
Model link: [View model](https://stablediffusionapi.com/models/architecture-tuned-model)
Credits: [View credits](https://civitai.com/?query=Architecture%20Tuned%20Model)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v4/dreambooth"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "architecture-tuned-model",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** | 2,546 | [
[
-0.036376953125,
-0.059600830078125,
0.0323486328125,
0.0164031982421875,
-0.037994384765625,
0.0059967041015625,
0.022369384765625,
-0.034027099609375,
0.03955078125,
0.039703369140625,
-0.062225341796875,
-0.06195068359375,
-0.02276611328125,
-0.0040397644... |
UCSC-VLAA/ViT-L-14-CLIPA-datacomp1B | 2023-10-17T05:46:10.000Z | [
"open_clip",
"clip",
"zero-shot-image-classification",
"dataset:mlfoundations/datacomp_1b",
"arxiv:2306.15658",
"arxiv:2305.07017",
"license:apache-2.0",
"region:us"
] | zero-shot-image-classification | UCSC-VLAA | null | null | UCSC-VLAA/ViT-L-14-CLIPA-datacomp1B | 0 | 442 | open_clip | 2023-10-17T05:42:03 | ---
tags:
- clip
library_name: open_clip
pipeline_tag: zero-shot-image-classification
license: apache-2.0
datasets:
- mlfoundations/datacomp_1b
---
# Model card for ViT-L-14-CLIPA-datacomp1B
A CLIPA-v2 model...
## Model Details
- **Model Type:** Contrastive Image-Text, Zero-Shot Image Classification.
- **Original:** https://github.com/UCSC-VLAA/CLIPA
- **Dataset:** mlfoundations/datacomp_1b
- **Papers:**
- CLIPA-v2: Scaling CLIP Training with 81.1% Zero-shot ImageNet Accuracy within a $10,000 Budget; An Extra $4,000 Unlocks 81.8% Accuracy: https://arxiv.org/abs/2306.15658
- An Inverse Scaling Law for CLIP Training: https://arxiv.org/abs/2305.07017
## Model Usage
### With OpenCLIP
```python
import torch
import torch.nn.functional as F
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer
model, preprocess = create_model_from_pretrained('hf-hub:UCSC-VLAA/ViT-L-14-CLIPA-datacomp1B')
tokenizer = get_tokenizer('hf-hub:UCSC-VLAA/ViT-L-14-CLIPA-datacomp1B')
image = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
image = preprocess(image).unsqueeze(0)
text = tokenizer(["a diagram", "a dog", "a cat", "a beignet"], context_length=model.context_length)
with torch.no_grad(), torch.cuda.amp.autocast():
image_features = model.encode_image(image)
text_features = model.encode_text(text)
image_features = F.normalize(image_features, dim=-1)
text_features = F.normalize(text_features, dim=-1)
text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print("Label probs:", text_probs) # prints: [[0., 0., 0., 1.0]]
```
## Citation
```bibtex
@article{li2023clipav2,
title={CLIPA-v2: Scaling CLIP Training with 81.1% Zero-shot ImageNet Accuracy within a $10,000 Budget; An Extra $4,000 Unlocks 81.8% Accuracy},
author={Xianhang Li and Zeyu Wang and Cihang Xie},
journal={arXiv preprint arXiv:2306.15658},
year={2023},
}
```
```bibtex
@inproceedings{li2023clipa,
title={An Inverse Scaling Law for CLIP Training},
author={Xianhang Li and Zeyu Wang and Cihang Xie},
booktitle={NeurIPS},
year={2023},
}
```
| 2,210 | [
[
-0.02557373046875,
-0.032623291015625,
0.00662994384765625,
0.0179901123046875,
-0.0294342041015625,
-0.02166748046875,
0.0003528594970703125,
-0.0281829833984375,
0.03759765625,
0.0134429931640625,
-0.03997802734375,
-0.033355712890625,
-0.047882080078125,
... |
Davlan/xlm-roberta-base-ner-hrl | 2023-08-14T19:35:17.000Z | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"token-classification",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | Davlan | null | null | Davlan/xlm-roberta-base-ner-hrl | 13 | 441 | transformers | 2022-03-02T23:29:04 | ---
license: afl-3.0
language:
- ar
- de
- en
- es
- fr
- it
- lv
- nl
- pt
- zh
- multilingual
---
# xlm-roberta-base-ner-hrl
## Model description
**xlm-roberta-base-ner-hrl** is a **Named Entity Recognition** model for 10 high resourced languages (Arabic, German, English, Spanish, French, Italian, Latvian, Dutch, Portuguese and Chinese) based on a fine-tuned XLM-RoBERTa base model. It has been trained to recognize three types of entities: location (LOC), organizations (ORG), and person (PER).
Specifically, this model is an *xlm-roberta-base* model that was fine-tuned on an aggregation of datasets in 10 high-resourced languages.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Davlan/xlm-roberta-base-ner-hrl")
model = AutoModelForTokenClassification.from_pretrained("Davlan/xlm-roberta-base-ner-hrl")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Nader Jokhadar had given Syria the lead with a well-struck header in the seventh minute."
ner_results = nlp(example)
print(ner_results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
The training data for the 10 languages are from:
Language|Dataset
-|-
Arabic | [ANERcorp](https://camel.abudhabi.nyu.edu/anercorp/)
German | [conll 2003](https://www.clips.uantwerpen.be/conll2003/ner/)
English | [conll 2003](https://www.clips.uantwerpen.be/conll2003/ner/)
Spanish | [conll 2002](https://www.clips.uantwerpen.be/conll2002/ner/)
French | [Europeana Newspapers](https://github.com/EuropeanaNewspapers/ner-corpora/tree/master/enp_FR.bnf.bio)
Italian | [Italian I-CAB](https://ontotext.fbk.eu/icab.html)
Latvian | [Latvian NER](https://github.com/LUMII-AILab/FullStack/tree/master/NamedEntities)
Dutch | [conll 2002](https://www.clips.uantwerpen.be/conll2002/ner/)
Portuguese |[Paramopama + Second Harem](https://github.com/davidsbatista/NER-datasets/tree/master/Portuguese)
Chinese | [MSRA](https://huggingface.co/datasets/msra_ner)
The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
-|-
O|Outside of a named entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
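As an illustration of the tagging scheme above, a minimal (hypothetical) decoder that turns a BIO tag sequence into entity spans might look like this; it is a sketch, not part of the model's API:

```python
# Decode BIO tags into (label, entity-text) spans. A "B-" tag, or an "I-" tag
# whose label differs from the open span, starts a new entity.
def bio_to_spans(tokens, tags):
    spans, current = [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-") or (
            tag.startswith("I-") and (current is None or current[0] != tag[2:])
        ):
            if current:
                spans.append(current)
            current = (tag[2:], [tok])
        elif tag.startswith("I-") and current:
            current[1].append(tok)
        else:  # "O"
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(label, " ".join(toks)) for label, toks in spans]

tokens = ["Nader", "Jokhadar", "had", "given", "Syria", "the", "lead"]
tags = ["B-PER", "I-PER", "O", "O", "B-LOC", "O", "O"]
print(bio_to_spans(tokens, tags))  # [('PER', 'Nader Jokhadar'), ('LOC', 'Syria')]
```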
## Training procedure
This model was trained on NVIDIA V100 GPU with recommended hyperparameters from HuggingFace code. | 3,042 | [
[
-0.038482666015625,
-0.04583740234375,
0.01152801513671875,
0.0264434814453125,
-0.011810302734375,
0.005298614501953125,
-0.024078369140625,
-0.04541015625,
0.040374755859375,
0.030731201171875,
-0.0350341796875,
-0.0548095703125,
-0.06292724609375,
0.03192... |
savasy/bert-base-turkish-ner-cased | 2023-06-22T14:42:45.000Z | [
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"token-classification",
"tr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | savasy | null | null | savasy/bert-base-turkish-ner-cased | 9 | 441 | transformers | 2022-03-02T23:29:05 | ---
language: tr
---
# For the Turkish language, here is an easy-to-use NER application.
An easy-to-use Python NER (BERT + transfer learning) model for Turkish (named entity recognition)...
Thanks to @stefan-it, I applied the following for training:
```
cd tr-data

for file in train.txt dev.txt test.txt labels.txt
do
  wget https://schweter.eu/storage/turkish-bert-wikiann/$file
done

cd ..
```
It will download the pre-processed datasets with training, dev and test splits and put them in a tr-data folder.
Run pre-training
After downloading the dataset, pre-training can be started. Just set the following environment variables:
```
export MAX_LENGTH=128
export BERT_MODEL=dbmdz/bert-base-turkish-cased
export OUTPUT_DIR=tr-new-model
export BATCH_SIZE=32
export NUM_EPOCHS=3
export SAVE_STEPS=625
export SEED=1
```
Then run pre-training:
```
python3 run_ner_old.py --data_dir ./tr-data3 \
--model_type bert \
--labels ./tr-data/labels.txt \
--model_name_or_path $BERT_MODEL \
--output_dir $OUTPUT_DIR-$SEED \
--max_seq_length $MAX_LENGTH \
--num_train_epochs $NUM_EPOCHS \
--per_gpu_train_batch_size $BATCH_SIZE \
--save_steps $SAVE_STEPS \
--seed $SEED \
--do_train \
--do_eval \
--do_predict \
--fp16
```
# Usage
```
from transformers import pipeline, AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("savasy/bert-base-turkish-ner-cased")
tokenizer = AutoTokenizer.from_pretrained("savasy/bert-base-turkish-ner-cased")
ner=pipeline('ner', model=model, tokenizer=tokenizer)
ner("Mustafa Kemal Atatürk 19 Mayıs 1919'da Samsun'a ayak bastı.")
```
# Some results
Data1: For the data above
Eval Results:
* precision = 0.916400580551524
* recall = 0.9342309684101502
* f1 = 0.9252298787412536
* loss = 0.11335893666411284
Test Results:
* precision = 0.9192058759362955
* recall = 0.9303010230367262
* f1 = 0.9247201697271198
* loss = 0.11182546521618497
Data2:
https://github.com/stefan-it/turkish-bert/files/4558187/nerdata.txt
The performance for the data given by @kemalaraz is as follows
savas@savas-lenova:~/Desktop/trans/tr-new-model-1$ cat eval_results.txt
* precision = 0.9461980692049029
* recall = 0.959309358847465
* f1 = 0.9527086063783312
* loss = 0.037054269206847804
savas@savas-lenova:~/Desktop/trans/tr-new-model-1$ cat test_results.txt
* precision = 0.9458370635631155
* recall = 0.9588201928530913
* f1 = 0.952284378344882
* loss = 0.035431676572445225
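As a quick sanity check on the numbers above, each reported F1 is the harmonic mean of the reported precision and recall (an illustrative sketch, not the evaluation script itself):

```python
# F1 as the harmonic mean of precision and recall.
def f1_score(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

# Data1 eval results: precision = 0.9164..., recall = 0.9342...
f1 = f1_score(0.916400580551524, 0.9342309684101502)
print(round(f1, 4))  # 0.9252
```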
| 2,431 | [
[
-0.030487060546875,
-0.053955078125,
0.0138397216796875,
0.01331329345703125,
-0.015167236328125,
-0.01561737060546875,
-0.0218048095703125,
-0.019134521484375,
0.0162200927734375,
0.0169525146484375,
-0.0280609130859375,
-0.033905029296875,
-0.039276123046875,
... |
stas/tiny-wmt19-en-ru | 2021-05-03T01:47:47.000Z | [
"transformers",
"pytorch",
"fsmt",
"text2text-generation",
"wmt19",
"testing",
"en",
"ru",
"dataset:wmt19",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | stas | null | null | stas/tiny-wmt19-en-ru | 0 | 441 | transformers | 2022-03-02T23:29:05 | ---
language:
- en
- ru
thumbnail:
tags:
- wmt19
- testing
license: apache-2.0
datasets:
- wmt19
metrics:
- bleu
---
# Tiny FSMT en-ru
This is a tiny model that is used in the `transformers` test suite. It doesn't do anything useful, other than testing that `modeling_fsmt.py` is functional.
Do not try to use it for anything that requires quality.
The model is indeed 30KB in size.
You can see how it was created [here](https://huggingface.co/stas/tiny-wmt19-en-ru/blob/main/fsmt-make-super-tiny-model.py).
If you're looking for the real model, please go to [https://huggingface.co/facebook/wmt19-en-ru](https://huggingface.co/facebook/wmt19-en-ru).
| 657 | [
[
-0.043792724609375,
-0.0589599609375,
0.010955810546875,
0.0234375,
-0.032684326171875,
-0.0164031982421875,
0.018798828125,
-0.02264404296875,
0.03558349609375,
0.011138916015625,
-0.08544921875,
0.024017333984375,
-0.01062774658203125,
0.0139923095703125,
... |
timm/swinv2_base_window12_192.ms_in22k | 2023-03-18T03:30:30.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-22k",
"arxiv:2111.09883",
"license:mit",
"region:us"
] | image-classification | timm | null | null | timm/swinv2_base_window12_192.ms_in22k | 0 | 441 | timm | 2023-03-18T03:29:56 | ---
tags:
- image-classification
- timm
library_tag: timm
license: mit
datasets:
- imagenet-22k
---
# Model card for swinv2_base_window12_192.ms_in22k
A Swin Transformer V2 image classification model. Pretrained on ImageNet-22k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 109.3
- GMACs: 11.9
- Activations (M): 39.7
- Image size: 192 x 192
- **Papers:**
- Swin Transformer V2: Scaling Up Capacity and Resolution: https://arxiv.org/abs/2111.09883
- **Original:** https://github.com/microsoft/Swin-Transformer
- **Dataset:** ImageNet-22k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('swinv2_base_window12_192.ms_in22k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'swinv2_base_window12_192.ms_in22k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g. for swin_base_patch4_window7_224 (NHWC output)
# torch.Size([1, 56, 56, 128])
# torch.Size([1, 28, 28, 256])
# torch.Size([1, 14, 14, 512])
# torch.Size([1, 7, 7, 1024])
# e.g. for swinv2_cr_small_ns_224 (NCHW output)
# torch.Size([1, 96, 56, 56])
# torch.Size([1, 192, 28, 28])
# torch.Size([1, 384, 14, 14])
# torch.Size([1, 768, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'swinv2_base_window12_192.ms_in22k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, i.e. a (batch_size, H, W, num_features) tensor for swin / swinv2
# or (batch_size, num_features, H, W) for swinv2_cr
output = model.forward_head(output, pre_logits=True)
# output is (batch_size, num_features) tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{liu2021swinv2,
title={Swin Transformer V2: Scaling Up Capacity and Resolution},
author={Ze Liu and Han Hu and Yutong Lin and Zhuliang Yao and Zhenda Xie and Yixuan Wei and Jia Ning and Yue Cao and Zheng Zhang and Li Dong and Furu Wei and Baining Guo},
booktitle={International Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2022}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 4,413 | [
[
-0.0297393798828125,
-0.027679443359375,
-0.009674072265625,
0.0138397216796875,
-0.02459716796875,
-0.032440185546875,
-0.020843505859375,
-0.0400390625,
-0.0013332366943359375,
0.0290069580078125,
-0.03704833984375,
-0.039947509765625,
-0.04656982421875,
-... |
cjvt/sloberta-word-case-classification-multilabel | 2023-08-26T20:44:02.000Z | [
"transformers",
"pytorch",
"camembert",
"token-classification",
"word case classification",
"sl",
"dataset:cjvt/cc_gigafida",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | cjvt | null | null | cjvt/sloberta-word-case-classification-multilabel | 0 | 441 | transformers | 2023-08-26T13:43:52 | ---
license: cc-by-sa-4.0
datasets:
- cjvt/cc_gigafida
language:
- sl
tags:
- word case classification
---
# sloberta-word-case-classification-multilabel
SloBERTa model finetuned on the Gigafida dataset for word case classification.
The input to the model is expected to be **fully lowercased text**.
The model classifies whether the input words should stay lowercased, be uppercased, or be all-uppercased. In addition, it provides a constrained explanation for its case classification.
See usage example below for more details.
## Usage example
Imagine we have the following Slovenian text. Asterisked words have an incorrect word casing.
```
Linus *torvalds* je *Finski* programer, Poznan kot izumitelj operacijskega sistema Linux.
(EN: Linus Torvalds is a Finnish programmer, known as the inventor of the Linux operating system)
```
The model expects an all-lowercased input, so we pass it the following text:
```
linus torvalds je finski programer, poznan kot izumitelj operacijskega sistema linux.
```
The model might return the following predictions (note: predictions chosen for demonstration/explanation, not reproducibility!):
```
linus -> UPPER_ENTITY, UPPER_BEGIN
torvalds -> UPPER_ENTITY
je -> LOWER_OTHER
finski -> LOWER_ADJ_SKI
programer -> LOWER_OTHER
, -> LOWER_OTHER
poznan -> LOWER_HYPERCORRECTION
kot -> LOWER_OTHER
izumitelj -> LOWER_OTHER
operacijskega -> LOWER_OTHER
sistema -> LOWER_OTHER
linux -> UPPER_ENTITY
```
Then we would compare the (coarse) predictions (i.e., LOWER/UPPER/UPPER_ALLUC) with the initial casing and observe the following:
- `Torvalds` is originally lowercased, but the model corrects it to uppercase (because it is an entity),
- `finski` is originally uppercased, but the model corrects it to lowercase (because it is an adjective with suffix -ski),
- `poznan` is originally uppercased, but the model corrects it to lowercase (the model assumes that the user made the mistake due to hypercorrection, meaning they naïvely uppercased a word after a character that could be punctuation).
The other predictions agree with the word case in the initial text, so they are assumed to be correct.
## More details
More concretely, the model is a 12-class multi-label classifier with the following class indices and interpretations:
```
0: "LOWER_OTHER", # lowercased for an uncaptured reason
1: "LOWER_HYPERCORRECTION", # lowercase due to hypercorrection (e.g., user automatically uppercased a word after a "." despite it not being a punctuation mark - the word should instead be lowercased)
2: "LOWER_ADJ_SKI", # lowercased because the word is an adjective ending in suffix -ski
3: "LOWER_ENTITY_PART", # lowercased word that is part of an entity (e.g., "Novo **mesto**")
4: "UPPER_OTHER", # uppercased for an uncaptured reason
5: "UPPER_BEGIN", # uppercased because the word begins a sentence
6: "UPPER_ENTITY", # uppercased word that is part of an entity
7: "UPPER_DIRECT_SPEECH", # uppercased word due to direct speech
8: "UPPER_ADJ_OTHER", # uppercased adjective for an uncaptured reason (usually this is a possessive adjective)
9: "UPPER_ALLUC_OTHER", # all-uppercased for an uncaptured reason
10: "UPPER_ALLUC_BEGIN", # all-uppercased because the word begins a sentence
11: "UPPER_ALLUC_ENTITY" # all-uppercased because the word is part of an entity
```
As the model is trained for multi-label classification, a word can be assigned multiple labels whose probability is > T. Naïvely T=0.5 can be used, but it is slightly better to use label thresholds optimized on a small validation set -
they are noted in the file `label_thresholds.json` and below (along with the validation set F1 achieved with the best threshold).
```
LOWER_OTHER: T=0.4500 -> F1 = 0.9965
LOWER_HYPERCORRECTION: T=0.5800 -> F1 = 0.8555
LOWER_ADJ_SKI: T=0.4810 -> F1 = 0.9863
LOWER_ENTITY_PART: T=0.4330 -> F1 = 0.8024
UPPER_OTHER: T=0.4460 -> F1 = 0.7538
UPPER_BEGIN: T=0.4690 -> F1 = 0.9905
UPPER_ENTITY: T=0.5030 -> F1 = 0.9670
UPPER_DIRECT_SPEECH: T=0.4170 -> F1 = 0.9852
UPPER_ADJ_OTHER: T=0.5080 -> F1 = 0.9431
UPPER_ALLUC_OTHER: T=0.4850 -> F1 = 0.8463
UPPER_ALLUC_BEGIN: T=0.5170 -> F1 = 0.9798
UPPER_ALLUC_ENTITY: T=0.4490 -> F1 = 0.9391
```
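The per-label thresholds above can be applied to the model's sigmoid outputs with a few lines of Python. A minimal sketch (the `predict_labels` helper and the probability values are illustrative, not part of the released code; the label names and thresholds are taken from the table above):

```python
# Per-label thresholding for the 12-class multi-label classifier.
# LABELS and THRESHOLDS follow the card; probs below are made-up example values.
LABELS = [
    "LOWER_OTHER", "LOWER_HYPERCORRECTION", "LOWER_ADJ_SKI", "LOWER_ENTITY_PART",
    "UPPER_OTHER", "UPPER_BEGIN", "UPPER_ENTITY", "UPPER_DIRECT_SPEECH",
    "UPPER_ADJ_OTHER", "UPPER_ALLUC_OTHER", "UPPER_ALLUC_BEGIN", "UPPER_ALLUC_ENTITY",
]
THRESHOLDS = [0.450, 0.580, 0.481, 0.433, 0.446, 0.469, 0.503, 0.417, 0.508, 0.485, 0.517, 0.449]

def predict_labels(probs):
    """Return every label whose sigmoid probability exceeds its tuned threshold."""
    return [label for label, p, t in zip(LABELS, probs, THRESHOLDS) if p > t]

# Example: a word the model believes both begins a sentence and is part of an entity.
probs = [0.02, 0.01, 0.0, 0.0, 0.1, 0.93, 0.81, 0.05, 0.0, 0.01, 0.02, 0.03]
print(predict_labels(probs))  # ['UPPER_BEGIN', 'UPPER_ENTITY']
```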
| 4,256 | [
[
-0.025848388671875,
-0.045135498046875,
0.03021240234375,
0.0095977783203125,
-0.01470947265625,
0.003948211669921875,
-0.01543426513671875,
-0.0284271240234375,
0.0235595703125,
0.03228759765625,
-0.037506103515625,
-0.0601806640625,
-0.0472412109375,
0.006... |
WhitePeak/bert-base-cased-Korean-sentiment | 2023-09-19T01:59:03.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"ko",
"dataset:WhitePeak/shopping_review",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | WhitePeak | null | null | WhitePeak/bert-base-cased-Korean-sentiment | 0 | 441 | transformers | 2023-09-18T23:20:53 | ---
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-base-cased-Korean-sentiment
results: []
datasets:
- WhitePeak/shopping_review
language:
- ko
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-Korean-sentiment
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2338
- Accuracy: 0.9234
- F1: 0.9238
## Model description
This is a model fine-tuned for sentiment analysis of the Korean language, based on customer reviews written in Korean.
## Intended uses & limitations
```python
from transformers import pipeline
sentiment_model = pipeline(model="WhitePeak/bert-base-cased-Korean-sentiment")
sentiment_model("매우 좋아")
```
Result:
```
LABEL_0: negative
LABEL_1: positive
```
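Since the checkpoint exposes only the generic `LABEL_0`/`LABEL_1` names, a small post-processing step can map them to readable sentiments. A sketch (the `readable` helper is illustrative; the output dict mimics the transformers pipeline format and the score is made up):

```python
# Map the pipeline's raw label names to human-readable sentiments.
ID2SENTIMENT = {"LABEL_0": "negative", "LABEL_1": "positive"}

def readable(pipeline_output):
    """Convert a list of pipeline result dicts into readable sentiment dicts."""
    return [
        {"sentiment": ID2SENTIMENT[item["label"]], "score": item["score"]}
        for item in pipeline_output
    ]

raw = [{"label": "LABEL_1", "score": 0.98}]  # e.g. a result for "매우 좋아"
print(readable(raw))  # [{'sentiment': 'positive', 'score': 0.98}]
```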
## Training and evaluation data
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3 | 1,563 | [
[
-0.034393310546875,
-0.039947509765625,
0.01053619384765625,
0.0382080078125,
-0.04052734375,
-0.01306915283203125,
-0.03265380859375,
-0.00437164306640625,
0.0232391357421875,
0.0204010009765625,
-0.057891845703125,
-0.060394287109375,
-0.04583740234375,
-0... |
timm/vit_base_patch14_reg4_dinov2.lvd142m | 2023-10-30T04:57:02.000Z | [
"timm",
"pytorch",
"safetensors",
"arxiv:2309.16588",
"arxiv:2304.07193",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | null | timm | null | null | timm/vit_base_patch14_reg4_dinov2.lvd142m | 0 | 441 | timm | 2023-10-30T04:48:08 | ---
tags:
- timm
library_name: timm
license: apache-2.0
---
# Model card for vit_base_patch14_reg4_dinov2.lvd142m
A Vision Transformer (ViT) image feature model with registers. Pretrained on LVD-142M with self-supervised DINOv2 method.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 86.6
- GMACs: 117.5
- Activations (M): 115.0
- Image size: 518 x 518
- **Papers:**
- Vision Transformers Need Registers: https://arxiv.org/abs/2309.16588
- DINOv2: Learning Robust Visual Features without Supervision: https://arxiv.org/abs/2304.07193
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Original:** https://github.com/facebookresearch/dinov2
- **Pretrain Dataset:** LVD-142M
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_base_patch14_reg4_dinov2.lvd142m', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_base_patch14_reg4_dinov2.lvd142m',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1374, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{darcet2023vision,
title={Vision Transformers Need Registers},
author={Darcet, Timoth{\'e}e and Oquab, Maxime and Mairal, Julien and Bojanowski, Piotr},
journal={arXiv preprint arXiv:2309.16588},
year={2023}
}
```
```bibtex
@misc{oquab2023dinov2,
title={DINOv2: Learning Robust Visual Features without Supervision},
author={Oquab, Maxime and Darcet, Timothée and Moutakanni, Theo and Vo, Huy V. and Szafraniec, Marc and Khalidov, Vasil and Fernandez, Pierre and Haziza, Daniel and Massa, Francisco and El-Nouby, Alaaeldin and Howes, Russell and Huang, Po-Yao and Xu, Hu and Sharma, Vasu and Li, Shang-Wen and Galuba, Wojciech and Rabbat, Mike and Assran, Mido and Ballas, Nicolas and Synnaeve, Gabriel and Misra, Ishan and Jegou, Herve and Mairal, Julien and Labatut, Patrick and Joulin, Armand and Bojanowski, Piotr},
journal={arXiv:2304.07193},
year={2023}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
``` | 4,304 | [
[
-0.03680419921875,
-0.0247039794921875,
0.00897979736328125,
0.004344940185546875,
-0.0328369140625,
-0.02557373046875,
-0.018402099609375,
-0.033599853515625,
0.00868988037109375,
0.02069091796875,
-0.03387451171875,
-0.0390625,
-0.050018310546875,
-0.00494... |
kiddothe2b/longformer-mini-1024 | 2023-03-21T15:13:14.000Z | [
"transformers",
"pytorch",
"safetensors",
"longformer",
"fill-mask",
"long_documents",
"en",
"dataset:c4",
"arxiv:2004.05150",
"arxiv:2210.05529",
"arxiv:1908.08962",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | kiddothe2b | null | null | kiddothe2b/longformer-mini-1024 | 1 | 440 | transformers | 2022-10-11T09:08:35 | ---
license: cc-by-sa-4.0
pipeline_tag: fill-mask
language: en
arxiv:
tags:
- long_documents
datasets:
- c4
model-index:
- name: kiddothe2b/longformer-mini-1024
results: []
---
# Longformer / longformer-mini-1024
## Model description
[Longformer](https://arxiv.org/abs/2004.05150) is a transformer model for long documents. This version of Longformer is presented in [An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification (Chalkidis et al., 2022)](https://arxiv.org/abs/2210.05529).
The model has been warm-started re-using the weights of miniature BERT ([Turc et al., 2019](https://arxiv.org/abs/1908.08962)), and further pre-trained for MLM following the paradigm of Longformer released by [Beltagy et al. (2020)](https://arxiv.org/abs/2004.05150). It supports sequences of length up to 1,024.
Longformer uses a combination of a sliding window (local) attention and global attention. Global attention is user-configured based on the task to allow the model to learn task-specific representations.
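In practice the global attention pattern is passed as a per-token mask (1 = global attention, 0 = local sliding-window attention). A minimal sketch of the usual convention of marking the first (`<s>`/CLS) token as global (the `build_global_attention_mask` helper below is illustrative, not part of this repository):

```python
def build_global_attention_mask(seq_len, global_positions=(0,)):
    """1 marks tokens that attend globally (e.g. the CLS token);
    0 keeps the default local sliding-window attention."""
    mask = [0] * seq_len
    for pos in global_positions:
        mask[pos] = 1
    return mask

mask = build_global_attention_mask(8)
print(mask)  # [1, 0, 0, 0, 0, 0, 0, 0]
# For tasks such as question answering, the question tokens would
# typically also be set to 1.
```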
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
See the [model hub](https://huggingface.co/models?filter=longformer) to look for fine-tuned versions on a task that
interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole document to make decisions, such as document classification, sequential sentence classification, or question answering.
## How to use
You can use this model directly with a pipeline for masked language modeling:
```python
from transformers import pipeline
mlm_model = pipeline('fill-mask', model='kiddothe2b/longformer-mini-1024', trust_remote_code=True)
mlm_model("Hello I'm a <mask> model.")
```
You can also fine-tune it for SequenceClassification, SequentialSentenceClassification, and MultipleChoice down-stream tasks:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("kiddothe2b/longformer-mini-1024", trust_remote_code=True)
doc_classifier = AutoModelForSequenceClassification.from_pretrained("kiddothe2b/longformer-mini-1024", trust_remote_code=True)
```
## Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from
neutral. Therefore, the model can have biased predictions.
## Training procedure
### Training and evaluation data
The model has been warm-started from the [google/bert_uncased_L-6_H-256_A-4](https://huggingface.co/google/bert_uncased_L-6_H-256_A-4) checkpoint and further pre-trained for an additional 50k steps on English [Wikipedia](https://huggingface.co/datasets/wikipedia).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 50000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.7067 | 0.2 | 10000 | 1.5923 | 0.6714 |
| 1.6532 | 0.4 | 20000 | 1.5494 | 0.6784 |
| 1.622 | 0.6 | 30000 | 1.5208 | 0.6830 |
| 1.588 | 0.8 | 40000 | 1.4880 | 0.6876 |
| 1.5682 | 1.0 | 50000 | 1.4680 | 0.6908 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
## Citing
If you use HAT in your research, please cite:
[An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification](https://arxiv.org/abs/2210.05529). Ilias Chalkidis, Xiang Dai, Manos Fergadiotis, Prodromos Malakasiotis, and Desmond Elliott. 2022. arXiv:2210.05529 (Preprint).
```
@misc{chalkidis-etal-2022-hat,
url = {https://arxiv.org/abs/2210.05529},
author = {Chalkidis, Ilias and Dai, Xiang and Fergadiotis, Manos and Malakasiotis, Prodromos and Elliott, Desmond},
title = {An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification},
publisher = {arXiv},
year = {2022},
}
```
Also cite the original work: [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150).
```
@article{Beltagy2020Longformer,
title={Longformer: The Long-Document Transformer},
author={Iz Beltagy and Matthew E. Peters and Arman Cohan},
journal={arXiv:2004.05150},
year={2020},
}
```
| 4,683 | [
[
-0.03802490234375,
-0.044769287109375,
0.020263671875,
0.011260986328125,
0.0005698204040527344,
-0.0181884765625,
-0.0318603515625,
-0.037689208984375,
0.00969696044921875,
0.03314208984375,
-0.048126220703125,
-0.0276947021484375,
-0.060546875,
0.011627197... |
timm/mobilevitv2_150.cvnets_in1k | 2023-04-24T22:24:44.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2206.02680",
"license:other",
"region:us"
] | image-classification | timm | null | null | timm/mobilevitv2_150.cvnets_in1k | 0 | 440 | timm | 2023-04-24T22:24:29 | ---
tags:
- image-classification
- timm
library_name: timm
license: other
datasets:
- imagenet-1k
---
# Model card for mobilevitv2_150.cvnets_in1k
A MobileViT-v2 image classification model. Trained on ImageNet-1k by paper authors.
See license details at https://github.com/apple/ml-cvnets/blob/main/LICENSE
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 10.6
- GMACs: 4.1
- Activations (M): 24.1
- Image size: 256 x 256
- **Papers:**
- Separable Self-attention for Mobile Vision Transformers: https://arxiv.org/abs/2206.02680
- **Original:** https://github.com/apple/ml-cvnets
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('mobilevitv2_150.cvnets_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'mobilevitv2_150.cvnets_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 96, 128, 128])
# torch.Size([1, 192, 64, 64])
# torch.Size([1, 384, 32, 32])
# torch.Size([1, 576, 16, 16])
# torch.Size([1, 768, 8, 8])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'mobilevitv2_150.cvnets_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 768, 8, 8) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{Mehta2022SeparableSF,
title={Separable Self-attention for Mobile Vision Transformers},
author={Sachin Mehta and Mohammad Rastegari},
journal={ArXiv},
year={2022},
volume={abs/2206.02680}
}
```
| 3,700 | [
[
-0.0335693359375,
-0.0222930908203125,
-0.00389862060546875,
0.0169219970703125,
-0.027252197265625,
-0.027587890625,
-0.0075225830078125,
-0.020294189453125,
0.0200958251953125,
0.03424072265625,
-0.03570556640625,
-0.04901123046875,
-0.04779052734375,
-0.0... |
THUMT/mGPT | 2021-10-14T05:49:41.000Z | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"arxiv:2110.06609",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | THUMT | null | null | THUMT/mGPT | 5 | 439 | transformers | 2022-03-02T23:29:05 |
# mGPT
mGPT is pre-trained on the [mC4 dataset](https://huggingface.co/datasets/mc4) using a causal language modeling objective. It was introduced in this [paper](https://arxiv.org/abs/2110.06609) and first released on this page.
## Model description
mGPT is a Transformer-based model pre-trained on massive multilingual data covering over 101 languages. Similar to GPT-2, it was pre-trained on raw text only, with no human labeling. We use the same tokenization and vocabulary as the [mT5 model](https://huggingface.co/google/mt5-base).
## Intended uses
You can use the raw model for text generation, or use prompts to adapt it to a downstream task.
## How to use
You can use this model directly with a pipeline for text generation. Here is how to generate text from a prompt in PyTorch:
```python
from transformers import MT5Tokenizer, GPT2LMHeadModel, TextGenerationPipeline
tokenizer = MT5Tokenizer.from_pretrained("THUMT/mGPT")
model = GPT2LMHeadModel.from_pretrained("THUMT/mGPT")
pipeline = TextGenerationPipeline(model=model, tokenizer=tokenizer)
text = "Replace me by any text you'd like."
text = pipeline(text, do_sample=True, max_length=1024)[0]["generated_text"]
```
## Preprocessing
The texts are tokenized using `sentencepiece` and a vocabulary size of 250,100. The inputs are sequences of 1,024 consecutive tokens. We use `<extra_id_0>` to separate lines in a document.
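The line-separation convention above can be sketched as follows (the `prepare_document` helper is hypothetical, since the actual preprocessing pipeline is not released with this card):

```python
# mGPT separates the lines of a document with the mT5 sentinel token <extra_id_0>.
LINE_SEP = "<extra_id_0>"

def prepare_document(lines):
    """Join a document's lines with the mGPT line separator before tokenization."""
    return LINE_SEP.join(lines)

doc = prepare_document(["First line.", "Second line."])
print(doc)  # First line.<extra_id_0>Second line.
# The result would then be tokenized with the mT5 sentencepiece tokenizer
# and split into sequences of 1,024 consecutive tokens.
```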
## BibTeX entry and citation info
```bibtex
@misc{tan2021msp,
title={MSP: Multi-Stage Prompting for Making Pre-trained Language Models Better Translators},
author={Zhixing Tan and Xiangwen Zhang and Shuo Wang and Yang Liu},
year={2021},
eprint={2110.06609},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| 1,773 | [
[
-0.0318603515625,
-0.0382080078125,
0.033050537109375,
0.00388336181640625,
-0.0228118896484375,
-0.0158538818359375,
-0.010223388671875,
-0.00905609130859375,
-0.002376556396484375,
0.02630615234375,
-0.046142578125,
-0.01800537109375,
-0.06011962890625,
0.... |
alx-ai/noggles6000 | 2023-05-16T09:29:57.000Z | [
"diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | alx-ai | null | null | alx-ai/noggles6000 | 0 | 439 | diffusers | 2022-11-19T18:06:08 | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### noggles6000 on Stable Diffusion via Dreambooth trained on the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
#### Model by alxdfy
This is the Stable Diffusion model fine-tuned on the noggles6000 concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt(s)`: **nounsbud.jpg**
You can also train your own concepts and upload them to the library by using [the fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb).
You can run your new concept via the A1111 Colab: [Fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Sample pictures of this concept:
nounsbud.jpg

| 1,390 | [
[
-0.0276336669921875,
-0.06842041015625,
0.036773681640625,
0.0244598388671875,
-0.006687164306640625,
0.0238800048828125,
0.0110931396484375,
-0.025360107421875,
0.045196533203125,
0.0207366943359375,
-0.0106048583984375,
-0.0225677490234375,
-0.031158447265625,... |
jozhang97/deta-swin-large-o365 | 2023-01-30T20:40:49.000Z | [
"transformers",
"pytorch",
"deta",
"object-detection",
"vision",
"arxiv:2212.06137",
"endpoints_compatible",
"region:us"
] | object-detection | jozhang97 | null | null | jozhang97/deta-swin-large-o365 | 0 | 439 | transformers | 2023-01-30T16:21:01 | ---
pipeline_tag: object-detection
tags:
- vision
---
# Detection Transformers with Assignment
By [Jeffrey Ouyang-Zhang](https://jozhang97.github.io/), [Jang Hyun Cho](https://sites.google.com/view/janghyuncho/), [Xingyi Zhou](https://www.cs.utexas.edu/~zhouxy/), [Philipp Krähenbühl](http://www.philkr.net/)
From the paper [NMS Strikes Back](https://arxiv.org/abs/2212.06137).
**TL; DR.** **De**tection **T**ransformers with **A**ssignment (DETA) re-introduces IoU assignment and NMS for transformer-based detectors. DETA trains and tests comparably fast to Deformable-DETR and converges much faster (50.2 mAP in 12 epochs on COCO).
[
-0.044464111328125,
-0.0067596435546875,
0.043304443359375,
-0.006710052490234375,
-0.0012693405151367188,
0.025726318359375,
0.00043702125549316406,
-0.0162200927734375,
0.0087432861328125,
0.0217437744140625,
-0.0390625,
-0.0256500244140625,
-0.041748046875,
... |
AIMH/mental-roberta-large | 2023-02-27T19:11:40.000Z | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"arxiv:2110.15621",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | AIMH | null | null | AIMH/mental-roberta-large | 2 | 439 | transformers | 2023-02-26T14:58:01 | ---
license: cc-by-nc-4.0
---
[MentalBERT](https://arxiv.org/abs/2110.15621) is a model initialized with RoBERTa-large (`uncased_L-24_H-1024_A-16`) and trained with mental health-related posts collected from Reddit.
We follow the standard pretraining protocols of BERT and RoBERTa with [Huggingface’s Transformers library](https://github.com/huggingface/transformers).
We use four Nvidia Tesla v100 GPUs to train the two language models. We set the batch size to 8 per GPU, evaluate every 1,000 steps, and train for 312,000 iterations.
## Usage
Load the model via [Huggingface’s Transformers library](https://github.com/huggingface/transformers):
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("AIMH/mental-roberta-large")
model = AutoModel.from_pretrained("AIMH/mental-roberta-large")
```
To minimize the influence of worrying mask predictions, this model is gated. To download a gated model, you’ll need to be authenticated.
Know more about [gated models](https://huggingface.co/docs/hub/models-gated).
## Paper
[MentalBERT: Publicly Available Pretrained Language Models for Mental Healthcare](https://arxiv.org/abs/2110.15621).
```
@inproceedings{ji2022mentalbert,
title = {{MentalBERT: Publicly Available Pretrained Language Models for Mental Healthcare}},
author = {Shaoxiong Ji and Tianlin Zhang and Luna Ansari and Jie Fu and Prayag Tiwari and Erik Cambria},
year = {2022},
booktitle = {Proceedings of LREC}
}
``` | 1,501 | [
[
-0.03912353515625,
-0.046722412109375,
0.045562744140625,
0.0236968994140625,
-0.006786346435546875,
-0.0122528076171875,
-0.0300445556640625,
-0.03643798828125,
0.0017309188842773438,
0.0230712890625,
-0.056884765625,
-0.024444580078125,
-0.06561279296875,
... |
porntech/sex-position | 2023-08-13T23:49:16.000Z | [
"timm",
"pytorch",
"image-classification",
"license:mit",
"has_space",
"region:us"
] | image-classification | porntech | null | null | porntech/sex-position | 9 | 439 | timm | 2023-06-04T11:10:42 | ---
license: mit
library_name: timm
pipeline_tag: image-classification
---
# Classify sex positions in a sexy or NSFW image
WARNING! Leave now if you are less than 18 years old!
* The following sex positions are supported: ["blowjob", "hardcore", "titjob", "handjob", "pussy-licking", "fingering", "other", "solo"]
* Input image must be a sexy or NSFW image; otherwise the prediction is undefined. For example, a clothed woman eating a banana would most likely be predicted as blowjob.
* "hardcore" actually represents four subclasses: "missionary", "doggystyle", "cowgirl" and "spooning". I will support these four classes in the future.
* "other" means some other behavior such as kissing or talking, "solo" means a single woman.
* This repo is for image classification; for sex-position classification on videos, see [this repo](https://huggingface.co/spaces/porntech/sex-position-video) of mine.
* Here are two sample SFW images you can try with model:
[single woman](https://st.depositphotos.com/1022904/2166/i/950/depositphotos_21668751-stock-photo-yang-and-beautiful-sexy-woman.jpg): predicted as "solo"
[kissing](https://www.verywellmind.com/thmb/8nU7Yax1VdiTTKzIg6c48aFXkP0=/750x0/filters:no_upscale():max_bytes(150000):strip_icc():format(webp)/GettyImages-471932267-58bc89565f9b58af5ca9d09d.jpg): predicted as "other"
I will soon be on the job market and am now looking for full-time or part-time jobs focused on developing AI models for sexy/NSFW videos/images. If you are interested in me or this work, feel free to contact porntech@126.com
| 1,569 | [
[
-0.0141448974609375,
-0.0513916015625,
0.0150909423828125,
0.0445556640625,
-0.038177490234375,
-0.0208587646484375,
0.0181427001953125,
-0.0309906005859375,
0.0112457275390625,
0.059722900390625,
-0.04205322265625,
-0.0640869140625,
-0.04736328125,
0.010528... |
linhd-postdata/alberti-bert-base-multilingual-cased | 2023-07-12T09:53:52.000Z | [
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"poetry",
"digital humanities",
"es",
"fr",
"it",
"cs",
"pt",
"en",
"ar",
"fi",
"de",
"ru",
"hu",
"zh",
"dataset:linhd-postdata/pulpo",
"arxiv:2307.01387",
"arxiv:1910.09700",
"licen... | fill-mask | linhd-postdata | null | null | linhd-postdata/alberti-bert-base-multilingual-cased | 1 | 439 | transformers | 2023-06-05T15:27:41 | ---
language:
- es
- fr
- it
- cs
- pt
- en
- ar
- fi
- de
- ru
- hu
- zh
license: cc-by-4.0
tags:
- multilingual
- bert
- poetry
- digital humanities
pipeline_tag: fill-mask
widget:
- text: ¿Qué es la vida? Un [MASK].
datasets:
- linhd-postdata/pulpo
metrics:
- accuracy
library_name: transformers
---
# Model Card for Aʟʙᴇʀᴛɪ:
Aʟʙᴇʀᴛɪ is the first multilingual domain-specific language model for poetry analysis.
## Model Details
### Model Description
As a pre-trained language model, it is trained using the masked language modeling objective on top of [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased), and therefore **it needs to be fine-tuned to specific tasks**.
- **Developed by:** [Javier de la Rosa](https://huggingface.co/versae), and [Álvaro Pérez Pozo](https://huggingface.co/)
- **Shared by:** [Javier de la Rosa](https://huggingface.co/versae)
- **Model type:** `bert`
- **Language(s) (NLP):** Spanish, French, Italian, Czech, Portuguese, English, Arabic, Finnish, German, Russian, Hungarian, Chinese.
- **License:** [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/)
- **Finetuned from model:** [`bert-base-multilingual-cased`](https://huggingface.co/bert-base-multilingual-cased)
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Paper:** https://arxiv.org/abs/2307.01387
- **Demo:** https://huggingface.co/spaces/linhd-postdata/alberti
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| 5,647 | [
[
-0.03875732421875,
-0.04888916015625,
0.0251312255859375,
0.0098419189453125,
-0.023162841796875,
-0.0186004638671875,
-0.0030918121337890625,
-0.049560546875,
0.0183258056640625,
0.04705810546875,
-0.056060791015625,
-0.052734375,
-0.04742431640625,
-0.0118... |
stabilityai/japanese-stablelm-3b-4e1t-base | 2023-10-25T01:41:00.000Z | [
"transformers",
"safetensors",
"stablelm_epoch",
"text-generation",
"japanese-stablelm",
"causal-lm",
"custom_code",
"ja",
"dataset:wikipedia",
"dataset:mc4",
"dataset:cc100",
"dataset:oscar-corpus/OSCAR-2301",
"dataset:oscar-corpus/OSCAR-2201",
"dataset:cerebras/SlimPajama-627B",
"arxiv... | text-generation | stabilityai | null | null | stabilityai/japanese-stablelm-3b-4e1t-base | 6 | 439 | transformers | 2023-10-16T06:04:58 | ---
license: apache-2.0
tags:
- japanese-stablelm
- causal-lm
pipeline_tag: text-generation
datasets:
- wikipedia
- mc4
- cc100
- oscar-corpus/OSCAR-2301
- oscar-corpus/OSCAR-2201
- cerebras/SlimPajama-627B
language:
- ja
extra_gated_fields:
Name: text
Email: text
Country: text
Organization or Affiliation: text
I allow Stability AI to contact me about information related to its models and research: checkbox
---
# Japanese StableLM-3B-4E1T Base
## Model Description
This is a 3B-parameter decoder-only language model with a focus on maximizing Japanese language modeling performance and Japanese downstream task performance.
We conducted continued pretraining using Japanese data on the English language model, [StableLM-3B-4E1T](https://huggingface.co/stabilityai/stablelm-3b-4e1t/), to transfer the model's knowledge and capabilities to Japanese.
*If you are looking for an instruction-following model, please check [Japanese StableLM-3B-4E1T Instruct](https://huggingface.co/stabilityai/japanese-stablelm-3b-4e1t-instruct)*.
*If you are in search of a larger model, please check [Japanese Stable LM Base Gamma 7B](https://huggingface.co/stabilityai/japanese-stablelm-base-gamma-7b)*.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("stabilityai/japanese-stablelm-3b-4e1t-base")
model = AutoModelForCausalLM.from_pretrained(
"stabilityai/japanese-stablelm-3b-4e1t-base",
trust_remote_code=True,
torch_dtype="auto",
)
model.cuda()
inputs = tokenizer("AI で科学研究を加速するには、", return_tensors="pt").to("cuda")
tokens = model.generate(
**inputs,
max_new_tokens=64,
temperature=0.75,
top_p=0.95,
do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```
## Model Details
* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: `Japanese StableLM-3B-4E1T Base` model is an auto-regressive language model based on the transformer decoder architecture.
* **Language(s)**: Japanese
* **License**: This model is licensed under [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
* **Contact**: For questions and comments about the model, please join [Stable Community Japan](https://discord.gg/StableJP). For future announcements / information about Stability AI models, research, and events, please follow https://twitter.com/StabilityAI_JP.
### Model Architecture
The model is a decoder-only transformer similar to the LLaMA ([Touvron et al., 2023](https://arxiv.org/abs/2307.09288)) architecture with the following modifications:
| Parameters | Hidden Size | Layers | Heads | Sequence Length |
|----------------|-------------|--------|-------|-----------------|
| 2,795,443,200 | 2560 | 32 | 32 | 4096 |
* **Position Embeddings**: Rotary Position Embeddings ([Su et al., 2021](https://arxiv.org/abs/2104.09864)) applied to the first 25% of head embedding dimensions for improved throughput following [Black et al. (2022)](https://arxiv.org/pdf/2204.06745.pdf).
* **Normalization**: LayerNorm ([Ba et al., 2016](https://arxiv.org/abs/1607.06450)) with learned bias terms as opposed to RMSNorm ([Zhang & Sennrich, 2019](https://arxiv.org/abs/1910.07467)).
* **Tokenizer**: GPT-NeoX ([Black et al., 2022](https://arxiv.org/abs/2204.06745)).
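As a back-of-the-envelope check of the partial-rotary setup described above, the per-head split can be derived from the table's numbers (a sketch for intuition, not the model's actual implementation):

```python
# Sketch: which head dimensions receive rotary position embeddings,
# derived from hidden size 2560, 32 heads, and the "first 25%" rule above.
hidden_size = 2560
num_heads = 32
rotary_fraction = 0.25  # "first 25% of head embedding dimensions"

head_dim = hidden_size // num_heads            # 80 dimensions per head
rotary_dims = int(head_dim * rotary_fraction)  # 20 dims get RoPE, 60 do not

print(head_dim, rotary_dims)  # 80 20
```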
### Training Dataset
Around 100B tokens from a mixture of the following corpora were used for the continued pretraining.
- [Japanese/English Wikipedia](https://dumps.wikimedia.org/other/cirrussearch)
- [Japanese mc4](https://huggingface.co/datasets/mc4)
- [Japanese CC-100](http://data.statmt.org/cc-100/ja.txt.xz)
- [Japanese OSCAR](https://oscar-project.github.io/documentation/)
- [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B) without the Books3 subset
## Use and Limitations
### Intended Use
The model is intended to be used by all individuals as a foundational model for application-specific fine-tuning without strict limitations on commercial use.
### Limitations and bias
The pre-training dataset may have contained offensive or inappropriate content even after applying data cleansing filters, which can be reflected in the model-generated text. We recommend that users exercise reasonable caution when using these models in production systems. Do not use the model for any applications that may cause harm or distress to individuals or groups.
## Credits
The continued pre-training was carried out by [Takuya Akiba](https://huggingface.co/iwiwi).
Other aspects, including data preparation and evaluation, were handled by the Language Team of Stability AI Japan, notably [Meng Lee](https://huggingface.co/leemeng), [Fujiki Nakamura](https://huggingface.co/fujiki), [Makoto Shing](https://huggingface.co/mkshing), [Paul McCann](https://huggingface.co/polm-stability), and [Naoki Orii](https://huggingface.co/mrorii).
## Acknowledgements
We are grateful for the contributions of the EleutherAI Polyglot-JA team in helping us to collect a large amount of pre-training data in Japanese. Polyglot-JA members include Hyunwoong Ko (Project Lead), Fujiki Nakamura (who originally started this project when he committed to the Polyglot team), Yunho Mo, Minji Jung, KeunSeok Im, and Su-Kyeong Jang.
We are also appreciative of [AI Novelist/Sta (Bit192, Inc.)](https://ai-novel.com/index.php) and the numerous contributors from [Stable Community Japan](https://discord.gg/VPrcE475HB) for assisting us in gathering a large amount of high-quality Japanese textual data for model training. | 5,558 | [
[
-0.033203125,
-0.050750732421875,
0.00792694091796875,
0.01064300537109375,
-0.0266876220703125,
-0.01010894775390625,
-0.028289794921875,
-0.0297088623046875,
0.019378662109375,
0.032196044921875,
-0.037994384765625,
-0.0455322265625,
-0.0521240234375,
0.01... |
kyujinpy/Kosy-platypus2-13B-v5 | 2023-11-02T01:53:02.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"dataset:kyujinpy/KOpen-platypus",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | kyujinpy | null | null | kyujinpy/Kosy-platypus2-13B-v5 | 0 | 439 | transformers | 2023-11-01T17:25:47 | ---
language:
- ko
datasets:
- kyujinpy/KOpen-platypus
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---
# **Kosy🍵llama**

## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Model Description**
A new version of Ko-platypus2 trained using the [NEFTune](https://github.com/neelsjain/NEFTune) method!
(Noisy + KO + llama = Kosy🍵llama)
**Repo Link**
Github **KoNEFTune**: [Kosy🍵llama](https://github.com/Marker-Inc-Korea/KoNEFTune)
If you visit our GitHub, you can easily apply **Random_noisy_embedding_fine-tuning**!
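For intuition, NEFTune perturbs the input embeddings during fine-tuning with uniform noise scaled by α/√(L·d), where L is the sequence length and d the embedding dimension. A minimal pure-Python sketch (toy numbers, not the actual training code):

```python
import math
import random

def neftune_noise(embeddings, alpha=5.0, seed=0):
    """Sketch of NEFTune's noisy-embedding trick (illustration only).

    embeddings: list of token embedding vectors (L tokens x d dims).
    Adds uniform noise scaled by alpha / sqrt(L * d), as in the NEFTune paper.
    """
    rng = random.Random(seed)
    L = len(embeddings)
    d = len(embeddings[0])
    scale = alpha / math.sqrt(L * d)
    return [[x + rng.uniform(-1.0, 1.0) * scale for x in row] for row in embeddings]

emb = [[0.0] * 8 for _ in range(4)]  # toy 4-token, 8-dim embedding matrix
noisy = neftune_noise(emb)
# every perturbation stays within the alpha / sqrt(L * d) bound
print(max(abs(x) for row in noisy for x in row) <= 5.0 / math.sqrt(4 * 8))  # True
```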
**Base Model**
[hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b)
**Training Dataset**
Version of combined dataset: [kyujinpy/KOpen-platypus](https://huggingface.co/datasets/kyujinpy/KOpen-platypus)
I used an A100 40GB GPU and Colab for training.
# **Model comparisons**
[KO-LLM leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)
# **NEFT comparisons**

| Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| [Ko-Platypus2-13B](https://huggingface.co/kyujinpy/KO-Platypus2-13B) | 45.60 | 44.20 | 54.31 | 42.47 | 44.41 | 42.62 |
| *NEFT(🍵kosy)+MLP-v1 | 43.64 | 43.94 | 53.88 | 42.68 | 43.46 | 34.24 |
| *NEFT(🍵kosy)+MLP-v2 | 45.45 | 44.20 | 54.56 | 42.60 | 42.68 | 42.98 |
| [***NEFT(🍵kosy)+MLP-v3**](https://huggingface.co/kyujinpy/Kosy-platypus2-13B-v3) | 46.31 | 43.34 | 54.54 | 43.38 | 44.11 | 46.16 |
| NEFT(🍵kosy)+Attention | 44.92 |42.92 | 54.48 | 42.99 | 43.00 | 41.20 |
| NEFT(🍵kosy) | 45.08 | 43.09 | 53.61 | 41.06 | 43.47 | 43.21 |
> *Trained with different hyperparameters, such as learning_rate, batch_size, epoch, etc.
# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "kyujinpy/Koisy-Platypus2-13B"
OpenOrca = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
--- | 2,181 | [
[
-0.046539306640625,
-0.0550537109375,
0.0229339599609375,
0.027313232421875,
-0.046783447265625,
0.00016260147094726562,
-0.01514434814453125,
-0.0211334228515625,
0.02093505859375,
0.0248870849609375,
-0.0286712646484375,
-0.049591064453125,
-0.050933837890625,... |
Yntec/Deliberate2 | 2023-11-05T19:15:06.000Z | [
"diffusers",
"General",
"Anime",
"Art",
"XpucT",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:cc-by-nc-nd-4.0",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Yntec | null | null | Yntec/Deliberate2 | 0 | 439 | diffusers | 2023-11-05T18:19:03 | ---
license: cc-by-nc-nd-4.0
library_name: diffusers
pipeline_tag: text-to-image
tags:
- General
- Anime
- Art
- XpucT
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# Deliberate 2
768x768 version of this model with the MoistMix V2 VAE baked in for the Inference API.
Samples and prompt:


masterpiece,best quality, retro artstyle, a cute little witch's prophecy comes true, logo, cover, 1980s /style/
Original page:
https://huggingface.co/XpucT/Deliberate | 727 | [
[
-0.00763702392578125,
-0.044952392578125,
0.04150390625,
0.03558349609375,
-0.01255035400390625,
-0.038726806640625,
0.025848388671875,
-0.0369873046875,
0.041229248046875,
0.0758056640625,
-0.061676025390625,
-0.01033782958984375,
-0.03570556640625,
-0.0136... |
naver/efficient-splade-VI-BT-large-doc | 2022-07-08T13:12:18.000Z | [
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"splade",
"query-expansion",
"document-expansion",
"bag-of-words",
"passage-retrieval",
"knowledge-distillation",
"document encoder",
"en",
"dataset:ms_marco",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible... | fill-mask | naver | null | null | naver/efficient-splade-VI-BT-large-doc | 1 | 438 | transformers | 2022-07-05T11:37:51 | ---
license: cc-by-nc-sa-4.0
language: "en"
tags:
- splade
- query-expansion
- document-expansion
- bag-of-words
- passage-retrieval
- knowledge-distillation
- document encoder
datasets:
- ms_marco
---
## Efficient SPLADE
Efficient SPLADE model for passage retrieval. This architecture uses two distinct models for query and document inference. This is the **doc** one; please also download the **query** one (https://huggingface.co/naver/efficient-splade-VI-BT-large-query). For additional details, please visit:
* paper: https://dl.acm.org/doi/10.1145/3477495.3531833
* code: https://github.com/naver/splade
| | MRR@10 (MS MARCO dev) | R@1000 (MS MARCO dev) | Latency (PISA) ms | Latency (Inference) ms |
| --- | --- | --- | --- | --- |
| `naver/efficient-splade-V-large` | 38.8 | 98.0 | 29.0 | 45.3 |
| `naver/efficient-splade-VI-BT-large` | 38.0 | 97.8 | 31.1 | 0.7 |
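For intuition about how SPLADE-style models turn MLM logits into sparse bag-of-words vectors: each vocabulary weight is log(1 + ReLU(logit)), max-pooled over input tokens. A toy sketch with invented numbers (not this checkpoint's code):

```python
import math

def splade_pool(token_logits):
    """Sketch of the SPLADE activation: for each vocabulary entry, take the
    max over input tokens of log(1 + ReLU(logit)). Negative logits vanish,
    which is what makes the resulting vector sparse."""
    vocab_size = len(token_logits[0])
    weights = []
    for j in range(vocab_size):
        best = 0.0
        for row in token_logits:
            best = max(best, math.log1p(max(0.0, row[j])))
        weights.append(best)
    return weights

# toy MLM logits for 2 input tokens over a 4-word vocabulary
logits = [[2.0, -1.0, 0.0, 0.5],
          [0.1, 3.0, -2.0, 0.0]]
w = splade_pool(logits)
print([round(x, 3) for x in w])  # entries with only non-positive logits collapse to 0.0
```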
## Citation
If you use our checkpoint, please cite our work:
```bibtex
@inproceedings{10.1145/3477495.3531833,
author = {Lassance, Carlos and Clinchant, St\'{e}phane},
title = {An Efficiency Study for SPLADE Models},
year = {2022},
isbn = {9781450387323},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3477495.3531833},
doi = {10.1145/3477495.3531833},
abstract = {Latency and efficiency issues are often overlooked when evaluating IR models based on Pretrained Language Models (PLMs) in reason of multiple hardware and software testing scenarios. Nevertheless, efficiency is an important part of such systems and should not be overlooked. In this paper, we focus on improving the efficiency of the SPLADE model since it has achieved state-of-the-art zero-shot performance and competitive results on TREC collections. SPLADE efficiency can be controlled via a regularization factor, but solely controlling this regularization has been shown to not be efficient enough. In order to reduce the latency gap between SPLADE and traditional retrieval systems, we propose several techniques including L1 regularization for queries, a separation of document/query encoders, a FLOPS-regularized middle-training, and the use of faster query encoders. Our benchmark demonstrates that we can drastically improve the efficiency of these models while increasing the performance metrics on in-domain data. To our knowledge, we propose the first neural models that, under the same computing constraints, achieve similar latency (less than 4ms difference) as traditional BM25, while having similar performance (less than 10% MRR@10 reduction) as the state-of-the-art single-stage neural rankers on in-domain data.},
booktitle = {Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval},
pages = {2220–2226},
numpages = {7},
keywords = {splade, latency, information retrieval, sparse representations},
location = {Madrid, Spain},
series = {SIGIR '22}
}
```
| 2,923 | [
[
-0.022003173828125,
-0.052337646484375,
0.0309906005859375,
0.042022705078125,
-0.0225372314453125,
-0.016510009765625,
-0.0204315185546875,
-0.0157012939453125,
0.00878143310546875,
0.02508544921875,
-0.0165863037109375,
-0.0362548828125,
-0.051483154296875,
... |
Langboat/bloom-389m-zh | 2022-08-31T11:52:22.000Z | [
"transformers",
"pytorch",
"bloom",
"text-generation",
"zh",
"license:bigscience-bloom-rail-1.0",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | Langboat | null | null | Langboat/bloom-389m-zh | 6 | 438 | transformers | 2022-08-22T06:39:40 | ---
license: bigscience-bloom-rail-1.0
language:
- zh
pipeline_tag: text-generation
widget:
- text: "中国的首都是"
---
This model is based on [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m).
We pruned its vocabulary from 250,880 to 42,437 tokens using a Chinese corpus to reduce GPU memory usage, so the total parameter count is now 389M.
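A rough back-of-the-envelope of where the savings come from, assuming bloom-560m's hidden size of 1024 (the exact total depends on how tied input/output embeddings are counted, so treat these numbers as approximate):

```python
# Sketch: parameters removed per embedding matrix by shrinking the vocabulary.
old_vocab, new_vocab, hidden = 250880, 42437, 1024

saved_per_embedding_matrix = (old_vocab - new_vocab) * hidden
print(f"{saved_per_embedding_matrix / 1e6:.1f}M parameters saved per embedding matrix")
```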
# How to use
```python
from transformers import BloomTokenizerFast, BloomForCausalLM
tokenizer = BloomTokenizerFast.from_pretrained('Langboat/bloom-389m-zh')
model = BloomForCausalLM.from_pretrained('Langboat/bloom-389m-zh')
print(tokenizer.batch_decode(model.generate(tokenizer.encode('中国的首都是', return_tensors='pt'))))
``` | 667 | [
[
-0.035675048828125,
-0.039794921875,
0.0213623046875,
0.0290679931640625,
-0.035400390625,
-0.025299072265625,
-0.035003662109375,
-0.006175994873046875,
-0.000759124755859375,
0.034210205078125,
-0.031402587890625,
-0.0211334228515625,
-0.0292205810546875,
... |
MIT/ast-finetuned-audioset-16-16-0.442 | 2023-09-12T18:34:51.000Z | [
"transformers",
"pytorch",
"safetensors",
"audio-spectrogram-transformer",
"audio-classification",
"arxiv:2104.01778",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] | audio-classification | MIT | null | null | MIT/ast-finetuned-audioset-16-16-0.442 | 0 | 438 | transformers | 2022-11-14T19:08:00 | ---
license: bsd-3-clause
tags:
- audio-classification
---
# Audio Spectrogram Transformer (fine-tuned on AudioSet)
Audio Spectrogram Transformer (AST) model fine-tuned on AudioSet. It was introduced in the paper [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778) by Gong et al. and first released in [this repository](https://github.com/YuanGongND/ast).
Disclaimer: The team releasing Audio Spectrogram Transformer did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Model description
The Audio Spectrogram Transformer is equivalent to [ViT](https://huggingface.co/docs/transformers/model_doc/vit), but applied to audio. Audio is first turned into an image (a spectrogram), after which a Vision Transformer is applied. The model achieves state-of-the-art results on several audio classification benchmarks.
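As a rough illustration of the spectrogram-as-image idea, assuming AST's standard input of a 128-mel-bin × 1024-frame spectrogram (~10 s of audio) and reading the "16-16" in the model name as non-overlapping 16×16 patch strides (an assumption, not something stated in this card):

```python
# Sketch: how many ViT-style patches a 16x16 non-overlapping split produces
# over an assumed 128-mel-bin x 1024-frame spectrogram.
mel_bins, frames = 128, 1024
patch = 16

freq_patches = mel_bins // patch    # 8
time_patches = frames // patch      # 64
print(freq_patches * time_patches)  # 512
```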
## Usage
You can use the raw model for classifying audio into one of the AudioSet classes. See the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/audio-spectrogram-transformer) for more info. | 1,110 | [
[
-0.0577392578125,
-0.0167083740234375,
0.0094757080078125,
0.006771087646484375,
-0.023834228515625,
0.0038509368896484375,
-0.01470947265625,
-0.050140380859375,
0.0322265625,
0.040679931640625,
-0.0618896484375,
-0.032501220703125,
-0.0467529296875,
-0.010... |
kyujinpy/Kosy-platypus2-13B-v3 | 2023-11-02T01:52:46.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"dataset:kyujinpy/KOpen-platypus",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | kyujinpy | null | null | kyujinpy/Kosy-platypus2-13B-v3 | 0 | 438 | transformers | 2023-10-27T09:32:42 | ---
language:
- ko
datasets:
- kyujinpy/KOpen-platypus
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---
# **Kosy🍵llama**

## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Model Description**
A new version of Ko-platypus2 trained using the [NEFTune](https://github.com/neelsjain/NEFTune) method!
(Noisy + KO + llama = Kosy🍵llama)
**Repo Link**
Github **KoNEFTune**: [Kosy🍵llama](https://github.com/Marker-Inc-Korea/KoNEFTune)
If you visit our GitHub, you can easily apply **Random_noisy_embedding_fine-tuning**!
**Base Model**
[hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b)
**Training Dataset**
Version of combined dataset: [kyujinpy/KOpen-platypus](https://huggingface.co/datasets/kyujinpy/KOpen-platypus)
I used an A100 40GB GPU and Colab for training.
# **Model comparisons**
[KO-LLM leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)
# **NEFT comparisons**

| Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| [Ko-Platypus2-13B](https://huggingface.co/kyujinpy/KO-Platypus2-13B) | 45.60 | 44.20 | 54.31 | 42.47 | 44.41 | 42.62 |
| *NEFT(🍵kosy)+MLP-v1 | 43.64 | 43.94 | 53.88 | 42.68 | 43.46 | 34.24 |
| *NEFT(🍵kosy)+MLP-v2 | 45.45 | 44.20 | 54.56 | 42.60 | 42.68 | 42.98 |
| [***NEFT(🍵kosy)+MLP-v3**](https://huggingface.co/kyujinpy/Kosy-platypus2-13B-v3) | 46.31 | 43.34 | 54.54 | 43.38 | 44.11 | 46.16 |
| NEFT(🍵kosy)+Attention | 44.92 |42.92 | 54.48 | 42.99 | 43.00 | 41.20 |
| NEFT(🍵kosy) | 45.08 | 43.09 | 53.61 | 41.06 | 43.47 | 43.21 |
> *Trained with different hyperparameters, such as learning_rate, batch_size, epoch, etc.
# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "kyujinpy/Koisy-Platypus2-13B"
OpenOrca = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
--- | 2,181 | [
[
-0.046539306640625,
-0.0550537109375,
0.0229644775390625,
0.027313232421875,
-0.046783447265625,
0.00011414289474487305,
-0.01514434814453125,
-0.0211334228515625,
0.020965576171875,
0.0248870849609375,
-0.0286865234375,
-0.049591064453125,
-0.050933837890625,
... |
microsoft/tapex-base | 2023-05-03T03:48:52.000Z | [
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"tapex",
"table-question-answering",
"en",
"arxiv:2107.07653",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"has_space"
] | table-question-answering | microsoft | null | null | microsoft/tapex-base | 24 | 437 | transformers | 2022-03-02T23:29:05 | ---
language: en
tags:
- tapex
- table-question-answering
license: mit
---
# TAPEX (base-sized model)
TAPEX was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. The original repo can be found [here](https://github.com/microsoft/Table-Pretraining).
## Model description
TAPEX (**Ta**ble **P**re-training via **Ex**ecution) is a conceptually simple and empirically powerful pre-training approach to empower existing models with *table reasoning* skills. TAPEX realizes table pre-training by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically synthesizing executable SQL queries.
TAPEX is based on the BART architecture, the transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder.
## Intended Uses
You can use the raw model for simulating neural SQL execution, i.e., employ TAPEX to execute a SQL query on a given table. However, the model is mostly meant to be fine-tuned on a supervised dataset. Currently TAPEX can be fine-tuned to tackle table question answering tasks and table fact verification tasks. See the [model hub](https://huggingface.co/models?search=tapex) to look for fine-tuned versions on a task that interests you.
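For intuition about what "executing a SQL query on a given table" means as a pre-training target: the label is simply what a real SQL engine would return. A sketch with Python's built-in sqlite3, using a toy Olympics table invented for illustration:

```python
import sqlite3

# Illustration only: TAPEX is pre-trained to mimic what a real SQL engine
# returns; sqlite3 gives the ground-truth answer for a toy table and query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE olympics (year INTEGER, city TEXT)")
conn.executemany(
    "INSERT INTO olympics VALUES (?, ?)",
    [(1896, "athens"), (1900, "paris"), (1904, "st. louis"),
     (2004, "athens"), (2008, "beijing"), (2012, "london")],
)
rows = conn.execute("SELECT year FROM olympics WHERE city = 'beijing'").fetchall()
print(rows)  # [(2008,)]
```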
### How to Use
Here is how to use this model in transformers:
```python
from transformers import TapexTokenizer, BartForConditionalGeneration
import pandas as pd
tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-base")
model = BartForConditionalGeneration.from_pretrained("microsoft/tapex-base")
data = {
"year": [1896, 1900, 1904, 2004, 2008, 2012],
"city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)
# tapex accepts uncased input since it is pre-trained on the uncased corpus
query = "select year where city = beijing"
encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model.generate(**encoding)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# ['2008']
```
### How to Fine-tuning
Please find the fine-tuning script [here](https://github.com/SivilTaram/transformers/tree/add_tapex_bis/examples/research_projects/tapex).
### BibTeX entry and citation info
```bibtex
@inproceedings{
liu2022tapex,
title={{TAPEX}: Table Pre-training via Learning a Neural {SQL} Executor},
author={Qian Liu and Bei Chen and Jiaqi Guo and Morteza Ziyadi and Zeqi Lin and Weizhu Chen and Jian-Guang Lou},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=O50443AsCP}
}
``` | 2,776 | [
[
-0.028411865234375,
-0.054595947265625,
0.0380859375,
-0.01200103759765625,
-0.0253448486328125,
-0.004116058349609375,
-0.01708984375,
-0.004131317138671875,
0.02777099609375,
0.04217529296875,
-0.03448486328125,
-0.050445556640625,
-0.03521728515625,
-0.01... |
sultan/ArabicT5-17GB-base | 2023-11-05T02:15:24.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv:2109.10686",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | sultan | null | null | sultan/ArabicT5-17GB-base | 3 | 437 | transformers | 2022-10-14T13:41:29 | # ArabicT5: Efficient Adaptation of T5 on Arabic Language
# Model Description
This model adapts T5 to the Arabic language by pre-training T5 on:
- Arabic Wikipedia.
- Marefa encyclopedia.
- Hindawi Books.
- a collection of Arabic News.
The total corpus size is 17GB. This model uses an efficient implementation of T5 that reduces fine-tuning time and memory usage ([Link](https://arxiv.org/abs/2109.10686)), and uses T5x for pre-training ([Link](https://github.com/google-research/t5x)).
## Pre-training Settings and Results on TyDi QA Development Dataset (Model in this card is highlighted in bold)
| Model | Hidden Layer | Atten. head | Atten. Layers | Vocab | Hardware |Training Steps | Batch | Train x Batch Factor |Corpora |
|------------------|--------------|-------------|---------------|-------|-----------|---------------|--------|-----------------------|------------------------|
| AraT5-base | 768 | 12 | 12 | 110K |TPUv3-8 | 1M | 128 | 1.0x |248GB 29B tokens (MSA + Tweets) |
| AraT5-msa-base | 768 | 12 | 12 | 110K |TPUv3-8 | 1M | 128 | 1.0x |70GB (MSA) |
| AraT5-tweets-base| 768 | 12 | 12 | 110K |TPUv3-8 | 1M | 128 | 1.0x |178GB (Tweets) |
| AraBART-base | 768 | 12 | 12 | 50K | 128 V100 GPUs (60h) |25 epochs| - | - |73GB (MSA) |
| mT5-base | 768 | 12 | 12 | 250K |TPUv3-32 | 1M | 1024 | 8.0x |6.3T tokens (mC4)|
| ArabicT5-17GB-small | 512 | 8 | 20 | 32K |TPUv3-32 | 256K | 256 | 0.5x |17GB (MSA) |
| ArabicT5-49GB-small | 512 | 8 | 16 | 32K |TPUv3-64 | 500K | 256 | 1.0x |49GB (MSA + OSCAR) |
| ArabicT5-17GB-base | 768 | 12 | 16 | 32K |TPUv3-128 | 500K | 512 | 2.0x |17GB (MSA) |
| ArabicT5-49GB-base | 768 | 12 | 16 | 32K |TPUv3-64 | 500K | 256 | 1.0x |49GB (MSA + OSCAR) |
| ArabicT5-17GB-large | 768 | 12 | 36 | 32K |TPUv3-128 | 500K | 512 | 2.0x |17GB (MSA) |
## Results on TyDi QA, HARD, Sentiment Analysis, Sarcasm Detection ( Best Score is highlighted in bold )
| Model | <center>TyDi QA| <center>HARD| <center>ArSarcasm-v2-Sentiment| <center>ArSarcasm-v2-Sarcasm| XL-SUM |
|----------------------|---------------|---------------------|-------------------------------------|----------------------------------|----------------------------------
| AraT5-base | <center>70.4/84.2 |<center>**96.5**|<center>69.7/72.6|<center>60.4|<center>30.3|
| AraT5-msa-base | <center>70.9/84.0 |<center>**96.5**|<center>70.0/72.7|<center>60.7|<center>27.4|
| AraT5-tweets-base | <center>65.1/79.0 |<center>96.3|<center>70.7/73.5|<center>61.1|<center>25.1|
| mT5-base | <center>72.2/84.1 |<center>96.2|<center>67.3/68.8|<center>52.2|<center>25.7|
| AraBART-base | <center>48.8/71.2 |<center>96.1|<center>66.2/68.2|<center>56.3|<center>31.2|
| ArabicT5-17GB-small | <center>70.8/84.8 |<center>96.4|<center>68.9/71.2|<center>58.9|<center>29.2|
| ArabicT5-49GB-small | <center>72.4/85.1 |<center>96.4|<center>70.2/73.4|<center>61.0|<center>30.2|
| ArabicT5-17GB-base | <center>73.3/86.1 |<center>96.4|<center>70.4/73.0|<center>59.8|<center>30.3|
| ArabicT5-49GB-base | <center>72.1/85.1 |<center>**96.5**|<center>71.3/74.1|<center>60.4|<center>30.9|
| ArabicT5-17GB-large | <center>**75.5/87.1** |<center>**96.5**| <center>**72.2/75.2**|<center>**61.7**|<center>**31.7**|
Evaluation Metrics: TyDi QA (EM/F1), HARD (Accuracy), Sentiment Analysis (Accuracy / F1-PN positive-negative), Sarcasm Detection (F1-sarcastic), XL-SUM (Rouge-L with Stemmer).
You can download the full details of our grid search for all models in all tasks above from this link: https://github.com/salrowili/ArabicT5/raw/main/ArabicT5_Grid_Search.zip
For the XL-Sum task, we choose our best run for each model using the eval set. We use the official evaluation script from XL-Sum, which applies a stemmer function; this may show better results than papers that don't use one. The official XL-Sum paper does use the stemmer function.
# FineTuning our efficient ArabicT5-49GB-Small model with Torch on 3070 laptop GPU ###
[![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/ArabicT5/blob/main/ArabicT5_49GB_Small_on_3070_Laptop_GPU.ipynb)
If you are running your code on a laptop GPU (e.g., a gaming laptop) or with limited GPU memory, we recommend using our ArabicT5-49GB-Small model, which was the only model from the list that we were able to run on a 3070 laptop card with a batch size of 8. We managed to achieve an F1 score of 85.391 (slightly better than our FLAX code) on the TyDi QA task.
# FineTuning our ArabicT5 model on generative and abstractive tasks with FLAX ###
[![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/ArabicT5/blob/main/FineTuning_ArabicT5_with_FLAX_and_TPU.ipynb)
[COLAB]: https://colab.research.google.com/assets/colab-badge.svg
# FineTuning ArabicT5 on TPUv3-8 with free Kaggle ###
https://www.kaggle.com/code/sultanalrowili/arabict5-on-tydi-with-free-tpuv3-8-with-kaggle
# Continual Pre-Training of ArabicT5 with T5x
If you want to continue pre-training ArabicT5 on your own data, we have uploaded the raw t5x checkpoint to this link https://huggingface.co/sultan/ArabicT5-49GB-base/blob/main/arabict5_49GB_base_t5x.tar.gz
We will soon share a tutorial on how you can do that for free with a Kaggle TPU.
## GitHub Page
https://github.com/salrowili/ArabicT5
# Acknowledgment
We want to acknowledge the support of the TPU Research Cloud (TRC) team for granting us access to TPUv3 units.
# Paper
[Generative Approach for Gender-Rewriting Task with ArabicT5](https://aclanthology.org/2022.wanlp-1.55/)
# Citation
```bibtex
@inproceedings{alrowili-shanker-2022-generative,
title = "Generative Approach for Gender-Rewriting Task with {A}rabic{T}5",
author = "Alrowili, Sultan and
Shanker, Vijay",
booktitle = "Proceedings of the The Seventh Arabic Natural Language Processing Workshop (WANLP)",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.wanlp-1.55",
pages = "491--495",
abstract = "Addressing the correct gender in generative tasks (e.g., Machine Translation) has been an overlooked issue in the Arabic NLP. However, the recent introduction of the Arabic Parallel Gender Corpus (APGC) dataset has established new baselines for the Arabic Gender Rewriting task. To address the Gender Rewriting task, we first pre-train our new Seq2Seq ArabicT5 model on a 17GB of Arabic Corpora. Then, we continue pre-training our ArabicT5 model on the APGC dataset using a newly proposed method. Our evaluation shows that our ArabicT5 model, when trained on the APGC dataset, achieved competitive results against existing state-of-the-art methods. In addition, our ArabicT5 model shows better results on the APGC dataset compared to other Arabic and multilingual T5 models.",
}
``` | 7,702 | [
[
-0.04620361328125,
-0.0300750732421875,
0.019744873046875,
0.011810302734375,
-0.0235137939453125,
0.026275634765625,
-0.007030487060546875,
-0.021270751953125,
0.007720947265625,
0.0081939697265625,
-0.044158935546875,
-0.0836181640625,
-0.05572509765625,
0... |
Uminosachi/realisticVisionV30_v30VAE-inpainting | 2023-07-07T09:15:20.000Z | [
"diffusers",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | Uminosachi | null | null | Uminosachi/realisticVisionV30_v30VAE-inpainting | 2 | 437 | diffusers | 2023-07-03T23:54:35 | ---
license: creativeml-openrail-m
---
This is an inpainting model, which has been converted from the [realisticVisionV30_v30VAE-inpainting](https://civitai.com/models/4201?modelVersionId=105723). | 196 | [
[
-0.026092529296875,
-0.0136871337890625,
0.034423828125,
0.01171875,
-0.025634765625,
0.0215301513671875,
0.033782958984375,
-0.027069091796875,
0.02984619140625,
0.07427978515625,
-0.07757568359375,
0.0136566162109375,
-0.006229400634765625,
-0.026809692382... |
SaiRaj03/my-pet-cat-xzg | 2023-10-23T17:01:22.000Z | [
"diffusers",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | SaiRaj03 | null | null | SaiRaj03/my-pet-cat-xzg | 0 | 437 | diffusers | 2023-10-21T11:51:30 | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Cat-XZG Dreambooth model trained by SaiRaj03 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: BITS 683
Sample pictures of this concept:





| 746 | [
[
-0.05792236328125,
-0.0195770263671875,
0.0243682861328125,
0.0227203369140625,
-0.0272064208984375,
0.035736083984375,
0.020477294921875,
-0.0273895263671875,
0.0504150390625,
0.02777099609375,
-0.052001953125,
-0.0322265625,
-0.022491455078125,
0.009674072... |
digiplay/CamelliaMIx_2.5D_diffusers | 2023-06-19T13:32:11.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | digiplay | null | null | digiplay/CamelliaMIx_2.5D_diffusers | 2 | 436 | diffusers | 2023-05-27T11:22:01 | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/44219/camelliamix25d
| 184 | [
[
-0.021453857421875,
0.00647735595703125,
0.00911712646484375,
0.03192138671875,
-0.025726318359375,
0.00466156005859375,
0.04779052734375,
-0.0112762451171875,
0.03045654296875,
0.051910400390625,
-0.05938720703125,
-0.010345458984375,
-0.0026035308837890625,
... |
vuiseng9/ov-gpt2-fp32-no-cache | 2023-06-27T22:58:37.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"tflite",
"rust",
"safetensors",
"openvino",
"gpt2",
"text-generation",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | vuiseng9 | null | null | vuiseng9/ov-gpt2-fp32-no-cache | 0 | 436 | transformers | 2023-06-27T22:07:52 | # Notes:
This model is inherited directly from gpt2 on the HF model hub; the GPT-2 OpenVINO IR from OMZ is then copied here. The intended usage of this model is with optimum-intel.
```python
# Requires Optimum-Intel (pip install optimum[openvino])
from transformers import AutoTokenizer, pipeline, set_seed, AutoModelForCausalLM
from optimum.intel.openvino import OVModelForCausalLM
model_id="vuiseng9/ov-gpt2-fp32-no-cache"
model = OVModelForCausalLM.from_pretrained(model_id, use_cache=False)
tokenizer = AutoTokenizer.from_pretrained(model_id)
generator_pipe = pipeline('text-generation', model=model, tokenizer=tokenizer)
output = generator_pipe("It's a beautiful day ...", max_length=30, num_return_sequences=1)
```
| 681 | [
[
-0.0219879150390625,
-0.053741455078125,
0.0350341796875,
0.006069183349609375,
-0.029266357421875,
-0.0158843994140625,
0.00127410888671875,
-0.01422882080078125,
-0.0242462158203125,
0.0278778076171875,
-0.05889892578125,
-0.027191162109375,
-0.038543701171875... |
audeering/wav2vec2-large-robust-24-ft-age-gender | 2023-09-21T13:23:33.000Z | [
"transformers",
"pytorch",
"wav2vec2",
"speech",
"audio",
"audio-classification",
"age-recognition",
"gender-recognition",
"dataset:agender",
"dataset:mozillacommonvoice",
"dataset:timit",
"dataset:voxceleb2",
"arxiv:2306.16962",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"regi... | audio-classification | audeering | null | null | audeering/wav2vec2-large-robust-24-ft-age-gender | 0 | 436 | transformers | 2023-09-04T11:50:44 | ---
datasets:
- agender
- mozillacommonvoice
- timit
- voxceleb2
inference: true
tags:
- speech
- audio
- wav2vec2
- audio-classification
- age-recognition
- gender-recognition
license: cc-by-nc-sa-4.0
---
# Model for Age and Gender Recognition based on Wav2vec 2.0 (24 layers)
The model expects a raw audio signal as input and outputs predictions
for age in a range of approximately 0...1 (0...100 years)
and gender expressing the probability for being child, female, or male.
In addition, it also provides the pooled states of the last transformer layer.
The model was created by fine-tuning [Wav2Vec2-Large-Robust](https://huggingface.co/facebook/wav2vec2-large-robust)
on [aGender](https://paperswithcode.com/dataset/agender),
[Mozilla Common Voice](https://commonvoice.mozilla.org/),
[Timit](https://catalog.ldc.upenn.edu/LDC93s1) and
[Voxceleb 2](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox2.html).
For this version of the model we trained all 24 transformer layers.
An [ONNX](https://onnx.ai/) export of the model is available from
[doi:10.5281/zenodo.7761387](https://doi.org/10.5281/zenodo.7761387).
Further details are given in the associated [paper](https://arxiv.org/abs/2306.16962)
and [tutorial](https://github.com/audeering/w2v2-age-gender-how-to).
# Usage
```python
import numpy as np
import torch
import torch.nn as nn
from transformers import Wav2Vec2Processor
from transformers.models.wav2vec2.modeling_wav2vec2 import (
Wav2Vec2Model,
Wav2Vec2PreTrainedModel,
)
class ModelHead(nn.Module):
r"""Classification head."""
def __init__(self, config, num_labels):
super().__init__()
self.dense = nn.Linear(config.hidden_size, config.hidden_size)
self.dropout = nn.Dropout(config.final_dropout)
self.out_proj = nn.Linear(config.hidden_size, num_labels)
def forward(self, features, **kwargs):
x = features
x = self.dropout(x)
x = self.dense(x)
x = torch.tanh(x)
x = self.dropout(x)
x = self.out_proj(x)
return x
class AgeGenderModel(Wav2Vec2PreTrainedModel):
r"""Speech emotion classifier."""
def __init__(self, config):
super().__init__(config)
self.config = config
self.wav2vec2 = Wav2Vec2Model(config)
self.age = ModelHead(config, 1)
self.gender = ModelHead(config, 3)
self.init_weights()
def forward(
self,
input_values,
):
outputs = self.wav2vec2(input_values)
hidden_states = outputs[0]
hidden_states = torch.mean(hidden_states, dim=1)
logits_age = self.age(hidden_states)
logits_gender = torch.softmax(self.gender(hidden_states), dim=1)
return hidden_states, logits_age, logits_gender
# load model from hub
device = 'cpu'
model_name = 'audeering/wav2vec2-large-robust-24-ft-age-gender'
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = AgeGenderModel.from_pretrained(model_name)
# dummy signal
sampling_rate = 16000
signal = np.zeros((1, sampling_rate), dtype=np.float32)
def process_func(
x: np.ndarray,
sampling_rate: int,
embeddings: bool = False,
) -> np.ndarray:
r"""Predict age and gender or extract embeddings from raw audio signal."""
# run through processor to normalize signal
# always returns a batch, so we just get the first entry
# then we put it on the device
y = processor(x, sampling_rate=sampling_rate)
y = y['input_values'][0]
y = y.reshape(1, -1)
y = torch.from_numpy(y).to(device)
# run through model
with torch.no_grad():
y = model(y)
if embeddings:
y = y[0]
else:
y = torch.hstack([y[1], y[2]])
# convert to numpy
y = y.detach().cpu().numpy()
return y
print(process_func(signal, sampling_rate))
# Age child female male
# [[ 0.33793038 0.2715511 0.2275236 0.5009253 ]]
print(process_func(signal, sampling_rate, embeddings=True))
# Pooled hidden states of last transformer layer
# [[ 0.024444 0.0508722 0.04930823 ... 0.07247854 -0.0697901
# -0.0170537 ]]
```
| 4,155 | [
[
-0.019866943359375,
-0.037109375,
0.018035888671875,
0.0126190185546875,
0.01111602783203125,
-0.0159759521484375,
-0.00800323486328125,
-0.0260009765625,
-0.030548095703125,
0.01885986328125,
-0.06597900390625,
-0.039276123046875,
-0.039520263671875,
-0.023... |
alx-ai/noggles11400 | 2023-05-16T09:29:42.000Z | [
"diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | alx-ai | null | null | alx-ai/noggles11400 | 0 | 435 | diffusers | 2022-11-19T05:06:15 | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### noggles11400 on Stable Diffusion via Dreambooth trained on the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
#### Model by alxdfy
This is the Stable Diffusion model fine-tuned on the noggles11400 concept taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt(s)`: **nounsbud.jpg**
You can also train your own concepts and upload them to the library by using [the fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb).
You can run your new concept via A1111 Colab :[Fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Sample pictures of this concept:
nounsbud.jpg

| 1,393 | [
[
-0.02801513671875,
-0.06982421875,
0.0391845703125,
0.02764892578125,
-0.01052093505859375,
0.027618408203125,
0.0063018798828125,
-0.023162841796875,
0.04736328125,
0.0188446044921875,
-0.0101470947265625,
-0.0248565673828125,
-0.032562255859375,
-0.0248413... |
ItsJayQz/Valorant_Diffusion | 2023-01-28T01:05:27.000Z | [
"diffusers",
"stable-diffusion",
"text-to-image",
"valorant",
"game",
"en",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | ItsJayQz | null | null | ItsJayQz/Valorant_Diffusion | 38 | 435 | diffusers | 2022-12-17T20:57:17 | ---
language:
- en
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
- diffusers
- valorant
- game
inference: true
---
### Valorant Diffusion
This model was trained on the Valorant agents' splash arts, plus some extra art from the official website.
I thought about including the agents' trailers and lore videos, but their art style is ever so slightly different. I might make an updated version which includes them.
The model can do portraits and landscapes (possibly animals as well?), but not many objects, or at least not cars.
To reference the art style, use the token: valorant style
There is already an existing model that uses textual inversion. This is trained using Dreambooth instead; whether or not this method is better, I will let you judge.
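For reference, here is a minimal `diffusers` sketch of using the style token (my own example, not from the card; the fp16 weights and CUDA device are assumptions, and the heavy imports are kept inside `generate` so the prompt helper stands on its own):

```python
def build_prompt(subject: str) -> str:
    # "valorant style" is the trigger token documented in this card
    return f"valorant style, {subject}"

def generate(subject: str, model_id: str = "ItsJayQz/Valorant_Diffusion"):
    """Run one generation (requires diffusers, torch, and a CUDA GPU)."""
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        model_id, torch_dtype=torch.float16
    ).to("cuda")
    return pipe(build_prompt(subject)).images[0]
```

Calling `generate("portrait of an agent, city background").save("sample.png")` would then write one sample to disk.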
### Gradio
We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run Valorant_Diffusion:
[](https://huggingface.co/spaces/akhaliq/Valorant_Diffusion)
Here are some samples.
**Portraits**


**Landscapes**

**Disclaimers**
- I'm in no way affiliated with RiotGames, or any entities relating to the ownership of the game artworks.
- The phrase Valorant is simply a reference for accessibility.
- This was created entirely for research, and entertainment purpose.
- I did not plan, nor am I planning, on turning this model into a commercial product, or using it for commercial purposes.
- I do not condone the usage of the model for making counterfeit products that might infringe on RiotGames's copyrights/trademarks.
**License**
- This model is under Creative OpenRAIL-M.
- This means the model can be used royalty-free, and flexible with the model usage, such as redistribution of the model, or of any derivatives of the model.
- However, there are restrictions on the openness of the license.
More info into the restrictions can be found [here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
**Responsibilities**
- By using/downloading the model, you are responsible for:
- All outputs/usage of the model.
- Understanding the Disclaimers
- Upholding the terms of the license.
Thanks for checking out the model! | 2,696 | [
[
-0.0181121826171875,
-0.04931640625,
0.040069580078125,
0.036834716796875,
-0.0176544189453125,
0.0003294944763183594,
0.0170745849609375,
-0.039154052734375,
0.0377197265625,
0.06756591796875,
-0.034027099609375,
-0.048370361328125,
-0.034332275390625,
-0.0... |
vanadhi/roberta-base-fiqa-flm-sq-flit | 2021-12-25T18:36:54.000Z | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | vanadhi | null | null | vanadhi/roberta-base-fiqa-flm-sq-flit | 1 | 434 | transformers | 2022-03-02T23:29:05 | ---
tags:
- generated_from_trainer
model-index:
- name: roberta-base-fiqa-flm-sq-flit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-fiqa-flm-sq-flit
This model is a fine-tuned version of roberta-base on a custom dataset created for question answering in the financial domain.
## Model description
RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion.
The model was further processed as below for the specific downstream QA task.
1. Pretrained for domain adaptation with the masked language modeling (MLM) objective on the FIQA challenge opinion-based QA dataset, available here - https://drive.google.com/file/d/1BlWaV-qVPfpGyJoWQJU9bXQgWCATgxEP/view
2. Pretrained with MLM objective with custom generated dataset for Banking and Finance.
3. Fine Tuned with SQuAD V2 dataset for QA task adaptation.
4. Fine Tuned with custom labeled dataset in SQuAD format for domain and task adaptation.
## Intended uses & limitations
The model is intended to be used in a custom question answering system for the BFSI domain.
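Since steps 3–4 above rely on data in SQuAD format, here is a small sketch (my own illustration, not the card's tooling) of wrapping one labeled (question, context, answer) triple into a SQuAD-v2-style paragraph record, with `answer_text=None` marking an unanswerable question:

```python
def to_squad_v2(qid, question, context, answer_text=None):
    """Wrap one labeled example in the SQuAD v2 paragraph layout.

    answer_text=None marks the question as unanswerable (is_impossible=True),
    which is the key addition of SQuAD v2 over v1.1."""
    impossible = answer_text is None
    answers = [] if impossible else [
        # answer_start is the character offset of the answer span in context
        {"text": answer_text, "answer_start": context.index(answer_text)}
    ]
    return {
        "context": context,
        "qas": [{
            "id": qid,
            "question": question,
            "answers": answers,
            "is_impossible": impossible,
        }],
    }
```

In the full SQuAD JSON file, records like this are grouped under a top-level `data` → `paragraphs` list.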
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.15.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,762 | [
[
-0.0322265625,
-0.0728759765625,
0.0095062255859375,
0.0089263916015625,
-0.0021533966064453125,
-0.0035419464111328125,
-0.005126953125,
-0.0141754150390625,
-0.0020656585693359375,
0.052490234375,
-0.06866455078125,
-0.030914306640625,
-0.04449462890625,
0... |
Fictiverse/Stable_Diffusion_Microscopic_model | 2023-05-19T18:14:09.000Z | [
"diffusers",
"text-to-image",
"license:openrail",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Fictiverse | null | null | Fictiverse/Stable_Diffusion_Microscopic_model | 74 | 434 | diffusers | 2022-11-08T07:03:22 | ---
license: openrail
tags:
- text-to-image
---
# Microscopic model V1
This is the fine-tuned Stable Diffusion model trained on microscopic images.
Use **Microscopic** in your prompts.
### Sample images:


Image enhancing : Before/After

Based on StableDiffusion 1.5 model
### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or FLAX/JAX.
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "Fictiverse/Stable_Diffusion_Microscopic_model"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "microscopic creature"
image = pipe(prompt).images[0]
image.save("./microscopic.png")
``` | 1,349 | [
[
-0.038116455078125,
-0.0635986328125,
0.04718017578125,
-0.0007925033569335938,
-0.029296875,
-0.01385498046875,
0.0082855224609375,
0.0016193389892578125,
0.015625,
0.034393310546875,
-0.0220947265625,
-0.033172607421875,
-0.045654296875,
-0.007228851318359... |
digiplay/xxMix_4 | 2023-07-13T23:48:49.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | digiplay | null | null | digiplay/xxMix_4 | 0 | 434 | diffusers | 2023-07-13T23:33:56 | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/47919?modelVersionId=52513
Original author's DEMO images:
 | 379 | [
[
-0.030303955078125,
-0.0153961181640625,
0.028839111328125,
0.0124664306640625,
-0.02532958984375,
-0.007808685302734375,
0.0216217041015625,
-0.0018568038940429688,
0.048126220703125,
0.05047607421875,
-0.0650634765625,
-0.0158538818359375,
-0.00526809692382812... |
kyujinpy/Kosy-platypus2-13B-v2 | 2023-11-02T01:52:37.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"dataset:kyujinpy/KOpen-platypus",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | kyujinpy | null | null | kyujinpy/Kosy-platypus2-13B-v2 | 0 | 434 | transformers | 2023-10-26T16:32:35 | ---
language:
- ko
datasets:
- kyujinpy/KOpen-platypus
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---
# **Kosy🍵llama**

## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Model Description**
A new version of Ko-platypus2 trained with the [NEFTune](https://github.com/neelsjain/NEFTune) method!
(Noisy + KO + llama = Kosy🍵llama)
**Repo Link**
Github **KoNEFTune**: [Kosy🍵llama](https://github.com/Marker-Inc-Korea/KoNEFTune)
If you visit our github, you can easily apply **Random_noisy_embedding_fine-tuning**!!
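The NEFTune idea itself is small; here is a NumPy sketch of the noise injection (following the scaling rule from the NEFTune paper, not this repo's exact code — `alpha=5` is only an illustrative setting):

```python
import numpy as np

def neftune_noise(embeddings, alpha=5.0, rng=None):
    """Add uniform noise scaled by alpha / sqrt(seq_len * dim) to token
    embeddings, as NEFTune does during fine-tuning (never at inference)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    seq_len, dim = embeddings.shape[-2:]
    scale = alpha / np.sqrt(seq_len * dim)
    return embeddings + rng.uniform(-scale, scale, size=embeddings.shape)
```

In a real trainer this is applied to the embedding layer's output on each forward pass of training only.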
**Base Model**
[hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b)
**Training Dataset**
Version of combined dataset: [kyujinpy/KOpen-platypus](https://huggingface.co/datasets/kyujinpy/KOpen-platypus)
I used an A100 40GB GPU on Colab when training.
# **Model comparisons**
[KO-LLM leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)
# **NEFT comparisons**

| Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| [Ko-Platypus2-13B](https://huggingface.co/kyujinpy/KO-Platypus2-13B) | 45.60 | 44.20 | 54.31 | 42.47 | 44.41 | 42.62 |
| *NEFT(🍵kosy)+MLP-v1 | 43.64 | 43.94 | 53.88 | 42.68 | 43.46 | 34.24 |
| *NEFT(🍵kosy)+MLP-v2 | 45.45 | 44.20 | 54.56 | 42.60 | 42.68 | 42.98 |
| [***NEFT(🍵kosy)+MLP-v3**](https://huggingface.co/kyujinpy/Kosy-platypus2-13B-v3) | 46.31 | 43.34 | 54.54 | 43.38 | 44.11 | 46.16 |
| NEFT(🍵kosy)+Attention | 44.92 |42.92 | 54.48 | 42.99 | 43.00 | 41.20 |
| NEFT(🍵kosy) | 45.08 | 43.09 | 53.61 | 41.06 | 43.47 | 43.21 |
> *Different hyperparameters such as learning_rate, batch_size, epochs, etc...
# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "kyujinpy/Koisy-Platypus2-13B"
OpenOrca = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
--- | 2,181 | [
[
-0.046539306640625,
-0.0550537109375,
0.0229339599609375,
0.027313232421875,
-0.046783447265625,
0.00016260147094726562,
-0.01514434814453125,
-0.0211334228515625,
0.02093505859375,
0.0248870849609375,
-0.0286712646484375,
-0.049591064453125,
-0.050933837890625,... |
alx-ai/noggles9000 | 2023-05-16T09:29:48.000Z | [
"diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | alx-ai | null | null | alx-ai/noggles9000 | 1 | 433 | diffusers | 2022-11-19T07:42:20 | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### noggles9000 on Stable Diffusion via Dreambooth trained on the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
#### Model by alxdfy
This is the Stable Diffusion model fine-tuned on the noggles9000 concept taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt(s)`: **nounfootball.jpg**
You can also train your own concepts and upload them to the library by using [the fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb).
You can run your new concept via A1111 Colab :[Fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Sample pictures of this concept:
nounfootball.jpg

| 1,406 | [
[
-0.0245361328125,
-0.067138671875,
0.036956787109375,
0.0303802490234375,
-0.004978179931640625,
0.0257110595703125,
0.0114288330078125,
-0.022674560546875,
0.05035400390625,
0.023406982421875,
-0.0154266357421875,
-0.0252838134765625,
-0.030548095703125,
-0... |
facebook/convnextv2-base-22k-224 | 2023-02-20T13:13:23.000Z | [
"transformers",
"pytorch",
"convnextv2",
"image-classification",
"vision",
"dataset:imagenet-22k",
"arxiv:2301.00808",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | facebook | null | null | facebook/convnextv2-base-22k-224 | 2 | 433 | transformers | 2023-02-19T07:08:46 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-22k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# ConvNeXt V2 (base-sized model)
ConvNeXt V2 model pretrained using the FCMAE framework and fine-tuned on the ImageNet-22K dataset at resolution 224x224. It was introduced in the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Woo et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt-V2).
Disclaimer: The team releasing ConvNeXT V2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
ConvNeXt V2 is a pure convolutional model (ConvNet) that introduces a fully convolutional masked autoencoder framework (FCMAE) and a new Global Response Normalization (GRN) layer to ConvNeXt. ConvNeXt V2 significantly improves the performance of pure ConvNets on various recognition benchmarks.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnextv2) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image into one of the ImageNet-22k classes:
```python
from transformers import AutoImageProcessor, ConvNextV2ForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
preprocessor = AutoImageProcessor.from_pretrained("facebook/convnextv2-base-22k-224")
model = ConvNextV2ForImageClassification.from_pretrained("facebook/convnextv2-base-22k-224")
inputs = preprocessor(image, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# model predicts one of the ImageNet-22k classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnextv2).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2301-00808,
author = {Sanghyun Woo and
Shoubhik Debnath and
Ronghang Hu and
Xinlei Chen and
Zhuang Liu and
In So Kweon and
Saining Xie},
title = {ConvNeXt {V2:} Co-designing and Scaling ConvNets with Masked Autoencoders},
journal = {CoRR},
volume = {abs/2301.00808},
year = {2023},
url = {https://doi.org/10.48550/arXiv.2301.00808},
doi = {10.48550/arXiv.2301.00808},
eprinttype = {arXiv},
eprint = {2301.00808},
timestamp = {Tue, 10 Jan 2023 15:10:12 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2301-00808.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | 3,374 | [
[
-0.051513671875,
-0.02813720703125,
-0.0292816162109375,
0.0165557861328125,
-0.0277862548828125,
-0.019500732421875,
-0.01201629638671875,
-0.060943603515625,
0.02325439453125,
0.03399658203125,
-0.0418701171875,
-0.00933837890625,
-0.045501708984375,
-0.00... |
Hinataaa/autotrain-text_summary_arp-45146113306 | 2023-03-30T09:07:17.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain",
"summarization",
"en",
"dataset:Hinataaa/autotrain-data-text_summary_arp",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | summarization | Hinataaa | null | null | Hinataaa/autotrain-text_summary_arp-45146113306 | 1 | 433 | transformers | 2023-03-30T08:57:50 | ---
tags:
- autotrain
- summarization
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Hinataaa/autotrain-data-text_summary_arp
co2_eq_emissions:
emissions: 3.673615303025701
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 45146113306
- CO2 Emissions (in grams): 3.6736
## Validation Metrics
- Loss: 1.492
- Rouge1: 49.267
- Rouge2: 26.900
- RougeL: 46.736
- RougeLsum: 46.679
- Gen Len: 18.636
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/Hinataaa/autotrain-text_summary_arp-45146113306
``` | 728 | [
[
-0.0345458984375,
-0.031463623046875,
0.02703857421875,
0.0192718505859375,
-0.0022430419921875,
-0.0003056526184082031,
0.0151824951171875,
-0.01515960693359375,
0.0205078125,
0.02154541015625,
-0.056884765625,
-0.031341552734375,
-0.054779052734375,
-0.000... |
timm/tiny_vit_5m_224.dist_in22k | 2023-09-01T18:12:39.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-22k",
"arxiv:2207.10666",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/tiny_vit_5m_224.dist_in22k | 0 | 433 | timm | 2023-09-01T16:03:40 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-22k
---
# Model card for tiny_vit_5m_224.dist_in22k
A TinyViT image classification model. Pretrained on ImageNet-22k with distillation by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 12.1
- GMACs: 1.2
- Activations (M): 9.3
- Image size: 224 x 224
- **Papers:**
- TinyViT: Fast Pretraining Distillation for Small Vision Transformers: https://arxiv.org/abs/2207.10666
- **Original:** https://github.com/microsoft/Cream/tree/main/TinyViT
- **Dataset:** ImageNet-22k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('tiny_vit_5m_224.dist_in22k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'tiny_vit_5m_224.dist_in22k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 56, 56])
# torch.Size([1, 128, 28, 28])
# torch.Size([1, 160, 14, 14])
# torch.Size([1, 320, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'tiny_vit_5m_224.dist_in22k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 320, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@InProceedings{tiny_vit,
title={TinyViT: Fast Pretraining Distillation for Small Vision Transformers},
author={Wu, Kan and Zhang, Jinnian and Peng, Houwen and Liu, Mengchen and Xiao, Bin and Fu, Jianlong and Yuan, Lu},
booktitle={European conference on computer vision (ECCV)},
year={2022}
}
```
| 3,556 | [
[
-0.03668212890625,
-0.0347900390625,
0.015960693359375,
0.0027828216552734375,
-0.034454345703125,
-0.029388427734375,
-0.0252227783203125,
-0.015960693359375,
0.016754150390625,
0.02093505859375,
-0.04095458984375,
-0.044677734375,
-0.048614501953125,
-0.00... |
LeoLM/leo-hessianai-13b-chat-bilingual | 2023-09-29T13:16:56.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"en",
"de",
"dataset:LeoLM/OpenSchnabeltier",
"dataset:OpenAssistant/OASST-DE",
"dataset:FreedomIntelligence/alpaca-gpt4-deutsch",
"dataset:FreedomIntelligence/evol-instruct-deutsch",
"dataset:LeoLM/German_Poems",
"dataset... | text-generation | LeoLM | null | null | LeoLM/leo-hessianai-13b-chat-bilingual | 6 | 433 | transformers | 2023-09-10T08:27:09 | ---
datasets:
- LeoLM/OpenSchnabeltier
- OpenAssistant/OASST-DE
- FreedomIntelligence/alpaca-gpt4-deutsch
- FreedomIntelligence/evol-instruct-deutsch
- LeoLM/German_Poems
- LeoLM/German_Songs
- garage-bAInd/Open-Platypus
- WizardLM/WizardLM_evol_instruct_70k
- bjoernp/oasst25-08-23-filtered
language:
- en
- de
library_name: transformers
pipeline_tag: text-generation
---
# LAION LeoLM: **L**inguistically **E**nhanced **O**pen **L**anguage **M**odel
Meet LeoLM, the first open and commercially available German Foundation Language Model built on Llama-2.
Our models extend Llama-2's capabilities into German through continued pretraining on a large corpus of German-language and mostly locality specific text.
Thanks to a compute grant at HessianAI's new supercomputer **42**, we release two foundation models trained with 8k context length,
[`LeoLM/leo-hessianai-7b`](https://huggingface.co/LeoLM/leo-hessianai-7b) and [`LeoLM/leo-hessianai-13b`](https://huggingface.co/LeoLM/leo-hessianai-13b) under the [Llama-2 community license](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt) (70b also coming soon! 👀).
With this release, we hope to bring a new wave of opportunities to German open-source and commercial LLM research and accelerate adoption.
Read our [blog post]() or our paper (preprint coming soon) for more details!
*A project by Björn Plüster and Christoph Schuhmann in collaboration with LAION and HessianAI.*
## LeoLM Chat
`LeoLM/leo-hessianai-13b-chat-bilingual` is a bilingual English-German chat model built on our foundation model `LeoLM/leo-hessianai-13b` and finetuned on a selection of German translateed instruction datasets and their English counterparts.
The model performs exceptionally well on writing, explanation and discussion tasks but struggles somewhat with math and advanced reasoning. See our MT-Bench scores:
```
{
"first_turn": 6.13125,
"second_turn": 4.88125,
"categories": {
"writing": 6.75,
"roleplay": 5.55,
"reasoning": 3.3,
"math": 2.25,
"coding": 3.9,
"extraction": 5.8,
"stem": 7.55,
"humanities": 8.95
},
"average": 5.50625
}
```
## Model Details
- **Finetuned from:** [LeoLM/leo-hessianai-13b](https://huggingface.co/LeoLM/leo-hessianai-13b)
- **Model type:** Causal decoder-only transformer language model
- **Language:** English and German
- **Demo:** [Web Demo]()
- **License:** [LLAMA 2 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt)
- **Contact:** [LAION Discord](https://discord.com/invite/eq3cAMZtCC) or [Björn Plüster](mailto:bjoern.pl@outlook.de)
## Use in 🤗Transformers
First install direct dependencies:
```
pip install transformers torch sentencepiece
```
If you want faster inference using flash-attention2, you need to install these dependencies:
```bash
pip install packaging ninja
pip install flash-attn==v2.1.1 --no-build-isolation
pip install git+https://github.com/HazyResearch/flash-attention.git@v2.1.1#subdirectory=csrc/rotary
```
Then load the model in transformers:
```python
from transformers import pipeline
import torch
system_prompt = """<|im_start|>system
Dies ist eine Unterhaltung zwischen einem intelligenten, hilfsbereitem KI-Assistenten und einem Nutzer.
Der Assistent gibt ausführliche, hilfreiche und ehrliche Antworten.<|im_end|>
"""
prompt_format = "<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"
prompt = "Erkläre mir wie die Fahrradwegesituation in Hamburg ist."
generator = pipeline(model="LeoLM/leo-hessianai-13b-chat-bilingual", device="cuda", torch_dtype=torch.float16, trust_remote_code=True) # True for flash-attn2 else False
print(generator(prompt_format.format(prompt=prompt), do_sample=True, top_p=0.95, max_length=8192))
```
"*Als KI kann ich keine persönlichen Beobachtungen teilen, aber ich kann einige allgemeine Informationen zur Fahrradwegesituation in Hamburg liefern. Im Vergleich zu vielen anderen großen Städten hat Hamburg eine hohe Anzahl von Fahrradfahrern und nimmt seine Verantwortung für nachhaltige Verkehrsmittel sehr ernst. Es gibt viele Fahrradwege und separate Fahrspuren, die Radfahrern ermöglichen, zusammen mit dem Autoverkehr zu fahren. Diese Fahrradspuren sind oft mit Markierungen gekennzeichnet und durch physische Trennungen von anderen Fahrspuren abgegrenzt. Darüber hinaus gibt es viele Fahrradstraßen, auf denen Radfahrer Vorfahrt haben und Autos langsamer fahren müssen.*
*In einigen städtischen Gebieten können Fahrradwege jedoch eng oder überfüllt sein, besonders während der Stoßzeiten. Es gibt auch viele Kreuzungen, an denen Radfahrer anhalten und auf Grün warten müssen, ähnlich wie Autofahrer. Insgesamt ist die Fahrradinfrastruktur in Hamburg ziemlich gut, aber wie überall gibt es immer Raum für Verbesserungen.*"
## Prompting / Prompt Template
Prompt dialogue template (ChatML format):
```
"""
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
"""
```
The model input can contain multiple conversation turns between user and assistant, e.g.
```
<|im_start|>user
{prompt 1}<|im_end|>
<|im_start|>assistant
{reply 1}<|im_end|>
<|im_start|>user
{prompt 2}<|im_end|>
<|im_start|>assistant
(...)
```
## Ethical Considerations and Limitations
LeoLM has been tested in English and German, and has not covered, nor could it cover all scenarios.
For these reasons, as with all LLMs, the potential outputs of `LeoLM/leo-hessianai-7b-chat` cannot be predicted
in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses
to user prompts. Therefore, before deploying any applications of `LeoLM/leo-hessianai-7b-chat`, developers should
perform safety testing and tuning tailored to their specific applications of the model.
Please see Meta's [Responsible Use Guide](https://ai.meta.com/llama/responsible-use-guide/).
## Finetuning Details
| Hyperparameter | Value |
|---|---|
| Num epochs | 3 |
| Examples per epoch | 233275 |
| Global batch size | 256 |
| Learning rate | 3e-5 |
| Warmup steps | 100 |
| LR scheduler | Cosine |
| Adam betas | (0.9, 0.95) |
| Weight decay | 0.001 |
## Dataset Details
```
## Stats for 'Subset of LeoLM/OpenSchnabeltier' (21314 samples (100.0%))
-----------------
Accepted: 21314/21314 (100.0%)
Accepted tokens: 8134690
Skipped: 0 (0.0%)
Min tokens per sample: 25
Max tokens per sample: 1202
Avg tokens per sample: 381.65947264708643
-----------------
## Stats for 'Subset of garage-bAInd/Open-Platypus' (24427 samples (100.0%))
-----------------
Accepted: 24427/24427 (100.0%)
Accepted tokens: 9549043
Skipped: 0 (0.0%)
Min tokens per sample: 23
Max tokens per sample: 5054
Avg tokens per sample: 390.9216440823679
-----------------
## Stats for 'Subset of WizardLM/WizardLM_evol_instruct_70k' (68600 samples (100.0%))
-----------------
Accepted: 68600/68600 (100.0%)
Accepted tokens: 33045040
Skipped: 0 (0.0%)
Min tokens per sample: 18
Max tokens per sample: 11810
Avg tokens per sample: 481.7061224489796
-----------------
## Stats for 'Subset of FreedomIntelligence/evol-instruct-deutsch' (57841 samples (100.0%))
-----------------
Accepted: 57841/57841 (100.0%)
Accepted tokens: 42958192
Skipped: 0 (0.0%)
Min tokens per sample: 33
Max tokens per sample: 5507
Avg tokens per sample: 742.6944900675991
-----------------
## Stats for 'Subset of FreedomIntelligence/alpaca-gpt4-deutsch' (48969 samples (100.0%))
-----------------
Accepted: 48969/48969 (100.0%)
Accepted tokens: 13372005
Skipped: 0 (0.0%)
Min tokens per sample: 19
Max tokens per sample: 1359
Avg tokens per sample: 273.07082031489307
-----------------
## Stats for 'Subset of LeoLM/German_Songs' (490 samples (100.0%))
-----------------
Accepted: 490/490 (100.0%)
Accepted tokens: 618642
Skipped: 0 (0.0%)
Min tokens per sample: 747
Max tokens per sample: 1678
Avg tokens per sample: 1262.534693877551
-----------------
## Stats for 'Subset of LeoLM/German_Poems' (392 samples (100.0%))
-----------------
Accepted: 392/392 (100.0%)
Accepted tokens: 187897
Skipped: 0 (0.0%)
Min tokens per sample: 231
Max tokens per sample: 826
Avg tokens per sample: 479.3290816326531
-----------------
## Stats for 'Subset of OpenAssistant/OASST_DE' (3646 samples (100.0%))
-----------------
Accepted: 3646/3646 (100.0%)
Accepted tokens: 2338738
Skipped: 0 (0.0%)
Min tokens per sample: 29
Max tokens per sample: 2484
Avg tokens per sample: 641.4530992868897
-----------------
## Stats for 'Subset of bjoernp/oasst25-08-23-filtered' (8922 samples (100.0%))
-----------------
Accepted: 8922/8922 (100.0%)
Accepted tokens: 4526427
Skipped: 0 (0.0%)
Min tokens per sample: 23
Max tokens per sample: 5407
Avg tokens per sample: 507.3332212508406
-----------------
## Stats for 'total' (235632 samples (100.0%))
-----------------
Accepted: 235632/235632 (100.0%)
Accepted tokens: 115862397
Skipped: 0 (0.0%)
Min tokens per sample: 18
Max tokens per sample: 11810
Avg tokens per sample: 491.70909299246284
-----------------
``` | 9,174 | [
[
-0.028411865234375,
-0.056793212890625,
0.00727081298828125,
0.0360107421875,
-0.01837158203125,
-0.0172119140625,
-0.01186370849609375,
-0.0343017578125,
0.0286407470703125,
0.018096923828125,
-0.047821044921875,
-0.053924560546875,
-0.045989990234375,
0.01... |
K024/mt5-zh-ja-en-trimmed | 2022-03-24T14:57:22.000Z | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"translation",
"zh",
"ja",
"en",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | translation | K024 | null | null | K024/mt5-zh-ja-en-trimmed | 39 | 432 | transformers | 2022-03-02T23:29:04 | ---
language:
- zh
- ja
- en
tags:
- translation
widget:
- text: "ja2zh: 吾輩は猫である。名前はまだ無い。"
license: cc-by-nc-sa-4.0
---
This model is finetuned from [mt5-base](https://huggingface.co/google/mt5-base).
The model vocabulary is trimmed to ~1/3 of its original size by selecting the top 85,000 tokens in the training data. The code to trim the vocabulary can be found [here](https://gist.github.com/K024/4a100a0f4f4b07208958e0f3244da6ad).
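The token-selection step behind the trimming can be illustrated with a toy frequency count (a simplified sketch only — the linked gist operates on the actual SentencePiece vocabulary; the corpus and `top_k` below are invented for illustration):

```python
from collections import Counter

def top_tokens(tokenized_corpus, top_k):
    """Return the top_k most frequent token ids seen in the training data."""
    counts = Counter()
    for ids in tokenized_corpus:
        counts.update(ids)
    return [tok for tok, _ in counts.most_common(top_k)]

# toy "tokenized" corpus of token-id sequences
corpus = [[1, 2, 2, 3], [2, 3, 3, 4], [5, 2]]
print(top_tokens(corpus, top_k=2))  # -> [2, 3]: the two most frequent ids
```

The kept ids would then be used to slice the embedding matrix and rebuild the tokenizer, which is what shrinks the model.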
Usage:
```python
from transformers import (
T5Tokenizer,
MT5ForConditionalGeneration,
Text2TextGenerationPipeline,
)
path = "K024/mt5-zh-ja-en-trimmed"
pipe = Text2TextGenerationPipeline(
model=MT5ForConditionalGeneration.from_pretrained(path),
tokenizer=T5Tokenizer.from_pretrained(path),
)
sentence = "ja2zh: 吾輩は猫である。名前はまだ無い。"
res = pipe(sentence, max_length=100, num_beams=4)
res[0]['generated_text']
```
Training data:
```
wikimedia-en-ja
wikimedia-en-zh
wikimedia-ja-zh
wikititles-ja-en
wikititles-zh-en
wikimatrix-ja-zh
news-commentary-en-ja
news-commentary-en-zh
news-commentary-ja-zh
ted2020-en-ja
ted2020-en-zh
ted2020-ja-zh
```
License: [![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
| 1,326 | [
[
-0.034576416015625,
-0.039825439453125,
0.0114898681640625,
0.0010890960693359375,
-0.03668212890625,
-0.00461578369140625,
-0.01302337646484375,
-0.0012979507446289062,
0.01322174072265625,
0.03594970703125,
-0.06781005859375,
-0.044830322265625,
-0.04513549804... |
AMHR/T5-for-Adversarial-Paraphrasing | 2023-08-16T19:25:16.000Z | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | AMHR | null | null | AMHR/T5-for-Adversarial-Paraphrasing | 5 | 432 | transformers | 2022-03-02T23:29:05 | This model is a paraphraser designed for the Adversarial Paraphrasing Task described and used in this paper: https://aclanthology.org/2021.acl-long.552/.
Please refer to `nap_generation.py` on the GitHub repository for ways to better utilize this model using the concepts of top-k and top-p sampling. The demo on Hugging Face outputs only one sentence, which will most likely be the same as the input sentence, since the model is meant to be used with beam search and sampling.
Github repository: https://github.com/Advancing-Machine-Human-Reasoning-Lab/apt.git
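As a rough illustration of the nucleus (top-p) filtering idea that such sampling relies on (a generic sketch, not the actual code from `nap_generation.py`; the probability values below are invented):

```python
def top_p_filter(probs, p):
    """Keep the smallest set of outcomes whose cumulative probability
    reaches p (scanning from most to least probable), then renormalize.
    `probs` maps token -> probability."""
    total, kept = 0.0, {}
    for tok, pr in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[tok] = pr
        total += pr
        if total >= p:
            break
    return {tok: pr / total for tok, pr in kept.items()}

dist = {"the": 0.5, "a": 0.3, "cat": 0.15, "xylophone": 0.05}
filtered = top_p_filter(dist, p=0.9)
print(sorted(filtered))  # -> ['a', 'cat', 'the']: the low-probability tail is dropped
```

Sampling from the renormalized `filtered` distribution (rather than greedily or with pure beam search) is what lets the paraphraser produce diverse outputs.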
Please cite the following if you use this model:
```bibtex
@inproceedings{nighojkar-licato-2021-improving,
title = "Improving Paraphrase Detection with the Adversarial Paraphrasing Task",
author = "Nighojkar, Animesh and
Licato, John",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.552",
pages = "7106--7116",
abstract = "If two sentences have the same meaning, it should follow that they are equivalent in their inferential properties, i.e., each sentence should textually entail the other. However, many paraphrase datasets currently in widespread use rely on a sense of paraphrase based on word overlap and syntax. Can we teach them instead to identify paraphrases in a way that draws on the inferential properties of the sentences, and is not over-reliant on lexical and syntactic similarities of a sentence pair? We apply the adversarial paradigm to this question, and introduce a new adversarial method of dataset creation for paraphrase identification: the Adversarial Paraphrasing Task (APT), which asks participants to generate semantically equivalent (in the sense of mutually implicative) but lexically and syntactically disparate paraphrases. These sentence pairs can then be used both to test paraphrase identification models (which get barely random accuracy) and then improve their performance. To accelerate dataset generation, we explore automation of APT using T5, and show that the resulting dataset also improves accuracy. We discuss implications for paraphrase detection and release our dataset in the hope of making paraphrase detection models better able to detect sentence-level meaning equivalence.",
}
``` | 2,555 | [
[
-0.022186279296875,
-0.074462890625,
0.0282745361328125,
0.0087890625,
-0.03143310546875,
-0.006610870361328125,
-0.0018949508666992188,
-0.018585205078125,
-0.0006928443908691406,
0.04486083984375,
-0.0251617431640625,
-0.0248565673828125,
-0.0513916015625,
... |
nyu-mll/roberta-base-10M-2 | 2021-05-20T18:58:09.000Z | [
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | nyu-mll | null | null | nyu-mll/roberta-base-10M-2 | 0 | 432 | transformers | 2022-03-02T23:29:05 | # RoBERTa Pretrained on Smaller Datasets
We pretrain RoBERTa on smaller datasets (1M, 10M, 100M, 1B tokens). We release the 3 models with the lowest perplexities for each pretraining data size, out of 25 runs (or 10 in the case of 1B tokens). The pretraining data reproduces that of BERT: we combine English Wikipedia and a reproduction of BookCorpus using texts from Smashwords in a ratio of approximately 3:1.
### Hyperparameters and Validation Perplexity
The hyperparameters and validation perplexities corresponding to each model are as follows:
| Model Name | Training Size | Model Size | Max Steps | Batch Size | Validation Perplexity |
|--------------------------|---------------|------------|-----------|------------|-----------------------|
| [roberta-base-1B-1][link-roberta-base-1B-1] | 1B | BASE | 100K | 512 | 3.93 |
| [roberta-base-1B-2][link-roberta-base-1B-2] | 1B | BASE | 31K | 1024 | 4.25 |
| [roberta-base-1B-3][link-roberta-base-1B-3] | 1B | BASE | 31K | 4096 | 3.84 |
| [roberta-base-100M-1][link-roberta-base-100M-1] | 100M | BASE | 100K | 512 | 4.99 |
| [roberta-base-100M-2][link-roberta-base-100M-2] | 100M | BASE | 31K | 1024 | 4.61 |
| [roberta-base-100M-3][link-roberta-base-100M-3] | 100M | BASE | 31K | 512 | 5.02 |
| [roberta-base-10M-1][link-roberta-base-10M-1] | 10M | BASE | 10K | 1024 | 11.31 |
| [roberta-base-10M-2][link-roberta-base-10M-2] | 10M | BASE | 10K | 512 | 10.78 |
| [roberta-base-10M-3][link-roberta-base-10M-3] | 10M | BASE | 31K | 512 | 11.58 |
| [roberta-med-small-1M-1][link-roberta-med-small-1M-1] | 1M | MED-SMALL | 100K | 512 | 153.38 |
| [roberta-med-small-1M-2][link-roberta-med-small-1M-2] | 1M | MED-SMALL | 10K | 512 | 134.18 |
| [roberta-med-small-1M-3][link-roberta-med-small-1M-3] | 1M | MED-SMALL | 31K | 512 | 139.39 |
The hyperparameters corresponding to model sizes mentioned above are as follows:
| Model Size | L | AH | HS | FFN | P |
|------------|----|----|-----|------|------|
| BASE | 12 | 12 | 768 | 3072 | 125M |
| MED-SMALL | 6 | 8 | 512 | 2048 | 45M |
(AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters.)
For other hyperparameters, we select:
- Peak Learning rate: 5e-4
- Warmup Steps: 6% of max steps
- Dropout: 0.1
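For concreteness, the "6% of max steps" warmup with a peak learning rate of 5e-4 translates into a schedule like the following (a sketch only — the decay shape after warmup is assumed linear here and may differ from the exact recipe used):

```python
def lr_at(step, max_steps, peak_lr=5e-4, warmup_frac=0.06):
    """Linear warmup to peak_lr over warmup_frac * max_steps, then linear decay to 0."""
    warmup_steps = round(warmup_frac * max_steps)
    if step < warmup_steps:
        return peak_lr * (step / warmup_steps)
    return peak_lr * ((max_steps - step) / (max_steps - warmup_steps))

max_steps = 10_000                     # e.g. the 10K-step runs in the table
print(round(0.06 * max_steps))          # 600 warmup steps
```

So the 10K-step runs warm up for 600 steps, while the 100K-step runs warm up for 6,000.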
[link-roberta-med-small-1M-1]: https://huggingface.co/nyu-mll/roberta-med-small-1M-1
[link-roberta-med-small-1M-2]: https://huggingface.co/nyu-mll/roberta-med-small-1M-2
[link-roberta-med-small-1M-3]: https://huggingface.co/nyu-mll/roberta-med-small-1M-3
[link-roberta-base-10M-1]: https://huggingface.co/nyu-mll/roberta-base-10M-1
[link-roberta-base-10M-2]: https://huggingface.co/nyu-mll/roberta-base-10M-2
[link-roberta-base-10M-3]: https://huggingface.co/nyu-mll/roberta-base-10M-3
[link-roberta-base-100M-1]: https://huggingface.co/nyu-mll/roberta-base-100M-1
[link-roberta-base-100M-2]: https://huggingface.co/nyu-mll/roberta-base-100M-2
[link-roberta-base-100M-3]: https://huggingface.co/nyu-mll/roberta-base-100M-3
[link-roberta-base-1B-1]: https://huggingface.co/nyu-mll/roberta-base-1B-1
[link-roberta-base-1B-2]: https://huggingface.co/nyu-mll/roberta-base-1B-2
[link-roberta-base-1B-3]: https://huggingface.co/nyu-mll/roberta-base-1B-3
| 3,820 | [
[
-0.0379638671875,
-0.0276031494140625,
0.0240325927734375,
0.019622802734375,
-0.017364501953125,
-0.02099609375,
-0.0199737548828125,
-0.0293731689453125,
0.025726318359375,
0.0190582275390625,
-0.06427001953125,
-0.050994873046875,
-0.055389404296875,
0.01... |
nguyenvulebinh/wav2vec2-large-vi-vlsp2020 | 2023-02-21T08:56:01.000Z | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"vi",
"dataset:vlsp-asr-2020",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | nguyenvulebinh | null | null | nguyenvulebinh/wav2vec2-large-vi-vlsp2020 | 1 | 432 | transformers | 2022-11-04T21:32:45 | ---
language: vi
datasets:
- vlsp-asr-2020
tags:
- audio
- automatic-speech-recognition
license: cc-by-nc-4.0
---
## Model description
Our models use the wav2vec2 architecture, pre-trained on 13k hours of Vietnamese YouTube audio (unlabeled data) and fine-tuned on 250 hours of labeled data from the VLSP ASR dataset, on 16kHz sampled speech audio. You can find more description [here](https://github.com/nguyenvulebinh/vietnamese-wav2vec2)
## Benchmark WER result on VLSP T1 testset:
| | [base model](https://huggingface.co/nguyenvulebinh/wav2vec2-base-vi-vlsp2020) | [large model](https://huggingface.co/nguyenvulebinh/wav2vec2-large-vi-vlsp2020) |
|---|---|---|
|without LM| 8.66 | 6.90 |
|with 5-grams LM| 6.53 | 5.32 |
## Usage
[](https://colab.research.google.com/drive/1z3FQUQ2t7nIPR-dBR4bkcee6oCDGmcd4?usp=sharing)
```python
#pytorch
#!pip install transformers==4.20.0
#!pip install https://github.com/kpu/kenlm/archive/master.zip
#!pip install pyctcdecode==0.4.0
#!pip install huggingface_hub==0.10.0
from transformers.file_utils import cached_path, hf_bucket_url
from importlib.machinery import SourceFileLoader
from transformers import Wav2Vec2ProcessorWithLM
from IPython.lib.display import Audio
import torchaudio
import torch
# Load model & processor
model_name = "nguyenvulebinh/wav2vec2-large-vi-vlsp2020"
model = SourceFileLoader("model", cached_path(hf_bucket_url(model_name,filename="model_handling.py"))).load_module().Wav2Vec2ForCTC.from_pretrained(model_name)
processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_name)
# Load an example audio (16k)
audio, sample_rate = torchaudio.load(cached_path(hf_bucket_url(model_name, filename="t2_0000006682.wav")))
input_data = processor.feature_extractor(audio[0], sampling_rate=16000, return_tensors='pt')
# Infer
output = model(**input_data)
# Output transcript without LM
print(processor.tokenizer.decode(output.logits.argmax(dim=-1)[0].detach().cpu().numpy()))
# Output transcript with LM
print(processor.decode(output.logits.cpu().detach().numpy()[0], beam_width=100).text)
```
### Model Parameters License
The ASR model parameters are made available for non-commercial use only, under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. You can find details at: https://creativecommons.org/licenses/by-nc/4.0/legalcode
### Contact
nguyenvulebinh@gmail.com
[](https://twitter.com/intent/follow?screen_name=nguyenvulebinh) | 2,595 | [
[
-0.0225982666015625,
-0.0399169921875,
0.01132965087890625,
0.0108489990234375,
-0.0135650634765625,
-0.007965087890625,
-0.018280029296875,
-0.0299530029296875,
-0.0068206787109375,
0.032135009765625,
-0.042572021484375,
-0.047088623046875,
-0.052642822265625,
... |
BIDEQUITY/autotrain-software_picture_preselection_classifier-2804582686 | 2023-01-10T11:08:27.000Z | [
"transformers",
"pytorch",
"convnext",
"image-classification",
"autotrain",
"vision",
"dataset:BIDEQUITY/autotrain-data-software_picture_preselection_classifier",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | BIDEQUITY | null | null | BIDEQUITY/autotrain-software_picture_preselection_classifier-2804582686 | 0 | 432 | transformers | 2023-01-10T11:06:08 | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- BIDEQUITY/autotrain-data-software_picture_preselection_classifier
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 2.0734204068239874
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 2804582686
- CO2 Emissions (in grams): 2.0734
## Validation Metrics
- Loss: 0.209
- Accuracy: 0.973
- Macro F1: 0.980
- Micro F1: 0.973
- Weighted F1: 0.973
- Macro Precision: 0.980
- Micro Precision: 0.973
- Weighted Precision: 0.973
- Macro Recall: 0.980
- Micro Recall: 0.973
- Weighted Recall: 0.973 | 910 | [
[
-0.0218048095703125,
-0.0113525390625,
0.017974853515625,
0.0013408660888671875,
0.00595855712890625,
0.012847900390625,
0.0079193115234375,
-0.014434814453125,
-0.0189208984375,
-0.00458526611328125,
-0.03173828125,
-0.042510986328125,
-0.0489501953125,
-0.... |
NickKolok/meryl-stryfe-20230123-2300-6k-3600-steps | 2023-01-22T22:59:28.000Z | [
"diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | NickKolok | null | null | NickKolok/meryl-stryfe-20230123-2300-6k-3600-steps | 0 | 432 | diffusers | 2023-01-22T22:30:08 | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### Meryl_Stryfe_20230123_2300_6k_3600_steps on Stable Diffusion via Dreambooth
#### model by NickKolok
This is the Stable Diffusion model fine-tuned on the Meryl_Stryfe_20230123_2300_6k_3600_steps concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **merylstryfetrigun**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Here are the images used for training this concept:


















































| 8,125 | [
[
-0.07452392578125,
-0.031524658203125,
0.01483154296875,
0.01202392578125,
-0.03173828125,
-0.0126953125,
-0.00673675537109375,
-0.06402587890625,
0.0867919921875,
0.0201873779296875,
-0.053863525390625,
-0.041595458984375,
-0.046112060546875,
0.016220092773... |
dooglet/doog-the | 2023-07-19T03:42:45.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | dooglet | null | null | dooglet/doog-the | 0 | 432 | diffusers | 2023-07-18T23:04:34 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Dreambooth model i created based on 10 images of my character
### colab i used --> [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb)
### this too i think --> [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
this model is used to generate silly images based
<br> on this character ↓ called doog
<div style="display: flex; margin-bottom: 0;">
<img src="https://cdn.discordapp.com/attachments/645710686445371422/1131063764494581842/doog_1.png " width="150">
<img src="https://cdn.discordapp.com/attachments/645710686445371422/1131063764842729562/doog_4.png" width="150">
</div>
Here are some images I generated:
<img src="https://cdn.discordapp.com/attachments/645710686445371422/1131057674709119048/6066c136-1b82-42f1-9d7a-ace7fc73af25.jfif" width="450" height="450">
<img src="https://cdn.discordapp.com/attachments/645710686445371422/1131063075454328863/bc2ce55a-459a-4454-b315-97be9dcfac27.png" width="450" height="450">
<div style="display: flex; margin-bottom: 0;">
<img src="https://cdn.discordapp.com/attachments/645710686445371422/1131054347053191228/image0.png" width="150" height="150">
<img src="https://cdn.discordapp.com/attachments/645710686445371422/1131056146162122872/image0_6.png" width="150" height="150">
<img src="https://cdn.discordapp.com/attachments/645710686445371422/1131056209223487488/00000-2492683076.png" width="150" height="150">
</div><div style="display: flex; margin-bottom: 0;"><img src="https://cdn.discordapp.com/attachments/645710686445371422/1131056231130333224/doog_hehehehe.jfif" width="150" height="150">
<img src="https://cdn.discordapp.com/attachments/645710686445371422/1131056740536954900/212c921a-d277-4072-8d6f-b29f380714cc.jfif" width="150" height="150">
<img src="https://cdn.discordapp.com/attachments/645710686445371422/1131056810766381116/f4119298-efe4-4ac6-9393-6503f6be3422.jfif" width="150" height="150">
</div><div style="display: flex; margin-bottom: 0;"><img src="https://cdn.discordapp.com/attachments/645710686445371422/1131056874062614558/2e055c4e-bbae-412e-8d22-18cdda24591a.jfif" width="150" height="150">
<img src="https://cdn.discordapp.com/attachments/645710686445371422/1131056908707561583/4660623c-dc7d-4e5a-8265-88d101d0710c.jfif" width="150" height="150">
<img src="https://cdn.discordapp.com/attachments/645710686445371422/1131056924251668581/416fe7d7-e195-484a-a880-9c4b5206743c.jfif" width="150" height="150">
</div><div style="display: flex; margin-bottom: 0;"><img src="https://cdn.discordapp.com/attachments/645710686445371422/1131056953645342801/8bdf2473-b594-4739-a00b-909f8e3c9d2a.jfif" width="150" height="150">
<img src="https://cdn.discordapp.com/attachments/645710686445371422/1131057206503157832/5112121e-a08e-4ae2-8855-ac3da641691a.jfif" width="150" height="150">
<img src="https://cdn.discordapp.com/attachments/645710686445371422/1131057206868054106/95738e02-60fd-4aa7-8ee7-d3790b651601.jfif" width="150" height="150">
</div><div style="display: flex; margin-bottom: 0;"><img src="https://cdn.discordapp.com/attachments/645710686445371422/1131057207258132480/18fdfd80-b1d1-464d-9fbc-f91de3318c7c.jfif" width="150" height="150">
<img src="https://cdn.discordapp.com/attachments/645710686445371422/1131057207593668618/e582ee8e-5ad6-4d33-8883-31b67bd29418.jfif" width="150" height="150">
<img src="https://cdn.discordapp.com/attachments/645710686445371422/1131057207987941426/afc5c32d-07a3-4d21-9b36-703c267fe6bf.jfif" width="150" height="150">
</div><div style="display: flex; margin-bottom: 0;"><img src="https://cdn.discordapp.com/attachments/645710686445371422/1131057208289939536/21016bdf-0539-44a1-acf3-dbbce09cdb39.jfif" width="150" height="150">
<img src="https://cdn.discordapp.com/attachments/645710686445371422/1131057208726130778/321ba20f-9de8-43e1-bc42-597c8694bd60.jfif" width="150" height="150">
<img src="https://cdn.discordapp.com/attachments/645710686445371422/1131057209070080011/d25df0c6-eb84-4ed8-aa96-2a26a9865a43.jfif" width="150" height="150">
</div><div style="display: flex; margin-bottom: 0;"><img src="https://cdn.discordapp.com/attachments/645710686445371422/1131057220054949918/6b88ca85-3772-4cd0-a4f9-d6fca8c1f25b.jfif" width="150" height="150">
<img src="https://cdn.discordapp.com/attachments/645710686445371422/1131057253227692132/cdf55b89-8c71-4e60-8e1a-58b6b5dec355.jfif" width="150" height="150">
<img src="https://cdn.discordapp.com/attachments/645710686445371422/1131057253710053456/a0eb9c7e-ddeb-4319-8925-7e252c3354ba.jfif" width="150" height="150">
</div><div style="display: flex; margin-bottom: 0;"><img src="https://cdn.discordapp.com/attachments/645710686445371422/1131057273444245656/0e681281-bb05-4c65-907b-deee31760cfa.jfif" width="150" height="150">
<img src="https://cdn.discordapp.com/attachments/645710686445371422/1131057305312571473/14b5cf03-4c3f-4a3a-bc89-0c65c67b27eb.jfif" width="150" height="150">
<img src="https://cdn.discordapp.com/attachments/645710686445371422/1131057334790139904/6a92244e-3b93-422c-9d25-447bbb13c48f.jfif" width="150" height="150">
</div><div style="display: flex; margin-bottom: 0;"><img src="https://cdn.discordapp.com/attachments/645710686445371422/1131057357883969576/33fa4f89-a71e-469f-8b76-f1c4a5e3b2a1.jfif" width="150" height="150">
<img src="https://cdn.discordapp.com/attachments/645710686445371422/1131057389173477386/904b8a7c-dbee-4e8a-bb7a-3da0932eb217.jfif" width="150" height="150">
<img src="https://cdn.discordapp.com/attachments/645710686445371422/1131057417808007308/76f15e77-7ffe-414a-80fe-203d1d3c9b77.jfif" width="150" height="150">
</div><div style="display: flex; margin-bottom: 0;"><img src="https://cdn.discordapp.com/attachments/645710686445371422/1131057441908478022/15861816-a8b7-484a-ab87-9cb5aaf3071b.jfif" width="150" height="150">
<img src="https://cdn.discordapp.com/attachments/645710686445371422/1131057469163048970/e7f67ad2-5667-4fe3-af6a-d99248e3ce9f.jfif" width="150" height="150">
<img src="https://cdn.discordapp.com/attachments/645710686445371422/1131057481569812540/7f8673da-dfae-475f-a199-2b22acdf9c58.jfif" width="150" height="150">
</div><div style="display: flex; margin-bottom: 0;"><img src="https://cdn.discordapp.com/attachments/645710686445371422/1131057495947886603/328c32e5-1e2f-4e84-ac34-531d87d407b7.jfif" width="150" height="150">
<img src="https://cdn.discordapp.com/attachments/645710686445371422/1131057514323128400/c600c696-3abb-409d-9497-35d0cf4d3c4d.jfif" width="150" height="150">
<img src="https://cdn.discordapp.com/attachments/645710686445371422/1131057550889074778/4c57cae1-630b-4a8a-af43-a614c9d0787f.jfif" width="150" height="150">
</div><div style="display: flex; margin-bottom: 0;"><img src="https://cdn.discordapp.com/attachments/645710686445371422/1131057571705405440/a21a32c1-06c9-47c5-9b1d-832f5c694bf5.jfif" width="150" height="150">
<img src="https://cdn.discordapp.com/attachments/645710686445371422/1131057595671654541/dc20ceaf-57ca-44a6-8899-49840916d8c9.jfif" width="150" height="150">
<img src="https://cdn.discordapp.com/attachments/645710686445371422/1131057611349950505/60fff45a-1bd5-4b71-a2b6-a3801a25b851.jfif" width="150" height="150">
</div><div style="display: flex; margin-bottom: 0;"><img src="https://cdn.discordapp.com/attachments/645710686445371422/1131057643822260314/aa84395e-6dc5-4263-86d6-4d796bb3ab11.jfif" width="150" height="150">
<img src="https://cdn.discordapp.com/attachments/645710686445371422/1131057675032076288/daf44c81-4a37-4783-8078-d6f63e17dd5f.jfif" width="150" height="150">
<img src="https://cdn.discordapp.com/attachments/645710686445371422/1131057675413766244/fe40ea52-73d1-4875-9d3b-ec36111a67bf.jfif" width="150" height="150">
</div><div style="display: flex; margin-bottom: 0;"><img src="https://cdn.discordapp.com/attachments/645710686445371422/1131057727565725736/cef5ba6f-d349-486a-8778-5bd9ba13ccea.jfif" width="150" height="150">
<img src="https://cdn.discordapp.com/attachments/645710686445371422/1131057727909675058/09d827be-d08b-45b8-882b-2d7314d78088.jfif" width="150" height="150">
<img src="https://cdn.discordapp.com/attachments/645710686445371422/1131057728345886823/dd972746-d243-4dd5-809e-27480b65429e.jfif" width="150" height="150"></div>
I'm not really an AI guy, so I don't know the best prompts. <br>
If you generate any funny ones, please show them to me. I wanna see!
[
-0.05487060546875,
-0.04461669921875,
0.01319122314453125,
-0.024444580078125,
-0.009552001953125,
0.0160980224609375,
0.0253448486328125,
-0.049285888671875,
0.0384521484375,
-0.00858306884765625,
-0.06280517578125,
-0.017059326171875,
-0.051605224609375,
0... |
aspis/swin-finetuned-food101 | 2022-06-28T11:02:36.000Z | [
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:food101",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | aspis | null | null | aspis/swin-finetuned-food101 | 3 | 431 | transformers | 2022-06-09T10:48:09 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- food101
metrics:
- accuracy
model-index:
- name: swin-finetuned-food101
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: food101
type: food101
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9210297029702971
- task:
type: image-classification
name: Image Classification
dataset:
name: food101
type: food101
config: default
split: validation
metrics:
- name: Accuracy
type: accuracy
value: 0.9135841584158416
verified: true
- name: Precision Macro
type: precision
value: 0.9151645786633058
verified: true
- name: Precision Micro
type: precision
value: 0.9135841584158416
verified: true
- name: Precision Weighted
type: precision
value: 0.915164578663306
verified: true
- name: Recall Macro
type: recall
value: 0.9135841584158414
verified: true
- name: Recall Micro
type: recall
value: 0.9135841584158416
verified: true
- name: Recall Weighted
type: recall
value: 0.9135841584158416
verified: true
- name: F1 Macro
type: f1
value: 0.9138785016966742
verified: true
- name: F1 Micro
type: f1
value: 0.9135841584158415
verified: true
- name: F1 Weighted
type: f1
value: 0.9138785016966743
verified: true
- name: loss
type: loss
value: 0.30761435627937317
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-finetuned-food101
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2772
- Accuracy: 0.9210
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
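The step counts in the results table can be sanity-checked against these hyperparameters. A small sketch (assumptions: the 75,750-image Food-101 training split, floor division for optimizer updates per epoch, and `derived_schedule` is a hypothetical helper, not a Trainer API):

```python
# Hypothetical helper: derive schedule quantities from the hyperparameters
# listed above (assumes floor division, as the step counts suggest).
def derived_schedule(train_samples, per_device_batch, grad_accum, epochs, warmup_ratio):
    total_batch = per_device_batch * grad_accum      # 16 * 4 = 64
    steps_per_epoch = train_samples // total_batch   # optimizer updates per epoch
    total_steps = steps_per_epoch * epochs
    warmup_steps = int(total_steps * warmup_ratio)
    return total_batch, steps_per_epoch, total_steps, warmup_steps

# Food-101 ships 75,750 training images.
print(derived_schedule(75_750, 16, 4, 3, 0.1))  # → (64, 1183, 3549, 354)
```

This is consistent with the 1183 / 2366 / 3549 step checkpoints reported for epochs 1–3.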
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5077 | 1.0 | 1183 | 0.3851 | 0.8893 |
| 0.3523 | 2.0 | 2366 | 0.3124 | 0.9088 |
| 0.1158 | 3.0 | 3549 | 0.2772 | 0.9210 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 3,099 | [
[
-0.025390625,
-0.03216552734375,
-0.0023326873779296875,
0.01129150390625,
-0.00710296630859375,
-0.03399658203125,
-0.00907135009765625,
-0.023284912109375,
0.00820159912109375,
0.017059326171875,
-0.05096435546875,
-0.03778076171875,
-0.043914794921875,
-0... |
philschmid/instruct-igel-001 | 2023-10-27T07:02:50.000Z | [
"transformers",
"pytorch",
"bloom",
"text-generation",
"LLM",
"de",
"license:bigscience-openrail-m",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | philschmid | null | null | philschmid/instruct-igel-001 | 40 | 431 | transformers | 2023-04-03T06:57:57 | ---
language:
- de
pipeline_tag: text-generation
library_name: transformers
tags:
- bloom
- LLM
inference: false
widget:
- text: TODO
license: bigscience-openrail-m
---
# IGEL: Instruction-tuned German large Language Model for Text
IGEL is an LLM family developed for German. The first version of IGEL is built on top of **[BigScience BLOOM](https://bigscience.huggingface.co/blog/bloom)**, adapted to German by **[Malte Ostendorff](https://huggingface.co/malteos/bloom-6b4-clp-german)**. IGEL is designed to provide accurate and reliable language understanding capabilities for a wide range of natural language understanding tasks, including sentiment analysis, language translation, and question answering.
### **You can try out the model at [igel-playground](https://huggingface.co/spaces/philschmid/igel-playground).**
The IGEL family currently includes `instruct-igel-001` and `chat-igel-001` _(coming soon)_.
## Model Description
LoRA-tuned [BLOOM-CLP German (6.4B parameters)](https://huggingface.co/malteos/bloom-6b4-clp-german) with merged weights. The `001` release was designed as a naive test to determine whether it is possible to create a German instruction-tuned model using a small, undertrained LLM and a naively translated dataset. The goal of this test was to explore the potential of the BLOOM architecture for language modeling tasks that require instruction-based responses.
To achieve this goal, we used a pre-trained LLM with limited training and fine-tuned it on a dataset of naive translations of instruction-based content. The dataset was created by taking instructions in English and translating them into German with an automated translation tool. While this approach may introduce errors in the translated content, we wanted to test whether the model could still learn to generate instruction-based responses in a variety of languages.
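As a rough illustration of the "LoRA tuned … with merged weights" phrase above, here is a minimal NumPy sketch of folding low-rank adapter factors back into a frozen base weight. The shapes, rank, and scaling below are made up for illustration; the actual IGEL adapter configuration is not published in this card.

```python
import numpy as np

def merge_lora(W, A, B, alpha, r):
    # W: (out, in) frozen base weight; A: (r, in) and B: (out, r) are the
    # trained low-rank factors; alpha / r is the usual LoRA scaling.
    return W + (alpha / r) * (B @ A)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))   # hypothetical base weight
A = rng.standard_normal((2, 8))   # rank r = 2
B = rng.standard_normal((8, 2))
W_merged = merge_lora(W, A, B, alpha=16, r=2)
```

After merging, the adapter can be discarded and the checkpoint served like any plain BLOOM model, which is why the released weights need no extra LoRA code at inference time.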
## Training data
`instruct-igel-001` is trained on naive translated instruction datasets, without much post-processing.
### Known limitations
`instruct-igel-001` also exhibits several common deficiencies of language models, including hallucination, toxicity, and stereotypes.
For example, in the following figure, `instruct-igel-001` wrongly says that the chancellor of Germany is Angela Merkel.

### Training procedure
_coming soon_
## How to use
You can test the model in this LLM playground.
_coming soon_ | 2,439 | [
[
-0.027557373046875,
-0.08428955078125,
0.021575927734375,
0.047760009765625,
0.00902557373046875,
-0.009674072265625,
-0.0263824462890625,
-0.0295867919921875,
-0.01111602783203125,
0.02569580078125,
-0.058990478515625,
-0.04791259765625,
-0.04443359375,
0.0... |
timm/mvitv2_large_cls.fb_inw21k | 2023-04-13T00:49:04.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-22k",
"arxiv:2112.01526",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/mvitv2_large_cls.fb_inw21k | 0 | 431 | timm | 2023-04-13T00:45:46 | ---
tags:
- image-classification
- timm
library_tag: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-22k
---
# Model card for mvitv2_large_cls.fb_inw21k
A MViT-v2 (multi-scale ViT) image classification model. Pretrained on ImageNet-22k (Winter21 variant) and fine-tuned on ImageNet-1k by the paper authors. Note: the classifier layout for this model was not shared and does not match the expected lexicographically sorted synset order.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 234.6
- GMACs: 42.2
- Activations (M): 111.7
- Image size: 224 x 224
- **Papers:**
- MViTv2: Improved Multiscale Vision Transformers for Classification and Detection: https://arxiv.org/abs/2112.01526
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-22k
- **Original:** https://github.com/facebookresearch/mvit
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('mvitv2_large_cls.fb_inw21k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
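For readers unfamiliar with the `torch.topk` line above, here is a pure-Python sketch of the same computation: a softmax over the logits, scaled to percentages, followed by the k largest entries. This is illustrative only; the real call operates on the model's logits tensor.

```python
import math

def topk_percent(logits, k):
    # Numerically stable softmax, scaled to percentages.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [100.0 * e / total for e in exps]
    # Indices of the k largest probabilities, best first.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])[:k]
    return [(i, probs[i]) for i in order]

print(topk_percent([2.0, 1.0, 0.1, -1.0], k=2))
```

The returned indices are class ids; keep in mind the caveat above that this model's classifier layout does not follow the usual sorted-synset order, so they must be decoded with the matching label mapping.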
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'mvitv2_large_cls.fb_inw21k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 50, 1152) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
| 2,553 | [
[
-0.035430908203125,
-0.0224609375,
0.00007432699203491211,
0.0230712890625,
-0.0335693359375,
-0.026123046875,
-0.00650787353515625,
-0.019317626953125,
0.01316070556640625,
0.0289306640625,
-0.047027587890625,
-0.0377197265625,
-0.0577392578125,
-0.01486968... |
TokenfreeEMNLPSubmission/bert-base-finetuned-pos-ud-english-ewt | 2023-05-06T04:30:29.000Z | [
"transformers",
"pytorch",
"bert",
"token-classification",
"canine",
"pretrained-on-english-language",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | TokenfreeEMNLPSubmission | null | null | TokenfreeEMNLPSubmission/bert-base-finetuned-pos-ud-english-ewt | 1 | 431 | transformers | 2023-05-06T04:30:18 | ---
license: apache-2.0
tags:
- canine
- pretrained-on-english-language
---
### How to use
Here is how to use this model:
```python
from transformers import CanineModel
model = CanineModel.from_pretrained('mushfiqur11/<repo name>')
``` | 238 | [
[
-0.004276275634765625,
0.0004057884216308594,
-0.0018606185913085938,
0.00745391845703125,
-0.0229339599609375,
-0.004047393798828125,
0.0148162841796875,
0.00775146484375,
0.006633758544921875,
0.0384521484375,
-0.05633544921875,
-0.01148223876953125,
-0.029373... |