repo_id stringlengths 4 110 | author stringlengths 2 27 ⌀ | model_type stringlengths 2 29 ⌀ | files_per_repo int64 2 15.4k | downloads_30d int64 0 19.9M | library stringlengths 2 37 ⌀ | likes int64 0 4.34k | pipeline stringlengths 5 30 ⌀ | pytorch bool 2 classes | tensorflow bool 2 classes | jax bool 2 classes | license stringlengths 2 30 | languages stringlengths 4 1.63k ⌀ | datasets stringlengths 2 2.58k ⌀ | co2 stringclasses 29 values | prs_count int64 0 125 | prs_open int64 0 120 | prs_merged int64 0 15 | prs_closed int64 0 28 | discussions_count int64 0 218 | discussions_open int64 0 148 | discussions_closed int64 0 70 | tags stringlengths 2 513 | has_model_index bool 2 classes | has_metadata bool 1 class | has_text bool 1 class | text_length int64 401 598k | is_nc bool 1 class | readme stringlengths 0 598k | hash stringlengths 32 32 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
seomh/distilbert-base-uncased-finetuned-squad | seomh | distilbert | 12 | 3 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,284 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0083
## Model description
More information needed
## Intended uses & limitations
More information needed
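No usage example is provided; as a minimal sketch, assuming this checkpoint works with the standard `transformers` question-answering pipeline (as its pipeline tag suggests):
```python
from transformers import pipeline

# Hedged sketch: assumes the standard question-answering pipeline applies.
qa = pipeline("question-answering", model="seomh/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```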
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2258 | 1.0 | 5533 | 0.0560 |
| 0.952 | 2.0 | 11066 | 0.0096 |
| 0.7492 | 3.0 | 16599 | 0.0083 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 3480352677e4591705f6749e32757783 |
yuhuizhang/finetuned_gpt2-large_sst2_negation0.8 | yuhuizhang | gpt2 | 11 | 0 | transformers | 0 | text-generation | true | false | false | mit | null | ['sst2'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,248 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_gpt2-large_sst2_negation0.8
This model is a fine-tuned version of [gpt2-large](https://huggingface.co/gpt2-large) on the sst2 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6201
## Model description
More information needed
## Intended uses & limitations
More information needed
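No usage example is provided; a minimal sketch, assuming the standard `transformers` text-generation pipeline applies to this checkpoint:
```python
from transformers import pipeline

# Hedged sketch: assumes the standard text-generation pipeline applies.
generator = pipeline("text-generation", model="yuhuizhang/finetuned_gpt2-large_sst2_negation0.8")
print(generator("The movie was", max_new_tokens=20)[0]["generated_text"])
```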
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3586 | 1.0 | 1111 | 3.3100 |
| 1.812 | 2.0 | 2222 | 3.5114 |
| 1.5574 | 3.0 | 3333 | 3.6201 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.12.1
| 9b4e290512323ab29c33817d9ee6f5c9 |
google/maxim-s3-deblurring-realblur-r | google | null | 6 | 11 | keras | 1 | image-to-image | false | false | false | apache-2.0 | ['en'] | ['realblur_r'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['vision', 'maxim', 'image-to-image'] | false | true | true | 2,532 | false |
# MAXIM pre-trained on RealBlur-R for image deblurring
MAXIM model pre-trained for image deblurring. It was introduced in the paper [MAXIM: Multi-Axis MLP for Image Processing](https://arxiv.org/abs/2201.02973) by Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar, Alan Bovik, Yinxiao Li and first released in [this repository](https://github.com/google-research/maxim).
Disclaimer: The team releasing MAXIM did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MAXIM introduces a shared MLP-based backbone for different image processing tasks such as image deblurring, deraining, denoising, dehazing, low-light image enhancement, and retouching. The following figure depicts the main components of MAXIM:

## Training procedure and results
The authors didn't release the training code. For more details on how the model was trained, refer to the [original paper](https://arxiv.org/abs/2201.02973).
As per the [table](https://github.com/google-research/maxim#results-and-pre-trained-models), the model achieves a PSNR of 39.45 and an SSIM of 0.962.
## Intended uses & limitations
You can use the raw model for image deblurring tasks.
The model is [officially released in JAX](https://github.com/google-research/maxim). It was ported to TensorFlow in [this repository](https://github.com/sayakpaul/maxim-tf).
### How to use
Here is how to use this model:
```python
from huggingface_hub import from_pretrained_keras
from PIL import Image
import tensorflow as tf
import numpy as np
import requests
url = "https://github.com/sayakpaul/maxim-tf/raw/main/images/Deblurring/input/1fromGOPR0950.png"
image = Image.open(requests.get(url, stream=True).raw)
image = np.array(image)
image = tf.convert_to_tensor(image)
image = tf.image.resize(image, (256, 256))
model = from_pretrained_keras("google/maxim-s3-deblurring-realblur-r")
predictions = model.predict(tf.expand_dims(image, 0))
```
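To turn `predictions` back into an image, here is a hedged post-processing sketch (it assumes the network returns pixel values in `[0, 1]`; see the notebook below for the full pipeline):
```python
import numpy as np
from PIL import Image

# Hedged sketch: assumes the prediction is an array of pixel values in [0, 1].
output = np.clip(np.array(predictions[0], dtype=np.float32), 0.0, 1.0)
Image.fromarray((output * 255.0).astype(np.uint8)).save("deblurred.png")
```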
For a more elaborate prediction pipeline, refer to [this Colab Notebook](https://colab.research.google.com/github/sayakpaul/maxim-tf/blob/main/notebooks/inference-dynamic-resize.ipynb).
### Citation
```bibtex
@article{tu2022maxim,
title={MAXIM: Multi-Axis MLP for Image Processing},
author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
journal={CVPR},
year={2022},
}
```
| 656f5737fb608a8a695876e3a24498e5 |
sd-dreambooth-library/angus-mcbride-style-v4 | sd-dreambooth-library | null | 64 | 4 | diffusers | 3 | null | false | false | false | mit | null | null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 6,229 | false | ### angus mcbride style v4 on Stable Diffusion via Dreambooth
#### model by hiero
This is the Stable Diffusion model fine-tuned on the angus mcbride style v4 concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **mcbride_style**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
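Other cards in this library ship an inference snippet; a comparable sketch for this model, assuming the standard `diffusers` API (the prompt below is only an illustration):
```python
from diffusers import StableDiffusionPipeline

# Hedged sketch: assumes the standard diffusers pipeline API; the prompt is illustrative.
pipeline = StableDiffusionPipeline.from_pretrained("sd-dreambooth-library/angus-mcbride-style-v4")
image = pipeline("a knight on horseback in mcbride_style").images[0]
image.save("knight.png")
```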
Here are the images used for training this concept:














































| 6604de7bbbb99e676d69c4cef8ca35c8 |
Helsinki-NLP/opus-mt-es-nso | Helsinki-NLP | marian | 10 | 31 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 | false |
### opus-mt-es-nso
* source languages: es
* target languages: nso
* OPUS readme: [es-nso](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-nso/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-nso/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-nso/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-nso/opus-2020-01-16.eval.txt)
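The card documents only training artefacts and benchmarks; as a minimal inference sketch, assuming the standard `transformers` MarianMT API applies to this checkpoint:
```python
from transformers import MarianMTModel, MarianTokenizer

# Hedged sketch: assumes the standard MarianMT API applies to this checkpoint.
model_name = "Helsinki-NLP/opus-mt-es-nso"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Hola, ¿cómo estás?"], return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```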
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.nso | 33.2 | 0.531 |
| 20487d85c8a2baa2310039642d7ed47e |
Nhat1904/test_trainer_XLNET_3ep_5e-5 | Nhat1904 | xlnet | 8 | 3 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,324 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer_XLNET_3ep_5e-5
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5405
- Accuracy: 0.8773
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7984 | 1.0 | 1125 | 0.6647 | 0.7923 |
| 0.5126 | 2.0 | 2250 | 0.4625 | 0.862 |
| 0.409 | 3.0 | 3375 | 0.5405 | 0.8773 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
| 11e0ea78245eb9e189b63c374039f5e9 |
daspartho/text-emotion | daspartho | distilbert | 10 | 1 | transformers | 1 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,527 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1414
- Accuracy: 0.9367
## Model description
More information needed
## Intended uses & limitations
More information needed
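No usage example is provided; a minimal sketch, assuming the standard `transformers` text-classification pipeline applies:
```python
from transformers import pipeline

# Hedged sketch: assumes the standard text-classification pipeline applies;
# the card does not document the emotion label set.
classifier = pipeline("text-classification", model="daspartho/text-emotion")
print(classifier("I can't believe we finally won!"))
```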
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 512
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0232 | 1.0 | 63 | 0.2424 | 0.917 |
| 0.1925 | 2.0 | 126 | 0.1600 | 0.934 |
| 0.1134 | 3.0 | 189 | 0.1418 | 0.935 |
| 0.076 | 4.0 | 252 | 0.1461 | 0.931 |
| 0.0604 | 5.0 | 315 | 0.1414 | 0.9367 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
| 19bbd6411a132d7b4051403d824592b0 |
Gnanesh5/SF5 | Gnanesh5 | xlnet | 6 | 3 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 900 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SF5
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
| bffe15bd7ed68e9ff182773ccb623419 |
anas-awadalla/gpt2-span-head-few-shot-k-32-finetuned-squad-seed-0 | anas-awadalla | gpt2 | 20 | 7 | transformers | 0 | question-answering | true | false | false | mit | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 968 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-span-head-few-shot-k-32-finetuned-squad-seed-0
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
| 238c0f3c272713d92cba3c824edc55ce |
pere/norwegian-t5-base | pere | t5 | 18 | 8 | transformers | 0 | text2text-generation | false | false | true | cc-by-4.0 | False | ['Norwegian Nynorsk/Bokmål'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['seq2seq'] | false | true | true | 895 | false | # 🇳🇴 Norwegian T5 Base model 🇳🇴
This T5-base model is trained from scratch on a 19GB Balanced Bokmål-Nynorsk Corpus.
Update: Due to disk space errors, the model had to be restarted on July 20. It is currently still running.
Parameters used in training:
```bash
python3 ./run_t5_mlm_flax_streaming.py \
    --model_name_or_path="./norwegian-t5-base" \
    --output_dir="./norwegian-t5-base" \
    --config_name="./norwegian-t5-base" \
    --tokenizer_name="./norwegian-t5-base" \
    --dataset_name="pere/nb_nn_balanced_shuffled" \
    --max_seq_length="512" \
    --per_device_train_batch_size="32" \
    --per_device_eval_batch_size="32" \
    --learning_rate="0.005" \
    --weight_decay="0.001" \
    --warmup_steps="2000" \
    --overwrite_output_dir \
    --logging_steps="100" \
    --save_steps="500" \
    --eval_steps="500" \
    --push_to_hub \
    --preprocessing_num_workers 96 \
    --adafactor
``` | d7cbe6eac76e9b0c7e079c09a9316d71 |
anas-awadalla/roberta-base-lora-squad | anas-awadalla | null | 22 | 0 | null | 0 | null | false | false | false | mit | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,022 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-lora-squad
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15.0
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
| 5ed9c0220a7de18cfd81e313c2b77532 |
funnel-transformer/small | funnel-transformer | funnel | 9 | 93,085 | transformers | 4 | feature-extraction | true | true | false | apache-2.0 | ['en'] | ['bookcorpus', 'wikipedia', 'gigaword'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 3,774 | false |
# Funnel Transformer small model (B4-4-4 with decoder)
Pretrained model on the English language using an objective similar to [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small")
model = FunnelModel.from_pretrained("funnel-transformer/small")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small")
model = TFFunnelModel.from_pretrained("funnel-transformer/small")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
| 0eb5743e0e3bc114272a0c18fdf20535 |
Kayvane/rick-and-morty-ramrick-character | Kayvane | null | 17 | 39 | diffusers | 0 | text-to-image | true | false | false | creativeml-openrail-m | null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'wildcard'] | false | true | true | 1,082 | false |
# DreamBooth model for the ramrick concept trained by Kayvane on the Kayvane/dreambooth-hackathon-rick-and-morty-images-square dataset.
Notes:
- trained on square images, 20k steps on google colab
- character name is ramrick, many pictures get blocked as nsfw - possibly because the subtoken #ick is close to something else
- model is trained for too many steps / overfitted as it is effectively recreating the input images
This is a Stable Diffusion model fine-tuned on the ramrick concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of ramrick character**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on `character` images for the wildcard theme.
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('Kayvane/rick-and-morty-ramrick-character')
image = pipeline().images[0]
image
```
| e61dea39c1f159bf09600296e67b22c8 |
Helsinki-NLP/opus-mt-lus-sv | Helsinki-NLP | marian | 10 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 | false |
### opus-mt-lus-sv
* source languages: lus
* target languages: sv
* OPUS readme: [lus-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lus-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lus-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lus.sv | 25.5 | 0.439 |
| af46eedd4bb7c755a0253f84757ac771 |
Xhaheen/srkay-man_6-1-2022 | Xhaheen | null | 17 | 136 | diffusers | 91 | text-to-image | true | false | false | creativeml-openrail-m | null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'wildcard'] | false | true | true | 1,288 | false |
# DreamBooth model for the srkay concept trained by Xhaheen on the Xhaheen/dreambooth-hackathon-images-srkman-2 dataset.
This is a Stable Diffusion model fine-tuned on Shah Rukh Khan images with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of srkay man**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Dataset used



## Description
This is a Stable Diffusion model fine-tuned on `man` images for the wildcard theme.
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('Xhaheen/srkay-man_6-1-2022')
image = pipeline().images[0]
image
```
[](https://colab.research.google.com/drive/1FmTaUN38enNdCgi4HxG0LMZ4HobM0Iq3?usp=sharing)
| d222eaa26bf88b1b457e96c76e6d086c |
deepset/gbert-large | deepset | null | 8 | 102,101 | transformers | 17 | fill-mask | true | true | false | mit | ['de'] | ['wikipedia', 'OPUS', 'OpenLegalData', 'oscar'] | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | [] | false | true | true | 2,787 | false |
# German BERT large
Released in October 2020, this is a German BERT language model trained collaboratively by the makers of the original German BERT (aka "bert-base-german-cased") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our [paper](https://arxiv.org/pdf/2010.10906.pdf), we outline the steps taken to train our model and show that it outperforms its predecessors.
## Overview
**Paper:** [here](https://arxiv.org/pdf/2010.10906.pdf)
**Architecture:** BERT large
**Language:** German
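No usage snippet is given on the card; a minimal fill-mask sketch, assuming the standard `transformers` pipeline applies:
```python
from transformers import pipeline

# Hedged sketch: assumes the standard fill-mask pipeline applies.
unmasker = pipeline("fill-mask", model="deepset/gbert-large")
print(unmasker("Die Hauptstadt von Deutschland ist [MASK]."))
```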
## Performance
```
GermEval18 Coarse: 80.08
GermEval18 Fine: 52.48
GermEval14: 88.16
```
See also:
- deepset/gbert-base
- deepset/gbert-large
- deepset/gelectra-base
- deepset/gelectra-large
- deepset/gelectra-base-generator
- deepset/gelectra-large-generator
## Authors
**Branden Chan:** branden.chan@deepset.ai
**Stefan Schweter:** stefan@schweter.eu
**Timo Möller:** timo.moeller@deepset.ai
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
</div>
</div>
[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs) | 5832ff0d5b3b74b9bde9efea9ad4c31b |
sd-dreambooth-library/tempa | sd-dreambooth-library | null | 22 | 2 | diffusers | 0 | null | false | false | false | mit | null | null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 909 | false | ### Tempa on Stable Diffusion via Dreambooth
#### model by Giordyman
This is the Stable Diffusion model fine-tuned on the Tempa concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a photo of sks Tempa**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
Here are the images used for training this concept:




| c282c5f6a77ecd06c3dd6a2e4cb4851e |
Tusarkant/xlm-roberta-base-finetuned-ner | Tusarkant | xlm-roberta | 9 | 2 | transformers | 0 | token-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 904 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-ner
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
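No usage example is provided; a minimal sketch, assuming the standard `transformers` token-classification pipeline applies:
```python
from transformers import pipeline

# Hedged sketch: assumes the standard token-classification pipeline applies;
# the card does not document the entity label set.
ner = pipeline("token-classification",
               model="Tusarkant/xlm-roberta-base-finetuned-ner",
               aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```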
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| 19f8bb2bd6d0b7127da00f28772a66ac |
Santiagot1105/wav2vec2-lar-xlsr-es-col | Santiagot1105 | wav2vec2 | 12 | 10 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,502 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-lar-xlsr-es-col
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-spanish](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-spanish) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0947
- Wer: 0.1884
## Model description
More information needed
## Intended uses & limitations
More information needed
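No usage example is provided; a minimal sketch, assuming the standard `transformers` automatic-speech-recognition pipeline applies:
```python
from transformers import pipeline

# Hedged sketch: assumes the standard ASR pipeline and a 16 kHz mono recording;
# "sample.wav" is a placeholder path.
asr = pipeline("automatic-speech-recognition", model="Santiagot1105/wav2vec2-lar-xlsr-es-col")
print(asr("sample.wav")["text"])
```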
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8446 | 8.51 | 400 | 2.8174 | 0.9854 |
| 0.5146 | 17.02 | 800 | 0.1022 | 0.2020 |
| 0.0706 | 25.53 | 1200 | 0.0947 | 0.1884 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.1+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
| f97dc92bf8681dd12293248381e66362 |
tzvc/8f6b362c-26c3-4c26-9e7f-2b8ff6ef353e | tzvc | null | 31 | 12 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['text-to-image'] | false | true | true | 1,743 | false | ### 8f6b362c-26c3-4c26-9e7f-2b8ff6ef353e Dreambooth model trained by tzvc with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of:
`sdcid` (use that token in your prompt)

| dc1050bdbb79c063e876ff0ddd9fd5c1 |
sd-concepts-library/glass-pipe | sd-concepts-library | null | 12 | 0 | null | 0 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,386 | false | ### glass pipe on Stable Diffusion
This is the `<glass-sherlock>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
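As a minimal inference sketch (hedged: it assumes a `diffusers` release that provides `load_textual_inversion` and a Stable Diffusion v1.x base model; the prompt is illustrative):
```python
from diffusers import StableDiffusionPipeline

# Hedged sketch: assumes diffusers with load_textual_inversion and an SD v1.x base.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_textual_inversion("sd-concepts-library/glass-pipe")
image = pipe("a still life photo of a <glass-sherlock> on a wooden table").images[0]
image.save("glass-sherlock.png")
```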
Here is the new concept you will be able to use as an `object`:







| daa2edac56b70d28a91d581c0fec766c |
Bugjuhjugjyy/tails-diffusion | Bugjuhjugjyy | null | 27 | 36 | diffusers | 0 | text-to-image | false | false | false | mit | null | null | null | 2 | 1 | 0 | 1 | 0 | 0 | 0 | [] | false | true | true | 1,971 | false | ### tails diffusion on Stable Diffusion via Dreambooth trained on the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
#### model by Bugjuhjugjyy
This is the Stable Diffusion model fine-tuned on the tails diffusion concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt(s)`: **images**
You can also train your own concepts and upload them to the library by using [the fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Here are the images used for training this concept:
images
.png)
.png)
.png)
.png)
.png)
.png)
.png)
.png)
| 79607e2c54f129ab5248f38703e97ca8 |
Shobhank-iiitdwd/BERT-L-QA | Shobhank-iiitdwd | bert | 10 | 1 | transformers | 0 | question-answering | true | false | true | cc-by-4.0 | ['en'] | ['squad_v2'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | true | true | true | 3,425 | false |
# bert-large-uncased-whole-word-masking-squad2
This is a bert-large model, fine-tuned on the SQuAD 2.0 dataset for the task of question answering.
## Overview
**Language model:** bert-large
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system)
## Usage
### In Haystack
Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/):
```python
from haystack.nodes import FARMReader, TransformersReader  # assuming Haystack v1.x

reader = FARMReader(model_name_or_path="deepset/bert-large-uncased-whole-word-masking-squad2")
# or
reader = TransformersReader(model_name_or_path="deepset/bert-large-uncased-whole-word-masking-squad2", tokenizer="deepset/bert-large-uncased-whole-word-masking-squad2")
```
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/bert-large-uncased-whole-word-masking-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
</div>
</div>
[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs) | fa5d04dcead89e57c9db4a716764512c |
Rocketknight1/temp-colab-upload-test2 | Rocketknight1 | distilbert | 8 | 5 | transformers | 0 | text-classification | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,200 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Rocketknight1/temp-colab-upload-test2
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6931
- Validation Loss: 0.6931
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.6931 | 0.6931 | 0 |
| 0.6931 | 0.6931 | 1 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
| 9389effddd257620e4c7b4f224bcc541 |
osanseviero/asr-with-transformers-wav2vec2 | osanseviero | wav2vec2 | 11 | 7 | superb | 0 | automatic-speech-recognition | true | true | false | apache-2.0 | ['en'] | ['librispeech_asr'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['audio', 'automatic-speech-recognition', 'superb'] | false | true | true | 3,808 | false |
# Fork of Wav2Vec2-Base-960h
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
The base model pretrained and fine-tuned on 960 hours of Librispeech on 16kHz sampled speech audio. When using the model
make sure that your speech input is also sampled at 16Khz.
[Paper](https://arxiv.org/abs/2006.11477)
Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
**Abstract**
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Tokenizer, Wav2Vec2ForCTC
from datasets import load_dataset
import soundfile as sf
import torch
# load model and tokenizer
tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
# define function to read in sound file
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
# tokenize
input_values = tokenizer(ds["speech"][:2], return_tensors="pt", padding="longest").input_values  # batch size 2
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = tokenizer.batch_decode(predicted_ids)
```
## Evaluation
This code snippet shows how to evaluate **facebook/wav2vec2-base-960h** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Tokenizer
import soundfile as sf
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h").to("cuda")
tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
librispeech_eval = librispeech_eval.map(map_to_array)
def map_to_pred(batch):
input_values = tokenizer(batch["speech"], return_tensors="pt", padding="longest").input_values
with torch.no_grad():
logits = model(input_values.to("cuda")).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = tokenizer.batch_decode(predicted_ids)
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["speech"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 3.4 | 8.6 | | 044b12c9bbc0f0fbbc8ba2828c549230 |
AlphaNinja27/wav2vec2-large-xls-r-300m-panjabi-colab | AlphaNinja27 | wav2vec2 | 16 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,105 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-panjabi-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
| 4ce1b523c3eb5a4f5176e971e0848dd8 |
Helsinki-NLP/opus-mt-fi-hr | Helsinki-NLP | marian | 10 | 17 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 | false |
### opus-mt-fi-hr
* source languages: fi
* target languages: hr
* OPUS readme: [fi-hr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-hr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-hr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-hr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-hr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.hr | 23.5 | 0.476 |
| 03609731a4e2819e7e7cb4fc95cebeec |
pritoms/opt-350m-finetuned-stack | pritoms | opt | 10 | 2 | transformers | 0 | text-generation | true | false | false | other | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 900 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-350m-finetuned-stack
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| bc014fd712437b5489a74d6037cfe85b |
sd-concepts-library/handstand | sd-concepts-library | null | 9 | 0 | null | 1 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,020 | false | ### handstand on Stable Diffusion
This is the `<handstand>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




| 882caaf6fc3c30593a1d446872d1ed38 |
tftransformers/bert-large-cased | tftransformers | null | 6 | 1 | null | 0 | null | false | false | false | apache-2.0 | ['en'] | ['bookcorpus', 'wikipedia'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['exbert'] | false | true | true | 6,119 | false |
# BERT Large model (cased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is case-sensitive: it makes a difference between
english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in tf_transformers:
```python
from tf_transformers.models import BertModel
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-large-cased')
model = BertModel.from_pretrained("bert-large-cased")
text = "Replace me by any text you'd like."
inputs_tf = {}
inputs = tokenizer(text, return_tensors='tf')
inputs_tf["input_ids"] = inputs["input_ids"]
inputs_tf["input_type_ids"] = inputs["token_type_ids"]
inputs_tf["input_mask"] = inputs["attention_mask"]
outputs_tf = model(inputs_tf)
```
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a toy code sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
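A toy sketch of this 80/10/10 scheme (illustrative only; the helper name and token IDs are hypothetical, not the original implementation):
```python
import random

def mask_tokens(token_ids, vocab_size, mask_id, mask_prob=0.15):
    """Toy sketch of BERT-style 80/10/10 masking (not the original code)."""
    inputs, labels = list(token_ids), []
    for i, tok in enumerate(token_ids):
        if random.random() < mask_prob:       # 15% of tokens are selected
            labels.append(tok)                # the model must predict the original
            r = random.random()
            if r < 0.8:                       # 80% of selected: replace with [MASK]
                inputs[i] = mask_id
            elif r < 0.9:                     # 10% of selected: a random token
                inputs[i] = random.randrange(vocab_size)
            # remaining 10% of selected: leave the token unchanged
        else:
            labels.append(-100)               # position ignored by the loss
    return inputs, labels
```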
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
| | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=bert-base-cased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a> | d8025808f2173c1de54adb340ac35483 |
google/t5-efficient-small-dl4 | google | t5 | 12 | 7 | transformers | 0 | text2text-generation | true | true | true | apache-2.0 | ['en'] | ['c4'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['deep-narrow'] | false | true | true | 6,251 | false |
# T5-Efficient-SMALL-DL4 (Deep-Narrow version)
T5-Efficient-SMALL-DL4 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-small-dl4** - is of model type **Small** with the following variations:
- **dl** is **4**
It has **52.13** million parameters and thus requires *ca.* **208.51 MB** of memory in full precision (*fp32*)
or **104.25 MB** of memory in half precision (*fp16* or *bf16*).
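The memory figures follow directly from the parameter count; as a quick sanity check:
```python
# Sanity check of the figures above: memory = parameters * bytes per value.
params = 52.13e6
print(f"fp32: {params * 4 / 1e6:.1f} MB")  # ~208.5 MB at 4 bytes/parameter
print(f"fp16: {params * 2 / 1e6:.1f} MB")  # ~104.3 MB at 2 bytes/parameter
```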
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint specifies no *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. | ce7024bc6f001704b16589e9e39da2d0 |
muks14/og-deberta-extra-o | muks14 | deberta | 13 | 7 | transformers | 0 | token-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 3,554 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# og-deberta-extra-o
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5184
- Precision: 0.5981
- Recall: 0.6667
- F1: 0.6305
- Accuracy: 0.9226
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
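For reference, this configuration maps onto `transformers.TrainingArguments` roughly as follows (a sketch only; the exact training script is not part of this card, and the output directory name is illustrative):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="og-deberta-extra-o",   # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,      # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=25,
)
```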
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 55 | 0.4813 | 0.2863 | 0.3467 | 0.3136 | 0.8720 |
| No log | 2.0 | 110 | 0.3469 | 0.4456 | 0.4587 | 0.4520 | 0.9010 |
| No log | 3.0 | 165 | 0.3166 | 0.5206 | 0.5387 | 0.5295 | 0.9147 |
| No log | 4.0 | 220 | 0.3338 | 0.4899 | 0.584 | 0.5328 | 0.9087 |
| No log | 5.0 | 275 | 0.3166 | 0.5625 | 0.648 | 0.6022 | 0.9198 |
| No log | 6.0 | 330 | 0.3464 | 0.5707 | 0.6027 | 0.5863 | 0.9207 |
| No log | 7.0 | 385 | 0.3548 | 0.5489 | 0.6133 | 0.5793 | 0.9207 |
| No log | 8.0 | 440 | 0.4005 | 0.6125 | 0.6027 | 0.6075 | 0.9210 |
| No log | 9.0 | 495 | 0.4185 | 0.5763 | 0.6347 | 0.6041 | 0.9171 |
| 0.2019 | 10.0 | 550 | 0.4174 | 0.5596 | 0.6507 | 0.6017 | 0.9179 |
| 0.2019 | 11.0 | 605 | 0.4558 | 0.5603 | 0.632 | 0.5940 | 0.9179 |
| 0.2019 | 12.0 | 660 | 0.4615 | 0.5632 | 0.6533 | 0.6049 | 0.9166 |
| 0.2019 | 13.0 | 715 | 0.4899 | 0.5815 | 0.6187 | 0.5995 | 0.9208 |
| 0.2019 | 14.0 | 770 | 0.4800 | 0.5581 | 0.64 | 0.5963 | 0.9186 |
| 0.2019 | 15.0 | 825 | 0.4752 | 0.5905 | 0.6613 | 0.6239 | 0.9212 |
| 0.2019 | 16.0 | 880 | 0.5014 | 0.5773 | 0.6373 | 0.6058 | 0.9174 |
| 0.2019 | 17.0 | 935 | 0.5095 | 0.5917 | 0.6453 | 0.6173 | 0.9195 |
| 0.2019 | 18.0 | 990 | 0.5249 | 0.5807 | 0.6427 | 0.6101 | 0.9203 |
| 0.0077 | 19.0 | 1045 | 0.5086 | 0.5761 | 0.656 | 0.6135 | 0.9222 |
| 0.0077 | 20.0 | 1100 | 0.5108 | 0.5962 | 0.6693 | 0.6307 | 0.9219 |
| 0.0077 | 21.0 | 1155 | 0.5144 | 0.5977 | 0.6853 | 0.6385 | 0.9231 |
| 0.0077 | 22.0 | 1210 | 0.5176 | 0.5990 | 0.6613 | 0.6286 | 0.9229 |
| 0.0077 | 23.0 | 1265 | 0.5171 | 0.6039 | 0.6667 | 0.6337 | 0.9226 |
| 0.0077 | 24.0 | 1320 | 0.5184 | 0.6043 | 0.672 | 0.6364 | 0.9226 |
| 0.0077 | 25.0 | 1375 | 0.5184 | 0.5981 | 0.6667 | 0.6305 | 0.9226 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
| e79305d7136be23cb688bf9677bdf9e4 |
JovialValley/model_syllable_onSet4 | JovialValley | wav2vec2 | 15 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 11,452 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_syllable_onSet4
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1349
- 0 Precision: 1.0
- 0 Recall: 1.0
- 0 F1-score: 1.0
- 0 Support: 26
- 1 Precision: 1.0
- 1 Recall: 0.9677
- 1 F1-score: 0.9836
- 1 Support: 31
- 2 Precision: 0.9630
- 2 Recall: 1.0
- 2 F1-score: 0.9811
- 2 Support: 26
- 3 Precision: 1.0
- 3 Recall: 1.0
- 3 F1-score: 1.0
- 3 Support: 14
- Accuracy: 0.9897
- Macro avg Precision: 0.9907
- Macro avg Recall: 0.9919
- Macro avg F1-score: 0.9912
- Macro avg Support: 97
- Weighted avg Precision: 0.9901
- Weighted avg Recall: 0.9897
- Weighted avg F1-score: 0.9897
- Weighted avg Support: 97
- Wer: 0.2258
- Mtrix: [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 0, 30, 1, 0], [2, 0, 0, 26, 0], [3, 0, 0, 0, 14]]
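The word error rate (WER) values reported here can be reproduced for any prediction/reference pair with the `evaluate` library (a sketch; the sentences below are hypothetical, not taken from the evaluation set):
```python
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["the quick brown fox"]    # hypothetical model transcription
references = ["the quick brown foxes"]   # hypothetical ground-truth transcript
# one substitution over four reference words -> 0.25
print(wer_metric.compute(predictions=predictions, references=references))
```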
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 70
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | 0 Precision | 0 Recall | 0 F1-score | 0 Support | 1 Precision | 1 Recall | 1 F1-score | 1 Support | 2 Precision | 2 Recall | 2 F1-score | 2 Support | 3 Precision | 3 Recall | 3 F1-score | 3 Support | Accuracy | Macro avg Precision | Macro avg Recall | Macro avg F1-score | Macro avg Support | Weighted avg Precision | Weighted avg Recall | Weighted avg F1-score | Weighted avg Support | Wer | Mtrix |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:--------:|:-------------------:|:----------------:|:------------------:|:-----------------:|:----------------------:|:-------------------:|:---------------------:|:--------------------:|:------:|:--------------------------------------------------------------------------------------:|
| 1.6602 | 4.16 | 100 | 1.5639 | 0.0 | 0.0 | 0.0 | 26 | 0.0 | 0.0 | 0.0 | 31 | 0.2584 | 0.8846 | 0.4 | 26 | 0.0 | 0.0 | 0.0 | 14 | 0.2371 | 0.0646 | 0.2212 | 0.1 | 97 | 0.0693 | 0.2371 | 0.1072 | 97 | 0.9732 | [[0, 1, 2, 3], [0, 0, 0, 26, 0], [1, 0, 0, 31, 0], [2, 3, 0, 23, 0], [3, 5, 0, 9, 0]] |
| 1.616 | 8.33 | 200 | 1.4203 | 0.0 | 0.0 | 0.0 | 26 | 0.0 | 0.0 | 0.0 | 31 | 0.2584 | 0.8846 | 0.4 | 26 | 0.0 | 0.0 | 0.0 | 14 | 0.2371 | 0.0646 | 0.2212 | 0.1 | 97 | 0.0693 | 0.2371 | 0.1072 | 97 | 0.9732 | [[0, 1, 2, 3], [0, 0, 0, 26, 0], [1, 0, 0, 31, 0], [2, 3, 0, 23, 0], [3, 5, 0, 9, 0]] |
| 1.2107 | 12.49 | 300 | 1.1249 | 0.0 | 0.0 | 0.0 | 26 | 0.0 | 0.0 | 0.0 | 31 | 0.2584 | 0.8846 | 0.4 | 26 | 0.0 | 0.0 | 0.0 | 14 | 0.2371 | 0.0646 | 0.2212 | 0.1 | 97 | 0.0693 | 0.2371 | 0.1072 | 97 | 0.9732 | [[0, 1, 2, 3], [0, 0, 0, 26, 0], [1, 0, 0, 31, 0], [2, 3, 0, 23, 0], [3, 5, 0, 9, 0]] |
| 1.1283 | 16.65 | 400 | 1.0201 | 0.0 | 0.0 | 0.0 | 26 | 0.0 | 0.0 | 0.0 | 31 | 0.2584 | 0.8846 | 0.4 | 26 | 0.0 | 0.0 | 0.0 | 14 | 0.2371 | 0.0646 | 0.2212 | 0.1 | 97 | 0.0693 | 0.2371 | 0.1072 | 97 | 0.9732 | [[0, 1, 2, 3], [0, 0, 0, 26, 0], [1, 0, 0, 31, 0], [2, 3, 0, 23, 0], [3, 5, 0, 9, 0]] |
| 0.8868 | 20.82 | 500 | 0.8944 | 0.0 | 0.0 | 0.0 | 26 | 0.0 | 0.0 | 0.0 | 31 | 0.2584 | 0.8846 | 0.4 | 26 | 0.0 | 0.0 | 0.0 | 14 | 0.2371 | 0.0646 | 0.2212 | 0.1 | 97 | 0.0693 | 0.2371 | 0.1072 | 97 | 0.9732 | [[0, 1, 2, 3], [0, 0, 0, 26, 0], [1, 0, 0, 31, 0], [2, 3, 0, 23, 0], [3, 5, 0, 9, 0]] |
| 0.8863 | 24.98 | 600 | 0.9316 | 0.0 | 0.0 | 0.0 | 26 | 0.0 | 0.0 | 0.0 | 31 | 0.2584 | 0.8846 | 0.4 | 26 | 0.0 | 0.0 | 0.0 | 14 | 0.2371 | 0.0646 | 0.2212 | 0.1 | 97 | 0.0693 | 0.2371 | 0.1072 | 97 | 0.9732 | [[0, 1, 2, 3], [0, 0, 0, 26, 0], [1, 0, 0, 31, 0], [2, 3, 0, 23, 0], [3, 5, 0, 9, 0]] |
| 0.9019 | 29.16 | 700 | 0.8688 | 0.7647 | 1.0 | 0.8667 | 26 | 0.0 | 0.0 | 0.0 | 31 | 0.3651 | 0.8846 | 0.5169 | 26 | 0.0 | 0.0 | 0.0 | 14 | 0.5052 | 0.2824 | 0.4712 | 0.3459 | 97 | 0.3028 | 0.5052 | 0.3708 | 97 | 0.9732 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 0, 0, 31, 0], [2, 3, 0, 23, 0], [3, 5, 0, 9, 0]] |
| 0.7977 | 33.33 | 800 | 0.8014 | 1.0 | 1.0 | 1.0 | 26 | 0.9667 | 0.9355 | 0.9508 | 31 | 0.9259 | 0.9615 | 0.9434 | 26 | 1.0 | 1.0 | 1.0 | 14 | 0.9691 | 0.9731 | 0.9743 | 0.9736 | 97 | 0.9695 | 0.9691 | 0.9691 | 97 | 1.0 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 0, 29, 2, 0], [2, 0, 1, 25, 0], [3, 0, 0, 0, 14]] |
| 0.729 | 37.49 | 900 | 0.8163 | 1.0 | 1.0 | 1.0 | 26 | 0.9091 | 0.9677 | 0.9375 | 31 | 0.9583 | 0.8846 | 0.9200 | 26 | 1.0 | 1.0 | 1.0 | 14 | 0.9588 | 0.9669 | 0.9631 | 0.9644 | 97 | 0.9598 | 0.9588 | 0.9586 | 97 | 1.0 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 0, 30, 1, 0], [2, 0, 3, 23, 0], [3, 0, 0, 0, 14]] |
| 0.6526 | 41.65 | 1000 | 0.6691 | 1.0 | 1.0 | 1.0 | 26 | 0.9667 | 0.9355 | 0.9508 | 31 | 0.9259 | 0.9615 | 0.9434 | 26 | 1.0 | 1.0 | 1.0 | 14 | 0.9691 | 0.9731 | 0.9743 | 0.9736 | 97 | 0.9695 | 0.9691 | 0.9691 | 97 | 0.7055 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 0, 29, 2, 0], [2, 0, 1, 25, 0], [3, 0, 0, 0, 14]] |
| 0.6633 | 45.82 | 1100 | 0.3445 | 1.0 | 1.0 | 1.0 | 26 | 0.9394 | 1.0 | 0.9688 | 31 | 1.0 | 0.9231 | 0.9600 | 26 | 1.0 | 1.0 | 1.0 | 14 | 0.9794 | 0.9848 | 0.9808 | 0.9822 | 97 | 0.9806 | 0.9794 | 0.9793 | 97 | 0.5017 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 0, 31, 0, 0], [2, 0, 2, 24, 0], [3, 0, 0, 0, 14]] |
| 0.1913 | 49.98 | 1200 | 0.2455 | 1.0 | 1.0 | 1.0 | 26 | 0.9677 | 0.9677 | 0.9677 | 31 | 0.96 | 0.9231 | 0.9412 | 26 | 0.9333 | 1.0 | 0.9655 | 14 | 0.9691 | 0.9653 | 0.9727 | 0.9686 | 97 | 0.9693 | 0.9691 | 0.9689 | 97 | 0.3946 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 0, 30, 1, 0], [2, 0, 1, 24, 1], [3, 0, 0, 0, 14]] |
| 0.2024 | 54.16 | 1300 | 0.1865 | 1.0 | 1.0 | 1.0 | 26 | 1.0 | 0.9355 | 0.9667 | 31 | 0.9286 | 1.0 | 0.9630 | 26 | 1.0 | 1.0 | 1.0 | 14 | 0.9794 | 0.9821 | 0.9839 | 0.9824 | 97 | 0.9809 | 0.9794 | 0.9794 | 97 | 0.3423 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 0, 29, 2, 0], [2, 0, 0, 26, 0], [3, 0, 0, 0, 14]] |
| 0.1212 | 58.33 | 1400 | 0.1485 | 1.0 | 1.0 | 1.0 | 26 | 1.0 | 0.9677 | 0.9836 | 31 | 0.9630 | 1.0 | 0.9811 | 26 | 1.0 | 1.0 | 1.0 | 14 | 0.9897 | 0.9907 | 0.9919 | 0.9912 | 97 | 0.9901 | 0.9897 | 0.9897 | 97 | 0.2957 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 0, 30, 1, 0], [2, 0, 0, 26, 0], [3, 0, 0, 0, 14]] |
| 0.108 | 62.49 | 1500 | 0.1348 | 1.0 | 1.0 | 1.0 | 26 | 1.0 | 0.9677 | 0.9836 | 31 | 0.9630 | 1.0 | 0.9811 | 26 | 1.0 | 1.0 | 1.0 | 14 | 0.9897 | 0.9907 | 0.9919 | 0.9912 | 97 | 0.9901 | 0.9897 | 0.9897 | 97 | 0.2433 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 0, 30, 1, 0], [2, 0, 0, 26, 0], [3, 0, 0, 0, 14]] |
| 0.1058 | 66.65 | 1600 | 0.1328 | 1.0 | 1.0 | 1.0 | 26 | 1.0 | 0.9677 | 0.9836 | 31 | 0.9630 | 1.0 | 0.9811 | 26 | 1.0 | 1.0 | 1.0 | 14 | 0.9897 | 0.9907 | 0.9919 | 0.9912 | 97 | 0.9901 | 0.9897 | 0.9897 | 97 | 0.2224 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 0, 30, 1, 0], [2, 0, 0, 26, 0], [3, 0, 0, 0, 14]] |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| cebf784784b02faa326bb7adb2edf7a6 |
plncmm/roberta-clinical-wl-es | plncmm | roberta | 13 | 9 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | ['es'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,015 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# plncmm/roberta-clinical-wl-es
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es) on the Chilean waiting list dataset.
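Since this is a fill-mask checkpoint, it can be queried directly with the `fill-mask` pipeline (a minimal sketch; the clinical Spanish sentence is a hypothetical example):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="plncmm/roberta-clinical-wl-es")
# use the tokenizer's own mask token to stay robust to the vocabulary
masked = f"Paciente con dolor {fill_mask.tokenizer.mask_token} en el pecho."
print(fill_mask(masked))
```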
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
| 8d9a306bef94de65cb92b8c4518d011c |
Helsinki-NLP/opus-mt-wal-en | Helsinki-NLP | marian | 10 | 14 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 | false |
### opus-mt-wal-en
* source languages: wal
* target languages: en
* OPUS readme: [wal-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/wal-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/wal-en/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/wal-en/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/wal-en/opus-2020-01-24.eval.txt)
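The checkpoint can also be used through the Transformers `translation` pipeline (a minimal sketch; supply your own Wolaytta source text):
```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-wal-en")
wal_sentence = "..."  # replace with a Wolaytta (wal) source sentence
print(translator(wal_sentence)[0]["translation_text"])
```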
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.wal.en | 22.5 | 0.386 |
| d972b1c9efac432aa48e7bf50d55793c |
Geotrend/distilbert-base-vi-cased | Geotrend | distilbert | 6 | 2 | transformers | 1 | fill-mask | true | false | false | apache-2.0 | ['vi'] | ['wikipedia'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,215 | false |
# distilbert-base-vi-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-vi-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-vi-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. | 2c3a5aa8ac020d41fc90885440ca3cc9 |
google/t5-efficient-base-dm2000 | google | t5 | 12 | 11 | transformers | 0 | text2text-generation | true | true | true | apache-2.0 | ['en'] | ['c4'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['deep-narrow'] | false | true | true | 6,262 | false |
# T5-Efficient-BASE-DM2000 (Deep-Narrow version)
T5-Efficient-BASE-DM2000 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally be more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details of the model architecture
This model checkpoint - **t5-efficient-base-dm2000** - is of model type **Base** with the following variations:
- **dm** is **2000**
It has **594.44** million parameters and thus requires *ca.* **2377.75 MB** of memory in full precision (*fp32*)
or **1188.87 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint specifies no *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
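Loading the checkpoint works like for any other T5 checkpoint (a minimal sketch, assuming `transformers` and PyTorch are installed):
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-base-dm2000")
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-base-dm2000")
print(model.config.d_model)  # 2000, the enlarged embedding dimension of this variant
```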
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. | 0e8846ab0f494c817bb77303c0a60c36 |
uumlaut/ddpm-vangogh-128 | uumlaut | null | 19 | 0 | diffusers | 0 | null | false | false | false | apache-2.0 | ['en'] | ['imagefolder'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,552 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-vangogh-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# !pip install diffusers
from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline
model_id = "uumlaut/ddpm-vangogh-128"
# load model and scheduler
ddpm = DDPMPipeline.from_pretrained(model_id) # you can replace DDPMPipeline with DDIMPipeline or PNDMPipeline for faster inference
# run pipeline in inference (sample random noise and denoise)
image = ddpm().images[0]
# save image
image.save("ddpm_generated_image.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/uumlaut/ddpm-vangogh-128/tensorboard?#scalars)
| 3614022ec79f9db0815e1761cfc36165 |
SDAddictsAnon/itspoidamansd | SDAddictsAnon | null | 6 | 0 | null | 0 | null | false | false | false | other | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 724 | false | Psychedelia Diffusion Model, and maybe others to come.
Tips for psychedelicmerger.ckpt:
A high step count and ancestral samplers seem to give the best results. Using words that imply any form of psychedelia in the prompt should help bring its style out, but may not be necessary.
If you want to try the training tokens, don't expect great results:
sdpsydiffsyle and sdpsydiffstylev2.
This model is a merge between two different training sets.
It also works nicely with pop art, phunkadelic, surreal, etc.
Can't offer much advice on what CFG scale setting will work best; it typically seems pretty dependent on the prompt.
The clip aesthetic/stylepile in the webui seems to play nicely with this too; worth experimenting.
Have fun! | 4781c592572dfcb7539bc1364a91278c |
manu/lilt-infoxlm-base | manu | liltrobertalike | 4 | 16 | transformers | 2 | token-classification | true | false | false | mit | ['es', 'fr', 'ru', 'en', 'it'] | ['iit-cdip'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['token-classification', 'fill-mask'] | false | true | true | 1,523 | false |
This model is the pretrained infoxlm checkpoint from the paper "LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding".
Original repository: https://github.com/jpWang/LiLT
To use it, it is necessary to fork the modeling and configuration files from the original repository, and load the pretrained model from the corresponding classes (LiLTRobertaLikeConfig, LiLTRobertaLikeForRelationExtraction, LiLTRobertaLikeForTokenClassification, LiLTRobertaLikeModel).
They can also be preloaded with the AutoConfig/model factories as such:
```python
from transformers import AutoConfig, AutoModel, AutoModelForTokenClassification, AutoTokenizer
from path_to_custom_classes import (
LiLTRobertaLikeConfig,
LiLTRobertaLikeForRelationExtraction,
LiLTRobertaLikeForTokenClassification,
LiLTRobertaLikeModel
)
def patch_transformers():
AutoConfig.register("liltrobertalike", LiLTRobertaLikeConfig)
AutoModel.register(LiLTRobertaLikeConfig, LiLTRobertaLikeModel)
AutoModelForTokenClassification.register(LiLTRobertaLikeConfig, LiLTRobertaLikeForTokenClassification)
# etc...
```
To load the model, it is then possible to use:
```python
# patch_transformers() must have been executed beforehand
tokenizer = AutoTokenizer.from_pretrained("microsoft/infoxlm-base")
model = AutoModel.from_pretrained("manu/lilt-infoxlm-base")
model = AutoModelForTokenClassification.from_pretrained("manu/lilt-infoxlm-base") # to be fine-tuned on a token classification task
``` | 64bc92ec0de2fbdb925fe608783f95aa |
Sushant45/Catalan_language-clustered | Sushant45 | distilbert | 8 | 24 | transformers | 0 | question-answering | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,871 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Sushant45/Catalan_language-clustered
This model is a fine-tuned version of [nandysoham16/13-clustered_aug](https://huggingface.co/nandysoham16/13-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5260
- Train End Logits Accuracy: 0.8611
- Train Start Logits Accuracy: 0.8576
- Validation Loss: 0.8536
- Validation End Logits Accuracy: 0.7273
- Validation Start Logits Accuracy: 0.9091
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
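The optimizer dictionary above can be reconstructed in Keras as follows (a sketch; all values are copied from the config shown above):
```python
import tensorflow as tf

# polynomial learning-rate decay matching the serialized schedule
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-5,
    decay_steps=18,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-8, amsgrad=False
)
```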
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.5260 | 0.8611 | 0.8576 | 0.8536 | 0.7273 | 0.9091 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
| 22d37dda479ebdba5a9ba45c088db860 |
dandelin/vilt-b32-finetuned-coco | dandelin | vilt | 9 | 2,325 | transformers | 0 | null | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,900 | false |
# Vision-and-Language Transformer (ViLT), fine-tuned on COCO
Vision-and-Language Transformer (ViLT) model fine-tuned on [COCO](https://cocodataset.org/#home). It was introduced in the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Kim et al. and first released in [this repository](https://github.com/dandelin/ViLT).
Disclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Intended uses & limitations
You can use the model for image and text retrieval.
### How to use
Here is how to use the model in PyTorch:
```
from transformers import ViltProcessor, ViltForImageAndTextRetrieval
import requests
from PIL import Image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"]
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-coco")
model = ViltForImageAndTextRetrieval.from_pretrained("dandelin/vilt-b32-finetuned-coco")
# score each candidate text against the image
scores = dict()
for text in texts:
    # prepare inputs
    encoding = processor(image, text, return_tensors="pt")
    # forward pass
    outputs = model(**encoding)
    scores[text] = outputs.logits[0, :].item()
```
## Training data
(to do)
## Training procedure
### Preprocessing
(to do)
### Pretraining
(to do)
## Evaluation results
(to do)
### BibTeX entry and citation info
```bibtex
@misc{kim2021vilt,
title={ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision},
author={Wonjae Kim and Bokyung Son and Ildoo Kim},
year={2021},
eprint={2102.03334},
archivePrefix={arXiv},
primaryClass={stat.ML}
}
``` | 1dcb4119f2794ccb2f46d852e9e03ba6 |
anindabitm/sagemaker-distilbert-emotion | anindabitm | distilbert | 10 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,286 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-distilbert-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2434
- Accuracy: 0.9165
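A minimal inference sketch with the `text-classification` pipeline (the input sentence is a hypothetical example):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="anindabitm/sagemaker-distilbert-emotion")
print(classifier("I am so happy with these results!"))
```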
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9423 | 1.0 | 500 | 0.2434 | 0.9165 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
| 96b685d56158c358319285a0750a51ee |
adsjklfsd/xlm-roberta-base-finetuned-panx-de | adsjklfsd | xlm-roberta | 12 | 0 | transformers | 0 | token-classification | true | false | false | mit | null | ['xtreme'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,319 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1344
- F1: 0.8617
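A minimal inference sketch with the `token-classification` pipeline (the German sentence is a hypothetical example; the label set depends on the fine-tuning data):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="adsjklfsd/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entities
)
print(ner("Angela Merkel besuchte gestern Berlin."))
```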
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2564 | 1.0 | 525 | 0.1610 | 0.8285 |
| 0.1307 | 2.0 | 1050 | 0.1378 | 0.8491 |
| 0.0813 | 3.0 | 1575 | 0.1344 | 0.8617 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.10.1+cu113
- Datasets 2.9.0
- Tokenizers 0.13.2
| a9640a086d6ea7c6fa5ee2a7892d0e38 |
anferico/bert-for-patents | anferico | null | 6 | 12,482 | transformers | 33 | fill-mask | true | true | false | apache-2.0 | ['en'] | null | null | 1 | 0 | 1 | 0 | 1 | 1 | 0 | ['masked-lm', 'pytorch'] | false | true | true | 836 | false |
# BERT for Patents
BERT for Patents is a model trained by Google on 100M+ patents (not just US patents). It is based on BERT<sub>LARGE</sub>.
If you want to learn more about the model, check out the [blog post](https://cloud.google.com/blog/products/ai-machine-learning/how-ai-improves-patent-analysis), [white paper](https://services.google.com/fh/files/blogs/bert_for_patents_white_paper.pdf) and [GitHub page](https://github.com/google/patents-public-data/blob/master/models/BERT%20for%20Patents.md) containing the original TensorFlow checkpoint.
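As a masked language model, it can be queried with the `fill-mask` pipeline (a minimal sketch; the patent-style sentence is a hypothetical example):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="anferico/bert-for-patents")
print(fill_mask("The invention relates to a [MASK] for dispensing a liquid."))
```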
---
### Projects using this model (or variants of it):
- [Patents4IPPC](https://github.com/ec-jrc/Patents4IPPC) (carried out by [Pi School](https://picampus-school.com/) and commissioned by the [Joint Research Centre (JRC)](https://ec.europa.eu/jrc/en) of the European Commission)
| d7bc369971a62c545c5da330a2c16e9c |
Hamid-reza/mt5-small-finetuned-digikala-titleGen | Hamid-reza | mt5 | 18 | 6 | transformers | 0 | summarization | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['summarization', 'generated_from_trainer'] | true | true | true | 1,918 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-digikala-titleGen
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8801
- Rouge1: 70.3489
- Rouge2: 43.245
- Rougel: 34.6608
- Rougelsum: 34.6608
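A minimal inference sketch with the `summarization` pipeline (the card does not describe the training data; judging by the name, the checkpoint generates titles, so the placeholder below should be replaced with a Persian input text):
```python
from transformers import pipeline

title_gen = pipeline("summarization", model="Hamid-reza/mt5-small-finetuned-digikala-titleGen")
text = "..."  # replace with a Persian product description or article body
print(title_gen(text)[0]["summary_text"])
```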
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 7.5555 | 1.0 | 847 | 3.2594 | 45.6729 | 19.6446 | 31.5974 | 31.5974 |
| 4.1386 | 2.0 | 1694 | 3.0347 | 58.3021 | 32.8172 | 33.9012 | 33.9012 |
| 3.7449 | 3.0 | 2541 | 2.9665 | 66.731 | 40.8991 | 34.2203 | 34.2203 |
| 3.5575 | 4.0 | 3388 | 2.9102 | 65.598 | 39.4081 | 34.5116 | 34.5116 |
| 3.4062 | 5.0 | 4235 | 2.8944 | 69.6081 | 42.8707 | 34.6622 | 34.6622 |
| 3.3408 | 6.0 | 5082 | 2.8888 | 70.2123 | 42.8639 | 34.5669 | 34.5669 |
| 3.3025 | 7.0 | 5929 | 2.8801 | 70.3489 | 43.245 | 34.6608 | 34.6608 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| f3522e8707cd38d8ec2f4ebf1d52b15d |
dipteshkanojia/hing-roberta-NCM-run-2 | dipteshkanojia | xlm-roberta | 9 | 2 | transformers | 0 | text-classification | true | false | false | cc-by-4.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 3,124 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hing-roberta-NCM-run-2
This model is a fine-tuned version of [l3cube-pune/hing-roberta](https://huggingface.co/l3cube-pune/hing-roberta) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3647
- Accuracy: 0.6483
- Precision: 0.6369
- Recall: 0.6325
- F1: 0.6341
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.8973 | 1.0 | 927 | 0.8166 | 0.6483 | 0.6545 | 0.6576 | 0.6460 |
| 0.6827 | 2.0 | 1854 | 0.9071 | 0.6526 | 0.6444 | 0.6261 | 0.6299 |
| 0.4672 | 3.0 | 2781 | 1.1600 | 0.6764 | 0.6657 | 0.6634 | 0.6643 |
| 0.3388 | 4.0 | 3708 | 1.7426 | 0.6548 | 0.6406 | 0.6442 | 0.6418 |
| 0.2786 | 5.0 | 4635 | 1.9385 | 0.6505 | 0.6484 | 0.6437 | 0.6434 |
| 0.1794 | 6.0 | 5562 | 2.3158 | 0.6472 | 0.6564 | 0.6365 | 0.6388 |
| 0.12 | 7.0 | 6489 | 2.6961 | 0.6591 | 0.6458 | 0.6531 | 0.6466 |
| 0.1298 | 8.0 | 7416 | 2.7196 | 0.6505 | 0.6523 | 0.6307 | 0.6342 |
| 0.0941 | 9.0 | 8343 | 2.5853 | 0.6548 | 0.6406 | 0.6426 | 0.6415 |
| 0.0696 | 10.0 | 9270 | 2.8386 | 0.6613 | 0.6616 | 0.6314 | 0.6348 |
| 0.0722 | 11.0 | 10197 | 2.9658 | 0.6537 | 0.6356 | 0.6356 | 0.6355 |
| 0.0509 | 12.0 | 11124 | 3.3286 | 0.6429 | 0.6262 | 0.6192 | 0.6214 |
| 0.0444 | 13.0 | 12051 | 3.1654 | 0.6483 | 0.6347 | 0.6302 | 0.6319 |
| 0.0341 | 14.0 | 12978 | 2.9509 | 0.6537 | 0.6430 | 0.6394 | 0.6401 |
| 0.0345 | 15.0 | 13905 | 3.3416 | 0.6656 | 0.6514 | 0.6488 | 0.6499 |
| 0.0303 | 16.0 | 14832 | 3.3874 | 0.6419 | 0.6267 | 0.6339 | 0.6272 |
| 0.0245 | 17.0 | 15759 | 3.2854 | 0.6570 | 0.6428 | 0.6420 | 0.6421 |
| 0.0174 | 18.0 | 16686 | 3.2863 | 0.6602 | 0.6569 | 0.6427 | 0.6465 |
| 0.0136 | 19.0 | 17613 | 3.3674 | 0.6494 | 0.6361 | 0.6341 | 0.6349 |
| 0.0111 | 20.0 | 18540 | 3.3647 | 0.6483 | 0.6369 | 0.6325 | 0.6341 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.1+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
| 87d80ea8de2f410ef68389205ff4fe04 |
blizrys/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa-2 | blizrys | bert | 18 | 1 | transformers | 0 | text-classification | true | false | false | mit | null | [] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | false | true | true | 1,581 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa-2
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0005
- Accuracy: 0.54
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 57 | 1.3510 | 0.54 |
| No log | 2.0 | 114 | 0.9606 | 0.54 |
| No log | 3.0 | 171 | 0.9693 | 0.54 |
| No log | 4.0 | 228 | 1.0445 | 0.54 |
| No log | 5.0 | 285 | 1.0005 | 0.54 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
| f583f36e5a5b622907705cebf72f4476 |
Jiva/xlm-roberta-large-it-mnli | Jiva | xlm-roberta | 8 | 202 | transformers | 4 | zero-shot-classification | true | false | false | mit | ['it'] | ['multi_nli', 'glue'] | null | 2 | 0 | 2 | 0 | 0 | 0 | 0 | ['text-classification', 'pytorch', 'tensorflow'] | true | true | true | 5,228 | false |
# XLM-roBERTa-large-it-mnli
## Version 0.1
| | matched-it acc | mismatched-it acc |
| -------------------------------------------------------------------------------- |----------------|-------------------|
| XLM-roBERTa-large-it-mnli | 84.75 | 85.39 |
## Model Description
This model takes [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) and fine-tunes it on a subset of NLI data taken from an automatically translated version of the MNLI corpus. It is intended to be used for zero-shot text classification, such as with the Hugging Face [ZeroShotClassificationPipeline](https://huggingface.co/transformers/master/main_classes/pipelines.html#transformers.ZeroShotClassificationPipeline).
## Intended Usage
This model is intended to be used for zero-shot text classification of Italian texts.
Since the base model was pre-trained on 100 different languages, the
model has shown some effectiveness in languages beyond Italian as
well. See the full list of pre-trained languages in appendix A of the
[XLM RoBERTa paper](https://arxiv.org/abs/1911.02116)
For English-only classification, it is recommended to use
[bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) or
[a distilled bart MNLI model](https://huggingface.co/models?filter=pipeline_tag%3Azero-shot-classification&search=valhalla).
#### With the zero-shot classification pipeline
The model can be loaded with the `zero-shot-classification` pipeline like so:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="Jiva/xlm-roberta-large-it-mnli", device=0, use_fast=True, multi_label=True)
```
You can then classify in any of the above languages. You can even pass the labels in one language and the sequence to
classify in another:
```python
# we will classify the following wikipedia entry about Sardinia"
sequence_to_classify = "La Sardegna è una regione italiana a statuto speciale di 1 592 730 abitanti con capoluogo Cagliari, la cui denominazione bilingue utilizzata nella comunicazione ufficiale è Regione Autonoma della Sardegna / Regione Autònoma de Sardigna."
# we can specify candidate labels in Italian:
candidate_labels = ["geografia", "politica", "macchine", "cibo", "moda"]
classifier(sequence_to_classify, candidate_labels)
# {'labels': ['geografia', 'moda', 'politica', 'macchine', 'cibo'],
# 'scores': [0.38871392607688904, 0.22633370757102966, 0.19398456811904907, 0.13735772669315338, 0.13708525896072388]}
```
The default hypothesis template is the English `This text is {}`. With this model, better results are achieved when providing a translated template:
```python
sequence_to_classify = "La Sardegna è una regione italiana a statuto speciale di 1 592 730 abitanti con capoluogo Cagliari, la cui denominazione bilingue utilizzata nella comunicazione ufficiale è Regione Autonoma della Sardegna / Regione Autònoma de Sardigna."
candidate_labels = ["geografia", "politica", "macchine", "cibo", "moda"]
hypothesis_template = "si parla di {}"
classifier(sequence_to_classify, candidate_labels, hypothesis_template=hypothesis_template)
# 'scores': [0.6068345904350281, 0.34715887904167175, 0.32433947920799255, 0.3068877160549164, 0.18744681775569916]
```
#### With manual PyTorch
```python
# pose the sequence as an NLI premise and a label as the hypothesis
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
nli_model = AutoModelForSequenceClassification.from_pretrained('Jiva/xlm-roberta-large-it-mnli').to(device)
tokenizer = AutoTokenizer.from_pretrained('Jiva/xlm-roberta-large-it-mnli')

premise = sequence_to_classify
label = 'geografia'  # one of the candidate labels from the example above
hypothesis = f'si parla di {label}.'

# run through model pre-trained on MNLI
x = tokenizer.encode(premise, hypothesis, return_tensors='pt',
                     truncation='only_first')
logits = nli_model(x.to(device))[0]

# we throw away "neutral" (dim 1) and take the probability of
# "entailment" (2) as the probability of the label being true
entail_contradiction_logits = logits[:,[0,2]]
probs = entail_contradiction_logits.softmax(dim=1)
prob_label_is_true = probs[:,1]
```
## Training
## Version 0.1
The model has now been retrained on the full training set. Around 1000 sentence pairs were removed from the set because their translation was botched by the translation model.
| metric | value |
|----------------- |------- |
| learning_rate | 4e-6 |
| optimizer | AdamW |
| batch_size | 80 |
| mcc | 0.77 |
| train_loss | 0.34 |
| eval_loss | 0.40 |
| stopped_at_step | 9754 |
## Version 0.0
This model was pre-trained on a set of 100 languages, as described in
[the original paper](https://arxiv.org/abs/1911.02116). It was then fine-tuned on the task of NLI on an Italian translation of the MNLI dataset (85% of the train set only so far). The model used for translating the texts is Helsinki-NLP/opus-mt-en-it, with a max output sequence length of 120. The model has been trained for 1 epoch with learning rate 4e-6 and batch size 80; currently it scores 82% accuracy on the remaining 15% of the training set.
nateraw/nu-wave-x2 | nateraw | null | 3 | 0 | pytorch-lightning | 1 | audio-to-audio | false | false | false | bsd-3-clause | ['en'] | ['vctk'] | null | 3 | 0 | 0 | 3 | 0 | 0 | 0 | ['pytorch-lightning', 'audio-to-audio'] | false | true | true | 1,797 | false |
# nu-wave-x2
## Model description
NU-Wave: A Diffusion Probabilistic Model for Neural Audio Upsampling
- [GitHub Repo](https://github.com/mindslab-ai/nuwave)
- [Paper](https://arxiv.org/pdf/2104.02321.pdf)
This model was trained by contributor [Frederico S. Oliveira](https://huggingface.co/freds0), who graciously [provided the checkpoint](https://github.com/mindslab-ai/nuwave/issues/18) in the original author's GitHub repo.
This model was trained using source code written by Junhyeok Lee and Seungu Han under the BSD 3.0 License. All credit goes to them for this work.
This model takes in audio at 24kHz and upsamples it to 48kHz.
## Intended uses & limitations
#### How to use
You can try out this model here: [](https://colab.research.google.com/gist/nateraw/bd78af284ef78a960e18a75cb13deab1/nu-wave-x2.ipynb)
#### Limitations and bias
Provide examples of latent issues and potential remediations.
## Training data
Describe the data you used to train the model.
If you initialized it with pre-trained weights, add a link to the pre-trained model card or repository with description of the pre-training data.
## Training procedure
Preprocessing, hardware used, hyperparameters...
## Eval results
You can check out the authors' results at [their project page](https://mindslab-ai.github.io/nuwave/). The project page contains many samples of upsampled audio from the authors' models.
### BibTeX entry and citation info
```bibtex
@inproceedings{lee21nuwave,
author={Junhyeok Lee and Seungu Han},
title={{NU-Wave: A Diffusion Probabilistic Model for Neural Audio Upsampling}},
year=2021,
booktitle={Proc. Interspeech 2021},
pages={1634--1638},
doi={10.21437/Interspeech.2021-36}
}
``` | 916469c3f6e883ff830063b6b438823a |
sayakpaul/glpn-kitti-finetuned-diode-221214-123047 | sayakpaul | glpn | 7 | 1 | transformers | 0 | depth-estimation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['vision', 'depth-estimation', 'generated_from_trainer'] | true | true | true | 4,649 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# glpn-kitti-finetuned-diode-221214-123047
This model is a fine-tuned version of [vinvino02/glpn-kitti](https://huggingface.co/vinvino02/glpn-kitti) on the diode-subset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3497
- Mae: 0.2847
- Rmse: 0.3977
- Abs Rel: 0.3477
- Log Mae: 0.1203
- Log Rmse: 0.1726
- Delta1: 0.5217
- Delta2: 0.8246
- Delta3: 0.9436
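A minimal inference sketch with the `depth-estimation` pipeline (the image path is a hypothetical example):
```python
from transformers import pipeline
from PIL import Image

depth_estimator = pipeline(
    "depth-estimation",
    model="sayakpaul/glpn-kitti-finetuned-diode-221214-123047",
)
result = depth_estimator(Image.open("example.jpg"))
depth_map = result["depth"]  # PIL image holding the predicted depth map
```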
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 48
- seed: 2022
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.15
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae | Rmse | Abs Rel | Log Mae | Log Rmse | Delta1 | Delta2 | Delta3 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:-------:|:--------:|:------:|:------:|:------:|
| 0.6103 | 1.0 | 72 | 0.4449 | 0.3914 | 0.5513 | 0.4625 | 0.1615 | 0.2186 | 0.3918 | 0.6910 | 0.8549 |
| 0.3762 | 2.0 | 144 | 0.4095 | 0.3583 | 0.4876 | 0.4281 | 0.1505 | 0.2015 | 0.4065 | 0.7121 | 0.8901 |
| 0.341 | 3.0 | 216 | 0.3768 | 0.3046 | 0.4061 | 0.4016 | 0.1313 | 0.1840 | 0.4757 | 0.7938 | 0.9309 |
| 0.291 | 4.0 | 288 | 0.3853 | 0.3227 | 0.4495 | 0.3724 | 0.1360 | 0.1869 | 0.4646 | 0.7680 | 0.9127 |
| 0.2861 | 5.0 | 360 | 0.3786 | 0.3151 | 0.4257 | 0.4065 | 0.1344 | 0.1876 | 0.4597 | 0.7785 | 0.9329 |
| 0.2539 | 6.0 | 432 | 0.3687 | 0.3158 | 0.4546 | 0.3329 | 0.1316 | 0.1821 | 0.4732 | 0.7869 | 0.9138 |
| 0.2199 | 7.0 | 504 | 0.3705 | 0.3122 | 0.4479 | 0.3378 | 0.1312 | 0.1820 | 0.4784 | 0.7888 | 0.9189 |
| 0.1728 | 8.0 | 576 | 0.3578 | 0.2895 | 0.4008 | 0.3675 | 0.1235 | 0.1766 | 0.5101 | 0.8178 | 0.9420 |
| 0.1877 | 9.0 | 648 | 0.3589 | 0.2846 | 0.3846 | 0.3721 | 0.1235 | 0.1764 | 0.5144 | 0.8170 | 0.9403 |
| 0.1541 | 10.0 | 720 | 0.3521 | 0.2831 | 0.3997 | 0.3283 | 0.1201 | 0.1712 | 0.5241 | 0.8260 | 0.9422 |
| 0.1414 | 11.0 | 792 | 0.3460 | 0.2735 | 0.3772 | 0.3419 | 0.1173 | 0.1691 | 0.5409 | 0.8360 | 0.9469 |
| 0.1643 | 12.0 | 864 | 0.3530 | 0.2878 | 0.4100 | 0.3313 | 0.1214 | 0.1736 | 0.5249 | 0.8214 | 0.9344 |
| 0.1724 | 13.0 | 936 | 0.3606 | 0.2995 | 0.4249 | 0.3459 | 0.1255 | 0.1775 | 0.5057 | 0.8069 | 0.9323 |
| 0.1514 | 14.0 | 1008 | 0.3477 | 0.2832 | 0.3881 | 0.3596 | 0.1206 | 0.1726 | 0.5174 | 0.8253 | 0.9437 |
| 0.1535 | 15.0 | 1080 | 0.3535 | 0.2961 | 0.4242 | 0.3412 | 0.1231 | 0.1753 | 0.5186 | 0.8080 | 0.9332 |
| 0.1233 | 16.0 | 1152 | 0.3508 | 0.2896 | 0.4104 | 0.3391 | 0.1213 | 0.1727 | 0.5225 | 0.8165 | 0.9398 |
| 0.116 | 17.0 | 1224 | 0.3519 | 0.2874 | 0.3989 | 0.3533 | 0.1215 | 0.1731 | 0.5200 | 0.8179 | 0.9407 |
| 0.1532 | 18.0 | 1296 | 0.3532 | 0.2965 | 0.4200 | 0.3459 | 0.1236 | 0.1747 | 0.5147 | 0.8035 | 0.9353 |
| 0.1179 | 19.0 | 1368 | 0.3497 | 0.2828 | 0.3896 | 0.3557 | 0.1204 | 0.1728 | 0.5200 | 0.8260 | 0.9457 |
| 0.1326 | 20.0 | 1440 | 0.3467 | 0.2787 | 0.3848 | 0.3475 | 0.1185 | 0.1704 | 0.5257 | 0.8330 | 0.9479 |
| 0.1069 | 21.0 | 1512 | 0.3471 | 0.2807 | 0.3922 | 0.3418 | 0.1187 | 0.1707 | 0.5288 | 0.8297 | 0.9452 |
| 0.1049 | 22.0 | 1584 | 0.3474 | 0.2864 | 0.4048 | 0.3387 | 0.1199 | 0.1717 | 0.5227 | 0.8251 | 0.9428 |
| 0.103 | 23.0 | 1656 | 0.3483 | 0.2840 | 0.3991 | 0.3416 | 0.1196 | 0.1717 | 0.5254 | 0.8269 | 0.9431 |
| 0.1184 | 24.0 | 1728 | 0.3473 | 0.2839 | 0.3960 | 0.3450 | 0.1198 | 0.1717 | 0.5223 | 0.8251 | 0.9443 |
| 0.1258 | 25.0 | 1800 | 0.3497 | 0.2847 | 0.3977 | 0.3477 | 0.1203 | 0.1726 | 0.5217 | 0.8246 | 0.9436 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu116
- Tokenizers 0.13.2
| cd92042ef8b601f605f707e75fde1c8c |
junnyu/flash_base_wwm_cluecorpussmall | junnyu | flash | 8 | 2 | transformers | 0 | fill-mask | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 8,016 | false |
# PS: The results are not that good; just try it out for fun. The final wwm-MLM accuracy is around 55.5.
# CLUENER NER experiments (GlobalPointer results are about the same; the softmax results are much worse)
```python
# flash base + globalpointer
04/08/2022 10:53:34 - INFO - __main__ - ADDRESS = Score(f1=0.607703, precision=0.64939, recall=0.571046, tp=213, pred=328, gold=373)
04/08/2022 10:53:34 - INFO - __main__ - BOOK = Score(f1=0.8125, precision=0.873134, recall=0.75974, tp=117, pred=134, gold=154)
04/08/2022 10:53:34 - INFO - __main__ - COMPANY = Score(f1=0.818304, precision=0.832877, recall=0.804233, tp=304, pred=365, gold=378)
04/08/2022 10:53:34 - INFO - __main__ - GAME = Score(f1=0.854305, precision=0.834951, recall=0.874576, tp=258, pred=309, gold=295)
04/08/2022 10:53:34 - INFO - __main__ - GOVERNMENT = Score(f1=0.823529, precision=0.775, recall=0.878543, tp=217, pred=280, gold=247)
04/08/2022 10:53:34 - INFO - __main__ - MOVIE = Score(f1=0.810997, precision=0.842857, recall=0.781457, tp=118, pred=140, gold=151)
04/08/2022 10:53:34 - INFO - __main__ - NAME = Score(f1=0.874042, precision=0.890625, recall=0.858065, tp=399, pred=448, gold=465)
04/08/2022 10:53:34 - INFO - __main__ - ORGANIZATION = Score(f1=0.813986, precision=0.836207, recall=0.792916, tp=291, pred=348, gold=367)
04/08/2022 10:53:34 - INFO - __main__ - POSITION = Score(f1=0.78478, precision=0.808824, recall=0.762125, tp=330, pred=408, gold=433)
04/08/2022 10:53:34 - INFO - __main__ - SCENE = Score(f1=0.683805, precision=0.738889, recall=0.636364, tp=133, pred=180, gold=209)
04/08/2022 10:53:34 - INFO - __main__ - micro_f1 = Score(f1=0.79175, precision=0.809524, recall=0.77474, tp=2380, pred=2940, gold=3072)
04/08/2022 10:53:34 - INFO - __main__ - macro_f1 = Score(f1=0.788395, precision=0.808275, recall=0.771906, tp=0, pred=0, gold=0)
04/08/2022 10:53:34 - INFO - __main__ - mean_f1 = 0.790072
# flash base + softmax
04/08/2022 11:10:44 - INFO - __main__ - ADDRESS = Score(f1=0.568987, precision=0.522422, recall=0.624665, tp=233, pred=446, gold=373)
04/08/2022 11:10:44 - INFO - __main__ - BOOK = Score(f1=0.750789, precision=0.730061, recall=0.772727, tp=119, pred=163, gold=154)
04/08/2022 11:10:44 - INFO - __main__ - COMPANY = Score(f1=0.75528, precision=0.711944, recall=0.804233, tp=304, pred=427, gold=378)
04/08/2022 11:10:44 - INFO - __main__ - GAME = Score(f1=0.811502, precision=0.767372, recall=0.861017, tp=254, pred=331, gold=295)
04/08/2022 11:10:44 - INFO - __main__ - GOVERNMENT = Score(f1=0.738636, precision=0.69395, recall=0.789474, tp=195, pred=281, gold=247)
04/08/2022 11:10:44 - INFO - __main__ - MOVIE = Score(f1=0.74359, precision=0.720497, recall=0.768212, tp=116, pred=161, gold=151)
04/08/2022 11:10:44 - INFO - __main__ - NAME = Score(f1=0.831967, precision=0.794521, recall=0.873118, tp=406, pred=511, gold=465)
04/08/2022 11:10:44 - INFO - __main__ - ORGANIZATION = Score(f1=0.754054, precision=0.747989, recall=0.760218, tp=279, pred=373, gold=367)
04/08/2022 11:10:44 - INFO - __main__ - POSITION = Score(f1=0.742729, precision=0.720174, recall=0.766744, tp=332, pred=461, gold=433)
04/08/2022 11:10:44 - INFO - __main__ - SCENE = Score(f1=0.628842, precision=0.621495, recall=0.636364, tp=133, pred=214, gold=209)
04/08/2022 11:10:44 - INFO - __main__ - micro_f1 = Score(f1=0.736335, precision=0.703979, recall=0.77181, tp=2371, pred=3368, gold=3072)
04/08/2022 11:10:44 - INFO - __main__ - macro_f1 = Score(f1=0.732638, precision=0.703043, recall=0.765677, tp=0, pred=0, gold=0)
04/08/2022 11:10:44 - INFO - __main__ - mean_f1 = 0.734486
# bert base + globalpointer
04/08/2022 11:22:48 - INFO - __main__ - ADDRESS = Score(f1=0.641558, precision=0.622166, recall=0.662198, tp=247, pred=397, gold=373)
04/08/2022 11:22:48 - INFO - __main__ - BOOK = Score(f1=0.813115, precision=0.821192, recall=0.805195, tp=124, pred=151, gold=154)
04/08/2022 11:22:48 - INFO - __main__ - COMPANY = Score(f1=0.823684, precision=0.819372, recall=0.828042, tp=313, pred=382, gold=378)
04/08/2022 11:22:48 - INFO - __main__ - GAME = Score(f1=0.841762, precision=0.811321, recall=0.874576, tp=258, pred=318, gold=295)
04/08/2022 11:22:48 - INFO - __main__ - GOVERNMENT = Score(f1=0.827324, precision=0.778571, recall=0.882591, tp=218, pred=280, gold=247)
04/08/2022 11:22:48 - INFO - __main__ - MOVIE = Score(f1=0.82392, precision=0.826667, recall=0.821192, tp=124, pred=150, gold=151)
04/08/2022 11:22:48 - INFO - __main__ - NAME = Score(f1=0.861345, precision=0.840164, recall=0.883621, tp=410, pred=488, gold=464)
04/08/2022 11:22:48 - INFO - __main__ - ORGANIZATION = Score(f1=0.804911, precision=0.806011, recall=0.803815, tp=295, pred=366, gold=367)
04/08/2022 11:22:48 - INFO - __main__ - POSITION = Score(f1=0.805046, precision=0.799544, recall=0.810624, tp=351, pred=439, gold=433)
04/08/2022 11:22:48 - INFO - __main__ - SCENE = Score(f1=0.702703, precision=0.722222, recall=0.684211, tp=143, pred=198, gold=209)
04/08/2022 11:22:48 - INFO - __main__ - micro_f1 = Score(f1=0.795833, precision=0.783528, recall=0.808531, tp=2483, pred=3169, gold=3071)
04/08/2022 11:22:48 - INFO - __main__ - macro_f1 = Score(f1=0.794537, precision=0.784723, recall=0.805606, tp=0, pred=0, gold=0)
04/08/2022 11:22:48 - INFO - __main__ - mean_f1 = 0.795185
```
# cmeee + globalpointer
```python
04/08/2022 11:50:41 - INFO - __main__ - bod = Score(f1=0.639522, precision=0.642318, recall=0.63675, tp=3746, pred=5832, gold=5883)
04/08/2022 11:50:41 - INFO - __main__ - dep = Score(f1=0.473988, precision=0.650794, recall=0.372727, tp=41, pred=63, gold=110)
04/08/2022 11:50:41 - INFO - __main__ - dis = Score(f1=0.716959, precision=0.704479, recall=0.729889, tp=3602, pred=5113, gold=4935)
04/08/2022 11:50:41 - INFO - __main__ - dru = Score(f1=0.756328, precision=0.829329, recall=0.695139, tp=1001, pred=1207, gold=1440)
04/08/2022 11:50:41 - INFO - __main__ - equ = Score(f1=0.518703, precision=0.638037, recall=0.436975, tp=104, pred=163, gold=238)
04/08/2022 11:50:41 - INFO - __main__ - ite = Score(f1=0.322533, precision=0.503448, recall=0.23727, tp=219, pred=435, gold=923)
04/08/2022 11:50:41 - INFO - __main__ - mic = Score(f1=0.746967, precision=0.75614, recall=0.738014, tp=431, pred=570, gold=584)
04/08/2022 11:50:41 - INFO - __main__ - pro = Score(f1=0.611138, precision=0.614138, recall=0.608167, tp=1251, pred=2037, gold=2057)
04/08/2022 11:50:41 - INFO - __main__ - sym = Score(f1=0.47969, precision=0.495738, recall=0.464649, tp=1919, pred=3871, gold=4130)
04/08/2022 11:50:41 - INFO - __main__ - micro_f1 = Score(f1=0.622061, precision=0.638329, recall=0.606601, tp=12314, pred=19291, gold=20300)
04/08/2022 11:50:41 - INFO - __main__ - macro_f1 = Score(f1=0.585092, precision=0.648269, recall=0.54662, tp=0, pred=0, gold=0)
04/08/2022 11:50:41 - INFO - __main__ - mean_f1 = 0.603576
```
# install
- https://github.com/JunnYu/FLASHQuad_pytorch
# usage
```python
import torch
from flash import FLASHForMaskedLM
from transformers import BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("junnyu/flash_base_wwm_cluecorpussmall")
model = FLASHForMaskedLM.from_pretrained("junnyu/flash_base_wwm_cluecorpussmall")
model.eval()
text = "天气预报说今天的天[MASK]很好,那么我[MASK]一起去公园玩吧!"
inputs = tokenizer(text, return_tensors="pt", padding="max_length", max_length=512, return_token_type_ids=False)  # must be 512 here, otherwise the results may be wrong
with torch.no_grad():
    pt_outputs = model(**inputs).logits[0]
pt_outputs_sentence = "pytorch: "
for i, id in enumerate(tokenizer.encode(text)):
    if id == tokenizer.mask_token_id:
        val, idx = pt_outputs[i].softmax(-1).topk(k=5)
        tokens = tokenizer.convert_ids_to_tokens(idx)
        new_tokens = []
        for v, t in zip(val.cpu(), tokens):
            new_tokens.append(f"{t}+{round(v.item(), 4)}")
        pt_outputs_sentence += "[" + "||".join(new_tokens) + "]"
    else:
        pt_outputs_sentence += "".join(
            tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))
print(pt_outputs_sentence)
# pytorch: 天气预报说今天的天[气+0.994||天+0.0015||空+0.0014||晴+0.0005||阳+0.0003]很好,那么我[们+0.9563||就+0.0381||也+0.0032||俩+0.0004||来+0.0002]一起去公园玩吧!
``` | dfaabbc8556f03afdf600dca6a3f3bdc |
technillogue/waifu-diffusion | technillogue | null | 17 | 4 | diffusers | 4 | text-to-image | false | false | false | creativeml-openrail-m | ['en'] | null | null | 3 | 3 | 0 | 0 | 0 | 0 | 0 | ['stable-diffusion', 'text-to-image'] | false | true | true | 3,125 | false |
# waifu-diffusion v1.3 - Diffusion for Weebs
waifu-diffusion is a latent text-to-image diffusion model that has been conditioned on high-quality anime images through fine-tuning.
<img src=https://i.imgur.com/Y5Tmw1S.png width=75% height=75%>
[Original Weights](https://huggingface.co/hakurei/waifu-diffusion-v1-3)
# Gradio & Colab
We also support a [Gradio](https://github.com/gradio-app/gradio) Web UI and Colab with Diffusers to run Waifu Diffusion:
[](https://huggingface.co/spaces/hakurei/waifu-diffusion-demo)
[](https://colab.research.google.com/drive/1_8wPN7dJO746QXsFnB09Uq2VGgSRFuYE#scrollTo=1HaCauSq546O)
## Model Description
[See here for a full model overview.](https://gist.github.com/harubaru/f727cedacae336d1f7877c4bbe2196e1)
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
## Downstream Uses
This model can be used for entertainment purposes and as a generative art assistant.
## Example Code
```python
import torch
from torch import autocast
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained(
    'waifu-diffusion',
    torch_dtype=torch.float32
).to('cuda')
prompt = "1girl, aqua eyes, baseball cap, blonde hair, closed mouth, earrings, green background, hat, hoop earrings, jewelry, looking at viewer, shirt, short hair, simple background, solo, upper body, yellow shirt"
with autocast("cuda"):
    image = pipe(prompt, guidance_scale=6)["sample"][0]
image.save("test.png")
```
## Team Members and Acknowledgements
This project would not have been possible without the incredible work by the [CompVis Researchers](https://ommer-lab.com/).
- [Anthony Mercurio](https://github.com/harubaru)
- [Salt](https://github.com/sALTaccount/)
- [Sta @ Bit192](https://twitter.com/naclbbr)
In order to reach us, you can join our [Discord server](https://discord.gg/touhouai).
[](https://discord.gg/touhouai) | f0b4e36342955d0424d213eeb6d06aa5 |
huggingnft/cyberkongz | huggingnft | null | 5 | 50 | transformers | 2 | unconditional-image-generation | false | false | false | mit | null | ['huggingnft/cyberkongz'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['huggingnft', 'nft', 'huggan', 'gan', 'image', 'images', 'unconditional-image-generation'] | false | true | true | 2,182 | false |
# Hugging NFT: cyberkongz
## Disclaimer
All rights belong to their owners. Models and datasets can be removed from the site at the request of the copyright
holder.
## Model description
LightWeight GAN model for unconditional generation.
NFT collection available [here](https://opensea.io/collection/cyberkongz).
Dataset is available [here](https://huggingface.co/datasets/huggingnft/cyberkongz).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
Project repository: [link](https://github.com/AlekseyKorshuk/huggingnft).
[](https://github.com/AlekseyKorshuk/huggingnft)
## Intended uses & limitations
#### How to use
Check project repository: [link](https://github.com/AlekseyKorshuk/huggingnft).
#### Limitations and bias
Check project repository: [link](https://github.com/AlekseyKorshuk/huggingnft).
## Training data
Dataset is available [here](https://huggingface.co/datasets/huggingnft/cyberkongz).
## Training procedure
Training script is available [here](https://github.com/AlekseyKorshuk/huggingnft).
## Generated Images
Check results with Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
### BibTeX entry and citation info
```bibtex
@InProceedings{huggingnft,
    author={Aleksey Korshuk},
    year=2022
}
```
| edf1a8438b0ba191f75c091e80b57767 |
laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg | laion | null | 10 | 203 | open_clip | 2 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 11,487 | false | # Model Card for CLIP-convnext_large_d.laion2B-s26B-b102K-augreg
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Details](#training-details)
4. [Evaluation](#evaluation)
5. [Acknowledgements](#acknowledgements)
6. [Citation](#citation)
# Model Details
## Model Description
A series of CLIP [ConvNeXt-Large](https://arxiv.org/abs/2201.03545) (w/ extra text depth, vision MLP head) models trained on LAION-2B (english), a subset of [LAION-5B](https://arxiv.org/abs/2210.08402), using [OpenCLIP](https://github.com/mlfoundations/open_clip).
Goals:
* Explore an alternative to ViT and ResNet (w/ AttentionPooling) CLIP models that scales well with model size and image resolution
Firsts:
* First known ConvNeXt CLIP models trained at scale in the range of CLIP ViT-L/16, ViT-L/14, and RN50x16
* First released model weights exploring increase of augmentation + regularization for image tower via adding (greater scale range of RRC, random erasing, stochastic depth)
The models utilize:
* the [timm](https://github.com/rwightman/pytorch-image-models) ConvNeXt-Large model (`convnext_large`) as the image tower
* a MLP (`fc - gelu - drop - fc`) head in vision tower instead of the single projection of other CLIP models
* a text tower with same width but 4 layers more depth than ViT-L / RN50x16 models (depth 16, embed dim 768).
The models are trained at 256x256 (working on 384 variants) image resolution.
At 256x256, the ConvNeXt-Large-D used roughly half the training FLOPs of the previous ViT-L/14 model trained on LAION-2B while reaching higher accuracy. The L/14 model has ~1.65x more GMACs, 1.45x more activations, and 1.22x more parameters. The ConvNeXt was trained with 26B samples seen and the L/14 with 34B.
| Model | Dataset | Resolution | AugReg | Top-1 ImageNet Zero-Shot (%) |
| ----- | ------- | ---------- | ------------ | --------- |
| [convnext_large_d.laion2b_s26b_b102k-augreg](https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg) | LAION-2B | 256x256 | RRC (0.33, 1.0), RE (0.35), SD (0.1), D(0.1) | 75.9 |
| [convnext_large_d_320.laion2b_s29b_b131k-ft](https://huggingface.co/laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft) | LAION-2B | 320x320 | RRC (0.5, 1.0), RE (0.4), SD (0.1), D(0.0) | 76.6 |
| [convnext_large_d_320.laion2b_s29b_b131k-ft-soup](https://huggingface.co/laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup) | LAION-2B | 320x320 | RRC (0.5, 1.0), RE (0.4), SD (0.1), D(0.0) | 76.9 |
RRC = Random Resize Crop (crop pcts), RE = Random Erasing (prob), SD = Stochastic Depth (prob) -- image tower only, D = Dropout (prob) -- image tower head only
LAION-A = LAION Aesthetic, an ~900M sample subset of LAION-2B with pHash dedupe and aesthetic score filtering.
Model training done by Ross Wightman on the [stability.ai](https://stability.ai/) cluster.
# Uses
As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such model.
The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the LAION-5B blog (https://laion.ai/blog/laion-5b/) and upcoming paper include additional discussion as it relates specifically to the training dataset.
## Direct Use
Zero-shot image classification, image and text retrieval, among others.
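A minimal zero-shot classification sketch using [OpenCLIP](https://github.com/mlfoundations/open_clip). The model and pretrained tag strings below are assumptions based on this card's naming; check `open_clip.list_pretrained()` for the exact identifiers in your installed version, and the image path is a placeholder:

```python
import torch
from PIL import Image
import open_clip

# Assumed identifiers; verify against open_clip.list_pretrained().
model, _, preprocess = open_clip.create_model_and_transforms(
    "convnext_large_d", pretrained="laion2b_s26b_b102k_augreg"
)
tokenizer = open_clip.get_tokenizer("convnext_large_d")

image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # placeholder image path
text = tokenizer(["a diagram", "a dog", "a cat"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize embeddings, then compute scaled cosine similarities.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probs:", probs)
```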
## Downstream Use
Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others.
## Out-of-Scope Use
As per the OpenAI models,
**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.
Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.
Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
Further to the above notice, the LAION-5B dataset used in training of these models has additional considerations; see below.
# Training Details
## Training Data
This model was trained with one of the following datasets (see the table in the intro):
* LAION-2B - A 2 billion sample English subset of LAION-5B (https://laion.ai/blog/laion-5b/).
* LAION-Aesthetic - A 900M sample subset of LAION-2B with pHash dedupe and aesthetic score filtering
**IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from the publicly available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated. Keep in mind that the uncurated nature of the dataset means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a “safe” subset by filtering out samples based on the safety tags (using a customized trained NSFW classifier that we built). While this strongly reduces the chance for encountering potentially harmful content when viewing, we cannot entirely exclude the possibility for harmful content being still present in safe mode, so that the warning holds also there. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of benefits that come along with training large-scale models as well as pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. Providing our dataset openly, we however do not recommend using it for creating ready-to-go industrial products, as the basic research about general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress.
## Training Procedure
All models were trained with a global batch size of 102400 for 128 checkpoint intervals of 203.7M samples for a total of ~26B samples seen over training.
For 256x256 models, a slurm script w/ srun below was used on 16 8-GPU (A100 80GB) nodes (Stability).
```
/opt/slurm/sbin/srun --cpu_bind=v --accel-bind=gn python -m training.main \
--save-frequency 1 \
--name "convnext_large_256" \
--resume 'latest' \
    --train-data="pipe:aws s3 cp s3://mybucket/path/laion{00000..xxxxx}.tar -" \
--train-num-samples 203666042 \
--dataset-type webdataset \
--precision amp_bfloat16 \
--beta2 0.98 \
--warmup 10000 \
--batch-size=800 \
--epochs=128 \
--dataset-resampled \
--aug-cfg use_timm=True scale='(0.33, 1.0)' re_prob=0.35 \
--clip-grad-norm 5.0 \
--lr 1.667e-3 \
--workers=6 \
--model "convnext_large_d" \
--seed 0 \
--ddp-static-graph \
--local-loss \
--gather-with-grad \
--grad-checkpointing
```
# Evaluation
Evaluation done with code in the [LAION CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark).
## Testing Data, Factors & Metrics
### Testing Data
The testing is performed with VTAB+ (A combination of VTAB (https://arxiv.org/abs/1910.04867) w/ additional robustness datasets) for classification and COCO and Flickr for retrieval.
## Results
The models achieve between 75.9 and 76.9 top-1 zero-shot accuracy on ImageNet-1k (see the table above).

An initial round of benchmarks have been performed on a wider range of datasets, to be viewable at https://github.com/LAION-AI/CLIP_benchmark/blob/main/benchmark/results.ipynb
# Acknowledgements
Acknowledging [stability.ai](https://stability.ai/) for compute used to train this model.
# Citation
**BibTeX:**
LAION-5B
```bibtex
@inproceedings{schuhmann2022laionb,
title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
author={Christoph Schuhmann and
Romain Beaumont and
Richard Vencu and
Cade W Gordon and
Ross Wightman and
Mehdi Cherti and
Theo Coombes and
Aarush Katta and
Clayton Mullis and
Mitchell Wortsman and
Patrick Schramowski and
Srivatsa R Kundurthy and
Katherine Crowson and
Ludwig Schmidt and
Robert Kaczmarczyk and
Jenia Jitsev},
booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2022},
url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```
OpenCLIP software
```bibtex
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
```
OpenAI CLIP paper
```bibtex
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
```
```bibtex
@Article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
``` | 842787f79dbfd209e6866e58e3ca08e1 |
EnglishVoice/t5-base-keywords-to-headline | EnglishVoice | t5 | 9 | 35 | transformers | 1 | text2text-generation | true | true | true | apache-2.0 | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text2text-generation', 'paraphrase-generation'] | false | true | true | 2,675 | false |
### About the model
The model has been trained on [a dataset containing 138927 article titles](https://www.englishvoice.ai/p/keywords-and-titles/ "a dataset containing 138927 article titles") along with their keywords.
The purpose of the model is to generate suggestions of article headlines, given a keyword or multiple keywords.
### Generation examples
| Input | Output |
| :------------ | :------------ |
| weight loss | The Last Weight Loss Plan: Lose Weight, Feel Great, and Get in Shape <br/>How to Lose Weight Without Giving Up Your Favorite Foods <br/> I Lost Weight and Finally Feel Good About My Body |
| property rental, property management | Property rental: The new way to make money <br/> We take the hassle out of property rental <br/> Is property management your new best friend? |
| diabetic diet plan | A diabetic diet plan that actually works! <br/> Lose weight, feel great, and live better with our diabetic diet plan! <br/> Diet has never been so tasty: Our diabetic diet plan puts you to the test! |
You can supply multiple keywords by separating them with commas. Higher temperature settings result in more creative headlines; we recommend testing first with the temperature set to 1.5.
### The dataset
The dataset was developed by English Voice AI Labs. You can download it from our website:
[https://www.EnglishVoice.ai/](https://www.EnglishVoice.ai/ "https://www.EnglishVoice.ai/")
### Sample code
Python code for generating headlines:
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = T5ForConditionalGeneration.from_pretrained("EnglishVoice/t5-base-keywords-to-headline")
tokenizer = T5Tokenizer.from_pretrained("EnglishVoice/t5-base-keywords-to-headline")
model = model.to(device)

keywords = "weight loss, weight pills"
text = "headline: " + keywords

encoding = tokenizer.encode_plus(text, return_tensors="pt")
input_ids = encoding["input_ids"].to(device)
attention_masks = encoding["attention_mask"].to(device)

beam_outputs = model.generate(
    input_ids=input_ids,
    attention_mask=attention_masks,
    do_sample=True,
    num_return_sequences=5,
    temperature=0.95,
    early_stopping=True,
    top_k=50,
    top_p=0.95,
)

for i in range(len(beam_outputs)):
    result = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
    print(result)
```
Sample result:
I Am Losing Weight and I Love It!
New Weight Loss Pill Helps You Get the Body You Want!
I Lost Weight By Taking Pills!
The Truth About Weight Loss Pills!
The Best Weight Loss Pills Money Can Buy!
| be189bc8c791d126a8c2169ff513abaa |
HarrisDePerceptron/xls-r-300m-ur | HarrisDePerceptron | wav2vec2 | 29 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['ur'] | ['mozilla-foundation/common_voice_8_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'ur', 'robust-speech-event', 'hf-asr-leaderboard'] | true | true | true | 4,415 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-ur
This model is a fine-tuned version of [HarrisDePerceptron/xls-r-300m-ur](https://huggingface.co/HarrisDePerceptron/xls-r-300m-ur) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - UR dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0517
- WER: 0.5151291512915129
- CER: 0.23689640940982254
## Model description
More information needed
## Intended uses & limitations
More information needed
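A minimal inference sketch using the `transformers` automatic-speech-recognition pipeline; the audio path below is a placeholder for a 16 kHz mono Urdu recording:

```python
from transformers import pipeline

# "sample.wav" is a placeholder; supply your own 16 kHz mono Urdu audio file.
asr = pipeline("automatic-speech-recognition", model="HarrisDePerceptron/xls-r-300m-ur")
print(asr("sample.wav")["text"])
```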
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.2991 | 1.96 | 100 | 0.9769 | 0.6627 |
| 1.3415 | 3.92 | 200 | 0.9701 | 0.6594 |
| 1.2998 | 5.88 | 300 | 0.9678 | 0.6668 |
| 1.2881 | 7.84 | 400 | 0.9650 | 0.6613 |
| 1.2369 | 9.8 | 500 | 0.9392 | 0.6502 |
| 1.2293 | 11.76 | 600 | 0.9536 | 0.6480 |
| 1.1709 | 13.73 | 700 | 0.9265 | 0.6402 |
| 1.1492 | 15.69 | 800 | 0.9636 | 0.6506 |
| 1.1044 | 17.65 | 900 | 0.9305 | 0.6351 |
| 1.0704 | 19.61 | 1000 | 0.9329 | 0.6280 |
| 1.0039 | 21.57 | 1100 | 0.9413 | 0.6295 |
| 0.9756 | 23.53 | 1200 | 0.9718 | 0.6185 |
| 0.9633 | 25.49 | 1300 | 0.9731 | 0.6133 |
| 0.932 | 27.45 | 1400 | 0.9659 | 0.6199 |
| 0.9252 | 29.41 | 1500 | 0.9766 | 0.6196 |
| 0.9172 | 31.37 | 1600 | 1.0052 | 0.6199 |
| 0.8733 | 33.33 | 1700 | 0.9955 | 0.6203 |
| 0.868 | 35.29 | 1800 | 1.0069 | 0.6240 |
| 0.8547 | 37.25 | 1900 | 0.9783 | 0.6258 |
| 0.8451 | 39.22 | 2000 | 0.9845 | 0.6052 |
| 0.8374 | 41.18 | 2100 | 0.9496 | 0.6137 |
| 0.8153 | 43.14 | 2200 | 0.9756 | 0.6122 |
| 0.8134 | 45.1 | 2300 | 0.9712 | 0.6096 |
| 0.8019 | 47.06 | 2400 | 0.9565 | 0.5970 |
| 0.7746 | 49.02 | 2500 | 0.9864 | 0.6096 |
| 0.7664 | 50.98 | 2600 | 0.9988 | 0.6092 |
| 0.7708 | 52.94 | 2700 | 1.0181 | 0.6255 |
| 0.7468 | 54.9 | 2800 | 0.9918 | 0.6148 |
| 0.7241 | 56.86 | 2900 | 1.0150 | 0.6018 |
| 0.7165 | 58.82 | 3000 | 1.0439 | 0.6063 |
| 0.7104 | 60.78 | 3100 | 1.0016 | 0.6037 |
| 0.6954 | 62.75 | 3200 | 1.0117 | 0.5970 |
| 0.6753 | 64.71 | 3300 | 1.0191 | 0.6037 |
| 0.6803 | 66.67 | 3400 | 1.0190 | 0.6033 |
| 0.661 | 68.63 | 3500 | 1.0284 | 0.6007 |
| 0.6597 | 70.59 | 3600 | 1.0060 | 0.5967 |
| 0.6398 | 72.55 | 3700 | 1.0372 | 0.6048 |
| 0.6105 | 74.51 | 3800 | 1.0048 | 0.6044 |
| 0.6164 | 76.47 | 3900 | 1.0398 | 0.6148 |
| 0.6354 | 78.43 | 4000 | 1.0272 | 0.6133 |
| 0.5952 | 80.39 | 4100 | 1.0364 | 0.6081 |
| 0.5814 | 82.35 | 4200 | 1.0418 | 0.6092 |
| 0.6079 | 84.31 | 4300 | 1.0277 | 0.5967 |
| 0.5748 | 86.27 | 4400 | 1.0362 | 0.6041 |
| 0.5624 | 88.24 | 4500 | 1.0427 | 0.6007 |
| 0.5767 | 90.2 | 4600 | 1.0370 | 0.5919 |
| 0.5793 | 92.16 | 4700 | 1.0442 | 0.6011 |
| 0.547 | 94.12 | 4800 | 1.0516 | 0.5982 |
| 0.5513 | 96.08 | 4900 | 1.0461 | 0.5989 |
| 0.5429 | 98.04 | 5000 | 1.0504 | 0.5996 |
| 0.5404 | 100.0 | 5100 | 1.0517 | 0.5967 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
| d0ba51d2760a56927d09af6008472856 |
d4niel92/xlm-roberta-base-finetuned-marc-en | d4niel92 | xlm-roberta | 12 | 1 | transformers | 0 | text-classification | true | false | false | mit | null | ['amazon_reviews_multi'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,274 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8976
- Mae: 0.4268
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.092 | 1.0 | 235 | 0.9514 | 0.5122 |
| 0.9509 | 2.0 | 470 | 0.8976 | 0.4268 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
| 3d2268c7dbe199226b914d203592c47e |
Jiqing/bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-squad | Jiqing | bert | 10 | 5 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad'] | null | 3 | 3 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,018 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-squad
This model is a fine-tuned version of [bert-large-uncased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
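A minimal extractive question-answering sketch with the `transformers` pipeline; the question and context strings are illustrative placeholders:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Jiqing/bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-squad",
)
# Illustrative inputs; replace with your own question/context pair.
result = qa(
    question="What was the model fine-tuned on?",
    context="The model was fine-tuned on the SQuAD dataset for extractive question answering.",
)
print(result["answer"], result["score"])
```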
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
| 55cc4f5bcfc26d6517be70862fecb299 |
Helsinki-NLP/opus-mt-en-fi | Helsinki-NLP | marian | 10 | 7,615 | transformers | 1 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 830 | false |
### opus-mt-en-fi
* source languages: en
* target languages: fi
* OPUS readme: [en-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-fi/README.md)
* dataset: opus+bt-news
* model: transformer
* pre-processing: normalization + SentencePiece
* download original weights: [opus+bt-news-2020-03-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-fi/opus+bt-news-2020-03-21.zip)
* test set translations: [opus+bt-news-2020-03-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fi/opus+bt-news-2020-03-21.test.txt)
* test set scores: [opus+bt-news-2020-03-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fi/opus+bt-news-2020-03-21.eval.txt)
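A minimal translation sketch using the `transformers` Marian classes; the input sentence is illustrative:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-fi"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# English input; the model generates the Finnish translation.
batch = tokenizer(["The weather is nice today."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```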
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newstest2019-enfi.en.fi | 25.7 | 0.578 |
| 83ff5a03d2ef5e0ea338de28cf675939 |
hakurei/lit-6B | hakurei | gptj | 10 | 2,043 | transformers | 12 | text-generation | true | false | false | mit | ['en'] | null | null | 1 | 0 | 0 | 1 | 0 | 0 | 0 | ['pytorch', 'causal-lm'] | false | true | true | 2,814 | false |
# Lit-6B - A Large Fine-tuned Model For Fictional Storytelling
Lit-6B is a GPT-J 6B model fine-tuned on 2GB of a diverse range of light novels, erotica, and annotated literature for the purpose of generating novel-like fictional text.
## Model Description
The model used for fine-tuning is [GPT-J](https://github.com/kingoflolz/mesh-transformer-jax), which is a 6 billion parameter auto-regressive language model trained on [The Pile](https://pile.eleuther.ai/).
## Training Data & Annotative Prompting
The data used in fine-tuning has been gathered from various sources such as the [Gutenberg Project](https://www.gutenberg.org/). The annotated fiction dataset has prepended tags to assist in generating towards a particular style. Here is an example prompt that shows how to use the annotations.
```
[ Title: The Dunwich Horror; Author: H. P. Lovecraft; Genre: Horror; Tags: 3rdperson, scary; Style: Dark ]
***
When a traveler in north central Massachusetts takes the wrong fork...
```
The annotations can be mixed and matched to help generate towards a specific style.
## Downstream Uses
This model can be used for entertainment purposes and as a creative writing assistant for fiction writers.
## Example Code
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained('hakurei/lit-6B')
tokenizer = AutoTokenizer.from_pretrained('hakurei/lit-6B')
prompt = '''[ Title: The Dunwich Horror; Author: H. P. Lovecraft; Genre: Horror ]
***
When a traveler'''
input_ids = tokenizer.encode(prompt, return_tensors='pt')
output = model.generate(input_ids, do_sample=True, temperature=1.0, top_p=0.9, repetition_penalty=1.2, max_length=len(input_ids[0])+100, pad_token_id=tokenizer.eos_token_id)
generated_text = tokenizer.decode(output[0])
print(generated_text)
```
An example output from this code produces a result that will look similar to:
```
[ Title: The Dunwich Horror; Author: H. P. Lovecraft; Genre: Horror ]
***
When a traveler comes to an unknown region, his thoughts turn inevitably towards the old gods and legends which cluster around its appearance. It is not that he believes in them or suspects their reality—but merely because they are present somewhere else in creation just as truly as himself, and so belong of necessity in any landscape whose features cannot be altogether strange to him. Moreover, man has been prone from ancient times to brood over those things most connected with the places where he dwells. Thus the Olympian deities who ruled Hyper
```
## Team members and Acknowledgements
This project would not have been possible without the computational resources graciously provided by the [TPU Research Cloud](https://sites.research.google/trc/)
- [Anthony Mercurio](https://github.com/harubaru)
- Imperishable_NEET | 051010a3a7e6841435eac4041dfeac91 |
scasutt/wav2vec2-large-xlsr-52_Swiss_German | scasutt | wav2vec2 | 16 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,523 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53_full_train
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the Swissdial dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2811
- Wer: 0.2909
## Model description
Wav2Vec2-XLSR-53 trained on augmented Swiss Dial dataset
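A minimal inference sketch using the `transformers` automatic-speech-recognition pipeline; the audio path is a placeholder for a 16 kHz mono recording:

```python
from transformers import pipeline

# "sample.wav" is a placeholder path; supply your own Swiss German audio.
asr = pipeline(
    "automatic-speech-recognition",
    model="scasutt/wav2vec2-large-xlsr-52_Swiss_German",
)
print(asr("sample.wav")["text"])
```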
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.7666 | 2.69 | 1000 | 0.4356 | 0.4954 |
| 0.7868 | 5.39 | 2000 | 0.2693 | 0.3180 |
| 0.6948 | 8.09 | 3000 | 0.2811 | 0.2909 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.1
- Tokenizers 0.12.1
| 8ca358967b750c9c1f14ab813c78b162 |
ViktorDo/DistilBERT-POWO_MGH_Lifecycle_Finetuned | ViktorDo | distilbert | 12 | 5 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,317 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBERT-POWO_MGH_Lifecycle_Finetuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0728
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0716 | 1.0 | 1625 | 0.0843 |
| 0.0695 | 2.0 | 3250 | 0.0701 |
| 0.0603 | 3.0 | 4875 | 0.0728 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
| ed6ae7e882ba3748498602764b14d901 |
austin/adr-ner | austin | deberta | 11 | 9 | transformers | 0 | token-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,626 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# adr-ner
This model is a fine-tuned version of [austin/Austin-MeDeBERTa](https://huggingface.co/austin/Austin-MeDeBERTa) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0434
- Precision: 0.7305
- Recall: 0.6934
- F1: 0.7115
- Accuracy: 0.9941
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 107 | 0.0630 | 0.0 | 0.0 | 0.0 | 0.9876 |
| No log | 2.0 | 214 | 0.0308 | 0.4282 | 0.3467 | 0.3832 | 0.9900 |
| No log | 3.0 | 321 | 0.0254 | 0.5544 | 0.5603 | 0.5573 | 0.9920 |
| No log | 4.0 | 428 | 0.0280 | 0.6430 | 0.5751 | 0.6071 | 0.9929 |
| 0.0465 | 5.0 | 535 | 0.0266 | 0.5348 | 0.7146 | 0.6118 | 0.9915 |
| 0.0465 | 6.0 | 642 | 0.0423 | 0.7632 | 0.5793 | 0.6587 | 0.9939 |
| 0.0465 | 7.0 | 749 | 0.0336 | 0.6957 | 0.6765 | 0.6860 | 0.9939 |
| 0.0465 | 8.0 | 856 | 0.0370 | 0.6876 | 0.6702 | 0.6788 | 0.9936 |
| 0.0465 | 9.0 | 963 | 0.0349 | 0.6555 | 0.7040 | 0.6789 | 0.9932 |
| 0.0044 | 10.0 | 1070 | 0.0403 | 0.6910 | 0.6808 | 0.6858 | 0.9938 |
| 0.0044 | 11.0 | 1177 | 0.0415 | 0.7140 | 0.6808 | 0.6970 | 0.9939 |
| 0.0044 | 12.0 | 1284 | 0.0440 | 0.7349 | 0.6681 | 0.6999 | 0.9941 |
| 0.0044 | 13.0 | 1391 | 0.0423 | 0.7097 | 0.6977 | 0.7036 | 0.9941 |
| 0.0044 | 14.0 | 1498 | 0.0435 | 0.7174 | 0.6977 | 0.7074 | 0.9941 |
| 0.0006 | 15.0 | 1605 | 0.0434 | 0.7305 | 0.6934 | 0.7115 | 0.9941 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
| d8ebf54f74e323398f96081a6968f6eb |
crumb/midjourney-textual-inversions | crumb | null | 5 | 0 | null | 15 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 924 | false | These are the midjourney styles that are pre-loaded in [Whatchamacallit](https://colab.research.google.com/github/aicrumb/whatchamacallit/blob/main/Whatchamacallit.ipynb)
These are original textual inversion bins, compatible with most web UIs/notebooks that support textual inversion loading. They can easily be converted to diffusers style; Whatchamacallit already includes code for that conversion if you need a reference (a sketch is also given after the style list below).
\- midj-strong: <br>
good at that weird surreal melty almost golden sort of style, looks like clip guided diffusion in my opinion
\- midj-portrait: <br>
a bit more subtle but still very cinematic and changes the image significantly but less so than midj-strong
\- midj-anthro: <br>
was finetuned on some anthropomorphic animals (not traditional furry style, but just animals standing like humans). good on other subjects though.
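A minimal conversion/loading sketch for diffusers, assuming each bin stores a single placeholder token mapped to its learned embedding; the file name, dict layout, and base model below are assumptions, so adapt them to your files:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Assumption: the .bin is a dict mapping one placeholder token to its
# embedding tensor, e.g. {"<midj-strong>": tensor}; adjust if yours differs.
learned = torch.load("midj-strong.bin", map_location="cpu")
token, embedding = next(iter(learned.items()))

# Register the placeholder token and write its embedding into the text encoder.
pipe.tokenizer.add_tokens(token)
token_id = pipe.tokenizer.convert_tokens_to_ids(token)
pipe.text_encoder.resize_token_embeddings(len(pipe.tokenizer))
pipe.text_encoder.get_input_embeddings().weight.data[token_id] = embedding

image = pipe(f"a portrait, {token} style").images[0]
image.save("out.png")
```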
 | 3800e273cb4b4105af3c79b8f97b955e |
kejian/curious-mle | kejian | gpt2 | 23 | 0 | transformers | 0 | null | true | false | false | apache-2.0 | ['en'] | ['kejian/codeparrot-train-more-filter-3.3b-cleaned'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 4,616 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# curious-mle
This model was trained from scratch on the kejian/codeparrot-train-more-filter-3.3b-cleaned dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 25177
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.0
- Pytorch 1.13.0+cu116
- Datasets 2.0.0
- Tokenizers 0.12.1
# Full config
{'dataset': {'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'],
'is_split_by_sentences': True,
'skip_tokens': 1649999872},
'generation': {'batch_size': 128,
'every_n_steps': 512,
'force_call_on': [25177],
'metrics_configs': [{}, {'n': 1}, {}],
'scenario_configs': [{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 640,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_hits_threshold': 0,
'num_samples': 2048},
{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 272,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'functions',
'num_hits_threshold': 0,
'num_samples': 2048,
'prompts_path': 'resources/functions_csnet.jsonl',
'use_prompt_for_scoring': True}],
'scorer_config': {}},
'kl_gpt3_callback': {'every_n_steps': 512,
'force_call_on': [25177],
'gpt3_kwargs': {'model_name': 'code-cushman-001'},
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': False,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'revision': 'cf05a2b0558c03b08c78f07662c22989785b9520'},
'path_or_name': 'kejian/mighty-mle'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'curious-mle',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000.0,
'output_dir': 'training_output',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25177,
'save_strategy': 'steps',
'seed': 42,
'tokens_already_seen': 1649999872,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/kejian/uncategorized/runs/2ejbyi19 | 15650faa927dcc64046bcd379aaa695e |
muhtasham/tiny-mlm-glue-sst2-target-glue-wnli | muhtasham | bert | 10 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,434 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-glue-sst2-target-glue-wnli
This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-sst2](https://huggingface.co/muhtasham/tiny-mlm-glue-sst2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2687
- Accuracy: 0.1127
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6895 | 25.0 | 500 | 0.7649 | 0.2535 |
| 0.6628 | 50.0 | 1000 | 1.1357 | 0.1268 |
| 0.6042 | 75.0 | 1500 | 1.7250 | 0.0986 |
| 0.5319 | 100.0 | 2000 | 2.2687 | 0.1127 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
| 520159b567f3ab31a0c04c8497a22fa7 |
intogen/milestone_classification | intogen | roberta | 33 | 7 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 916 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# milestone_classification
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
| 7e0e422e516f6543869d53371feaec4d |
sentence-transformers/sentence-t5-large | sentence-transformers | t5 | 14 | 431 | sentence-transformers | 3 | sentence-similarity | true | false | false | apache-2.0 | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers'] | false | true | true | 1,848 | false |
# sentence-transformers/sentence-t5-large
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space. The model works well for sentence similarity tasks, but doesn't perform that well for semantic search tasks.
This model was converted from the Tensorflow model [st5-large-1](https://tfhub.dev/google/sentence-t5/st5-large/1) to PyTorch. When using this model, have a look at the publication: [Sentence-T5: Scalable sentence encoders from pre-trained text-to-text models](https://arxiv.org/abs/2108.08877). The tfhub model and this PyTorch model can produce slightly different embeddings, however, when run on the same benchmarks, they produce identical results.
The model uses only the encoder from a T5-large model. The weights are stored in FP16.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/sentence-t5-large')
embeddings = model.encode(sentences)
print(embeddings)
```
The model requires sentence-transformers version 2.2.0 or newer.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/sentence-t5-large)
## Citing & Authors
If you find this model helpful, please cite the respective publication:
[Sentence-T5: Scalable sentence encoders from pre-trained text-to-text models](https://arxiv.org/abs/2108.08877)
| 33701c2a3a62de3fb257ecd07f77e79f |
browndw/en_docusco_spacy_fc_trf | browndw | null | 18 | 7 | spacy | 0 | token-classification | false | false | false | mit | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['spacy', 'token-classification'] | false | true | true | 3,246 | false | English pipeline for part-of-speech and rhetorical tagging.
| Feature | Description |
| --- | --- |
| **Name** | `en_docusco_spacy_fc_trf` |
| **Version** | `1.1` |
| **spaCy** | `>=3.4.3,<3.5.0` |
| **Default Pipeline** | `transformer`, `tagger`, `ner` |
| **Components** | `transformer`, `tagger`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | `MIT` |
| **Author** | [David Brown](https://browndw.github.io/docuscope-docs/) |
### Label Scheme
<details>
<summary>View label scheme (269 labels for 2 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `APPGE`, `AT`, `AT1`, `BCL21`, `BCL22`, `CC`, `CCB`, `CS`, `CS21`, `CS22`, `CS31`, `CS32`, `CS33`, `CS41`, `CS42`, `CS43`, `CS44`, `CSA`, `CSN`, `CST`, `CSW`, `CSW31`, `CSW32`, `CSW33`, `DA`, `DA1`, `DA2`, `DAR`, `DAT`, `DB`, `DB2`, `DD`, `DD1`, `DD2`, `DDQ`, `DDQGE`, `DDQV`, `DDQV31`, `DDQV32`, `DDQV33`, `EX`, `FO`, `FU`, `FW`, `GE`, `IF`, `II`, `II21`, `II22`, `II31`, `II32`, `II33`, `II41`, `II42`, `II43`, `II44`, `IO`, `IW`, `JJ`, `JJ21`, `JJ22`, `JJ31`, `JJ32`, `JJ33`, `JJR`, `JJT`, `JK`, `MC`, `MC1`, `MC2`, `MC221`, `MC222`, `MCMC`, `MD`, `MF`, `ND1`, `NN`, `NN1`, `NN121`, `NN122`, `NN131`, `NN132`, `NN133`, `NN141`, `NN142`, `NN143`, `NN144`, `NN2`, `NN21`, `NN22`, `NN221`, `NN222`, `NN231`, `NN232`, `NN233`, `NN31`, `NN33`, `NNA`, `NNB`, `NNL1`, `NNL2`, `NNO`, `NNO2`, `NNT1`, `NNT2`, `NNU`, `NNU1`, `NNU2`, `NNU21`, `NNU22`, `NP`, `NP1`, `NP2`, `NPD1`, `NPD2`, `NPM1`, `NPM2`, `PN`, `PN1`, `PN121`, `PN122`, `PN21`, `PN22`, `PNQO`, `PNQS`, `PNQS31`, `PNQS32`, `PNQS33`, `PNQV`, `PNX1`, `PPGE`, `PPH1`, `PPHO1`, `PPHO2`, `PPHS1`, `PPHS2`, `PPIO1`, `PPIO2`, `PPIS1`, `PPIS2`, `PPX1`, `PPX121`, `PPX122`, `PPX2`, `PPX221`, `PPX222`, `PPY`, `RA`, `RA21`, `RA22`, `REX`, `REX21`, `REX22`, `REX41`, `REX42`, `REX43`, `REX44`, `RG`, `RG21`, `RG22`, `RGQ`, `RGQV`, `RGQV31`, `RGQV32`, `RGQV33`, `RGR`, `RGT`, `RL`, `RL21`, `RL22`, `RP`, `RPK`, `RR`, `RR21`, `RR22`, `RR31`, `RR32`, `RR33`, `RR41`, `RR42`, `RR43`, `RR44`, `RR51`, `RR52`, `RR53`, `RR54`, `RR55`, `RRQ`, `RRQV`, `RRQV31`, `RRQV32`, `RRQV33`, `RRR`, `RRT`, `RT`, `RT21`, `RT22`, `RT31`, `RT32`, `RT33`, `RT41`, `RT42`, `RT43`, `RT44`, `TO`, `UH`, `UH21`, `UH22`, `UH31`, `UH32`, `UH33`, `VB0`, `VBDR`, `VBDZ`, `VBG`, `VBI`, `VBM`, `VBN`, `VBR`, `VBZ`, `VD0`, `VDD`, `VDG`, `VDI`, `VDN`, `VDZ`, `VH0`, `VHD`, `VHG`, `VHI`, `VHN`, `VHZ`, `VM`, `VM21`, `VM22`, `VMK`, `VV0`, `VVD`, `VVG`, `VVGK`, `VVI`, `VVN`, `VVNK`, `VVZ`, `XX`, `Y`, `ZZ1`, `ZZ2`, `ZZ221`, `ZZ222` |
| **`ner`** | `ActorsAbstractions`, `ActorsFirstPerson`, `ActorsPeople`, `ActorsPublicEntities`, `CitationAuthority`, `CitationControversy`, `CitationNeutral`, `ConfidenceHedged`, `ConfidenceHigh`, `OrganizationNarrative`, `OrganizationReasoning`, `PlanningFuture`, `PlanningStrategy`, `SentimentNegative`, `SentimentPositive`, `SignpostingAcademicWritingMoves`, `SignpostingMetadiscourse`, `StanceEmphatic`, `StanceModerated` |
</details>
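### Usage
A minimal loading sketch, assuming the pipeline has been installed as a Python package so that spaCy can resolve it by name; the example sentence is illustrative:
```python
import spacy

nlp = spacy.load("en_docusco_spacy_fc_trf")
doc = nlp("I believe this plan might work, and the evidence supports it.")

print([(token.text, token.tag_) for token in doc])   # fine-grained POS tags from the tagger
print([(ent.text, ent.label_) for ent in doc.ents])  # rhetorical categories from the ner component
```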
### Accuracy
| Type | Score |
| --- | --- |
| `TAG_ACC` | 98.39 |
| `ENTS_F` | 88.62 |
| `ENTS_P` | 88.90 |
| `ENTS_R` | 88.34 |
| `TRANSFORMER_LOSS` | 2319800.36 |
| `TAGGER_LOSS` | 669777.78 |
| `NER_LOSS` | 2048423.35 | | e5d934b5e359f1e872ab85d4af0d6003 |
gustavecortal/camembert-base-cae-fait-ext | gustavecortal | camembert | 6 | 5 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,051 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert-base-cae-fait-ext
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3098
- Precision: 0.7339
- Recall: 0.7107
- F1: 0.7161
## Model description
More information needed
## Intended uses & limitations
More information needed
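In the absence of documented usage, a minimal inference sketch with the `transformers` pipeline (not from the card; the label set is defined by the model's config and is not documented here, and the French sample sentence is illustrative):

```python
from transformers import pipeline

# Hypothetical usage: the card does not document the label names
classifier = pipeline("text-classification", model="gustavecortal/camembert-base-cae-fait-ext")
print(classifier("Il m'a expliqué la situation en détail."))
```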
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 1.2626 | 1.0 | 61 | 1.1255 | 0.2541 | 0.5041 | 0.3379 |
| 1.0858 | 2.0 | 122 | 0.9264 | 0.6300 | 0.6198 | 0.5705 |
| 0.8364 | 3.0 | 183 | 0.8741 | 0.6460 | 0.6446 | 0.6391 |
| 0.5045 | 4.0 | 244 | 0.7836 | 0.7252 | 0.7273 | 0.7171 |
| 0.2866 | 5.0 | 305 | 0.9903 | 0.7352 | 0.6860 | 0.6918 |
| 0.1896 | 6.0 | 366 | 1.0289 | 0.7422 | 0.7190 | 0.7257 |
| 0.0975 | 7.0 | 427 | 1.1272 | 0.7565 | 0.7355 | 0.7396 |
| 0.0679 | 8.0 | 488 | 1.2209 | 0.7389 | 0.7190 | 0.7237 |
| 0.058 | 9.0 | 549 | 1.2647 | 0.7318 | 0.7025 | 0.7079 |
| 0.0431 | 10.0 | 610 | 1.3098 | 0.7339 | 0.7107 | 0.7161 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.0
- Tokenizers 0.13.1
| e2f79ae1d4e8abcc6e51721135102c77 |
spktsagar/wav2vec2-large-xls-r-300m-nepali-openslr | spktsagar | wav2vec2 | 33 | 4 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['ne'] | ['spktsagar/openslr-nepali-asr-cleaned'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer', 'automatic-speech-recognition', 'speech', 'openslr', 'nepali'] | true | true | true | 2,859 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-nepali-openslr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an [OpenSLR Nepali ASR](https://huggingface.co/datasets/spktsagar/openslr-nepali-asr-cleaned) dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1767
- eval_wer: 0.2127
- eval_runtime: 595.3962
- eval_samples_per_second: 36.273
- eval_steps_per_second: 4.535
- epoch: 6.07
- step: 23200
## Model description
Wav2Vec2 is a pretrained model for Automatic Speech Recognition (ASR) and was released in September 2020 by Alexei Baevski, Michael Auli, and Alex Conneau. Soon after the superior performance of Wav2Vec2 was demonstrated on one of the most popular English datasets for ASR, called LibriSpeech, Facebook AI presented a multi-lingual version of Wav2Vec2, called XLSR. XLSR stands for cross-lingual speech representations and refers to the model's ability to learn speech representations that are useful across multiple languages.
## How to use?
1. Install transformers and librosa
```
pip install librosa transformers
```
2. Run the following code, which loads your audio file, the preprocessor, and the model, and returns your prediction
```python
import librosa
from transformers import pipeline
audio, sample_rate = librosa.load("<path to your audio file>", sr=16000)
recognizer = pipeline("automatic-speech-recognition", model="spktsagar/wav2vec2-large-xls-r-300m-nepali-openslr")
prediction = recognizer(audio)
```
## Intended uses & limitations
The model is trained on the OpenSLR Nepali ASR dataset, which itself contains some incorrect transcriptions, so the model will not produce perfect predictions for your transcript. Similarly, due to Colab's resource limits, utterances longer than 5 seconds were filtered out of the dataset during training and evaluation. Hence, the model might not perform as expected on audio input longer than 5 seconds.
## Training and evaluation data and Training procedure
For dataset preparation and training code, please consult [my blog](https://sagar-spkt.github.io/posts/2022/08/finetune-xlsr-nepali/).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.1
- Pytorch 1.11.0+cu113
- Datasets 2.6.0
- Tokenizers 0.13.1
| b95aa4548248d96734b79bfc8dfb5d71 |
huggingnft/mini-mutants | huggingnft | null | 5 | 26 | transformers | 1 | unconditional-image-generation | false | false | false | mit | null | ['huggingnft/mini-mutants'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['huggingnft', 'nft', 'huggan', 'gan', 'image', 'images', 'unconditional-image-generation'] | false | true | true | 2,190 | false |
# Hugging NFT: mini-mutants
## Disclaimer
All rights belong to their owners. Models and datasets can be removed from the site at the request of the copyright
holder.
## Model description
LightWeight GAN model for unconditional generation.
NFT collection available [here](https://opensea.io/collection/mini-mutants).
Dataset is available [here](https://huggingface.co/datasets/huggingnft/mini-mutants).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
Project repository: [link](https://github.com/AlekseyKorshuk/huggingnft).
[](https://github.com/AlekseyKorshuk/huggingnft)
## Intended uses & limitations
#### How to use
Check project repository: [link](https://github.com/AlekseyKorshuk/huggingnft).
#### Limitations and bias
Check project repository: [link](https://github.com/AlekseyKorshuk/huggingnft).
## Training data
Dataset is available [here](https://huggingface.co/datasets/huggingnft/mini-mutants).
## Training procedure
Training script is available [here](https://github.com/AlekseyKorshuk/huggingnft).
## Generated Images
Check results with Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
### BibTeX entry and citation info
```bibtex
@InProceedings{huggingnft,
    author={Aleksey Korshuk},
    year={2022}
}
```
| 65d5a01cf3d5de5e9dca307c2ac3c6ac |
pixyz/distilbert-base-uncased-finetuned-squad | pixyz | distilbert | 12 | 5 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,285 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1586
## Model description
More information needed
## Intended uses & limitations
More information needed
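A minimal question-answering sketch (not from the card; question and context are illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="pixyz/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```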
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2203 | 1.0 | 5533 | 1.1569 |
| 0.9452 | 2.0 | 11066 | 1.1234 |
| 0.7656 | 3.0 | 16599 | 1.1586 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
| 880d584addc991eae1be1581cdda0df1 |
kejian/final-cond-10-0.01 | kejian | gpt2 | 53 | 1 | transformers | 0 | null | true | false | false | apache-2.0 | ['en'] | ['kejian/codeparrot-train-more-filter-3.3b-cleaned'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 4,833 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kejian/final-cond-10-0.01
This model was trained from scratch on the kejian/codeparrot-train-more-filter-3.3b-cleaned dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0008
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.0
- Pytorch 1.13.0+cu116
- Datasets 2.0.0
- Tokenizers 0.12.1
# Full config
{'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>',
'drop_token_fraction': 0.01,
'misaligned_prefix': '<|misaligned|>',
'threshold': 0},
'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'],
'is_split_by_sentences': True},
'generation': {'batch_size': 64,
'metrics_configs': [{}, {'n': 1}, {}],
'scenario_configs': [{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 704,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 512,
'prefix': '<|aligned|>',
'use_prompt_for_scoring': False},
{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 272,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'functions',
'num_samples': 512,
'prefix': '<|aligned|>',
'prompt_before_control': True,
'prompts_path': 'resources/functions_csnet.jsonl',
'use_prompt_for_scoring': True}],
'scorer_config': {}},
'kl_gpt3_callback': {'gpt3_kwargs': {'model_name': 'code-cushman-001'},
'max_tokens': 64,
'num_samples': 4096,
'prefix': '<|aligned|>'},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'num_additional_tokens': 2,
'path_or_name': 'codeparrot/codeparrot-small'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small',
'special_tokens': ['<|aligned|>', '<|misaligned|>']},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'kejian/final-cond-10-0.01',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0008,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000.0,
'output_dir': 'training_output',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 5000,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
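Given the conditional-training config above, a hedged generation sketch (not from the card; the `<|aligned|>` control prefix and sampling parameters are taken from the config, and the prompt is illustrative):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("kejian/final-cond-10-0.01")
model = AutoModelForCausalLM.from_pretrained("kejian/final-cond-10-0.01")

# Prepend the control token used during conditional training
inputs = tokenizer("<|aligned|>def fibonacci(n):", return_tensors="pt")
out = model.generate(**inputs, do_sample=True, max_length=128, temperature=0.7, top_p=0.9)
print(tokenizer.decode(out[0]))
```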
# Wandb URL:
https://wandb.ai/kejian/uncategorized/runs/1wgqepja
| a778cae17922e632bbbedc9608163de2 |
burakyldrm/wav2vec2-burak-new-300-v2-4 | burakyldrm | wav2vec2 | 13 | 9 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 4,480 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-burak-new-300-v2-4
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3402
- Wer: 0.2237
## Model description
More information needed
## Intended uses & limitations
More information needed
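A minimal inference sketch (not from the card; wav2vec2 models expect 16 kHz mono input, and the file path is a placeholder):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="burakyldrm/wav2vec2-burak-new-300-v2-4")
print(asr("path/to/audio.wav"))  # hypothetical 16 kHz mono audio file
```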
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 131
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 7.7711 | 2.45 | 500 | 3.1768 | 1.0 |
| 3.1194 | 4.9 | 1000 | 2.6401 | 1.0 |
| 1.4593 | 7.35 | 1500 | 0.5243 | 0.5960 |
| 0.7581 | 9.8 | 2000 | 0.3534 | 0.4432 |
| 0.5843 | 12.25 | 2500 | 0.3159 | 0.4157 |
| 0.4703 | 14.71 | 3000 | 0.3003 | 0.3668 |
| 0.4045 | 17.16 | 3500 | 0.2891 | 0.3414 |
| 0.3581 | 19.61 | 4000 | 0.2609 | 0.3207 |
| 0.3268 | 22.06 | 4500 | 0.2622 | 0.3207 |
| 0.3063 | 24.51 | 5000 | 0.2805 | 0.3193 |
| 0.2729 | 26.96 | 5500 | 0.2674 | 0.2884 |
| 0.249 | 29.41 | 6000 | 0.2740 | 0.2953 |
| 0.2275 | 31.86 | 6500 | 0.2729 | 0.2753 |
| 0.2295 | 34.31 | 7000 | 0.2801 | 0.2691 |
| 0.2105 | 36.76 | 7500 | 0.2992 | 0.2801 |
| 0.1905 | 39.22 | 8000 | 0.2967 | 0.2663 |
| 0.1884 | 41.67 | 8500 | 0.2911 | 0.2691 |
| 0.1773 | 44.12 | 9000 | 0.2966 | 0.2753 |
| 0.1672 | 46.57 | 9500 | 0.3051 | 0.2505 |
| 0.1632 | 49.02 | 10000 | 0.2872 | 0.2491 |
| 0.1553 | 51.47 | 10500 | 0.3121 | 0.2629 |
| 0.1619 | 53.92 | 11000 | 0.3044 | 0.2581 |
| 0.1444 | 56.37 | 11500 | 0.3135 | 0.2567 |
| 0.1451 | 58.82 | 12000 | 0.3033 | 0.2519 |
| 0.1386 | 61.27 | 12500 | 0.3079 | 0.2622 |
| 0.1261 | 63.73 | 13000 | 0.3037 | 0.2395 |
| 0.1287 | 66.18 | 13500 | 0.3221 | 0.2409 |
| 0.1236 | 68.63 | 14000 | 0.3179 | 0.2464 |
| 0.1215 | 71.08 | 14500 | 0.3521 | 0.2429 |
| 0.1208 | 73.53 | 15000 | 0.3481 | 0.2540 |
| 0.1128 | 75.98 | 15500 | 0.3288 | 0.2402 |
| 0.1108 | 78.43 | 16000 | 0.3238 | 0.2450 |
| 0.1074 | 80.88 | 16500 | 0.3178 | 0.2416 |
| 0.1086 | 83.33 | 17000 | 0.3461 | 0.2361 |
| 0.1059 | 85.78 | 17500 | 0.3342 | 0.2457 |
| 0.0981 | 88.24 | 18000 | 0.3382 | 0.2354 |
| 0.0995 | 90.69 | 18500 | 0.3466 | 0.2416 |
| 0.0995 | 93.14 | 19000 | 0.3326 | 0.2271 |
| 0.0929 | 95.59 | 19500 | 0.3526 | 0.2237 |
| 0.0944 | 98.04 | 20000 | 0.3516 | 0.2347 |
| 0.089 | 100.49 | 20500 | 0.3504 | 0.2271 |
| 0.0915 | 102.94 | 21000 | 0.3425 | 0.2285 |
| 0.0845 | 105.39 | 21500 | 0.3309 | 0.2306 |
| 0.0887 | 107.84 | 22000 | 0.3196 | 0.2264 |
| 0.0812 | 110.29 | 22500 | 0.3285 | 0.2264 |
| 0.0856 | 112.75 | 23000 | 0.3347 | 0.2251 |
| 0.0778 | 115.2 | 23500 | 0.3403 | 0.2271 |
| 0.0748 | 117.65 | 24000 | 0.3427 | 0.2278 |
| 0.0803 | 120.1 | 24500 | 0.3380 | 0.2223 |
| 0.0768 | 122.55 | 25000 | 0.3392 | 0.2189 |
| 0.0764 | 125.0 | 25500 | 0.3423 | 0.2278 |
| 0.0786 | 127.45 | 26000 | 0.3423 | 0.2230 |
| 0.0766 | 129.9 | 26500 | 0.3402 | 0.2237 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.12.1
| 2e217142f32cb04ca486b84b220b3a52 |
b3ck1/gpt-neo-125M-finetuned-beer-recipes | b3ck1 | gpt_neo | 9 | 928 | transformers | 1 | text-generation | true | false | false | apache-2.0 | ['en'] | ['custom'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text generation', 'pytorch', 'causal-lm'] | false | true | true | 2,691 | false |
# GPT-Neo 125M finetuned with beer recipes
## Model Description
GPT-Neo 125M is a transformer model based on EleutherAI's replication of the GPT-3 architecture https://huggingface.co/EleutherAI/gpt-neo-125M.
It generates recipes for brewing beer in a YAML-like format which can be easily used for different purposes.
## Training data
This model was trained on a custom dataset of ~ 76,800 beer recipes from the internet. It includes recipes for the following
styles of beer:
* Strong American Ale
* Pale American Ale
* India Pale Ale (IPA)
* Standard American Beer
* Stout
* English Pale Ale
* IPA
* American Porter and Stout
* Sour Ale
* Irish Beer
* Strong British Ale
* Belgian and French Ale
* German Wheat and Rye Beer
* Czech Lager
* Spice/Herb/Vegetable Beer
* Specialty Beer
* American Ale
* Pilsner
* Belgian Ale
* Strong Belgian Ale
* Bock
* Brown British Beer
* German Wheat Beer
* Fruit Beer
* Amber Malty European Lager
* Pale Malty European Lager
* British Bitter
* Amber and Brown American Beer
* Light Hybrid Beer
* Pale Commonwealth Beer
* American Wild Ale
* European Amber Lager
* Belgian Strong Ale
* International Lager
* Amber Bitter European Lager
* Light Lager
* Scottish and Irish Ale
* European Sour Ale
* Trappist Ale
* Strong European Beer
* Porter
* Historical Beer
* Pale Bitter European Beer
* Amber Hybrid Beer
* Smoke Flavored/Wood-Aged Beer
* Spiced Beer
* Dark European Lager
* Alternative Fermentables Beer
* Mead
* Strong Ale
* Dark British Beer
* Scottish Ale
* Smoked Beer
* English Brown Ale
* Dark Lager
* Cider or Perry
* Wood Beer
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different recipe each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='b3ck1/gpt-neo-125M-finetuned-beer-recipes')
>>> generator("style: Pilsner\nbatch_size: 20\nefficiency: 75\nboil_size:", do_sample=True, min_length=50, max_length=500)
>>> print(output[0]['generated_text'])
style: Pilsner
batch_size: 20
efficiency: 70
boil_size: 24
boil_time: 60
fermentables:
- name: Pale Ale
type: Grain
amount: 6.5
hops:
- name: Saaz
alpha: 3.5
use: Boil
time: 60
amount: 0.06
- name: Saaz
alpha: 3.5
use: Boil
time: 30
amount: 0.06
- name: Saaz
alpha: 3.5
use: Boil
time: 10
amount: 0.06
- name: Saaz
alpha: 3.5
use: Boil
time: 0
amount: 0.06
yeasts:
- name: Safale - American Ale Yeast US-05
amount: 0.11
min_temperature: 12
max_temperature: 25
primary_temp: null
mash_steps:
- step_temp: 65
step_time: 60
miscs: []
```
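Because the generated text is YAML, it can be parsed straight into a Python dict, continuing the example above (a sketch; assumes PyYAML is installed and the sampled text is well-formed YAML):

```python
import yaml

# Parse the generated recipe into a Python dict
recipe = yaml.safe_load(output[0]["generated_text"])
print(recipe["style"], recipe["fermentables"][0]["name"])
```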
### See this model in action
This model was used to build https://beerai.net.
| ea91848533b42ddfaf9df56209fc3bb4 |
espnet/kan-bayashi_csmsc_fastspeech2 | espnet | null | 21 | 18 | espnet | 0 | text-to-speech | false | false | false | cc-by-4.0 | ['zh'] | ['csmsc'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['espnet', 'audio', 'text-to-speech'] | false | true | true | 1,798 | false |
## Example ESPnet2 TTS model
### `kan-bayashi/csmsc_fastspeech2`
♻️ Imported from https://zenodo.org/record/4031953/
This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
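Until the official demo is published, a hedged sketch based on the generic ESPnet2 TTS interface (not from the card; requires `espnet_model_zoo` and has not been verified against this checkpoint):

```python
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Load the pretrained FastSpeech2 model by its Hub tag
tts = Text2Speech.from_pretrained("espnet/kan-bayashi_csmsc_fastspeech2")

# Synthesize a Mandarin sentence (illustrative input)
speech = tts("春江潮水连海平")["wav"]
sf.write("out.wav", speech.numpy(), tts.fs)
```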
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| 31f1761b2beccbed3c9efb9ef838b5ba |
Helsinki-NLP/opus-mt-cel-en | Helsinki-NLP | marian | 11 | 17 | transformers | 0 | translation | true | true | false | apache-2.0 | ['gd', 'ga', 'br', 'kw', 'gv', 'cy', 'cel', 'en'] | null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 2,414 | false |
### cel-eng
* source group: Celtic languages
* target group: English
* OPUS readme: [cel-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cel-eng/README.md)
* model: transformer
* source language(s): bre cor cym gla gle glv
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cel-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cel-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cel-eng/opus2m-2020-07-31.eval.txt)
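A standard MarianMT usage sketch (not part of the original card; the Irish example sentence is illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-cel-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a Celtic-language sentence (here: Irish) into English
batch = tokenizer(["Tá an aimsir go maith inniu."], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```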
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bre-eng.bre.eng | 17.2 | 0.385 |
| Tatoeba-test.cor-eng.cor.eng | 3.0 | 0.172 |
| Tatoeba-test.cym-eng.cym.eng | 41.5 | 0.582 |
| Tatoeba-test.gla-eng.gla.eng | 15.4 | 0.330 |
| Tatoeba-test.gle-eng.gle.eng | 50.8 | 0.668 |
| Tatoeba-test.glv-eng.glv.eng | 11.0 | 0.297 |
| Tatoeba-test.multi.eng | 22.8 | 0.398 |
### System Info:
- hf_name: cel-eng
- source_languages: cel
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cel-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['gd', 'ga', 'br', 'kw', 'gv', 'cy', 'cel', 'en']
- src_constituents: {'gla', 'gle', 'bre', 'cor', 'glv', 'cym'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cel-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cel-eng/opus2m-2020-07-31.test.txt
- src_alpha3: cel
- tgt_alpha3: eng
- short_pair: cel-en
- chrF2_score: 0.39799999999999996
- bleu: 22.8
- brevity_penalty: 1.0
- ref_len: 42097.0
- src_name: Celtic languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: cel
- tgt_alpha2: en
- prefer_old: False
- long_pair: cel-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| 82490da34a424d958fe31a445a0efc3b |
speechbrain/asr-wav2vec2-commonvoice-rw | speechbrain | wav2vec2 | 9 | 24 | speechbrain | 1 | automatic-speech-recognition | true | false | false | apache-2.0 | ['rw'] | ['commonvoice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['CTC', 'Attention', 'pytorch', 'speechbrain', 'Transformer', 'hf-asr-leaderboard'] | false | true | true | 4,112 | false |
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# wav2vec 2.0 with CTC/Attention trained on CommonVoice Kinyarwanda (No LM)
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on CommonVoice (Kinyarwanda Language) within
SpeechBrain. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
The performance of the model is the following:
| Release | Test WER | GPUs |
|:--------------:|:--------------:| :--------:|
| 03-06-21 | 18.91 | 2xV100 32GB |
## Pipeline description
This ASR system is composed of 2 different but linked blocks:
- Tokenizer (unigram) that transforms words into subword units and trained with
the train transcriptions (train.tsv) of CommonVoice (RW).
- Acoustic model (wav2vec2.0 + CTC/Attention). A pretrained wav2vec 2.0 model ([wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)) is combined with two DNN layers and finetuned on CommonVoice Kinyarwanda.
The obtained final acoustic representation is given to the CTC and attention decoders.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed.
## Install SpeechBrain
First of all, please install transformers and SpeechBrain with the following command:
```
pip install speechbrain transformers
```
Please notice that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Transcribing your own audio files (in Kinyarwanda)
```python
from speechbrain.pretrained import EncoderDecoderASR
asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-wav2vec2-commonvoice-rw", savedir="pretrained_models/asr-wav2vec2-commonvoice-rw")
asr_model.transcribe_file("speechbrain/asr-wav2vec2-commonvoice-rw/example.mp3")
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
### Parallel Inference on a Batch
Please, [see this Colab notebook](https://colab.research.google.com/drive/1hX5ZI9S4jHIjahFCZnhwwQmFoGAi3tmu?usp=sharing) to figure out how to transcribe in parallel a batch of input sentences using a pre-trained model.
### Training
The model was trained with SpeechBrain.
To train it from scratch follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```bash
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```bash
cd recipes/CommonVoice/ASR/seq2seq
python train_with_wav2vec.py hparams/train_rw_with_wav2vec.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1tjz6IZmVRkuRE97E7h1cXFoGTer7pT73?usp=sharing).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
| 97a9999836c07e18e0abbfb699f5c48f |
ali2066/finetuned_token_3e-05_all_16_02_2022-16_16_08 | ali2066 | distilbert | 13 | 16 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,791 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_3e-05_all_16_02_2022-16_16_08
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1630
- Precision: 0.3684
- Recall: 0.3714
- F1: 0.3699
- Accuracy: 0.9482
## Model description
More information needed
## Intended uses & limitations
More information needed
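A minimal token-classification sketch (not from the card; the input sentence is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ali2066/finetuned_token_3e-05_all_16_02_2022-16_16_08",
    aggregation_strategy="simple",  # merge subword pieces into whole entities
)
print(ner("An example sentence to tag."))
```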
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3339 | 0.1075 | 0.2324 | 0.1470 | 0.8379 |
| No log | 2.0 | 76 | 0.3074 | 0.1589 | 0.2926 | 0.2060 | 0.8489 |
| No log | 3.0 | 114 | 0.2914 | 0.2142 | 0.3278 | 0.2591 | 0.8591 |
| No log | 4.0 | 152 | 0.2983 | 0.1951 | 0.3595 | 0.2529 | 0.8454 |
| No log | 5.0 | 190 | 0.2997 | 0.1851 | 0.3528 | 0.2428 | 0.8487 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 928378a3736f42a3b26de97bb9d4a1cd |
jonatasgrosman/exp_w2v2t_ja_unispeech_s569 | jonatasgrosman | unispeech | 10 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['ja'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'ja'] | false | true | true | 469 | false |
# exp_w2v2t_ja_unispeech_s569
Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
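A usage sketch with the HuggingSound tool named above (the audio paths are placeholders):

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_ja_unispeech_s569")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # hypothetical 16 kHz inputs
transcriptions = model.transcribe(audio_paths)
```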
| 3b25c7e63bf4053558b2c94f5205487a |
PlanTL-GOB-ES/es_bsc_demo_md | PlanTL-GOB-ES | null | 23 | 14 | spacy | 0 | text-classification | false | false | false | mit | ['es'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['spacy', 'token-classification', 'text-classification'] | false | true | true | 26,683 | false |
To install this model:
pip install https://huggingface.co/PlanTL-GOB-ES/es_bsc_demo_md/resolve/main/es_bsc_demo_md-any-py3-none-any.whl
Spanish lightweight pipeline by BSC. Components: floret static vectors, morphologizer, parser, attribute_ruler, lemmatizer, text classification.
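A minimal usage sketch (not part of the original card; assumes the wheel above has been installed, and the Spanish sentence is illustrative):

```python
import spacy

nlp = spacy.load("es_bsc_demo_md")
doc = nlp("El Barcelona Supercomputing Center desarrolla modelos de lenguaje en español.")
print([(t.text, t.pos_, t.lemma_) for t in doc])
print(doc.cats)  # text-classification scores over the 12 categories
```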
| Feature | Description |
| --- | --- |
| **Name** | `es_bsc_demo_md` |
| **Version** | `3.4.1` |
| **spaCy** | `>=3.4.1,<3.5.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `morphologizer`, `lemmatizer`, `parser`, `textcat` |
| **Components** | `tok2vec`, `tagger`, `morphologizer`, `lemmatizer`, `parser`, `textcat` |
| **Vectors** | -1 keys, 50000 unique vectors (300 dimensions) |
| **Sources** | [UD Spanish AnCora v2.10](https://github.com/UniversalDependencies/UD_Spanish-AnCora) (Martínez Alonso, Héctor; Zeman, Daniel)<br /> [Spanish floret embeddings from BNE corpus](https://zenodo.org/record/7314098) <br /> |
| **License** | `mit` |
| **Author** | [Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)](https://huggingface.co/PlanTL-GOB-ES/es_bsc_demo_md) |
| **Copyright** | Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) |
| **Funding** | This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL |
### Label Scheme
<details>
<summary>View label scheme (734 labels for 4 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `ADJ`, `ADP`, `ADV`, `AUX`, `CCONJ`, `DET`, `INTJ`, `NOUN`, `NUM`, `PART`, `PRON`, `PROPN`, `PUNCT`, `SCONJ`, `SYM`, `VERB`, `X`, `ao0fp0`, `ao0fs0`, `ao0mp0`, `ao0ms0`, `aq0000`, `aq00p0`, `aq00s0`, `aq0cc0`, `aq0cn0`, `aq0cp0`, `aq0cs0`, `aq0fp0`, `aq0fpp`, `aq0fs0`, `aq0fsp`, `aq0fsp-B2`, `aq0mn0`, `aq0mp0`, `aq0mpp`, `aq0ms0`, `aq0msp`, `cc`, `cs`, `da0fp0`, `da0fs0`, `da0m00`, `da0mp0`, `da0ms0`, `da0ns0`, `dd0cp0`, `dd0cs0`, `dd0fp0`, `dd0fs0`, `dd0mp0`, `dd0ms0`, `de0cn0`, `di00p0`, `di0cp0`, `di0cs0`, `di0fp0`, `di0fs0`, `di0mp0`, `di0ms0`, `dn00p0`, `dn0cp0`, `dn0cs0`, `dn0fp0`, `dn0fs0`, `dn0mp0`, `dn0ms0`, `dp1cps`, `dp1css`, `dp1fpp`, `dp1fsp`, `dp1mpp`, `dp1msp`, `dp1mss`, `dp2cps`, `dp2css`, `dp2fpp`, `dp2fsp`, `dp3cp0`, `dp3cs0`, `dp3fs0`, `dp3mp0`, `dp3ms0`, `dt0cn0`, `dt0fs0`, `dt0ms0`, `faa`, `fat`, `fc`, `fd`, `fe`, `fg`, `fh`, `fia`, `fit`, `fp`, `fpa`, `fpt`, `fs`, `fx`, `fz`, `i`, `nc00000`, `nccn000`, `nccp000`, `nccs000`, `ncf0000`, `ncfn000`, `ncfp000`, `ncfs000`, `ncfs00a`, `ncmn000`, `ncmp000`, `ncms00`, `ncms000`, `np00000`, `np0000a`, `np0000l`, `np0000o`, `np0000p`, `p0000000`, `p010p000`, `p010s000`, `p020s000`, `p0300000`, `pd0cp000`, `pd0cs000`, `pd0fp000`, `pd0fs000`, `pd0mp000`, `pd0ms000`, `pd0ns000`, `pe000000`, `pi000000`, `pi00s000`, `pi0cp000`, `pi0cs000`, `pi0fp000`, `pi0fs000`, `pi0mp0`, `pi0mp000`, `pi0ms0`, `pi0ms000`, `pn0cp000`, `pn0cs000`, `pn0fp000`, `pn0fs000`, `pn0mp000`, `pn0ms000`, `pp1cn000`, `pp1cp000`, `pp1cs000`, `pp1csn00`, `pp1cso00`, `pp1fs000`, `pp1mp000`, `pp2cp000`, `pp2cp00p`, `pp2cs000`, `pp2cs00p`, `pp2csn00`, `pp2cso00`, `pp300000`, `pp30p000`, `pp30sa00`, `pp3cn000`, `pp3cna00`, `pp3cno00`, `pp3cpa00`, `pp3cpd00`, `pp3csa00`, `pp3csd00`, `pp3fp000`, `pp3fpa00`, `pp3fs000`, `pp3fsa00`, `pp3mp000`, `pp3mpa00`, `pp3ms000`, `pp3msa00`, `pp3ns000`, `pr00000`, `pr000000`, `pr0cn000`, `pr0cp000`, `pr0cs000`, `pr0fp000`, `pr0fs000`, `pr0mp000`, `pr0ms000`, `pt000000`, `pt0cp000`, `pt0cs000`, `pt0fp000`, `pt0mp000`, `pt0ms000`, `px1fp0p0`, `px1fs0p0`, `px1fs0s0`, `px1mp0p0`, `px1ms0p0`, `px1ms0s0`, `px2fs0s0`, `px2mp000`, `px2ms0s0`, `px3fp000`, `px3fs000`, `px3mp000`, `px3ms000`, `px3ns000`, `rg`, `rn`, `spcms`, `sps00`, `vag0000`, `vaic1p0`, `vaic3p0`, `vaic3s0`, `vaif1p0`, `vaif1s0`, `vaif2s0`, `vaif3p0`, `vaif3s0`, `vaii1p0`, `vaii1s0`, `vaii2s0`, `vaii3p0`, `vaii3s0`, `vaip1p0`, `vaip1s0`, `vaip2s0`, `vaip3p0`, `vaip3s0`, `vais3p0`, `vais3s0`, `vam02s0`, `vam03s0`, `van0000`, `vap00sm`, `vasi1p0`, `vasi1s0`, `vasi3p0`, `vasi3s0`, `vasp1p0`, `vasp1s0`, `vasp3p0`, `vasp3s0`, `vmg0000`, `vmic1p0`, `vmic1s0`, `vmic2s0`, `vmic3p0`, `vmic3s0`, `vmif1p0`, `vmif1s0`, `vmif2s0`, `vmif3p0`, `vmif3s0`, `vmii1p0`, `vmii1s0`, `vmii2s0`, `vmii3p0`, `vmii3s0`, `vmip1p0`, `vmip1s0`, `vmip2p0`, `vmip2s0`, `vmip3p0`, `vmip3s0`, `vmip3sm`, `vmis1p0`, `vmis1s0`, `vmis2s0`, `vmis3p0`, `vmis3s0`, `vmm01p0`, `vmm02p0`, `vmm02s0`, `vmm03p0`, `vmm03s0`, `vmn0000`, `vmp00fs`, `vmp00ms`, `vmp00pf`, `vmp00pm`, `vmp00sf`, `vmp00sm`, `vmsi1p0`, `vmsi1s0`, `vmsi3p0`, `vmsi3s0`, `vmsp1p0`, `vmsp1s0`, `vmsp2p0`, `vmsp2s0`, `vmsp3p0`, `vmsp3s0`, `vsg0000`, `vsic1s0`, `vsic2s0`, `vsic3p0`, `vsic3s0`, `vsif1s0`, `vsif3p0`, `vsif3s0`, `vsii1p0`, `vsii1s0`, `vsii3p0`, `vsii3s0`, `vsip1p0`, `vsip1s0`, `vsip2s0`, `vsip3p0`, `vsip3s0`, `vsis1s0`, `vsis3p0`, `vsis3s0`, `vsm02s0`, `vsm03s0`, `vsn0000`, `vsp00sm`, `vssi3p0`, `vssi3s0`, `vssp1p0`, `vssp1s0`, `vssp2s0`, `vssp3p0`, `vssp3s0`, `w`, `z`, `zm`, `zp`, `zu` |
| **`morphologizer`** | `Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Plur\|POS=NOUN`, `POS=ADP`, `Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=CCONJ`, `POS=PROPN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `NumForm=Digit\|NumType=Card\|POS=NUM`, `Gender=Masc\|Number=Plur\|POS=NOUN`, `NumForm=Digit\|POS=NOUN`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `POS=PUNCT\|PunctType=Comm`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `POS=ADV`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `POS=PUNCT\|PunctType=Peri`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Acc\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=ADJ`, `Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Number=Sing\|POS=ADJ`, `POS=PRON\|PronType=Int,Rel`, `Number=Sing\|POS=DET\|PronType=Tot`, `Gender=Fem\|Number=Sing\|POS=NOUN`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=ADJ`, `POS=SCONJ`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=NOUN`, `POS=AUX\|VerbForm=Inf`, `POS=VERB\|VerbForm=Inf`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `Gender=Fem\|Number=Sing\|POS=ADJ`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=PUNCT\|PunctType=Quot`, `POS=ADV\|Polarity=Neg`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `NumType=Card\|Number=Plur\|POS=NUM`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=VERB\|VerbForm=Ger`, `Degree=Cmp\|POS=ADV`, `Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=ADJ`, `Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `AdvType=Tim\|POS=NOUN`, `Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, 
`Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `NumType=Card\|POS=NUM`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|PronType=Int,Rel`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=PART`, `Degree=Cmp\|Number=Sing\|POS=ADJ`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Number=Sing\|POS=DET\|PronType=Ind`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin`, `NumForm=Digit\|POS=SYM`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `AdvType=Tim\|POS=ADJ`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|VerbForm=Fin`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Brck`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Brck`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `NumForm=Digit\|NumType=Frac\|POS=NUM`, `Gender=Fem\|Number=Sing\|POS=PRON\|Poss=Yes\|PronType=Int,Rel`, `POS=PUNCT`, `POS=ADJ`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Tot`, `POS=PRON\|PronType=Ind`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Gender=Fem\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Number=Plur\|POS=DET\|PronType=Ind`, `Number=Plur\|POS=DET\|PronType=Dem`, `Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Number=Sing\|POS=PRON\|PronType=Dem`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part`, `Case=Dat\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Degree=Cmp\|Number=Plur\|POS=ADJ`, `POS=AUX\|VerbForm=Ger`, `Gender=Fem\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PRON\|Poss=Yes\|PronType=Int,Rel`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Definite=Ind\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `POS=PUNCT\|PunctType=Colo`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, 
`Gender=Fem\|Number=Sing\|POS=DET\|PronType=Neg`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Number=Sing\|POS=PRON\|PronType=Neg`, `POS=PUNCT\|PunctType=Semi`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=PRON\|PronType=Ind`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `POS=INTJ`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=PRON\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=PUNCT\|PunctType=Dash`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Neg`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Tot`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Gender=Masc\|POS=NOUN`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Ind`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `POS=NOUN\|VerbForm=Inf`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=PRON\|Poss=Yes\|PronType=Int,Rel`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Neg`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Degree=Abs\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `POS=DET\|PronType=Ind`, `POS=DET\|PronType=Int,Rel`, `AdvType=Tim\|POS=ADV`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Qest`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Qest`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, 
`Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Ind`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `Degree=Abs\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Excl`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Excl`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Tot`, `Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `Degree=Abs\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Pre\|PronType=Prs`, `Definite=Ind\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `POS=SCONJ\|PronType=Int,Rel`, `Case=Dat\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Case=Acc\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `NumType=Card\|Number=Sing\|POS=DET\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Neg`, `Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Ind`, `Case=Acc,Nom\|Number=Sing\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Acc,Dat\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=1\|VerbForm=Fin`, `NumType=Card\|Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Dem`, `Degree=Abs\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=PRON\|Poss=Yes\|PronType=Int,Rel`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Acc,Nom\|Number=Plur\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=2\|VerbForm=Fin`, 
`Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Ind`, `NumType=Card\|Number=Sing\|POS=NUM`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Com\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Pre\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Number=Sing\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Pre\|PronType=Prs`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=2\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Number=Sing\|POS=NOUN\|VerbForm=Fin`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Tot`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `POS=SYM`, `Number=Sing\|POS=VERB\|VerbForm=Fin`, `POS=VERB\|VerbForm=Fin`, `Degree=Abs\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Degree=Abs\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Dem`, `Definite=Ind\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Art`, `Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Acc,Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=AUX\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=PRON\|PronType=Int,Rel`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=PRON\|PronType=Ind`, `Definite=Def\|Foreign=Yes\|POS=DET\|PronType=Art`, `Case=Com\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Reflex=Yes`, `Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `NumForm=Digit\|NumType=Frac\|POS=SYM`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=PRON\|PronType=Dem`, `Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, 
`Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|Typo=Yes\|VerbForm=Fin`, `Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Number=Sing\|POS=PRON\|PronType=Tot`, `AdvType=Tim\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=AUX\|VerbForm=Fin`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=PRON\|PronType=Int,Rel`, `Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Ind`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `POS=PRON\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Foreign=Yes\|POS=X`, `Degree=Abs\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Ind`, `Definite=Def\|Foreign=Yes\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Foreign=Yes\|POS=NOUN`, `Foreign=Yes\|POS=ADP`, `Foreign=Yes\|POS=CCONJ`, `Foreign=Yes\|POS=PROPN`, `Case=Com\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Pre\|PronType=Prs`, `Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=NOUN\|VerbForm=Part`, `Case=Com\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Ind`, `Case=Acc,Dat\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Ind`, `Number=Sing\|POS=DET\|PronType=Int,Rel`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=PRON\|PronType=Dem`, `Number=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|POS=ADJ`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `POS=X`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Degree=Cmp\|POS=ADJ`, `Case=Acc\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Case=Acc,Dat\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Ind`, `POS=NOUN\|PunctType=Comm`, `POS=PRON\|PronType=Neg`, `Case=Acc,Dat\|Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs` |
| **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `dep`, `det`, `expl:impers`, `expl:pass`, `expl:pv`, `fixed`, `flat`, `iobj`, `mark`, `nmod`, `nsubj`, `nummod`, `obj`, `obl`, `parataxis`, `punct`, `xcomp` |
| **`textcat`** | `Economía`, `Entretenimiento`, `Historia`, `Humanidades`, `Derecho`, `Matemáticas`, `Música`, `Filosofía`, `Política`, `Religión`, `Deporte`, `Ciencia_y_Tecnología` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TAG_ACC` | 95.39 |
| `POS_ACC` | 98.60 |
| `MORPH_ACC` | 98.10 |
| `LEMMA_ACC` | 97.98 |
| `DEP_UAS` | 91.26 |
| `DEP_LAS` | 88.09 |
| `SENTS_P` | 95.38 |
| `SENTS_R` | 96.54 |
| `SENTS_F` | 95.96 |
| `TOK2VEC_LOSS` | 7166396.29 |
| `TAGGER_LOSS` | 1262344.25 |
| `MORPHOLOGIZER_LOSS` | 311469.37 |
| `PARSER_LOSS` | 4991259.73 |
| `CATS_SCORE` | 99.14 |
| `CATS_MICRO_P` | 97.52 |
| `CATS_MICRO_R` | 96.19 |
| `CATS_MICRO_F` | 96.85 |
| `CATS_MACRO_P` | 97.25 |
| `CATS_MACRO_R` | 95.42 |
| `CATS_MACRO_F` | 96.31 |
| `CATS_MACRO_AUC` | 99.14 | | 3d95f74648f7ec47ab1119ba889d06f7 |
mukayese/mt5-base-turkish-summarization | mukayese | mt5 | 8 | 65 | transformers | 1 | text2text-generation | true | false | false | apache-2.0 | null | ['mlsum'] | null | 0 | 0 | 0 | 0 | 1 | 0 | 1 | ['generated_from_trainer'] | true | true | true | 1,310 | false |
# [Mukayese: Turkish NLP Strikes Back](https://arxiv.org/abs/2203.01215)
## Summarization: mukayese/mt5-base-turkish-summarization
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the mlsum/tu dataset.
It achieves the following results on the evaluation set:
- Rouge1: 47.4222
- Rouge2: 34.8624
- Rougel: 42.2487
- Rougelsum: 43.9494
Check [this](https://arxiv.org/abs/2203.01215) paper for more details on the model and the dataset.
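A minimal usage sketch with the `transformers` pipeline (the input text below is an illustrative placeholder, not taken from MLSUM):
```python
from transformers import pipeline

# Load the fine-tuned mT5 summarization model from the Hub
summarizer = pipeline("summarization", model="mukayese/mt5-base-turkish-summarization")

# Placeholder Turkish news text; replace with a real article
article = "Buraya özetlenecek Türkçe haber metni gelir."
print(summarizer(article, max_length=120, min_length=10)[0]["summary_text"])
```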
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
- label_smoothing_factor: 0.1
### Framework versions
- Transformers 4.11.3
- Pytorch 1.8.2+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
### Citation
```
@misc{safaya-etal-2022-mukayese,
title={Mukayese: Turkish NLP Strikes Back},
author={Ali Safaya and Emirhan Kurtuluş and Arda Göktoğan and Deniz Yuret},
year={2022},
eprint={2203.01215},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| c03eab42a6a301deea5b8f428dbea16f |
sd-dreambooth-library/face2contra | sd-dreambooth-library | null | 39 | 3 | diffusers | 2 | null | false | false | false | mit | null | null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 3,182 | false | ### face2contra-sd-dreambooth on Stable Diffusion via Dreambooth
#### model by avantcontra
This is the Stable Diffusion model fine-tuned on the face2contra-sd-dreambooth concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a photo of sks face2contra**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
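A minimal local `diffusers` inference sketch (fp16 on a CUDA device is an assumption; adjust for your setup):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned Dreambooth weights from this repository
pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/face2contra", torch_dtype=torch.float16
).to("cuda")

# Use the instance prompt the concept was trained with
image = pipe("a photo of sks face2contra").images[0]
image.save("face2contra.png")
```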
Here are the images used for training this concept:





















| 07b6e3b31e6d5589b402521a7031b8c7 |
MultiversexPeeps/duskfall-s-pink-spider-plushie | MultiversexPeeps | null | 21 | 5 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-to-image'] | false | true | true | 867 | false | ### Duskfall's Pink Spider Plushie Dreambooth model trained by Duskfallcrew with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts; see the sketch below!
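A hedged local-inference sketch: the concept token `plushiedsk`, noted at the end of this card, goes in the prompt, and fp16 on CUDA is an assumption:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "MultiversexPeeps/duskfall-s-pink-spider-plushie", torch_dtype=torch.float16
).to("cuda")

# Include the trained concept token in the prompt
image = pipe("a photo of plushiedsk on a bookshelf").images[0]
image.save("plushie.png")
```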
Information on this model will be here: https://civitai.com/user/duskfallcrew
If you want to donate towards costs and don't want to subscribe:
https://ko-fi.com/DUSKFALLcrew
If you want to monthly support the EARTH & DUSK media projects and not just AI:
https://www.patreon.com/earthndusk
`plushiedsk` (use that token in your prompt) | 4046ffffce82e73cd1733e06c01936c7 |
anas-awadalla/t5-base-few-shot-k-512-finetuned-squad-infilling-seed-2 | anas-awadalla | t5 | 17 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 965 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-few-shot-k-512-finetuned-squad-infilling-seed-2
This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 35.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
| 8b1370e38e4bff97e3434e1deab9c461 |
cahya/wav2vec2-base-turkish-cv7 | cahya | wav2vec2 | 21 | 3 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['tr'] | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'mozilla-foundation/common_voice_7_0', 'generated_from_trainer'] | true | true | true | 1,750 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-turkish-cv7
This model is a fine-tuned version of [cahya/wav2vec2-base-turkish-artificial](https://huggingface.co/cahya/wav2vec2-base-turkish-artificial) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2893
- Wer: 0.2713
## Model description
More information needed
## Intended uses & limitations
More information needed
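A minimal transcription sketch (the audio path is a placeholder; the pipeline resamples the file to the model's expected 16 kHz):
```python
from transformers import pipeline

# CTC-based Turkish speech recognition
asr = pipeline("automatic-speech-recognition", model="cahya/wav2vec2-base-turkish-cv7")
print(asr("speech_tr.wav")["text"])
```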
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.8647 | 14.28 | 200 | 0.2758 | 0.2568 |
| 1.3376 | 28.56 | 400 | 0.2754 | 0.2722 |
| 1.1975 | 42.84 | 600 | 0.2929 | 0.2901 |
| 1.1024 | 57.14 | 800 | 0.2904 | 0.2928 |
| 1.0257 | 71.42 | 1000 | 0.2915 | 0.2823 |
| 0.9628 | 85.7 | 1200 | 0.2936 | 0.2749 |
| 0.9109 | 99.98 | 1400 | 0.2893 | 0.2713 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
| 4352f79f9d7b95afff97e6a09cc71214 |
pritoms/gpt-neo-125M-Byethon | pritoms | gpt_neo | 15 | 4 | transformers | 0 | text-generation | true | false | false | apache-2.0 | null | [] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | false | true | true | 1,259 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-125M-Byethon
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6609
## Model description
More information needed
## Intended uses & limitations
More information needed
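The training data is not documented here, so the prompt below is an arbitrary placeholder; a minimal generation sketch:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="pritoms/gpt-neo-125M-Byethon")
print(generator("Once upon a time", max_length=50, do_sample=True)[0]["generated_text"])
```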
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 237 | 0.8348 |
| No log | 2.0 | 474 | 0.6931 |
| 0.8151 | 3.0 | 711 | 0.6609 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
| 56c85a864005d22b4c9f2704cf73b821 |
subaqua/_unofficial-WD1.4-fp16-safetensors | subaqua | null | 13 | 0 | null | 8 | text-to-image | false | false | false | creativeml-openrail-m | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['stable-diffusion', 'text-to-image'] | false | true | true | 1,747 | false |
# See https://huggingface.co/hakurei/waifu-diffusion-v1-4
## This is WD1.4 with .safetensors and fp16; an unofficial fork
- [Waifu Diffusion 1.4 Anime Epoch 2 Safetensors](https://huggingface.co/subaqua/_unofficial-WD1.4-fp16-safetensors/resolve/main/wd-1-4-anime_e2-fp16.safetensors): A faster-loading and lighter version of WD1.4 Anime E2
- [Waifu Diffusion 1.4 Anime Safetensors Inference Config](https://huggingface.co/subaqua/_unofficial-WD1.4-fp16-safetensors/resolve/main/wd-1-4-anime_e2-fp16.yaml): A file included to allow for inference with Automatic's WebUI and with the original Stable Diffusion codebase.
This configuration file is the "Waifu Diffusion 1.4 Anime Inference Config" modified with the following changes:
```
model:
  params:
    unet_config:
      params:
        use_checkpoint: False
```
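As a quick sanity check, the checkpoint can be read directly with the `safetensors` library (file name taken from the links above; a loading sketch, not a full inference setup):
```python
from safetensors.torch import load_file

# Loads the fp16 state dict quickly and without pickle execution
state_dict = load_file("wd-1-4-anime_e2-fp16.safetensors")
print(f"{len(state_dict)} tensors loaded")
```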
## Great respect to the WD1.4 development team!
## Inherited License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | 6ad2d90f54d1dc7eb5a164837b732063 |
gokul-g-menon/xls-r_fine_tuned | gokul-g-menon | wav2vec2 | 17 | 2 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,070 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r_fine_tuned
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
| a320ec93380dc6326336d10ad705dc5f |
google/muril-base-cased | google | bert | 9 | 3,555 | transformers | 16 | fill-mask | true | true | true | apache-2.0 | null | null | null | 1 | 0 | 1 | 0 | 1 | 1 | 0 | [] | false | true | true | 8,947 | false | MuRIL: Multilingual Representations for Indian Languages
===
MuRIL is a BERT model pre-trained on 17 Indian languages and their transliterated counterparts. We have released the pre-trained model (with the MLM layer intact, enabling masked word predictions) in this repository. We have also released the encoder on [TFHub](https://tfhub.dev/google/MuRIL/1) with an additional pre-processing module, that processes raw text into the expected input format for the encoder. You can find more details on MuRIL in this [paper](http://arxiv.org/abs/2103.10730).
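Since the MLM head is included, masked word prediction works out of the box; a minimal sketch (the Hindi example sentence is ours, not from the training data):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="google/muril-base-cased")
# "India is a [MASK] country."
for pred in fill_mask("भारत एक [MASK] देश है।"):
    print(pred["token_str"], round(pred["score"], 3))
```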
## Overview
This model uses a BERT base architecture [1] pretrained from scratch using the
Wikipedia [2], Common Crawl [3], PMINDIA [4] and Dakshina [5] corpora for 17 [6]
Indian languages.
We use a training paradigm similar to multilingual BERT, with a few
modifications as listed:
* We include translation and transliteration segment pairs in training as
well.
* We keep an exponent value of 0.3 and not 0.7 for upsampling, shown to
enhance low-resource performance. [7]
See the Training section for more details.
## Training
The MuRIL model is pre-trained on monolingual segments as well as parallel
segments as detailed below :
* Monolingual Data : We make use of publicly available corpora from Wikipedia
and Common Crawl for 17 Indian languages.
* Parallel Data : We have two types of parallel data :
* Translated Data : We obtain translations of the above monolingual
corpora using the Google NMT pipeline. We feed translated segment pairs
as input. We also make use of the publicly available PMINDIA corpus.
* Transliterated Data : We obtain transliterations of Wikipedia using the
IndicTrans [8] library. We feed transliterated segment pairs as input.
We also make use of the publicly available Dakshina dataset.
We keep an exponent value of 0.3 to calculate duplication multiplier values for
upsampling of lower resourced languages and set dupe factors accordingly. Note,
we limit transliterated pairs to Wikipedia only.
The model was trained using a self-supervised masked language modeling task. We
do whole word masking with a maximum of 80 predictions. The model was trained
for 1000K steps, with a batch size of 4096, and a max sequence length of 512.
### Trainable parameters
All parameters in the module are trainable, and fine-tuning all parameters is
the recommended practice.
## Uses & Limitations
This model is intended to be used for a variety of downstream NLP tasks for
Indian languages. This model is trained on transliterated data as well, a
phenomenon commonly observed in the Indian context. This model is not expected
to perform well on languages other than the ones used in pretraining, i.e. 17
Indian languages.
## Evaluation
We provide the results of fine-tuning this model on a set of downstream tasks.<br/>
We choose these tasks from the XTREME benchmark, with evaluation done on Indian language test-sets.<br/>
We also transliterate the test-sets and evaluate on the same.<br/>
We use the same fine-tuning setting as is used by [9], except for TyDiQA, where we use additional SQuAD v1.1 English training data, similar to [10].<br/>
For Tatoeba, we do not fine-tune the model, and use the pooled_output of the last layer as the sentence embedding.<br/>
All results are computed in a zero-shot setting, with English being the high resource training set language.
* Shown below are results on datasets from the XTREME benchmark (in %)
<br/>
PANX (F1) | ml | ta | te | en | bn | hi | mr | ur | Average
:-------- | ----: | ----: | ----: | ----: | ----: | ----: | ----: | ----: | ------:
mBERT | 54.77 | 51.24 | 50.16 | 84.40 | 68.59 | 65.13 | 58.44 | 31.36 | 58.01
MuRIL | 75.74 | 71.86 | 64.99 | 84.43 | 85.97 | 78.09 | 74.63 | 85.07 | 77.60
<br/>
UDPOS (F1) | en | hi | mr | ta | te | ur | Average
:--------- | ----: | ----: | ----: | ----: | ----: | ----: | ------:
mBERT | 95.35 | 66.09 | 71.27 | 59.58 | 76.98 | 57.85 | 71.19
MuRIL | 95.55 | 64.47 | 82.95 | 62.57 | 85.63 | 58.93 | 75.02
<br/>
XNLI (Accuracy) | en | hi | ur | Average
:-------------- | ----: | ----: | ----: | ------:
mBERT | 81.72 | 60.52 | 58.20 | 66.81
MuRIL | 83.85 | 70.66 | 67.70 | 74.07
<br/>
Tatoeba (Accuracy) | ml | ta | te | bn | hi | mr | ur | Average
:----------------- | ----: | ----: | ----: | ----: | ----: | ----: | ----: | ------:
mBERT | 20.23 | 12.38 | 14.96 | 12.80 | 27.80 | 18.00 | 22.70 | 18.41
MuRIL | 26.35 | 36.81 | 17.52 | 20.20 | 31.50 | 26.60 | 17.10 | 25.15
<br/>
XQUAD (F1/EM) | en | hi | Average
:------------ | ----------: | ----------: | ----------:
mBERT | 83.85/72.86 | 58.46/43.53 | 71.15/58.19
MuRIL | 84.31/72.94 | 73.93/58.32 | 79.12/65.63
<br/>
MLQA (F1/EM) | en | hi | Average
:----------- | ----------: | ----------: | ----------:
mBERT | 80.39/67.30 | 50.28/35.18 | 65.34/51.24
MuRIL | 80.28/67.37 | 67.34/50.22 | 73.81/58.80
<br/>
TyDiQA (F1/EM) | en | bn | te | Average
:---------------- | ----------: | ----------: | ----------: | ----------:
mBERT | 75.21/65.00 | 60.62/45.13 | 53.55/44.54 | 63.13/51.66
MuRIL | 74.10/64.55 | 78.03/66.37 | 73.95/46.94 | 75.36/59.28
* Shown below are results on the transliterated versions of the above
test-sets.
PANX (F1) | ml_tr | ta_tr | te_tr | bn_tr | hi_tr | mr_tr | ur_tr | Average
:-------- | ----: | ----: | ----: | ----: | ----: | ----: | ----: | ------:
mBERT | 7.53 | 1.04 | 8.24 | 41.77 | 25.46 | 8.34 | 7.30 | 14.24
MuRIL | 63.39 | 7.00 | 53.62 | 72.94 | 69.75 | 68.77 | 68.41 | 57.70
<br/>
UDPOS (F1) | hi_tr | mr_tr | ta_tr | te_tr | ur_tr | Average
:--------- | ----: | ----: | ----: | ----: | ----: | ------:
mBERT | 25.00 | 33.67 | 24.02 | 36.21 | 22.07 | 28.20
MuRIL | 63.09 | 67.19 | 58.40 | 65.30 | 56.49 | 62.09
<br/>
XNLI (Accuracy) | hi_tr | ur_tr | Average
:-------------- | ----: | ----: | ------:
mBERT | 39.6 | 38.86 | 39.23
MuRIL | 68.24 | 61.16 | 64.70
<br/>
Tatoeba (Accuracy) | ml_tr | ta_tr | te_tr | bn_tr | hi_tr | mr_tr | ur_tr | Average
:----------------- | ----: | ----: | ----: | ----: | ----: | ----: | ----: | ------:
mBERT | 2.18 | 1.95 | 5.13 | 1.80 | 3.00 | 2.40 | 2.30 | 2.68
MuRIL | 10.33 | 11.07 | 11.54 | 8.10 | 14.90 | 7.20 | 13.70 | 10.98
<br/>
## References
\[1]: Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. [BERT:
Pre-training of Deep Bidirectional Transformers for Language
Understanding](https://arxiv.org/abs/1810.04805). arXiv preprint
arXiv:1810.04805, 2018.
\[2]: [Wikipedia](https://www.tensorflow.org/datasets/catalog/wikipedia)
\[3]: [Common Crawl](http://commoncrawl.org/the-data/)
\[4]:
[PMINDIA](http://lotus.kuee.kyoto-u.ac.jp/WAT/indic-multilingual/index.html)
\[5]: [Dakshina](https://github.com/google-research-datasets/dakshina)
\[6]: Assamese (as), Bengali (bn), English (en), Gujarati (gu), Hindi (hi),
Kannada (kn), Kashmiri (ks), Malayalam (ml), Marathi (mr), Nepali (ne), Oriya
(or), Punjabi (pa), Sanskrit (sa), Sindhi (sd), Tamil (ta), Telugu (te) and Urdu
(ur).
\[7]: Conneau, Alexis, et al.
[Unsupervised cross-lingual representation learning at scale](https://arxiv.org/pdf/1911.02116.pdf).
arXiv preprint arXiv:1911.02116 (2019).
\[8]: [IndicTrans](https://github.com/libindic/indic-trans)
\[9]: Hu, J., Ruder, S., Siddhant, A., Neubig, G., Firat, O., & Johnson, M.
(2020). [Xtreme: A massively multilingual multi-task benchmark for evaluating
cross-lingual generalization.](https://arxiv.org/pdf/2003.11080.pdf) arXiv
preprint arXiv:2003.11080.
\[10]: Fang, Y., Wang, S., Gan, Z., Sun, S., & Liu, J. (2020).
[FILTER: An Enhanced Fusion Method for Cross-lingual Language Understanding.](https://arxiv.org/pdf/2009.05166.pdf)
arXiv preprint arXiv:2009.05166.
## Citation
If you find MuRIL useful in your applications, please cite the following paper:
```
@misc{khanuja2021muril,
title={MuRIL: Multilingual Representations for Indian Languages},
author={Simran Khanuja and Diksha Bansal and Sarvesh Mehtani and Savya Khosla and Atreyee Dey and Balaji Gopalan and Dilip Kumar Margam and Pooja Aggarwal and Rajiv Teja Nagipogu and Shachi Dave and Shruti Gupta and Subhash Chandra Bose Gali and Vish Subramanian and Partha Talukdar},
year={2021},
eprint={2103.10730},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Contact
Please mail your queries/feedback to muril-contact@google.com. | 3b421be99a00bdaa9cf3ec933b3ec287 |
mrm8488/bertin-gpt-j-6B-ES-8bit | mrm8488 | gptj | 10 | 39 | transformers | 2 | text-generation | true | false | false | wtfpl | ['es'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['gpt-j', 'spanish', 'LLM', 'gpt-j-6b'] | false | true | true | 5,391 | false |
# BERTIN-GPT-J-6B with 8-bit weights (Quantized)
### Go [here](https://huggingface.co/mrm8488/bertin-gpt-j-6B-ES-v1-8bit) to use the latest checkpoint.
This model (and model card) is an adaptation of [hivemind/gpt-j-6B-8bit](https://huggingface.co/hivemind/gpt-j-6B-8bit), so all credits to him/her.
This is a version of **[bertin-project/bertin-gpt-j-6B](https://huggingface.co/bertin-project/bertin-gpt-j-6B)** that is modified so you can generate **and fine-tune the model in colab or equivalent desktop GPU (e.g. single 1080Ti)**.
Here's how to run it: [](https://colab.research.google.com/drive/1ft6wQU0BhqG5PRlwgaZJv2VukKKjU4Es)
__The [original GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B/tree/main)__ takes 22+ GB memory for float32 parameters alone, and that's before you account for gradients & optimizer. Even if you cast everything to 16-bit, it will still not fit onto most single-GPU setups short of A6000 and A100. You can inference it [on TPU](https://colab.research.google.com/github/kingoflolz/mesh-transformer-jax/blob/master/colab_demo.ipynb) or CPUs, but fine-tuning is way more expensive.
Here, we apply several techniques to make GPT-J usable and fine-tunable on a single GPU with ~11 GB memory:
- large weight tensors are quantized using dynamic 8-bit quantization and de-quantized just-in-time for multiplication
- using gradient checkpoints to store only one activation per layer: this uses dramatically less memory at the cost of 30% slower training
- scalable fine-tuning with [LoRA](https://arxiv.org/abs/2106.09685) and [8-bit Adam](https://arxiv.org/abs/2110.02861)
In other words, all of the large weight-matrices are frozen in 8-bit, and you only train small adapters and optionally 1d tensors (layernorm scales, biases).

__Does 8-bit affect model quality?__ Technically yes, but the effect is negligible in practice. [This notebook measures wikitext test perplexity](https://nbviewer.org/urls/huggingface.co/hivemind/gpt-j-6B-8bit/raw/main/check_perplexity.ipynb) and it is nigh indistinguishable from the original GPT-J. Quantized model is even slightly better, but that is not statistically significant.
Our code differs from other 8-bit methods in that we use **8-bit only for storage, and all computations are performed in float16 or float32**. As a result, we can take advantage of nonlinear quantization that fits to each individual weight distribution. Such nonlinear quantization does not accelerate inference, but it allows for much smaller error.
__What about performance?__ Both checkpointing and de-quantization has some overhead, but it's surprisingly manageable. Depending on GPU and batch size, the quantized model is 1-10% slower than the original model on top of using gradient checkpoints (which is 30% overhead). In short, this is because block-wise quantization from bitsandbytes is really fast on GPU.
### How should I fine-tune the model?
We recommend starting with the original hyperparameters from [the LoRA paper](https://arxiv.org/pdf/2106.09685.pdf).
On top of that, there is one more trick to consider: the overhead from de-quantizing weights does not depend on batch size.
As a result, the larger batch size you can fit, the more efficient you will train.
### Where can I train for free?
You can train fine in Colab, but if you get a K80, it's probably best to switch to other free GPU providers: [kaggle](https://towardsdatascience.com/amazon-sagemaker-studio-lab-a-great-alternative-to-google-colab-7194de6ef69a), [aws sagemaker](https://towardsdatascience.com/amazon-sagemaker-studio-lab-a-great-alternative-to-google-colab-7194de6ef69a) or [paperspace](https://docs.paperspace.com/gradient/more/instance-types/free-instances). For instance, this is the same notebook [running in kaggle](https://www.kaggle.com/justheuristic/dmazur-converted) using a more powerful P100 instance.
### Can I use this technique with other models?
The model was converted using [this notebook](https://nbviewer.org/urls/huggingface.co/hivemind/gpt-j-6B-8bit/raw/main/convert-gpt-j.ipynb). It can be adapted to work with other model types. However, please bear in mind that some models replace Linear and Embedding with custom alternatives that require their own BNBWhateverWithAdapters.
### How to use
```sh
wget https://huggingface.co/mrm8488/bertin-gpt-j-6B-ES-8bit/resolve/main/utils.py -O Utils.py
pip install transformers
pip install bitsandbytes-cuda111==0.26.0
```
```py
import transformers
import torch
from Utils import GPTJBlock, GPTJForCausalLM
device = "cuda" if torch.cuda.is_available() else "cpu"
transformers.models.gptj.modeling_gptj.GPTJBlock = GPTJBlock # monkey-patch GPT-J
ckpt = "mrm8488/bertin-gpt-j-6B-ES-8bit"
tokenizer = transformers.AutoTokenizer.from_pretrained(ckpt)
model = GPTJForCausalLM.from_pretrained(ckpt, pad_token_id=tokenizer.eos_token_id, low_cpu_mem_usage=True).to(device)
prompt = tokenizer("El sentido de la vida es", return_tensors='pt')
prompt = {key: value.to(device) for key, value in prompt.items()}
out = model.generate(**prompt, max_length=64, do_sample=True)
print(tokenizer.decode(out[0]))
``` | e2d7d53cbc8c878d766ab3d6a1bd8988 |
timm/davit_small.msft_in1k | timm | null | 4 | 72 | timm | 0 | image-classification | true | false | false | apache-2.0 | null | ['imagenet-1k'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['image-classification', 'timm'] | false | true | true | 3,944 | false | # Model card for davit_small.msft_in1k
A DaViT image classification model. Trained on ImageNet-1k by paper authors.
Thanks to [Fredo Guan](https://github.com/fffffgggg54) for bringing the classification backbone to `timm`.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 49.7
- GMACs: 8.8
- Activations (M): 30.5
- Image size: 224 x 224
- **Papers:**
- DaViT: Dual Attention Vision Transformers: https://arxiv.org/abs/2204.03645
- **Original:** https://github.com/dingmyu/davit
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model('davit_small.msft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
    'davit_small.msft_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 96, 56, 56])
    #  torch.Size([1, 192, 28, 28])
    #  torch.Size([1, 384, 14, 14])
    #  torch.Size([1, 768, 7, 7])
    print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
    'davit_small.msft_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled (i.e. a (batch_size, num_features, H, W) tensor)
output = model.forward_head(output, pre_logits=True)
# output is (batch_size, num_features) tensor
```
## Model Comparison
### By Top-1
|model |top1 |top1_err|top5 |top5_err|param_count|img_size|crop_pct|interpolation|
|---------------------|------|--------|------|--------|-----------|--------|--------|-------------|
|davit_base.msft_in1k |84.634|15.366 |97.014|2.986 |87.95 |224 |0.95 |bicubic |
|davit_small.msft_in1k|84.25 |15.75 |96.94 |3.06 |49.75 |224 |0.95 |bicubic |
|davit_tiny.msft_in1k |82.676|17.324 |96.276|3.724 |28.36 |224 |0.95 |bicubic |
## Citation
```bibtex
@inproceedings{ding2022davit,
title={DaViT: Dual Attention Vision Transformer},
author={Ding, Mingyu and Xiao, Bin and Codella, Noel and Luo, Ping and Wang, Jingdong and Yuan, Lu},
booktitle={ECCV},
year={2022},
}
```
| fcfaea39123dd4b20f921d2d35563a5f |
yokoe/xlm-roberta-base-finetuned-panx-de | yokoe | xlm-roberta | 13 | 4 | transformers | 0 | token-classification | true | false | false | mit | null | ['xtreme'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,319 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1365
- F1: 0.8649
## Model description
More information needed
## Intended uses & limitations
More information needed
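A minimal German NER sketch (the example sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="yokoe/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```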
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 |
| 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 |
| 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| deb66e045464fa6ad2aa8ead85f8263c |
jonatasgrosman/exp_w2v2t_nl_vp-es_s496 | jonatasgrosman | wav2vec2 | 10 | 2 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['nl'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'nl'] | false | true | true | 469 | false | # exp_w2v2t_nl_vp-es_s496
Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (nl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
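A minimal HuggingSound sketch (the audio paths are placeholders):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_nl_vp-es_s496")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```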
| 62b637893038b29389ab079429823822 |
jglaser/affinity_pred_regex_2 | jglaser | null | 7 | 0 | null | 0 | null | true | false | false | bsd-3-clause | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,462 | false |
Copyright 2018-2022, UT-Battelle
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. | 569516bbae332e161044e3bfefd64b43 |
bardsai/whisper-small-pl | bardsai | whisper | 15 | 12 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['pl'] | ['mozilla-foundation/common_voice_11_0', 'google/fleurs'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['whisper-event', 'generated_from_trainer'] | true | true | true | 1,292 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small PL
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 and the FLEURS datasets.
It achieves the following results on the evaluation set:
- eval_loss: 0.3571
- eval_wer: 14.8004
- eval_runtime: 2233.4204
- eval_samples_per_second: 3.714
- eval_steps_per_second: 0.232
- epoch: 4.03
- step: 3000
## Model description
More information needed
## Intended uses & limitations
More information needed
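A minimal transcription sketch (the audio path is a placeholder; `chunk_length_s` enables long-form audio):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition", model="bardsai/whisper-small-pl", chunk_length_s=30
)
print(asr("polish_speech.wav")["text"])
```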
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 24
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 8000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
| 2a7dd680dbc764b67ee0e917c66a7dd6 |
jonathanybema/twitter-xlm-roberta-base-sentiment | jonathanybema | xlm-roberta | 13 | 1 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,025 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-xlm-roberta-base-sentiment
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6256
- Accuracy: 0.7297
## Model description
More information needed
## Intended uses & limitations
More information needed
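The label mapping is not documented here, so a minimal sketch that just prints the raw prediction:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification", model="jonathanybema/twitter-xlm-roberta-base-sentiment"
)
print(classifier("I love this new update!"))
```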
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| 0e3d722b53ff718f0473d07136204bb3 |
Axon/resnet34-v1 | Axon | null | 3 | 0 | null | 0 | null | false | false | false | apache-2.0 | null | ['ImageNet'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['Axon', 'Elixir'] | false | true | true | 3,463 | false |
# ResNet
This ResNet34 model was translated from the ONNX ResNetv1 model found
at https://github.com/onnx/models/tree/main/vision/classification/resnet into Axon using [AxonOnnx](https://github.com/elixir-nx/axon_onnx)
The following description is copied from the relevant description at the ONNX repository.
## Use cases
These ResNet models perform image classification - they take images as input and classify the major object in the image into a set of pre-defined classes. They are trained on ImageNet dataset which contains images from 1000 classes. ResNet models provide very high accuracies with affordable model sizes. They are ideal for cases when high accuracy of classification is required.
ImageNet trained models are often used as the base layers for a transfer learning approach to training a model in your domain. Transfer learning can significantly reduce the processing necessary to train an accurate model in your domain. This model was published here with the expectation that it would be useful to the Elixir community for transfer learning and other similar approaches.
## Description
Deeper neural networks are more difficult to train. A residual learning framework eases the training of networks that are substantially deeper. The research explicitly reformulates the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. It also provides comprehensive empirical evidence showing that these residual networks are easier to optimize and can gain accuracy from considerably increased depth. On the ImageNet dataset, the residual nets were evaluated with a depth of up to 152 layers (8× deeper than VGG nets) while still having lower complexity.
## Model
ResNet models consist of residual blocks and were introduced to counter the deteriorating accuracy observed with more layers, caused by the network not learning the initial layers well.
ResNet v1 uses post-activation for the residual blocks.
### Input
All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (N x 3 x H x W), where N is the batch size, and H and W are expected to be at least 224.
Inference was done using a JPEG image.
### Preprocessing
The images have to be loaded into a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225]. The transformation should preferably happen at preprocessing.
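A minimal Python sketch of this preprocessing (the Axon/Elixir pipeline would apply the equivalent steps; image loading is left out):
```python
import numpy as np

MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image: np.ndarray) -> np.ndarray:
    """HxWx3 uint8 RGB image -> 1x3xHxW normalized float32 batch."""
    x = image.astype(np.float32) / 255.0    # scale to [0, 1]
    x = (x - MEAN) / STD                    # per-channel normalization
    return x.transpose(2, 0, 1)[None, ...]  # NCHW with a batch dimension
```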
### Output
The model outputs image scores for each of the 1000 classes of ImageNet.
### Postprocessing
The post-processing involves calculating the softmax probability scores for each class. You can also sort them to report the most probable classes. Check [imagenet_postprocess.py](../imagenet_postprocess.py) for code.
## Dataset
Dataset used for train and validation: [ImageNet (ILSVRC2012)](http://www.image-net.org/challenges/LSVRC/2012/). Check [imagenet_prep](../imagenet_prep.md) for guidelines on preparing the dataset.
## References
* **ResNetv1**
[Deep residual learning for image recognition](https://arxiv.org/abs/1512.03385)
He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778. 2016.
* **ONNX source model**
[onnx/models vision/classification/resnet resnet34-v1-7.onnx](https://github.com/onnx/models/tree/main/vision/classification/resnet/README)
| 369a8008180581f210c6ee70f4a90c53 |