repo_id stringlengths 4 110 | author stringlengths 2 27 ⌀ | model_type stringlengths 2 29 ⌀ | files_per_repo int64 2 15.4k | downloads_30d int64 0 19.9M | library stringlengths 2 37 ⌀ | likes int64 0 4.34k | pipeline stringlengths 5 30 ⌀ | pytorch bool 2 classes | tensorflow bool 2 classes | jax bool 2 classes | license stringlengths 2 30 | languages stringlengths 4 1.63k ⌀ | datasets stringlengths 2 2.58k ⌀ | co2 stringclasses 29 values | prs_count int64 0 125 | prs_open int64 0 120 | prs_merged int64 0 15 | prs_closed int64 0 28 | discussions_count int64 0 218 | discussions_open int64 0 148 | discussions_closed int64 0 70 | tags stringlengths 2 513 | has_model_index bool 2 classes | has_metadata bool 1 class | has_text bool 1 class | text_length int64 401 598k | is_nc bool 1 class | readme stringlengths 0 598k | hash stringlengths 32 32 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
jonatasgrosman/exp_w2v2t_pl_xls-r_s235 | jonatasgrosman | wav2vec2 | 10 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['pl'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'pl'] | false | true | true | 453 | false | # exp_w2v2t_pl_xls-r_s235
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (pl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
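The 16 kHz requirement above can be met by resampling before inference. A minimal sketch using `scipy` (the helper name `to_16k` is ours, not part of HuggingSound; `torchaudio.functional.resample` works equally well):

```python
import numpy as np
from math import gcd
from scipy.signal import resample_poly

def to_16k(audio: np.ndarray, orig_sr: int) -> np.ndarray:
    """Resample a mono waveform to the 16 kHz rate the model expects."""
    target_sr = 16_000
    if orig_sr == target_sr:
        return audio
    g = gcd(orig_sr, target_sr)
    return resample_poly(audio, target_sr // g, orig_sr // g)

# One second of audio at 44.1 kHz becomes one second at 16 kHz.
clip = np.zeros(44_100, dtype=np.float32)
resampled = to_16k(clip, 44_100)
print(len(resampled))  # 16000
```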
| a15ec24005f7a42dc87a892d1e930963 |
Palak/google_electra-small-discriminator_squad | Palak | electra | 13 | 9 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,073 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# google_electra-small-discriminator_squad
This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the **squadV1** dataset.
- "eval_exact_match": 76.95364238410596
- "eval_f1": 84.98869246841396
- "eval_samples": 10784
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.14.1
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| cfee47a84ddde4d8f15d6f05b48c4f0b |
Helsinki-NLP/opus-mt-kwn-en | Helsinki-NLP | marian | 10 | 10 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 | false |
### opus-mt-kwn-en
* source languages: kwn
* target languages: en
* OPUS readme: [kwn-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/kwn-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/kwn-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/kwn-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/kwn-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.kwn.en | 27.5 | 0.434 |
| c73fa858a7869e02f2ec7538ebca0fba |
hassnain/wav2vec2-base-timit-demo-colab11 | hassnain | wav2vec2 | 12 | 6 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,462 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab11
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6269
- Wer: 0.7418
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.6439 | 7.04 | 500 | 3.3083 | 1.0 |
| 2.3763 | 14.08 | 1000 | 1.5059 | 0.8146 |
| 1.0161 | 21.13 | 1500 | 1.5101 | 0.7488 |
| 0.6195 | 28.17 | 2000 | 1.6269 | 0.7418 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
| 82cb18adfe353fdd29ad4f95bf986d1d |
AkiKagura/mkgen-diffusion | AkiKagura | null | 25 | 18 | diffusers | 1 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 2 | 0 | 2 | 0 | 1 | 1 | 0 | ['text-to-image'] | false | true | true | 952 | false |
A Stable Diffusion model that generates pictures of Marco from the prompt **'mkmk woman'**.
Based on runwayml/stable-diffusion-v1-5 and trained with DreamBooth.
Trained on 39 pictures for 3,000 steps.
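Generation can be sketched with `diffusers`. The `fp16` precision and the `build_prompt` helper are our assumptions, not documented settings; the heavy import and checkpoint download are guarded behind a GPU check so the sketch degrades gracefully on CPU:

```python
import torch

TRIGGER = "mkmk woman"  # the trained trigger prompt

def build_prompt(extra: str = "") -> str:
    """Prepend the trigger phrase to an optional extra description."""
    return f"{TRIGGER}, {extra}" if extra else TRIGGER

if torch.cuda.is_available():
    # The heavy import and checkpoint download only happen when a GPU is present.
    from diffusers import StableDiffusionPipeline
    pipe = StableDiffusionPipeline.from_pretrained(
        "AkiKagura/mkgen-diffusion", torch_dtype=torch.float16
    ).to("cuda")
    image = pipe(build_prompt("reading a book, soft lighting")).images[0]
    image.save("marco.png")
```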
What is Marco like?
<img src="https://huggingface.co/AkiKagura/mkgen-diffusion/resolve/main/samples/IMG_2683.jpeg" width="512" height="512"/>
<img src="https://huggingface.co/AkiKagura/mkgen-diffusion/resolve/main/samples/IMG_0537.jpeg" width="512" height="512"/>
Some samples generated by this model:
<img src="https://huggingface.co/AkiKagura/mkgen-diffusion/resolve/main/samples/0.png" width="512" height="512"/>
<img src="https://huggingface.co/AkiKagura/mkgen-diffusion/resolve/main/samples/1.png" width="512" height="512"/>
<img src="https://huggingface.co/AkiKagura/mkgen-diffusion/resolve/main/samples/2.png" width="512" height="512"/>
<img src="https://huggingface.co/AkiKagura/mkgen-diffusion/resolve/main/samples/3.png" width="512" height="512"/>
| 4372ecd2eb9dda4a11686d100fdecaff |
sd-concepts-library/trigger-studio | sd-concepts-library | null | 22 | 0 | null | 30 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 2,590 | false | ### Trigger Studio on Stable Diffusion
This is the `<Trigger Studio>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:

















| 5fe9c5b34bbb32631a1669f0458aeaa5 |
frgfm/cspdarknet53 | frgfm | null | 4 | 4 | transformers | 0 | image-classification | true | false | false | apache-2.0 | null | ['frgfm/imagenette'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['image-classification', 'pytorch'] | false | true | true | 2,923 | false |
# CSP-Darknet-53 model
Pretrained on [ImageNette](https://github.com/fastai/imagenette). The CSP-Darknet-53 architecture was introduced in [this paper](https://arxiv.org/pdf/1911.11929.pdf).
## Model description
The core idea of the authors is to modify the convolutional stages by adding cross-stage partial blocks to the architecture.
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.
### Latest stable release
You can install the latest stable release of the package from [PyPI](https://pypi.org/project/pylocron/) as follows:
```shell
pip install pylocron
```
or using [conda](https://anaconda.org/frgfm/pylocron):
```shell
conda install -c frgfm pylocron
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```
## Usage instructions
```python
from PIL import Image
import torch
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode

from holocron.models import model_from_hf_hub

model = model_from_hf_hub("frgfm/cspdarknet53").eval()
img = Image.open(path_to_an_image).convert("RGB")

# Preprocessing
config = model.default_cfg
transform = Compose([
    Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
    PILToTensor(),
    ConvertImageDtype(torch.float32),
    Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)

# Inference
with torch.inference_mode():
    output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/abs-1911-11929,
author = {Chien{-}Yao Wang and
Hong{-}Yuan Mark Liao and
I{-}Hau Yeh and
Yueh{-}Hua Wu and
Ping{-}Yang Chen and
Jun{-}Wei Hsieh},
title = {CSPNet: {A} New Backbone that can Enhance Learning Capability of {CNN}},
journal = {CoRR},
volume = {abs/1911.11929},
year = {2019},
url = {http://arxiv.org/abs/1911.11929},
eprinttype = {arXiv},
eprint = {1911.11929},
timestamp = {Tue, 03 Dec 2019 20:41:07 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1911-11929.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
| 34a70dcdf13a50fc8bdbd6434ba3d3e1 |
domenicrosati/deberta-v3-large-finetuned-DAGPap22 | domenicrosati | deberta-v2 | 21 | 1 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'generated_from_trainer'] | true | true | true | 995 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large-finetuned-DAGPap22
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 20
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| 2ca45ce0c371dc1a3351e4f243404ff0 |
DOOGLAK/Tagged_One_50v6_NER_Model_3Epochs_AUGMENTED | DOOGLAK | bert | 13 | 5 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['tagged_one50v6_wikigold_split'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,563 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_50v6_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one50v6_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6728
- Precision: 0.0625
- Recall: 0.0005
- F1: 0.0010
- Accuracy: 0.7775
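For completeness, a hedged sketch of running the checkpoint through the `token-classification` pipeline (given the near-zero recall reported above, expect few or no entities; the sample sentence is ours):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="DOOGLAK/Tagged_One_50v6_NER_Model_3Epochs_AUGMENTED",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

entities = ner("Barack Obama visited Berlin in 2013.")
for ent in entities:
    print(ent["entity_group"], ent["word"], round(float(ent["score"]), 3))
```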
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 16 | 0.7728 | 0.0 | 0.0 | 0.0 | 0.7773 |
| No log | 2.0 | 32 | 0.6898 | 0.04 | 0.0002 | 0.0005 | 0.7774 |
| No log | 3.0 | 48 | 0.6728 | 0.0625 | 0.0005 | 0.0010 | 0.7775 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
| d1d4a14b3313f045b428e501fb373751 |
Padomin/t5-base-TEDxJP-10front-1body-10rear | Padomin | t5 | 20 | 1 | transformers | 0 | text2text-generation | true | false | false | cc-by-sa-4.0 | null | ['te_dx_jp'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,955 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-TEDxJP-10front-1body-10rear
This model is a fine-tuned version of [sonoisa/t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese) on the te_dx_jp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4366
- Wer: 0.1693
- Mer: 0.1636
- Wil: 0.2493
- Wip: 0.7507
- Hits: 55904
- Substitutions: 6304
- Deletions: 2379
- Insertions: 2249
- Cer: 0.1332
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Mer | Wil | Wip | Hits | Substitutions | Deletions | Insertions | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:------:|:-----:|:-------------:|:---------:|:----------:|:------:|
| 0.6166 | 1.0 | 1457 | 0.4595 | 0.2096 | 0.1979 | 0.2878 | 0.7122 | 54866 | 6757 | 2964 | 3819 | 0.1793 |
| 0.4985 | 2.0 | 2914 | 0.4190 | 0.1769 | 0.1710 | 0.2587 | 0.7413 | 55401 | 6467 | 2719 | 2241 | 0.1417 |
| 0.4787 | 3.0 | 4371 | 0.4130 | 0.1728 | 0.1670 | 0.2534 | 0.7466 | 55677 | 6357 | 2553 | 2249 | 0.1368 |
| 0.4299 | 4.0 | 5828 | 0.4085 | 0.1726 | 0.1665 | 0.2530 | 0.7470 | 55799 | 6381 | 2407 | 2357 | 0.1348 |
| 0.3855 | 5.0 | 7285 | 0.4130 | 0.1702 | 0.1644 | 0.2501 | 0.7499 | 55887 | 6309 | 2391 | 2292 | 0.1336 |
| 0.3109 | 6.0 | 8742 | 0.4182 | 0.1732 | 0.1668 | 0.2525 | 0.7475 | 55893 | 6317 | 2377 | 2494 | 0.1450 |
| 0.3027 | 7.0 | 10199 | 0.4256 | 0.1691 | 0.1633 | 0.2486 | 0.7514 | 55949 | 6273 | 2365 | 2283 | 0.1325 |
| 0.2729 | 8.0 | 11656 | 0.4252 | 0.1709 | 0.1649 | 0.2503 | 0.7497 | 55909 | 6283 | 2395 | 2362 | 0.1375 |
| 0.2531 | 9.0 | 13113 | 0.4329 | 0.1696 | 0.1639 | 0.2499 | 0.7501 | 55870 | 6322 | 2395 | 2235 | 0.1334 |
| 0.2388 | 10.0 | 14570 | 0.4366 | 0.1693 | 0.1636 | 0.2493 | 0.7507 | 55904 | 6304 | 2379 | 2249 | 0.1332 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.12.1
| b0f84e896c4fe2915ec98e7425227e32 |
sd-concepts-library/kairuno | sd-concepts-library | null | 18 | 0 | null | 6 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,894 | false | ### kairuno on Stable Diffusion
This is the `kairuno` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:













| 0ec5e8e2bb6e21930598be3326f617a3 |
CLAck/indo-mixed | CLAck | marian | 11 | 3 | transformers | 1 | translation | true | false | false | apache-2.0 | ['en', 'id'] | ['ALT'] | null | 1 | 0 | 0 | 1 | 0 | 0 | 0 | ['translation'] | false | true | true | 1,515 | false |
This model is pretrained on the Chinese and Indonesian languages, and fine-tuned on Indonesian.
### Example
```
%%capture
!pip install transformers transformers[sentencepiece]
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
# Download the pretrained English-Indonesian model available on the hub
model = AutoModelForSeq2SeqLM.from_pretrained("CLAck/indo-mixed")
tokenizer = AutoTokenizer.from_pretrained("CLAck/indo-mixed")
# Download a tokenizer that can tokenize English, since this model's own tokenizer no longer can.
# We use the one that comes with the initial model.
# This tokenizer is used to tokenize the input sentence
tokenizer_en = AutoTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-zh')
# These special tokens are needed to reproduce the original tokenizer
tokenizer_en.add_tokens(["<2zh>", "<2indo>"], special_tokens=True)
sentence = "The cat is on the table"
# This token is needed to identify the target language
input_sentence = "<2indo> " + sentence
translated = model.generate(**tokenizer_en(input_sentence, return_tensors="pt", padding=True))
output_sentence = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
```
### Training results
MIXED
| Epoch | Bleu |
|:-----:|:-------:|
| 1.0 | 24.2579 |
| 2.0 | 30.6287 |
| 3.0 | 34.4417 |
| 4.0 | 36.2577 |
| 5.0 | 37.3488 |
FINETUNING
| Epoch | Bleu |
|:-----:|:-------:|
| 6.0 | 34.1676 |
| 7.0 | 35.2320 |
| 8.0 | 36.7110 |
| 9.0 | 37.3195 |
| 10.0 | 37.9461 | | b67155070646d7a8aca3aa23fdfde0a3 |
milyiyo/paraphraser-spanish-t5-small | milyiyo | t5 | 19 | 3 | transformers | 0 | text2text-generation | true | false | false | mit | ['es'] | ['paws-x', 'tapaco'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,126 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# paraphraser-spanish-t5-small
This model is a fine-tuned version of [flax-community/spanish-t5-small](https://huggingface.co/flax-community/spanish-t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.1079
- eval_runtime: 4.9573
- eval_samples_per_second: 365.924
- eval_steps_per_second: 36.713
- epoch: 0.83
- step: 43141
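Paraphrases can be generated with the standard seq2seq API; a sketch (the example sentence and generation settings are ours, and whether the model expects an input prefix is not documented here):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "milyiyo/paraphraser-spanish-t5-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

sentence = "La vida es bella y hay que disfrutarla."
inputs = tokenizer(sentence, return_tensors="pt")
# Beam search with several returned sequences gives a few paraphrase candidates.
outputs = model.generate(**inputs, num_beams=5, num_return_sequences=3, max_length=64)
for out in outputs:
    print(tokenizer.decode(out, skip_special_tokens=True))
```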
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2 | 97b2785f5f75631dc1ac67292979b124 |
google/flan-t5-xxl | google | t5 | 32 | 231,088 | transformers | 303 | text2text-generation | true | true | true | apache-2.0 | ['en', 'sp', 'ja', 'pe', 'hi', 'fr', 'ch', 'be', 'gu', 'ge', 'te', 'it', 'ar', 'po', 'ta', 'ma', 'ma', 'or', 'pa', 'po', 'ur', 'ga', 'he', 'ko', 'ca', 'th', 'du', 'in', 'vi', 'bu', 'fi', 'ce', 'la', 'tu', 'ru', 'cr', 'sw', 'yo', 'ku', 'bu', 'ma', 'cz', 'fi', 'so', 'ta', 'sw', 'si', 'ka', 'zh', 'ig', 'xh', 'ro', 'ha', 'es', 'sl', 'li', 'gr', 'ne', 'as', False] | ['svakulenk0/qrecc', 'taskmaster2', 'djaym7/wiki_dialog', 'deepmind/code_contests', 'lambada', 'gsm8k', 'aqua_rat', 'esnli', 'quasc', 'qed'] | null | 8 | 1 | 6 | 1 | 22 | 17 | 5 | ['text2text-generation'] | false | true | true | 9,197 | false |
# Model Card for FLAN-T5 XXL

# Table of Contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
5. [Training Details](#training-details)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Citation](#citation)
# TL;DR
If you already know T5, FLAN-T5 is just better at everything. For the same number of parameters, these models have been fine-tuned on more than 1000 additional tasks, covering more languages as well.
As mentioned in the first few lines of the abstract:
> Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints,1 which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
**Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy-pasted from the [T5 model card](https://huggingface.co/t5-large).
# Model Details
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** English, Spanish, Japanese, Persian, Hindi, French, Chinese, Bengali, Gujarati, German, Telugu, Italian, Arabic, Polish, Tamil, Marathi, Malayalam, Oriya, Panjabi, Portuguese, Urdu, Galician, Hebrew, Korean, Catalan, Thai, Dutch, Indonesian, Vietnamese, Bulgarian, Filipino, Central Khmer, Lao, Turkish, Russian, Croatian, Swedish, Yoruba, Kurdish, Burmese, Malay, Czech, Finnish, Somali, Tagalog, Swahili, Sinhala, Kannada, Zhuang, Igbo, Xhosa, Romanian, Haitian, Estonian, Slovak, Lithuanian, Greek, Nepali, Assamese, Norwegian
- **License:** Apache 2.0
- **Related Models:** [All FLAN-T5 Checkpoints](https://huggingface.co/models?search=flan-t5)
- **Original Checkpoints:** [All Original FLAN-T5 Checkpoints](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints)
- **Resources for more information:**
- [Research paper](https://arxiv.org/pdf/2210.11416.pdf)
- [GitHub Repo](https://github.com/google-research/t5x)
- [Hugging Face FLAN-T5 Docs (Similar to T5) ](https://huggingface.co/docs/transformers/model_doc/t5)
# Usage
Find below some example scripts on how to use the model in `transformers`:
## Using the Pytorch model
### Running the model on a CPU
<details>
<summary> Click to expand </summary>
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xxl")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xxl")
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xxl")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xxl", device_map="auto")
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU using different precisions
#### FP16
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xxl")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xxl", device_map="auto", torch_dtype=torch.float16)
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
#### INT8
<details>
<summary> Click to expand </summary>
```python
# pip install bitsandbytes accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xxl")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xxl", device_map="auto", load_in_8bit=True)
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
# Uses
## Direct Use and Downstream Use
The authors write in [the original paper's model card](https://arxiv.org/pdf/2210.11416.pdf) that:
> The primary use is research on language models, including: research on zero-shot NLP tasks and in-context few-shot learning NLP tasks, such as reasoning, and question answering; advancing fairness and safety research, and understanding limitations of current large language models
See the [research paper](https://arxiv.org/pdf/2210.11416.pdf) for further details.
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
The information below in this section is copied from the model's [official model card](https://arxiv.org/pdf/2210.11416.pdf):
> Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application.
## Ethical considerations and risks
> Flan-T5 is fine-tuned on a large corpus of text data that was not filtered for explicit content or assessed for existing biases. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.
## Known Limitations
> Flan-T5 has not been tested in real world applications.
## Sensitive Use:
> Flan-T5 should not be applied for any unacceptable use cases, e.g., generation of abusive speech.
# Training Details
## Training Data
The model was trained on a mixture of tasks, that includes the tasks described in the table below (from the original paper, figure 2):

## Training Procedure
According to the model card from the [original paper](https://arxiv.org/pdf/2210.11416.pdf):
> These models are based on pretrained T5 (Raffel et al., 2020) and fine-tuned with instructions for better zero-shot and few-shot performance. There is one fine-tuned Flan model per T5 model size.
The model has been trained on TPU v3 or TPU v4 pods, using [`t5x`](https://github.com/google-research/t5x) codebase together with [`jax`](https://github.com/google/jax).
# Evaluation
## Testing Data, Factors & Metrics
The authors evaluated the model on various tasks covering several languages (1836 in total). See the table below for some quantitative evaluation:

For full details, please check the [research paper](https://arxiv.org/pdf/2210.11416.pdf).
## Results
For full results for FLAN-T5-XXL, see the [research paper](https://arxiv.org/pdf/2210.11416.pdf), Table 3.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Google Cloud TPU Pods - TPU v3 or TPU v4 | Number of chips ≥ 4.
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Citation
**BibTeX:**
```bibtex
@misc{https://doi.org/10.48550/arxiv.2210.11416,
doi = {10.48550/ARXIV.2210.11416},
url = {https://arxiv.org/abs/2210.11416},
author = {Chung, Hyung Won and Hou, Le and Longpre, Shayne and Zoph, Barret and Tay, Yi and Fedus, William and Li, Eric and Wang, Xuezhi and Dehghani, Mostafa and Brahma, Siddhartha and Webson, Albert and Gu, Shixiang Shane and Dai, Zhuyun and Suzgun, Mirac and Chen, Xinyun and Chowdhery, Aakanksha and Narang, Sharan and Mishra, Gaurav and Yu, Adams and Zhao, Vincent and Huang, Yanping and Dai, Andrew and Yu, Hongkun and Petrov, Slav and Chi, Ed H. and Dean, Jeff and Devlin, Jacob and Roberts, Adam and Zhou, Denny and Le, Quoc V. and Wei, Jason},
keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Scaling Instruction-Finetuned Language Models},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 265bda2844c1fdd4c877ffb7765a52cd |
samayl24/vit-base-beans-demo-v5 | samayl24 | vit | 7 | 6 | transformers | 0 | image-classification | true | false | false | apache-2.0 | null | ['beans'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['image-classification', 'generated_from_trainer'] | true | true | true | 1,333 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans-demo-v5
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0427
- Accuracy: 0.9925
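Classifying a leaf image can be sketched with the `image-classification` pipeline (the synthetic stand-in image below is ours; replace it with a real bean-leaf photo):

```python
from PIL import Image
from transformers import pipeline

classifier = pipeline("image-classification", model="samayl24/vit-base-beans-demo-v5")

# Replace this synthetic stand-in with a real leaf photo (path, URL, or PIL image).
img = Image.new("RGB", (224, 224), color=(34, 120, 34))
preds = classifier(img)
for p in preds:
    print(p["label"], round(p["score"], 4))
```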
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1378 | 1.54 | 100 | 0.1444 | 0.9549 |
| 0.0334 | 3.08 | 200 | 0.0427 | 0.9925 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| f00d438f5e311a5c5016d6212dc9874d |
sachin/vit2distilgpt2 | sachin | vision-encoder-decoder | 4 | 87 | transformers | 5 | image-to-text | true | false | false | mit | ['en'] | ['coco2017'] | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | ['image-to-text'] | false | true | true | 1,700 | false |
# Vit2-DistilGPT2
This model takes in an image and outputs a caption. It was trained on the COCO dataset; the full training script can be found in [this Kaggle kernel](https://www.kaggle.com/sachin/visionencoderdecoder-model-training).
## Usage
```python
from PIL import Image
from transformers import GPT2Tokenizer, ViTFeatureExtractor, VisionEncoderDecoderModel

model = VisionEncoderDecoderModel.from_pretrained("sachin/vit2distilgpt2")
vit_feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")

# make sure GPT2 appends EOS in begin and end
def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
    outputs = [self.bos_token_id] + token_ids_0 + [self.eos_token_id]
    return outputs

GPT2Tokenizer.build_inputs_with_special_tokens = build_inputs_with_special_tokens
gpt2_tokenizer = GPT2Tokenizer.from_pretrained("distilgpt2")

# set pad_token_id to unk_token_id -> be careful here as unk_token_id == eos_token_id == bos_token_id
gpt2_tokenizer.pad_token = gpt2_tokenizer.unk_token

# image_path should point to the image you want to caption
pixel_values = vit_feature_extractor(Image.open(image_path).convert("RGB"), return_tensors="pt").pixel_values
encoder_outputs = model.generate(pixel_values)
generated_sentences = gpt2_tokenizer.batch_decode(encoder_outputs, skip_special_tokens=True)
```
Note that the output sentence may be repeated; a post-processing step may be required.
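As one hedged illustration of such a post-processing step (not part of the original project), consecutive repeated words can be collapsed:

```python
def collapse_repeats(caption: str) -> str:
    """Remove consecutive duplicate words from a generated caption."""
    words = caption.split()
    kept = [w for i, w in enumerate(words) if i == 0 or w != words[i - 1]]
    return " ".join(kept)

print(collapse_repeats("a cat cat sitting on a a bench"))  # a cat sitting on a bench
```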
## Bias Warning
This model may be biased due to the dataset, limited training, and the model itself. The following example shows gender bias.

## Results
<iframe src="https://wandb.ai/sachinruk/Vit2GPT2/reports/Shared-panel-22-01-27-23-01-56--VmlldzoxNDkyMTM3?highlightShare" style="border:none;height:1024px;width:100%"></iframe>
| 6bf597ae2f1e6eec01433adb091f9fcd |
Ojimi/waifumake-full | Ojimi | null | 25 | 21 | diffusers | 1 | text-to-image | false | false | false | agpl-3.0 | ['en', 'vi'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['art'] | false | true | true | 4,400 | false |
# Waifumake (●'◡'●) AI Art model.

A single student training an AI model that generates art.
## **New model available**: [waifumake-full-v2](waifumake-full-v2.safetensors)!
## What's new in v2:
- Fix color loss.
- Increase image quality.
## Introduction:
- It's an AI art model for converting text to images, images to images, inpainting, and outpainting using Stable Diffusion.
- The AI art model is developed with a focus on the ability to draw anime characters relatively well through fine-tuning using Dreambooth.
- It can be used as a tool for upscaling or rendering anime-style images from 3D modeling software (Blender).
- Create an image from a sketch made in a basic drawing program (e.g. MS Paint).
- The model is aimed at everyone and has limitless usage potential.
## Usage:
- For 🧨 Diffusers Library:
```python
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained("Ojimi/waifumake-full")
pipe = pipe.to("cuda")
prompt = "1girl, animal ears, long hair, solo, cat ears, choker, bare shoulders, red eyes, fang, looking at viewer, animal ear fluff, upper body, black hair, blush, closed mouth, off shoulder, bangs, bow, collarbone"
image = pipe(prompt, negative_prompt="lowres, bad anatomy").images[0]
```
- For Web UI by Automatic1111:
```bash
#Install Web UI.
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd /content/stable-diffusion-webui/
pip install -qq -r requirements.txt
pip install -U xformers # Install `xformers` for better performance.
```
```bash
#Download model.
wget https://huggingface.co/Ojimi/waifumake-full/resolve/main/waifumake-full-v2.safetensors -O /content/stable-diffusion-webui/models/Stable-diffusion/waifumake-full-v2.safetensors
```
```bash
#Run and enjoy ☕.
cd /content/stable-diffusion-webui
python launch.py --xformers
```
## Tips:
- The `masterpiece` and `best quality` tags are not necessary, as they sometimes lead to contradictory results, but if the output is distorted or discolored, add them.
- The CFG scale should be 7.5 and the step count 28 for the best quality and performance.
- Use a sample photo for your idea. `Interrogate DeepBooru` and change the prompts to suit what you want.
- You should use it as a supportive tool for creating works of art, and not rely on it completely.
## Preview: v2 model




## Training:
- **Data**: The model is trained based on a database of various sources from the Internet provided by my friend and images created by another AI.
- **Scheduler**: Euler Ancestral Discrete.
- **Optimizer**: AdamW.
- **Precision**: BF16.
- **Hardware**: Google Colaboratory Pro - NVIDIA A100 40GB VRAM.
## **Limitations:**
- Loss of detail, errors, anatomically incorrect details (such as six-fingered hands), deformation, blurring, and unclear images are inevitable.
- Complex tasks cannot be handled.
- ⚠️ Content may not be appropriate for all ages: as the model is trained on data that includes adult content, the generated images may contain content not suitable for children (depending on your country, specific regulations may apply). If you do not want adult content to appear, make sure you have additional safety measures in place, such as adding "nsfw" to the negative prompt.
- The results generated by the model can be impressive, but unfortunately it currently only supports English; for other languages, consider using third-party translation programs.
- The model is trained on the `Danbooru` and `Nai` tagging systems, so long free-form text may produce poor results.
- My amount of money: 0 USD =((.

## **Desires:**
As this version was made only by me and a few associates, the model will not be perfect and may differ from what people expect. Any contribution from anyone will be appreciated.
Want to support me? Thank you, please help me make it better. ❤️
## Special Thanks:
This wouldn't have happened if they hadn't made a breakthrough.
- [Runwayml](https://huggingface.co/runwayml/): Base model.
- [d8ahazard](https://github.com/d8ahazard/sd_dreambooth_extension) : Dreambooth.
- [Automatic1111](https://github.com/AUTOMATIC1111/) : Web UI.
- [Mikubill](https://github.com/Mikubill/): Where my ideas started.
- Chat-GPT: Help me do crazy things that I thought I would never do.
- Novel AI: Dataset images. An AI made me thousands of pictures without worrying about copyright or dispute.
- Danbooru: Help me write the correct tag.
- My friend and others.
- And You 🫵❤️
## Copyright:
This license allows anyone to copy, modify, publish, and commercialize the model, but please follow the terms of the GNU General Public License. You can learn more about the GNU General Public License [here](LICENSE.txt).
If any part of the model does not comply with the terms of the GNU General Public License, the copyright and other rights of the model will still be valid.
All AI-generated images are yours, you can do whatever you want, but please obey the laws of your country. We will not be responsible for any problems you cause.
Don't forget me.
# Have fun with your waifu! (●'◡'●)

Like it? | 5e7e74b9bc2ce79e65f8ca4fdad80aab |
Bingsu/my-k-anything-v3-0 | Bingsu | null | 18 | 53 | diffusers | 4 | text-to-image | false | false | false | creativeml-openrail-m | ['ko'] | null | null | 3 | 0 | 3 | 0 | 0 | 0 | 0 | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image'] | false | true | true | 2,127 | false |
# my-k-anything-v3-0
A k-anything v3.0 model made in the same way as [Bingsu/my-korean-stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5).
It does not work as well as expected; in particular, it seems to have forgotten everything about specific characters.
# Usage
```sh
pip install transformers "accelerate>=0.14.0" "diffusers>=0.7.2"
```
```python
import torch
from diffusers import StableDiffusionPipeline
repo = "Bingsu/my-k-anything-v3-0"
pipe = StableDiffusionPipeline.from_pretrained(
repo, torch_dtype=torch.float16,
)
pipe.to("cuda")
pipe.safety_checker = None
```
```python
from typing import Optional
import torch
def gen_image(
prompt: str,
negative_prompt: Optional[str] = None,
seed: Optional[int] = None,
scale: float = 7.5,
steps: int = 30,
):
if seed is not None:
generator = torch.Generator("cuda").manual_seed(seed)
else:
generator = None
image = pipe(
prompt=prompt,
negative_prompt=negative_prompt,
generator=generator,
guidance_scale=scale,
num_inference_steps=steps,
).images[0]
return image
```
```python
prompt = "파란색 포니테일 헤어, 브로치, 정장을 입은 성인 여성, 고퀄리티, 최고품질"  # "blue ponytail hair, brooch, adult woman in a suit, high quality, best quality"
negative = "저화질, 저품질, 텍스트"  # "low resolution, low quality, text"
seed = 42467781
scale = 12.0
gen_image(prompt, negative, seed, scale)
```

## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
| c8da03e727b2b703d1262ea0d036785c |
DmitryPogrebnoy/MedRuRobertaLarge | DmitryPogrebnoy | roberta | 9 | 24 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | ['ru'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 2,142 | false |
# Model MedRuRobertaLarge
# Model Description
This model is a fine-tuned version of [ruRoberta-large](https://huggingface.co/sberbank-ai/ruRoberta-large).
The code for the fine-tuning process can be found [here](https://github.com/DmitryPogrebnoy/MedSpellChecker/blob/main/spellchecker/ml_ranging/models/med_ru_roberta_large/fine_tune_ru_roberta_large.py).
The model is fine-tuned on a specially collected dataset of over 30,000 medical anamneses in Russian.
The collected dataset can be found [here](https://github.com/DmitryPogrebnoy/MedSpellChecker/blob/main/data/anamnesis/processed/all_anamnesis.csv).
This model was created as part of a master's project to develop a method for correcting typos
in medical histories, using BERT models to rank correction candidates.
The project is open source and can be found [here](https://github.com/DmitryPogrebnoy/MedSpellChecker).
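To illustrate the ranking idea in a simplified, self-contained way (the scores below mirror the fill-mask example shown later in this card rather than being computed live):

```python
# Hypothetical candidate corrections for a masked word, scored by a
# masked language model (numbers mirror the fill-mask example in this card).
candidate_scores = {
    "сильный": 0.2467,     # "strong"
    "постоянный": 0.1648,  # "constant"
    "острый": 0.0721,      # "acute"
}

def rank_candidates(scores: dict) -> list:
    """Order candidate corrections from most to least plausible."""
    return sorted(scores, key=scores.get, reverse=True)

print(rank_candidates(candidate_scores))  # ['сильный', 'постоянный', 'острый']
```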
# How to Get Started With the Model
You can use the model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> pipeline = pipeline('fill-mask', model='DmitryPogrebnoy/MedRuRobertaLarge')
>>> pipeline("У пациента <mask> боль в грудине.")
[{'score': 0.2467374950647354,
'token': 9233,
'token_str': ' сильный',
'sequence': 'У пациента сильный боль в грудине.'},
{'score': 0.16476310789585114,
'token': 27876,
'token_str': ' постоянный',
'sequence': 'У пациента постоянный боль в грудине.'},
{'score': 0.07211139053106308,
'token': 19551,
'token_str': ' острый',
'sequence': 'У пациента острый боль в грудине.'},
{'score': 0.0616639070212841,
'token': 18840,
'token_str': ' сильная',
'sequence': 'У пациента сильная боль в грудине.'},
{'score': 0.029712719842791557,
'token': 40176,
'token_str': ' острая',
'sequence': 'У пациента острая боль в грудине.'}]
```
Or you can load the model and tokenizer and do what you need to do:
```python
>>> from transformers import AutoTokenizer, AutoModelForMaskedLM
>>> tokenizer = AutoTokenizer.from_pretrained("DmitryPogrebnoy/MedRuRobertaLarge")
>>> model = AutoModelForMaskedLM.from_pretrained("DmitryPogrebnoy/MedRuRobertaLarge")
``` | 4038186c9f320e03221b3e44b72581dc |
m3hrdadfi/zabanshenas-roberta-base-mix | m3hrdadfi | roberta | 11 | 4 | transformers | 2 | text-classification | true | true | false | apache-2.0 | ['multilingual', 'ace', 'afr', 'als', 'amh', 'ang', 'ara', 'arg', 'arz', 'asm', 'ast', 'ava', 'aym', 'azb', 'aze', 'bak', 'bar', 'bcl', 'bel', 'ben', 'bho', 'bjn', 'bod', 'bos', 'bpy', 'bre', 'bul', 'bxr', 'cat', 'cbk', 'cdo', 'ceb', 'ces', 'che', 'chr', 'chv', 'ckb', 'cor', 'cos', 'crh', 'csb', 'cym', 'dan', 'deu', 'diq', 'div', 'dsb', 'dty', 'egl', 'ell', 'eng', 'epo', 'est', 'eus', 'ext', 'fao', 'fas', 'fin', 'fra', 'frp', 'fry', 'fur', 'gag', 'gla', 'gle', 'glg', 'glk', 'glv', 'grn', 'guj', 'hak', 'hat', 'hau', 'hbs', 'heb', 'hif', 'hin', 'hrv', 'hsb', 'hun', 'hye', 'ibo', 'ido', 'ile', 'ilo', 'ina', 'ind', 'isl', 'ita', 'jam', 'jav', 'jbo', 'jpn', 'kaa', 'kab', 'kan', 'kat', 'kaz', 'kbd', 'khm', 'kin', 'kir', 'koi', 'kok', 'kom', 'kor', 'krc', 'ksh', 'kur', 'lad', 'lao', 'lat', 'lav', 'lez', 'lij', 'lim', 'lin', 'lit', 'lmo', 'lrc', 'ltg', 'ltz', 'lug', 'lzh', 'mai', 'mal', 'mar', 'mdf', 'mhr', 'min', 'mkd', 'mlg', 'mlt', 'nan', 'mon', 'mri', 'mrj', 'msa', 'mwl', 'mya', 'myv', 'mzn', 'nap', 'nav', 'nci', 'nds', 'nep', 'new', 'nld', 'nno', 'nob', 'nrm', 'nso', 'oci', 'olo', 'ori', 'orm', 'oss', 'pag', 'pam', 'pan', 'pap', 'pcd', 'pdc', 'pfl', 'pnb', 'pol', 'por', 'pus', 'que', 'roh', 'ron', 'rue', 'rup', 'rus', 'sah', 'san', 'scn', 'sco', 'sgs', 'sin', 'slk', 'slv', 'sme', 'sna', 'snd', 'som', 'spa', 'sqi', 'srd', 'srn', 'srp', 'stq', 'sun', 'swa', 'swe', 'szl', 'tam', 'tat', 'tcy', 'tel', 'tet', 'tgk', 'tgl', 'tha', 'ton', 'tsn', 'tuk', 'tur', 'tyv', 'udm', 'uig', 'ukr', 'urd', 'uzb', 'vec', 'vep', 'vie', 'vls', 'vol', 'vro', 'war', 'wln', 'wol', 'wuu', 'xho', 'xmf', 'yid', 'yor', 'zea', 'zho'] | ['wili_2018'] | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | [] | false | true | true | 56,099 | false |
# Zabanshenas - Language Detector
Zabanshenas is a Transformer-based solution for identifying the most likely language of a written document/text. Zabanshenas is a Persian word that has two meanings:
- A person who studies linguistics.
- A way to identify the type of written language.
## How to use
Follow [Zabanshenas repo](https://github.com/m3hrdadfi/zabanshenas) for more information!
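For a quick start without the repo, a minimal sketch using the standard transformers text-classification pipeline (the sample sentence is arbitrary, and the label format is assumed to follow the language codes used in the tables below):

```python
from transformers import pipeline

# Load the language detector as a text-classification pipeline.
detector = pipeline("text-classification", model="m3hrdadfi/zabanshenas-roberta-base-mix")

predictions = detector("Zabanshenas is a Persian word.")
print(predictions)  # e.g. [{"label": ..., "score": ...}]
```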
## Evaluation
The following tables summarize the scores obtained by the model, both overall and per class.
### By Paragraph
| language | precision | recall | f1-score |
|:--------------------------------------:|:---------:|:--------:|:--------:|
| Achinese (ace) | 1.000000 | 0.982143 | 0.990991 |
| Afrikaans (afr) | 1.000000 | 1.000000 | 1.000000 |
| Alemannic German (als) | 1.000000 | 0.946429 | 0.972477 |
| Amharic (amh) | 1.000000 | 0.982143 | 0.990991 |
| Old English (ang) | 0.981818 | 0.964286 | 0.972973 |
| Arabic (ara) | 0.846154 | 0.982143 | 0.909091 |
| Aragonese (arg) | 1.000000 | 1.000000 | 1.000000 |
| Egyptian Arabic (arz) | 0.979592 | 0.857143 | 0.914286 |
| Assamese (asm) | 0.981818 | 0.964286 | 0.972973 |
| Asturian (ast) | 0.964912 | 0.982143 | 0.973451 |
| Avar (ava) | 0.941176 | 0.905660 | 0.923077 |
| Aymara (aym) | 0.964912 | 0.982143 | 0.973451 |
| South Azerbaijani (azb) | 0.965517 | 1.000000 | 0.982456 |
| Azerbaijani (aze) | 1.000000 | 1.000000 | 1.000000 |
| Bashkir (bak) | 1.000000 | 0.978261 | 0.989011 |
| Bavarian (bar) | 0.843750 | 0.964286 | 0.900000 |
| Central Bikol (bcl) | 1.000000 | 0.982143 | 0.990991 |
| Belarusian (Taraschkewiza) (be-tarask) | 1.000000 | 0.875000 | 0.933333 |
| Belarusian (bel) | 0.870968 | 0.964286 | 0.915254 |
| Bengali (ben) | 0.982143 | 0.982143 | 0.982143 |
| Bhojpuri (bho) | 1.000000 | 0.928571 | 0.962963 |
| Banjar (bjn) | 0.981132 | 0.945455 | 0.962963 |
| Tibetan (bod) | 1.000000 | 0.982143 | 0.990991 |
| Bosnian (bos) | 0.552632 | 0.375000 | 0.446809 |
| Bishnupriya (bpy) | 1.000000 | 0.982143 | 0.990991 |
| Breton (bre) | 1.000000 | 0.964286 | 0.981818 |
| Bulgarian (bul) | 1.000000 | 0.964286 | 0.981818 |
| Buryat (bxr) | 0.946429 | 0.946429 | 0.946429 |
| Catalan (cat) | 0.982143 | 0.982143 | 0.982143 |
| Chavacano (cbk) | 0.914894 | 0.767857 | 0.834951 |
| Min Dong (cdo) | 1.000000 | 0.982143 | 0.990991 |
| Cebuano (ceb) | 1.000000 | 1.000000 | 1.000000 |
| Czech (ces) | 1.000000 | 1.000000 | 1.000000 |
| Chechen (che) | 1.000000 | 1.000000 | 1.000000 |
| Cherokee (chr) | 1.000000 | 0.963636 | 0.981481 |
| Chuvash (chv) | 0.938776 | 0.958333 | 0.948454 |
| Central Kurdish (ckb) | 1.000000 | 1.000000 | 1.000000 |
| Cornish (cor) | 1.000000 | 1.000000 | 1.000000 |
| Corsican (cos) | 1.000000 | 0.982143 | 0.990991 |
| Crimean Tatar (crh) | 1.000000 | 0.946429 | 0.972477 |
| Kashubian (csb) | 1.000000 | 0.963636 | 0.981481 |
| Welsh (cym) | 1.000000 | 1.000000 | 1.000000 |
| Danish (dan) | 1.000000 | 1.000000 | 1.000000 |
| German (deu) | 0.828125 | 0.946429 | 0.883333 |
| Dimli (diq) | 0.964912 | 0.982143 | 0.973451 |
| Dhivehi (div) | 1.000000 | 1.000000 | 1.000000 |
| Lower Sorbian (dsb) | 1.000000 | 0.982143 | 0.990991 |
| Doteli (dty) | 0.940000 | 0.854545 | 0.895238 |
| Emilian (egl) | 1.000000 | 0.928571 | 0.962963 |
| Modern Greek (ell) | 1.000000 | 1.000000 | 1.000000 |
| English (eng) | 0.588889 | 0.946429 | 0.726027 |
| Esperanto (epo) | 1.000000 | 0.982143 | 0.990991 |
| Estonian (est) | 0.963636 | 0.946429 | 0.954955 |
| Basque (eus) | 1.000000 | 0.982143 | 0.990991 |
| Extremaduran (ext) | 0.982143 | 0.982143 | 0.982143 |
| Faroese (fao) | 1.000000 | 1.000000 | 1.000000 |
| Persian (fas) | 0.948276 | 0.982143 | 0.964912 |
| Finnish (fin) | 1.000000 | 1.000000 | 1.000000 |
| French (fra) | 0.710145 | 0.875000 | 0.784000 |
| Arpitan (frp) | 1.000000 | 0.946429 | 0.972477 |
| Western Frisian (fry) | 0.982143 | 0.982143 | 0.982143 |
| Friulian (fur) | 1.000000 | 0.982143 | 0.990991 |
| Gagauz (gag) | 0.981132 | 0.945455 | 0.962963 |
| Scottish Gaelic (gla) | 0.982143 | 0.982143 | 0.982143 |
| Irish (gle) | 0.949153 | 1.000000 | 0.973913 |
| Galician (glg) | 1.000000 | 1.000000 | 1.000000 |
| Gilaki (glk) | 0.981132 | 0.945455 | 0.962963 |
| Manx (glv) | 1.000000 | 1.000000 | 1.000000 |
| Guarani (grn) | 1.000000 | 0.964286 | 0.981818 |
| Gujarati (guj) | 1.000000 | 0.982143 | 0.990991 |
| Hakka Chinese (hak) | 0.981818 | 0.964286 | 0.972973 |
| Haitian Creole (hat) | 1.000000 | 1.000000 | 1.000000 |
| Hausa (hau) | 1.000000 | 0.945455 | 0.971963 |
| Serbo-Croatian (hbs) | 0.448276 | 0.464286 | 0.456140 |
| Hebrew (heb) | 1.000000 | 0.982143 | 0.990991 |
| Fiji Hindi (hif) | 0.890909 | 0.890909 | 0.890909 |
| Hindi (hin) | 0.981481 | 0.946429 | 0.963636 |
| Croatian (hrv) | 0.500000 | 0.636364 | 0.560000 |
| Upper Sorbian (hsb) | 0.955556 | 1.000000 | 0.977273 |
| Hungarian (hun) | 1.000000 | 1.000000 | 1.000000 |
| Armenian (hye) | 1.000000 | 0.981818 | 0.990826 |
| Igbo (ibo) | 0.918033 | 1.000000 | 0.957265 |
| Ido (ido) | 1.000000 | 1.000000 | 1.000000 |
| Interlingue (ile) | 1.000000 | 0.962264 | 0.980769 |
| Iloko (ilo) | 0.947368 | 0.964286 | 0.955752 |
| Interlingua (ina) | 1.000000 | 1.000000 | 1.000000 |
| Indonesian (ind) | 0.761905 | 0.872727 | 0.813559 |
| Icelandic (isl) | 1.000000 | 1.000000 | 1.000000 |
| Italian (ita) | 0.861538 | 1.000000 | 0.925620 |
| Jamaican Patois (jam) | 1.000000 | 0.946429 | 0.972477 |
| Javanese (jav) | 0.964912 | 0.982143 | 0.973451 |
| Lojban (jbo) | 1.000000 | 1.000000 | 1.000000 |
| Japanese (jpn) | 1.000000 | 1.000000 | 1.000000 |
| Karakalpak (kaa) | 0.965517 | 1.000000 | 0.982456 |
| Kabyle (kab) | 1.000000 | 0.964286 | 0.981818 |
| Kannada (kan) | 0.982143 | 0.982143 | 0.982143 |
| Georgian (kat) | 1.000000 | 0.964286 | 0.981818 |
| Kazakh (kaz) | 0.980769 | 0.980769 | 0.980769 |
| Kabardian (kbd) | 1.000000 | 0.982143 | 0.990991 |
| Central Khmer (khm) | 0.960784 | 0.875000 | 0.915888 |
| Kinyarwanda (kin) | 0.981132 | 0.928571 | 0.954128 |
| Kirghiz (kir) | 1.000000 | 1.000000 | 1.000000 |
| Komi-Permyak (koi) | 0.962264 | 0.910714 | 0.935780 |
| Konkani (kok) | 0.964286 | 0.981818 | 0.972973 |
| Komi (kom) | 1.000000 | 0.962264 | 0.980769 |
| Korean (kor) | 1.000000 | 1.000000 | 1.000000 |
| Karachay-Balkar (krc) | 1.000000 | 0.982143 | 0.990991 |
| Ripuarisch (ksh) | 1.000000 | 0.964286 | 0.981818 |
| Kurdish (kur) | 1.000000 | 0.964286 | 0.981818 |
| Ladino (lad) | 1.000000 | 1.000000 | 1.000000 |
| Lao (lao) | 0.961538 | 0.909091 | 0.934579 |
| Latin (lat) | 0.877193 | 0.943396 | 0.909091 |
| Latvian (lav) | 0.963636 | 0.946429 | 0.954955 |
| Lezghian (lez) | 1.000000 | 0.964286 | 0.981818 |
| Ligurian (lij) | 1.000000 | 0.964286 | 0.981818 |
| Limburgan (lim) | 0.938776 | 1.000000 | 0.968421 |
| Lingala (lin) | 0.980769 | 0.927273 | 0.953271 |
| Lithuanian (lit) | 0.982456 | 1.000000 | 0.991150 |
| Lombard (lmo) | 1.000000 | 1.000000 | 1.000000 |
| Northern Luri (lrc) | 1.000000 | 0.928571 | 0.962963 |
| Latgalian (ltg) | 1.000000 | 0.982143 | 0.990991 |
| Luxembourgish (ltz) | 0.949153 | 1.000000 | 0.973913 |
| Luganda (lug) | 1.000000 | 1.000000 | 1.000000 |
| Literary Chinese (lzh) | 1.000000 | 1.000000 | 1.000000 |
| Maithili (mai) | 0.931034 | 0.964286 | 0.947368 |
| Malayalam (mal) | 1.000000 | 0.982143 | 0.990991 |
| Banyumasan (map-bms) | 0.977778 | 0.785714 | 0.871287 |
| Marathi (mar) | 0.949153 | 1.000000 | 0.973913 |
| Moksha (mdf) | 0.980000 | 0.890909 | 0.933333 |
| Eastern Mari (mhr) | 0.981818 | 0.964286 | 0.972973 |
| Minangkabau (min) | 1.000000 | 1.000000 | 1.000000 |
| Macedonian (mkd) | 1.000000 | 0.981818 | 0.990826 |
| Malagasy (mlg) | 0.981132 | 1.000000 | 0.990476 |
| Maltese (mlt) | 0.982456 | 1.000000 | 0.991150 |
| Min Nan Chinese (nan) | 1.000000 | 1.000000 | 1.000000 |
| Mongolian (mon) | 1.000000 | 0.981818 | 0.990826 |
| Maori (mri) | 1.000000 | 1.000000 | 1.000000 |
| Western Mari (mrj) | 0.982456 | 1.000000 | 0.991150 |
| Malay (msa) | 0.862069 | 0.892857 | 0.877193 |
| Mirandese (mwl) | 1.000000 | 0.982143 | 0.990991 |
| Burmese (mya) | 1.000000 | 1.000000 | 1.000000 |
| Erzya (myv) | 0.818182 | 0.964286 | 0.885246 |
| Mazanderani (mzn) | 0.981481 | 1.000000 | 0.990654 |
| Neapolitan (nap) | 1.000000 | 0.981818 | 0.990826 |
| Navajo (nav) | 1.000000 | 1.000000 | 1.000000 |
| Classical Nahuatl (nci) | 0.981481 | 0.946429 | 0.963636 |
| Low German (nds) | 0.982143 | 0.982143 | 0.982143 |
| West Low German (nds-nl) | 1.000000 | 1.000000 | 1.000000 |
| Nepali (macrolanguage) (nep) | 0.881356 | 0.928571 | 0.904348 |
| Newari (new) | 1.000000 | 0.909091 | 0.952381 |
| Dutch (nld) | 0.982143 | 0.982143 | 0.982143 |
| Norwegian Nynorsk (nno) | 1.000000 | 1.000000 | 1.000000 |
| Bokmål (nob) | 1.000000 | 1.000000 | 1.000000 |
| Narom (nrm) | 0.981818 | 0.964286 | 0.972973 |
| Northern Sotho (nso) | 1.000000 | 1.000000 | 1.000000 |
| Occitan (oci) | 0.903846 | 0.839286 | 0.870370 |
| Livvi-Karelian (olo) | 0.982456 | 1.000000 | 0.991150 |
| Oriya (ori) | 0.964912 | 0.982143 | 0.973451 |
| Oromo (orm) | 0.982143 | 0.982143 | 0.982143 |
| Ossetian (oss) | 0.982143 | 1.000000 | 0.990991 |
| Pangasinan (pag) | 0.980000 | 0.875000 | 0.924528 |
| Pampanga (pam) | 0.928571 | 0.896552 | 0.912281 |
| Panjabi (pan) | 1.000000 | 1.000000 | 1.000000 |
| Papiamento (pap) | 1.000000 | 0.964286 | 0.981818 |
| Picard (pcd) | 0.849057 | 0.849057 | 0.849057 |
| Pennsylvania German (pdc) | 0.854839 | 0.946429 | 0.898305 |
| Palatine German (pfl) | 0.946429 | 0.946429 | 0.946429 |
| Western Panjabi (pnb) | 0.981132 | 0.962963 | 0.971963 |
| Polish (pol) | 0.933333 | 1.000000 | 0.965517 |
| Portuguese (por) | 0.774648 | 0.982143 | 0.866142 |
| Pushto (pus) | 1.000000 | 0.910714 | 0.953271 |
| Quechua (que) | 0.962963 | 0.928571 | 0.945455 |
| Tarantino dialect (roa-tara) | 1.000000 | 0.964286 | 0.981818 |
| Romansh (roh) | 1.000000 | 0.928571 | 0.962963 |
| Romanian (ron) | 0.965517 | 1.000000 | 0.982456 |
| Rusyn (rue) | 0.946429 | 0.946429 | 0.946429 |
| Aromanian (rup) | 0.962963 | 0.928571 | 0.945455 |
| Russian (rus) | 0.859375 | 0.982143 | 0.916667 |
| Yakut (sah) | 1.000000 | 0.982143 | 0.990991 |
| Sanskrit (san) | 0.982143 | 0.982143 | 0.982143 |
| Sicilian (scn) | 1.000000 | 1.000000 | 1.000000 |
| Scots (sco) | 0.982143 | 0.982143 | 0.982143 |
| Samogitian (sgs) | 1.000000 | 0.982143 | 0.990991 |
| Sinhala (sin) | 0.964912 | 0.982143 | 0.973451 |
| Slovak (slk) | 1.000000 | 0.982143 | 0.990991 |
| Slovene (slv) | 1.000000 | 0.981818 | 0.990826 |
| Northern Sami (sme) | 0.962264 | 0.962264 | 0.962264 |
| Shona (sna) | 0.933333 | 1.000000 | 0.965517 |
| Sindhi (snd) | 1.000000 | 1.000000 | 1.000000 |
| Somali (som) | 0.948276 | 1.000000 | 0.973451 |
| Spanish (spa) | 0.739130 | 0.910714 | 0.816000 |
| Albanian (sqi) | 0.982143 | 0.982143 | 0.982143 |
| Sardinian (srd) | 1.000000 | 0.982143 | 0.990991 |
| Sranan (srn) | 1.000000 | 1.000000 | 1.000000 |
| Serbian (srp) | 1.000000 | 0.946429 | 0.972477 |
| Saterfriesisch (stq) | 1.000000 | 0.964286 | 0.981818 |
| Sundanese (sun) | 1.000000 | 0.977273 | 0.988506 |
| Swahili (macrolanguage) (swa) | 1.000000 | 1.000000 | 1.000000 |
| Swedish (swe) | 1.000000 | 1.000000 | 1.000000 |
| Silesian (szl) | 1.000000 | 0.981481 | 0.990654 |
| Tamil (tam) | 0.982143 | 1.000000 | 0.990991 |
| Tatar (tat) | 1.000000 | 1.000000 | 1.000000 |
| Tulu (tcy) | 0.982456 | 1.000000 | 0.991150 |
| Telugu (tel) | 1.000000 | 0.920000 | 0.958333 |
| Tetum (tet) | 1.000000 | 0.964286 | 0.981818 |
| Tajik (tgk) | 1.000000 | 1.000000 | 1.000000 |
| Tagalog (tgl) | 1.000000 | 1.000000 | 1.000000 |
| Thai (tha) | 0.932203 | 0.982143 | 0.956522 |
| Tongan (ton) | 1.000000 | 0.964286 | 0.981818 |
| Tswana (tsn) | 1.000000 | 1.000000 | 1.000000 |
| Turkmen (tuk) | 1.000000 | 0.982143 | 0.990991 |
| Turkish (tur) | 0.901639 | 0.982143 | 0.940171 |
| Tuvan (tyv) | 1.000000 | 0.964286 | 0.981818 |
| Udmurt (udm) | 1.000000 | 0.982143 | 0.990991 |
| Uighur (uig) | 1.000000 | 0.982143 | 0.990991 |
| Ukrainian (ukr) | 0.963636 | 0.946429 | 0.954955 |
| Urdu (urd) | 1.000000 | 0.982143 | 0.990991 |
| Uzbek (uzb) | 1.000000 | 1.000000 | 1.000000 |
| Venetian (vec) | 1.000000 | 0.982143 | 0.990991 |
| Veps (vep) | 0.982456 | 1.000000 | 0.991150 |
| Vietnamese (vie) | 0.964912 | 0.982143 | 0.973451 |
| Vlaams (vls) | 1.000000 | 0.982143 | 0.990991 |
| Volapük (vol) | 1.000000 | 1.000000 | 1.000000 |
| Võro (vro) | 0.964286 | 0.964286 | 0.964286 |
| Waray (war) | 1.000000 | 0.982143 | 0.990991 |
| Walloon (wln) | 1.000000 | 1.000000 | 1.000000 |
| Wolof (wol) | 0.981481 | 0.963636 | 0.972477 |
| Wu Chinese (wuu) | 0.981481 | 0.946429 | 0.963636 |
| Xhosa (xho) | 1.000000 | 0.964286 | 0.981818 |
| Mingrelian (xmf) | 1.000000 | 0.964286 | 0.981818 |
| Yiddish (yid) | 1.000000 | 1.000000 | 1.000000 |
| Yoruba (yor) | 0.964912 | 0.982143 | 0.973451 |
| Zeeuws (zea) | 1.000000 | 0.982143 | 0.990991 |
| Cantonese (zh-yue) | 0.981481 | 0.946429 | 0.963636 |
| Standard Chinese (zho) | 0.932203 | 0.982143 | 0.956522 |
| accuracy | 0.963055 | 0.963055 | 0.963055 |
| macro avg | 0.966424 | 0.963216 | 0.963891 |
| weighted avg | 0.966040 | 0.963055 | 0.963606 |
### By Sentence
| language | precision | recall | f1-score |
|:--------------------------------------:|:---------:|:--------:|:--------:|
| Achinese (ace) | 0.754545 | 0.873684 | 0.809756 |
| Afrikaans (afr) | 0.708955 | 0.940594 | 0.808511 |
| Alemannic German (als) | 0.870130 | 0.752809 | 0.807229 |
| Amharic (amh) | 1.000000 | 0.820000 | 0.901099 |
| Old English (ang) | 0.966667 | 0.906250 | 0.935484 |
| Arabic (ara) | 0.907692 | 0.967213 | 0.936508 |
| Aragonese (arg) | 0.921569 | 0.959184 | 0.940000 |
| Egyptian Arabic (arz) | 0.964286 | 0.843750 | 0.900000 |
| Assamese (asm) | 0.964286 | 0.870968 | 0.915254 |
| Asturian (ast) | 0.880000 | 0.795181 | 0.835443 |
| Avar (ava) | 0.864198 | 0.843373 | 0.853659 |
| Aymara (aym) | 1.000000 | 0.901961 | 0.948454 |
| South Azerbaijani (azb) | 0.979381 | 0.989583 | 0.984456 |
| Azerbaijani (aze) | 0.989899 | 0.960784 | 0.975124 |
| Bashkir (bak) | 0.837209 | 0.857143 | 0.847059 |
| Bavarian (bar) | 0.741935 | 0.766667 | 0.754098 |
| Central Bikol (bcl) | 0.962963 | 0.928571 | 0.945455 |
| Belarusian (Taraschkewiza) (be-tarask) | 0.857143 | 0.733333 | 0.790419 |
| Belarusian (bel) | 0.775510 | 0.752475 | 0.763819 |
| Bengali (ben) | 0.861111 | 0.911765 | 0.885714 |
| Bhojpuri (bho) | 0.965517 | 0.933333 | 0.949153 |
| Banjar (bjn) | 0.891566 | 0.880952 | 0.886228 |
| Tibetan (bod) | 1.000000 | 1.000000 | 1.000000 |
| Bosnian (bos) | 0.375000 | 0.323077 | 0.347107 |
| Bishnupriya (bpy) | 0.986301 | 1.000000 | 0.993103 |
| Breton (bre) | 0.951613 | 0.893939 | 0.921875 |
| Bulgarian (bul) | 0.945055 | 0.877551 | 0.910053 |
| Buryat (bxr) | 0.955556 | 0.843137 | 0.895833 |
| Catalan (cat) | 0.692308 | 0.750000 | 0.720000 |
| Chavacano (cbk) | 0.842857 | 0.641304 | 0.728395 |
| Min Dong (cdo) | 0.972973 | 1.000000 | 0.986301 |
| Cebuano (ceb) | 0.981308 | 0.954545 | 0.967742 |
| Czech (ces) | 0.944444 | 0.915385 | 0.929687 |
| Chechen (che) | 0.875000 | 0.700000 | 0.777778 |
| Cherokee (chr) | 1.000000 | 0.970588 | 0.985075 |
| Chuvash (chv) | 0.875000 | 0.836957 | 0.855556 |
| Central Kurdish (ckb) | 1.000000 | 0.983051 | 0.991453 |
| Cornish (cor) | 0.979592 | 0.969697 | 0.974619 |
| Corsican (cos) | 0.986842 | 0.925926 | 0.955414 |
| Crimean Tatar (crh) | 0.958333 | 0.907895 | 0.932432 |
| Kashubian (csb) | 0.920354 | 0.904348 | 0.912281 |
| Welsh (cym) | 0.971014 | 0.943662 | 0.957143 |
| Danish (dan) | 0.865169 | 0.777778 | 0.819149 |
| German (deu) | 0.721311 | 0.822430 | 0.768559 |
| Dimli (diq) | 0.915966 | 0.923729 | 0.919831 |
| Dhivehi (div) | 1.000000 | 0.991228 | 0.995595 |
| Lower Sorbian (dsb) | 0.898876 | 0.879121 | 0.888889 |
| Doteli (dty) | 0.821429 | 0.638889 | 0.718750 |
| Emilian (egl) | 0.988095 | 0.922222 | 0.954023 |
| Modern Greek (ell) | 0.988636 | 0.966667 | 0.977528 |
| English (eng) | 0.522727 | 0.784091 | 0.627273 |
| Esperanto (epo) | 0.963855 | 0.930233 | 0.946746 |
| Estonian (est) | 0.922222 | 0.873684 | 0.897297 |
| Basque (eus) | 1.000000 | 0.941176 | 0.969697 |
| Extremaduran (ext) | 0.925373 | 0.885714 | 0.905109 |
| Faroese (fao) | 0.855072 | 0.887218 | 0.870849 |
| Persian (fas) | 0.879630 | 0.979381 | 0.926829 |
| Finnish (fin) | 0.952830 | 0.943925 | 0.948357 |
| French (fra) | 0.676768 | 0.943662 | 0.788235 |
| Arpitan (frp) | 0.867925 | 0.807018 | 0.836364 |
| Western Frisian (fry) | 0.956989 | 0.890000 | 0.922280 |
| Friulian (fur) | 1.000000 | 0.857143 | 0.923077 |
| Gagauz (gag) | 0.939024 | 0.802083 | 0.865169 |
| Scottish Gaelic (gla) | 1.000000 | 0.879121 | 0.935673 |
| Irish (gle) | 0.989247 | 0.958333 | 0.973545 |
| Galician (glg) | 0.910256 | 0.922078 | 0.916129 |
| Gilaki (glk) | 0.964706 | 0.872340 | 0.916201 |
| Manx (glv) | 1.000000 | 0.965517 | 0.982456 |
| Guarani (grn) | 0.983333 | 1.000000 | 0.991597 |
| Gujarati (guj) | 1.000000 | 0.991525 | 0.995745 |
| Hakka Chinese (hak) | 0.955224 | 0.955224 | 0.955224 |
| Haitian Creole (hat) | 0.833333 | 0.666667 | 0.740741 |
| Hausa (hau) | 0.936709 | 0.913580 | 0.925000 |
| Serbo-Croatian (hbs) | 0.452830 | 0.410256 | 0.430493 |
| Hebrew (heb) | 0.988235 | 0.976744 | 0.982456 |
| Fiji Hindi (hif) | 0.936709 | 0.840909 | 0.886228 |
| Hindi (hin) | 0.965517 | 0.756757 | 0.848485 |
| Croatian (hrv) | 0.443820 | 0.537415 | 0.486154 |
| Upper Sorbian (hsb) | 0.951613 | 0.830986 | 0.887218 |
| Hungarian (hun) | 0.854701 | 0.909091 | 0.881057 |
| Armenian (hye) | 1.000000 | 0.816327 | 0.898876 |
| Igbo (ibo) | 0.974359 | 0.926829 | 0.950000 |
| Ido (ido) | 0.975000 | 0.987342 | 0.981132 |
| Interlingue (ile) | 0.880597 | 0.921875 | 0.900763 |
| Iloko (ilo) | 0.882353 | 0.821918 | 0.851064 |
| Interlingua (ina) | 0.952381 | 0.895522 | 0.923077 |
| Indonesian (ind) | 0.606383 | 0.695122 | 0.647727 |
| Icelandic (isl) | 0.978261 | 0.882353 | 0.927835 |
| Italian (ita) | 0.910448 | 0.910448 | 0.910448 |
| Jamaican Patois (jam) | 0.988764 | 0.967033 | 0.977778 |
| Javanese (jav) | 0.903614 | 0.862069 | 0.882353 |
| Lojban (jbo) | 0.943878 | 0.929648 | 0.936709 |
| Japanese (jpn) | 1.000000 | 0.764706 | 0.866667 |
| Karakalpak (kaa) | 0.940171 | 0.901639 | 0.920502 |
| Kabyle (kab) | 0.985294 | 0.837500 | 0.905405 |
| Kannada (kan) | 0.975806 | 0.975806 | 0.975806 |
| Georgian (kat) | 0.953704 | 0.903509 | 0.927928 |
| Kazakh (kaz) | 0.934579 | 0.877193 | 0.904977 |
| Kabardian (kbd) | 0.987952 | 0.953488 | 0.970414 |
| Central Khmer (khm) | 0.928571 | 0.829787 | 0.876404 |
| Kinyarwanda (kin) | 0.953125 | 0.938462 | 0.945736 |
| Kirghiz (kir) | 0.927632 | 0.881250 | 0.903846 |
| Komi-Permyak (koi) | 0.750000 | 0.776786 | 0.763158 |
| Konkani (kok) | 0.893491 | 0.872832 | 0.883041 |
| Komi (kom) | 0.734177 | 0.690476 | 0.711656 |
| Korean (kor) | 0.989899 | 0.989899 | 0.989899 |
| Karachay-Balkar (krc) | 0.928571 | 0.917647 | 0.923077 |
| Ripuarisch (ksh) | 0.915789 | 0.896907 | 0.906250 |
| Kurdish (kur) | 0.977528 | 0.935484 | 0.956044 |
| Ladino (lad) | 0.985075 | 0.904110 | 0.942857 |
| Lao (lao) | 0.896552 | 0.812500 | 0.852459 |
| Latin (lat) | 0.741935 | 0.831325 | 0.784091 |
| Latvian (lav) | 0.710526 | 0.878049 | 0.785455 |
| Lezghian (lez) | 0.975309 | 0.877778 | 0.923977 |
| Ligurian (lij) | 0.951807 | 0.897727 | 0.923977 |
| Limburgan (lim) | 0.909091 | 0.921053 | 0.915033 |
| Lingala (lin) | 0.942857 | 0.814815 | 0.874172 |
| Lithuanian (lit) | 0.892857 | 0.925926 | 0.909091 |
| Lombard (lmo) | 0.766234 | 0.951613 | 0.848921 |
| Northern Luri (lrc) | 0.972222 | 0.875000 | 0.921053 |
| Latgalian (ltg) | 0.895349 | 0.865169 | 0.880000 |
| Luxembourgish (ltz) | 0.882353 | 0.750000 | 0.810811 |
| Luganda (lug) | 0.946429 | 0.883333 | 0.913793 |
| Literary Chinese (lzh) | 1.000000 | 1.000000 | 1.000000 |
| Maithili (mai) | 0.893617 | 0.823529 | 0.857143 |
| Malayalam (mal) | 1.000000 | 0.975000 | 0.987342 |
| Banyumasan (map-bms) | 0.924242 | 0.772152 | 0.841379 |
| Marathi (mar) | 0.874126 | 0.919118 | 0.896057 |
| Moksha (mdf) | 0.771242 | 0.830986 | 0.800000 |
| Eastern Mari (mhr) | 0.820000 | 0.860140 | 0.839590 |
| Minangkabau (min) | 0.973684 | 0.973684 | 0.973684 |
| Macedonian (mkd) | 0.895652 | 0.953704 | 0.923767 |
| Malagasy (mlg) | 1.000000 | 0.966102 | 0.982759 |
| Maltese (mlt) | 0.987952 | 0.964706 | 0.976190 |
| Min Nan Chinese (nan) | 0.975000 | 1.000000 | 0.987342 |
| Mongolian (mon) | 0.954545 | 0.933333 | 0.943820 |
| Maori (mri) | 0.985294 | 1.000000 | 0.992593 |
| Western Mari (mrj) | 0.966292 | 0.914894 | 0.939891 |
| Malay (msa) | 0.770270 | 0.695122 | 0.730769 |
| Mirandese (mwl) | 0.970588 | 0.891892 | 0.929577 |
| Burmese (mya) | 1.000000 | 0.964286 | 0.981818 |
| Erzya (myv) | 0.535714 | 0.681818 | 0.600000 |
| Mazanderani (mzn) | 0.968750 | 0.898551 | 0.932331 |
| Neapolitan (nap) | 0.892308 | 0.865672 | 0.878788 |
| Navajo (nav) | 0.984375 | 0.984375 | 0.984375 |
| Classical Nahuatl (nci) | 0.901408 | 0.761905 | 0.825806 |
| Low German (nds) | 0.896226 | 0.913462 | 0.904762 |
| West Low German (nds-nl) | 0.873563 | 0.835165 | 0.853933 |
| Nepali (macrolanguage) (nep) | 0.704545 | 0.861111 | 0.775000 |
| Newari (new) | 0.920000 | 0.741935 | 0.821429 |
| Dutch (nld) | 0.925926 | 0.872093 | 0.898204 |
| Norwegian Nynorsk (nno) | 0.847059 | 0.808989 | 0.827586 |
| Bokmål (nob) | 0.861386 | 0.852941 | 0.857143 |
| Narom (nrm) | 0.966667 | 0.983051 | 0.974790 |
| Northern Sotho (nso) | 0.897436 | 0.921053 | 0.909091 |
| Occitan (oci) | 0.958333 | 0.696970 | 0.807018 |
| Livvi-Karelian (olo) | 0.967742 | 0.937500 | 0.952381 |
| Oriya (ori) | 0.933333 | 1.000000 | 0.965517 |
| Oromo (orm) | 0.977528 | 0.915789 | 0.945652 |
| Ossetian (oss) | 0.958333 | 0.841463 | 0.896104 |
| Pangasinan (pag) | 0.847328 | 0.909836 | 0.877470 |
| Pampanga (pam) | 0.969697 | 0.780488 | 0.864865 |
| Panjabi (pan) | 1.000000 | 1.000000 | 1.000000 |
| Papiamento (pap) | 0.876190 | 0.920000 | 0.897561 |
| Picard (pcd) | 0.707317 | 0.568627 | 0.630435 |
| Pennsylvania German (pdc) | 0.827273 | 0.827273 | 0.827273 |
| Palatine German (pfl) | 0.882353 | 0.914634 | 0.898204 |
| Western Panjabi (pnb) | 0.964286 | 0.931034 | 0.947368 |
| Polish (pol) | 0.859813 | 0.910891 | 0.884615 |
| Portuguese (por) | 0.535714 | 0.833333 | 0.652174 |
| Pushto (pus) | 0.989362 | 0.902913 | 0.944162 |
| Quechua (que) | 0.979167 | 0.903846 | 0.940000 |
| Tarantino dialect (roa-tara) | 0.964912 | 0.901639 | 0.932203 |
| Romansh (roh) | 0.914894 | 0.895833 | 0.905263 |
| Romanian (ron) | 0.880597 | 0.880597 | 0.880597 |
| Rusyn (rue) | 0.932584 | 0.805825 | 0.864583 |
| Aromanian (rup) | 0.783333 | 0.758065 | 0.770492 |
| Russian (rus) | 0.517986 | 0.765957 | 0.618026 |
| Yakut (sah) | 0.954023 | 0.922222 | 0.937853 |
| Sanskrit (san) | 0.866667 | 0.951220 | 0.906977 |
| Sicilian (scn) | 0.984375 | 0.940299 | 0.961832 |
| Scots (sco) | 0.851351 | 0.900000 | 0.875000 |
| Samogitian (sgs) | 0.977011 | 0.876289 | 0.923913 |
| Sinhala (sin) | 0.406154 | 0.985075 | 0.575163 |
| Slovak (slk) | 0.956989 | 0.872549 | 0.912821 |
| Slovene (slv) | 0.907216 | 0.854369 | 0.880000 |
| Northern Sami (sme) | 0.949367 | 0.892857 | 0.920245 |
| Shona (sna) | 0.936508 | 0.855072 | 0.893939 |
| Sindhi (snd) | 0.984962 | 0.992424 | 0.988679 |
| Somali (som) | 0.949153 | 0.848485 | 0.896000 |
| Spanish (spa) | 0.584158 | 0.746835 | 0.655556 |
| Albanian (sqi) | 0.988095 | 0.912088 | 0.948571 |
| Sardinian (srd) | 0.957746 | 0.931507 | 0.944444 |
| Sranan (srn) | 0.985714 | 0.945205 | 0.965035 |
| Serbian (srp) | 0.950980 | 0.889908 | 0.919431 |
| Saterfriesisch (stq) | 0.962500 | 0.875000 | 0.916667 |
| Sundanese (sun) | 0.778846 | 0.910112 | 0.839378 |
| Swahili (macrolanguage) (swa) | 0.915493 | 0.878378 | 0.896552 |
| Swedish (swe) | 0.989247 | 0.958333 | 0.973545 |
| Silesian (szl) | 0.944444 | 0.904255 | 0.923913 |
| Tamil (tam) | 0.990000 | 0.970588 | 0.980198 |
| Tatar (tat) | 0.942029 | 0.902778 | 0.921986 |
| Tulu (tcy) | 0.980519 | 0.967949 | 0.974194 |
| Telugu (tel) | 0.965986 | 0.965986 | 0.965986 |
| Tetum (tet) | 0.898734 | 0.855422 | 0.876543 |
| Tajik (tgk) | 0.974684 | 0.939024 | 0.956522 |
| Tagalog (tgl) | 0.965909 | 0.934066 | 0.949721 |
| Thai (tha) | 0.923077 | 0.882353 | 0.902256 |
| Tongan (ton) | 0.970149 | 0.890411 | 0.928571 |
| Tswana (tsn) | 0.888889 | 0.926316 | 0.907216 |
| Turkmen (tuk) | 0.968000 | 0.889706 | 0.927203 |
| Turkish (tur) | 0.871287 | 0.926316 | 0.897959 |
| Tuvan (tyv) | 0.948454 | 0.859813 | 0.901961 |
| Udmurt (udm) | 0.989362 | 0.894231 | 0.939394 |
| Uighur (uig) | 1.000000 | 0.953333 | 0.976109 |
| Ukrainian (ukr) | 0.893617 | 0.875000 | 0.884211 |
| Urdu (urd) | 1.000000 | 1.000000 | 1.000000 |
| Uzbek (uzb) | 0.636042 | 0.886700 | 0.740741 |
| Venetian (vec) | 1.000000 | 0.941176 | 0.969697 |
| Veps (vep) | 0.858586 | 0.965909 | 0.909091 |
| Vietnamese (vie) | 1.000000 | 0.940476 | 0.969325 |
| Vlaams (vls) | 0.885714 | 0.898551 | 0.892086 |
| Volapük (vol) | 0.975309 | 0.975309 | 0.975309 |
| Võro (vro) | 0.855670 | 0.864583 | 0.860104 |
| Waray (war) | 0.972222 | 0.909091 | 0.939597 |
| Walloon (wln) | 0.742138 | 0.893939 | 0.810997 |
| Wolof (wol) | 0.882979 | 0.954023 | 0.917127 |
| Wu Chinese (wuu) | 0.961538 | 0.833333 | 0.892857 |
| Xhosa (xho) | 0.934066 | 0.867347 | 0.899471 |
| Mingrelian (xmf) | 0.958333 | 0.929293 | 0.943590 |
| Yiddish (yid) | 0.984375 | 0.875000 | 0.926471 |
| Yoruba (yor) | 0.868421 | 0.857143 | 0.862745 |
| Zeeuws (zea) | 0.879518 | 0.793478 | 0.834286 |
| Cantonese (zh-yue) | 0.896552 | 0.812500 | 0.852459 |
| Standard Chinese (zho) | 0.906250 | 0.935484 | 0.920635 |
| accuracy | 0.881051 | 0.881051 | 0.881051 |
| macro avg | 0.903245 | 0.880618 | 0.888996 |
| weighted avg | 0.894174 | 0.881051 | 0.884520 |
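The `macro avg` and `weighted avg` rows differ only in how the per-language scores are combined: macro is the unweighted mean over all languages, while weighted scales each language's score by its support. A minimal sketch of the two averages in Python, with made-up scores and support counts (the table above omits support):

```python
def macro_avg(scores):
    """Unweighted mean over classes."""
    return sum(scores) / len(scores)

def weighted_avg(scores, supports):
    """Support-weighted mean over classes."""
    total = sum(supports)
    return sum(s * n for s, n in zip(scores, supports)) / total

# Hypothetical three-language example: (f1 score, support).
f1 = [0.90, 0.60, 0.95]
support = [100, 10, 50]

print(round(macro_avg(f1), 4))              # plain mean of the three f1 scores
print(round(weighted_avg(f1, support), 4))  # pulled toward the high-support classes
```

When class sizes are very uneven, as they are across Wikipedia languages, the gap between the two rows is a quick signal of how much rare classes drag down performance.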
### By Token (3 to 5)
| language | precision | recall | f1-score |
|:--------------------------------------:|:---------:|:--------:|:--------:|
| Achinese (ace) | 0.873846 | 0.827988 | 0.850299 |
| Afrikaans (afr) | 0.638060 | 0.732334 | 0.681954 |
| Alemannic German (als) | 0.673780 | 0.547030 | 0.603825 |
| Amharic (amh) | 0.997743 | 0.954644 | 0.975717 |
| Old English (ang) | 0.840816 | 0.693603 | 0.760148 |
| Arabic (ara) | 0.768737 | 0.840749 | 0.803132 |
| Aragonese (arg) | 0.493671 | 0.505181 | 0.499360 |
| Egyptian Arabic (arz) | 0.823529 | 0.741935 | 0.780606 |
| Assamese (asm) | 0.948454 | 0.893204 | 0.920000 |
| Asturian (ast) | 0.490000 | 0.508299 | 0.498982 |
| Avar (ava) | 0.813636 | 0.655678 | 0.726166 |
| Aymara (aym) | 0.795833 | 0.779592 | 0.787629 |
| South Azerbaijani (azb) | 0.832836 | 0.863777 | 0.848024 |
| Azerbaijani (aze) | 0.867470 | 0.800000 | 0.832370 |
| Bashkir (bak) | 0.851852 | 0.750000 | 0.797688 |
| Bavarian (bar) | 0.560897 | 0.522388 | 0.540958 |
| Central Bikol (bcl) | 0.708229 | 0.668235 | 0.687651 |
| Belarusian (Taraschkewiza) (be-tarask) | 0.615635 | 0.526462 | 0.567568 |
| Belarusian (bel) | 0.539952 | 0.597855 | 0.567430 |
| Bengali (ben) | 0.830275 | 0.885086 | 0.856805 |
| Bhojpuri (bho) | 0.723118 | 0.691517 | 0.706965 |
| Banjar (bjn) | 0.619586 | 0.726269 | 0.668699 |
| Tibetan (bod) | 0.999537 | 0.991728 | 0.995617 |
| Bosnian (bos) | 0.330849 | 0.403636 | 0.363636 |
| Bishnupriya (bpy) | 0.941634 | 0.949020 | 0.945312 |
| Breton (bre) | 0.772222 | 0.745308 | 0.758527 |
| Bulgarian (bul) | 0.771505 | 0.706897 | 0.737789 |
| Buryat (bxr) | 0.741935 | 0.753149 | 0.747500 |
| Catalan (cat) | 0.528716 | 0.610136 | 0.566516 |
| Chavacano (cbk) | 0.409449 | 0.312625 | 0.354545 |
| Min Dong (cdo) | 0.951264 | 0.936057 | 0.943599 |
| Cebuano (ceb) | 0.888298 | 0.876640 | 0.882431 |
| Czech (ces) | 0.806045 | 0.758294 | 0.781441 |
| Chechen (che) | 0.857143 | 0.600000 | 0.705882 |
| Cherokee (chr) | 0.997840 | 0.952577 | 0.974684 |
| Chuvash (chv) | 0.874346 | 0.776744 | 0.822660 |
| Central Kurdish (ckb) | 0.984848 | 0.953545 | 0.968944 |
| Cornish (cor) | 0.747596 | 0.807792 | 0.776529 |
| Corsican (cos) | 0.673913 | 0.708571 | 0.690808 |
| Crimean Tatar (crh) | 0.498801 | 0.700337 | 0.582633 |
| Kashubian (csb) | 0.797059 | 0.794721 | 0.795888 |
| Welsh (cym) | 0.829609 | 0.841360 | 0.835443 |
| Danish (dan) | 0.649789 | 0.622222 | 0.635707 |
| German (deu) | 0.559406 | 0.763514 | 0.645714 |
| Dimli (diq) | 0.835580 | 0.763547 | 0.797941 |
| Dhivehi (div) | 1.000000 | 0.980645 | 0.990228 |
| Lower Sorbian (dsb) | 0.740484 | 0.694805 | 0.716918 |
| Doteli (dty) | 0.616314 | 0.527132 | 0.568245 |
| Emilian (egl) | 0.822993 | 0.769625 | 0.795414 |
| Modern Greek (ell) | 0.972043 | 0.963753 | 0.967880 |
| English (eng) | 0.260492 | 0.724346 | 0.383183 |
| Esperanto (epo) | 0.766764 | 0.716621 | 0.740845 |
| Estonian (est) | 0.698885 | 0.673835 | 0.686131 |
| Basque (eus) | 0.882716 | 0.841176 | 0.861446 |
| Extremaduran (ext) | 0.570605 | 0.511628 | 0.539510 |
| Faroese (fao) | 0.773987 | 0.784017 | 0.778970 |
| Persian (fas) | 0.709836 | 0.809346 | 0.756332 |
| Finnish (fin) | 0.866261 | 0.796089 | 0.829694 |
| French (fra) | 0.496263 | 0.700422 | 0.580927 |
| Arpitan (frp) | 0.663366 | 0.584302 | 0.621329 |
| Western Frisian (fry) | 0.750000 | 0.756148 | 0.753061 |
| Friulian (fur) | 0.713555 | 0.675545 | 0.694030 |
| Gagauz (gag) | 0.728125 | 0.677326 | 0.701807 |
| Scottish Gaelic (gla) | 0.831601 | 0.817996 | 0.824742 |
| Irish (gle) | 0.868852 | 0.801296 | 0.833708 |
| Galician (glg) | 0.469816 | 0.454315 | 0.461935 |
| Gilaki (glk) | 0.703883 | 0.687204 | 0.695444 |
| Manx (glv) | 0.873047 | 0.886905 | 0.879921 |
| Guarani (grn) | 0.848580 | 0.793510 | 0.820122 |
| Gujarati (guj) | 0.995643 | 0.926978 | 0.960084 |
| Hakka Chinese (hak) | 0.898403 | 0.904971 | 0.901675 |
| Haitian Creole (hat) | 0.719298 | 0.518987 | 0.602941 |
| Hausa (hau) | 0.815353 | 0.829114 | 0.822176 |
| Serbo-Croatian (hbs) | 0.343465 | 0.244589 | 0.285714 |
| Hebrew (heb) | 0.891304 | 0.933941 | 0.912125 |
| Fiji Hindi (hif) | 0.662577 | 0.664615 | 0.663594 |
| Hindi (hin) | 0.782301 | 0.778169 | 0.780229 |
| Croatian (hrv) | 0.360308 | 0.374000 | 0.367026 |
| Upper Sorbian (hsb) | 0.745763 | 0.611111 | 0.671756 |
| Hungarian (hun) | 0.876812 | 0.846154 | 0.861210 |
| Armenian (hye) | 0.988201 | 0.917808 | 0.951705 |
| Igbo (ibo) | 0.825397 | 0.696429 | 0.755448 |
| Ido (ido) | 0.760479 | 0.814103 | 0.786378 |
| Interlingue (ile) | 0.701299 | 0.580645 | 0.635294 |
| Iloko (ilo) | 0.688356 | 0.844538 | 0.758491 |
| Interlingua (ina) | 0.577889 | 0.588235 | 0.583016 |
| Indonesian (ind) | 0.415879 | 0.514019 | 0.459770 |
| Icelandic (isl) | 0.855263 | 0.790754 | 0.821745 |
| Italian (ita) | 0.474576 | 0.561247 | 0.514286 |
| Jamaican Patois (jam) | 0.826087 | 0.791667 | 0.808511 |
| Javanese (jav) | 0.670130 | 0.658163 | 0.664093 |
| Lojban (jbo) | 0.896861 | 0.917431 | 0.907029 |
| Japanese (jpn) | 0.931373 | 0.848214 | 0.887850 |
| Karakalpak (kaa) | 0.790393 | 0.827744 | 0.808637 |
| Kabyle (kab) | 0.828571 | 0.759162 | 0.792350 |
| Kannada (kan) | 0.879357 | 0.847545 | 0.863158 |
| Georgian (kat) | 0.916399 | 0.907643 | 0.912000 |
| Kazakh (kaz) | 0.900901 | 0.819672 | 0.858369 |
| Kabardian (kbd) | 0.923345 | 0.892256 | 0.907534 |
| Central Khmer (khm) | 0.976667 | 0.816156 | 0.889226 |
| Kinyarwanda (kin) | 0.824324 | 0.726190 | 0.772152 |
| Kirghiz (kir) | 0.674766 | 0.779698 | 0.723447 |
| Komi-Permyak (koi) | 0.652830 | 0.633700 | 0.643123 |
| Konkani (kok) | 0.778865 | 0.728938 | 0.753075 |
| Komi (kom) | 0.737374 | 0.572549 | 0.644592 |
| Korean (kor) | 0.984615 | 0.967603 | 0.976035 |
| Karachay-Balkar (krc) | 0.869416 | 0.857627 | 0.863481 |
| Ripuarisch (ksh) | 0.709859 | 0.649485 | 0.678331 |
| Kurdish (kur) | 0.883777 | 0.862884 | 0.873206 |
| Ladino (lad) | 0.660920 | 0.576441 | 0.615797 |
| Lao (lao) | 0.986175 | 0.918455 | 0.951111 |
| Latin (lat) | 0.581250 | 0.636986 | 0.607843 |
| Latvian (lav) | 0.824513 | 0.797844 | 0.810959 |
| Lezghian (lez) | 0.898955 | 0.793846 | 0.843137 |
| Ligurian (lij) | 0.662903 | 0.677100 | 0.669927 |
| Limburgan (lim) | 0.615385 | 0.581818 | 0.598131 |
| Lingala (lin) | 0.836207 | 0.763780 | 0.798354 |
| Lithuanian (lit) | 0.756329 | 0.804714 | 0.779772 |
| Lombard (lmo) | 0.556818 | 0.536986 | 0.546722 |
| Northern Luri (lrc) | 0.838574 | 0.753296 | 0.793651 |
| Latgalian (ltg) | 0.759531 | 0.755102 | 0.757310 |
| Luxembourgish (ltz) | 0.645062 | 0.614706 | 0.629518 |
| Luganda (lug) | 0.787535 | 0.805797 | 0.796562 |
| Literary Chinese (lzh) | 0.921951 | 0.949749 | 0.935644 |
| Maithili (mai) | 0.777778 | 0.761658 | 0.769634 |
| Malayalam (mal) | 0.993377 | 0.949367 | 0.970874 |
| Banyumasan (map-bms) | 0.531429 | 0.453659 | 0.489474 |
| Marathi (mar) | 0.748744 | 0.818681 | 0.782152 |
| Moksha (mdf) | 0.728745 | 0.800000 | 0.762712 |
| Eastern Mari (mhr) | 0.790323 | 0.760870 | 0.775316 |
| Minangkabau (min) | 0.953271 | 0.886957 | 0.918919 |
| Macedonian (mkd) | 0.816399 | 0.849722 | 0.832727 |
| Malagasy (mlg) | 0.925187 | 0.918317 | 0.921739 |
| Maltese (mlt) | 0.869421 | 0.890017 | 0.879599 |
| Min Nan Chinese (nan) | 0.743707 | 0.820707 | 0.780312 |
| Mongolian (mon) | 0.852194 | 0.838636 | 0.845361 |
| Maori (mri) | 0.934726 | 0.937173 | 0.935948 |
| Western Mari (mrj) | 0.818792 | 0.827119 | 0.822934 |
| Malay (msa) | 0.508065 | 0.376119 | 0.432247 |
| Mirandese (mwl) | 0.650407 | 0.685225 | 0.667362 |
| Burmese (mya) | 0.995968 | 0.972441 | 0.984064 |
| Erzya (myv) | 0.475783 | 0.503012 | 0.489019 |
| Mazanderani (mzn) | 0.775362 | 0.701639 | 0.736661 |
| Neapolitan (nap) | 0.628993 | 0.595349 | 0.611708 |
| Navajo (nav) | 0.955882 | 0.937500 | 0.946602 |
| Classical Nahuatl (nci) | 0.679758 | 0.589005 | 0.631136 |
| Low German (nds) | 0.669789 | 0.690821 | 0.680143 |
| West Low German (nds-nl) | 0.513889 | 0.504545 | 0.509174 |
| Nepali (macrolanguage) (nep) | 0.640476 | 0.649758 | 0.645084 |
| Newari (new) | 0.928571 | 0.745902 | 0.827273 |
| Dutch (nld) | 0.553763 | 0.553763 | 0.553763 |
| Norwegian Nynorsk (nno) | 0.569277 | 0.519231 | 0.543103 |
| Bokmål (nob) | 0.519856 | 0.562500 | 0.540338 |
| Narom (nrm) | 0.691275 | 0.605882 | 0.645768 |
| Northern Sotho (nso) | 0.950276 | 0.815166 | 0.877551 |
| Occitan (oci) | 0.483444 | 0.366834 | 0.417143 |
| Livvi-Karelian (olo) | 0.816850 | 0.790780 | 0.803604 |
| Oriya (ori) | 0.981481 | 0.963636 | 0.972477 |
| Oromo (orm) | 0.885714 | 0.829218 | 0.856536 |
| Ossetian (oss) | 0.822006 | 0.855219 | 0.838284 |
| Pangasinan (pag) | 0.842105 | 0.715655 | 0.773748 |
| Pampanga (pam) | 0.770000 | 0.435028 | 0.555957 |
| Panjabi (pan) | 0.996154 | 0.984791 | 0.990440 |
| Papiamento (pap) | 0.674672 | 0.661670 | 0.668108 |
| Picard (pcd) | 0.407895 | 0.356322 | 0.380368 |
| Pennsylvania German (pdc) | 0.487047 | 0.509485 | 0.498013 |
| Palatine German (pfl) | 0.614173 | 0.570732 | 0.591656 |
| Western Panjabi (pnb) | 0.926267 | 0.887417 | 0.906426 |
| Polish (pol) | 0.797059 | 0.734417 | 0.764457 |
| Portuguese (por) | 0.500914 | 0.586724 | 0.540434 |
| Pushto (pus) | 0.941489 | 0.898477 | 0.919481 |
| Quechua (que) | 0.854167 | 0.797665 | 0.824950 |
| Tarantino dialect (roa-tara) | 0.669794 | 0.724138 | 0.695906 |
| Romansh (roh) | 0.745527 | 0.760649 | 0.753012 |
| Romanian (ron) | 0.805486 | 0.769048 | 0.786845 |
| Rusyn (rue) | 0.718543 | 0.645833 | 0.680251 |
| Aromanian (rup) | 0.288482 | 0.730245 | 0.413580 |
| Russian (rus) | 0.530120 | 0.690583 | 0.599805 |
| Yakut (sah) | 0.853521 | 0.865714 | 0.859574 |
| Sanskrit (san) | 0.931343 | 0.896552 | 0.913616 |
| Sicilian (scn) | 0.734139 | 0.618321 | 0.671271 |
| Scots (sco) | 0.571429 | 0.540816 | 0.555701 |
| Samogitian (sgs) | 0.829167 | 0.748120 | 0.786561 |
| Sinhala (sin) | 0.909474 | 0.935065 | 0.922092 |
| Slovak (slk) | 0.738235 | 0.665782 | 0.700139 |
| Slovene (slv) | 0.671123 | 0.662269 | 0.666667 |
| Northern Sami (sme) | 0.800676 | 0.825784 | 0.813036 |
| Shona (sna) | 0.761702 | 0.724696 | 0.742739 |
| Sindhi (snd) | 0.950172 | 0.946918 | 0.948542 |
| Somali (som) | 0.849462 | 0.802030 | 0.825065 |
| Spanish (spa) | 0.325234 | 0.413302 | 0.364017 |
| Albanian (sqi) | 0.875899 | 0.832479 | 0.853637 |
| Sardinian (srd) | 0.750000 | 0.711061 | 0.730012 |
| Sranan (srn) | 0.888889 | 0.771084 | 0.825806 |
| Serbian (srp) | 0.824561 | 0.814356 | 0.819427 |
| Saterfriesisch (stq) | 0.790087 | 0.734417 | 0.761236 |
| Sundanese (sun) | 0.764192 | 0.631769 | 0.691700 |
| Swahili (macrolanguage) (swa) | 0.763496 | 0.796247 | 0.779528 |
| Swedish (swe) | 0.838284 | 0.723647 | 0.776758 |
| Silesian (szl) | 0.819788 | 0.750809 | 0.783784 |
| Tamil (tam) | 0.985765 | 0.955172 | 0.970228 |
| Tatar (tat) | 0.469780 | 0.795349 | 0.590674 |
| Tulu (tcy) | 0.893300 | 0.873786 | 0.883436 |
| Telugu (tel) | 1.000000 | 0.913690 | 0.954899 |
| Tetum (tet) | 0.765116 | 0.744344 | 0.754587 |
| Tajik (tgk) | 0.828418 | 0.813158 | 0.820717 |
| Tagalog (tgl) | 0.751468 | 0.757396 | 0.754420 |
| Thai (tha) | 0.933884 | 0.807143 | 0.865900 |
| Tongan (ton) | 0.920245 | 0.923077 | 0.921659 |
| Tswana (tsn) | 0.873397 | 0.889070 | 0.881164 |
| Turkmen (tuk) | 0.898438 | 0.837887 | 0.867107 |
| Turkish (tur) | 0.666667 | 0.716981 | 0.690909 |
| Tuvan (tyv) | 0.857143 | 0.805063 | 0.830287 |
| Udmurt (udm) | 0.865517 | 0.756024 | 0.807074 |
| Uighur (uig) | 0.991597 | 0.967213 | 0.979253 |
| Ukrainian (ukr) | 0.771341 | 0.702778 | 0.735465 |
| Urdu (urd) | 0.877647 | 0.855505 | 0.866434 |
| Uzbek (uzb) | 0.655652 | 0.797040 | 0.719466 |
| Venetian (vec) | 0.611111 | 0.527233 | 0.566082 |
| Veps (vep) | 0.672862 | 0.688213 | 0.680451 |
| Vietnamese (vie) | 0.932406 | 0.914230 | 0.923228 |
| Vlaams (vls) | 0.594427 | 0.501305 | 0.543909 |
| Volapük (vol) | 0.765625 | 0.942308 | 0.844828 |
| Võro (vro) | 0.797203 | 0.740260 | 0.767677 |
| Waray (war) | 0.930876 | 0.930876 | 0.930876 |
| Walloon (wln) | 0.636804 | 0.693931 | 0.664141 |
| Wolof (wol) | 0.864220 | 0.845601 | 0.854809 |
| Wu Chinese (wuu) | 0.848921 | 0.830986 | 0.839858 |
| Xhosa (xho) | 0.837398 | 0.759214 | 0.796392 |
| Mingrelian (xmf) | 0.943396 | 0.874126 | 0.907441 |
| Yiddish (yid) | 0.955729 | 0.897311 | 0.925599 |
| Yoruba (yor) | 0.812010 | 0.719907 | 0.763190 |
| Zeeuws (zea) | 0.617737 | 0.550409 | 0.582133 |
| Cantonese (zh-yue) | 0.859649 | 0.649007 | 0.739623 |
| Standard Chinese (zho) | 0.845528 | 0.781955 | 0.812500 |
| accuracy | 0.749527 | 0.749527 | 0.749527 |
| macro avg | 0.762866 | 0.742101 | 0.749261 |
| weighted avg | 0.762006 | 0.749527 | 0.752910 |
## Questions?
Post a GitHub issue from [HERE](https://github.com/m3hrdadfi/zabanshenas/issues). | ffa8e9b944545c491c0070553d60bf51 |
Duskfallcrew/isometric-dreams-sd-1-5 | Duskfallcrew | null | 22 | 16 | diffusers | 3 | text-to-image | false | false | false | creativeml-openrail-m | ['en'] | null | null | 2 | 0 | 2 | 0 | 0 | 0 | 0 | ['text-to-image', 'isometric', 'art', 'stable diffusion', 'stable diffusion 1.5', 'duskfallcrew'] | false | true | true | 1,215 | false | [](https://huggingface.co/spaces/Duskfallcrew/isometric-dreams-sd-1-5)
### Isometric Dreams SD 1.5, trained by Duskfallcrew with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v1-5 base model
You can run your new concept with `diffusers` using the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
# All samples and info are here:
https://civitai.com/user/duskfallcrew
# If you want to donate towards costs and don't want to subscribe:
https://ko-fi.com/DUSKFALLcrew
# If you want to monthly support the EARTH & DUSK media projects and not just AI:
https://www.patreon.com/earthndusk
duskametrick15 (use that in your prompt) | 6b8d3151e39a26099f69a87d7683170a |
nandysoham/Pub-clustered | nandysoham | distilbert | 8 | 1 | transformers | 0 | question-answering | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,858 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nandysoham/Pub-clustered
This model is a fine-tuned version of [nandysoham16/16-clustered_aug](https://huggingface.co/nandysoham16/16-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3449
- Train End Logits Accuracy: 0.9097
- Train Start Logits Accuracy: 0.875
- Validation Loss: 0.8311
- Validation End Logits Accuracy: 0.7692
- Validation Start Logits Accuracy: 0.8462
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
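The `PolynomialDecay` schedule above linearly anneals the learning rate from 2e-05 to 0.0 over 18 decay steps (power 1.0, no cycling). A pure-Python sketch of the same formula, mirroring what Keras' `PolynomialDecay` computes at each step:

```python
def polynomial_decay(step, initial_lr=2e-05, decay_steps=18,
                     end_lr=0.0, power=1.0):
    """Learning rate at a given step under polynomial decay (cycle=False)."""
    step = min(step, decay_steps)  # clamp: lr stays at end_lr afterwards
    fraction = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * fraction ** power + end_lr

print(polynomial_decay(0))   # full initial rate, 2e-05
print(polynomial_decay(9))   # halfway through decay, 1e-05
print(polynomial_decay(18))  # fully decayed, 0.0
```

With only 18 decay steps, the schedule reaches the end rate within the single training epoch, which is consistent with the one-epoch results table below.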
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.3449 | 0.9097 | 0.875 | 0.8311 | 0.7692 | 0.8462 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
| d29be805d1f42788f7cd62f4706557b0 |
espnet/brianyan918_iwslt22_dialect_st_transformer_fisherlike_4gpu_bbins16m_fix | espnet | null | 27 | 2 | espnet | 0 | null | false | false | false | cc-by-4.0 | ['noinfo'] | ['iwslt22_dialect'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['espnet', 'audio', 'speech-translation'] | false | true | true | 22,045 | false |
## ESPnet2 ST model
### `espnet/brianyan918_iwslt22_dialect_st_transformer_fisherlike_4gpu_bbins16m_fix`
This model was trained by Brian Yan using the iwslt22_dialect recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 77fce65312877a132bbae01917ad26b74f6e2e14
pip install -e .
cd egs2/iwslt22_dialect/st1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/brianyan918_iwslt22_dialect_st_transformer_fisherlike_4gpu_bbins16m_fix
```
<!-- Generated by scripts/utils/show_st_results.sh -->
# RESULTS
## Environments
- date: `Tue Feb 8 13:29:21 EST 2022`
- python version: `3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.8.1`
- Git hash: `77fce65312877a132bbae01917ad26b74f6e2e14`
- Commit date: `Tue Feb 8 10:48:10 2022 -0500`
## st_transformer_fisherlike_4gpu_bbins16m_fix_raw_bpe_tc1000_sp
### BLEU
|dataset|bleu_score|verbose_score|
|---|---|---|
|p3_st_model_valid.acc.ave|12.0|37.4/17.3/8.6/4.5 (BP = 0.952 ratio = 0.953 hyp_len = 40192 ref_len = 42181)|
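The verbose score decomposes BLEU into its parts: the four modified n-gram precisions, the brevity penalty (BP), and the hypothesis/reference length ratio. A quick pure-Python consistency check using the standard corpus-BLEU formula, with the precisions and lengths read off the row above:

```python
import math

precisions = [37.4, 17.3, 8.6, 4.5]   # modified n-gram precisions, n = 1..4 (%)
hyp_len, ref_len = 40192, 42181

# Brevity penalty: exp(1 - ref/hyp) when the hypothesis is shorter than the reference.
bp = math.exp(1.0 - ref_len / hyp_len) if hyp_len < ref_len else 1.0

# BLEU = BP * geometric mean of the four n-gram precisions.
geo_mean = math.exp(sum(math.log(p / 100.0) for p in precisions) / 4.0)
bleu = 100.0 * bp * geo_mean

print(round(bp, 3))    # ~0.952, matching the reported BP
print(round(bleu, 1))  # ~12.0, matching the reported corpus BLEU
```

This is only a sanity check on the reported numbers; the actual evaluation in the recipe is done by ESPnet's scoring scripts.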
## ST config
<details><summary>expand</summary>
```
config: conf/tuning/transformer_fisherlike_4gpu_bbins16m_fix.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/st_transformer_fisherlike_4gpu_bbins16m_fix_raw_bpe_tc1000_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 36641
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 3
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 16000000
valid_batch_bins: null
train_shape_file:
- exp/st_stats_raw_bpe1000_sp/train/speech_shape
- exp/st_stats_raw_bpe1000_sp/train/text_shape.bpe
- exp/st_stats_raw_bpe1000_sp/train/src_text_shape.bpe
valid_shape_file:
- exp/st_stats_raw_bpe1000_sp/valid/speech_shape
- exp/st_stats_raw_bpe1000_sp/valid/text_shape.bpe
- exp/st_stats_raw_bpe1000_sp/valid/src_text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - /scratch/iwslt22dump//raw/train_sp/wav.scp
- speech
- kaldi_ark
- - /scratch/iwslt22dump//raw/train_sp/text.tc.en
- text
- text
- - /scratch/iwslt22dump//raw/train_sp/text.tc.rm.ta
- src_text
- text
valid_data_path_and_name_and_type:
- - /scratch/iwslt22dump//raw/dev/wav.scp
- speech
- kaldi_ark
- - /scratch/iwslt22dump//raw/dev/text.tc.en
- text
- text
- - /scratch/iwslt22dump//raw/dev/text.tc.rm.ta
- src_text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 12.5
scheduler: noamlr
scheduler_conf:
model_size: 256
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- s
- ▁
- apo
- '&'
- ;
- ▁i
- ▁you
- t
- ▁it
- ▁the
- ▁and
- ▁to
- ▁that
- ▁a
- n
- a
- ▁he
- ▁me
- m
- d
- ▁yes
- ▁she
- ▁no
- ▁in
- ▁what
- ▁for
- ▁we
- ing
- ll
- ▁they
- re
- ▁are
- ▁did
- ▁god
- ▁is
- e
- ed
- ▁so
- ▁her
- ▁do
- ▁have
- ▁of
- ▁with
- ▁go
- ▁know
- ▁not
- ▁was
- ▁on
- ▁don
- y
- ▁him
- ▁one
- ▁like
- ▁there
- '%'
- ▁pw
- ▁be
- ▁at
- ▁told
- ▁good
- ▁will
- ▁my
- ▁all
- ▁or
- c
- er
- p
- ▁how
- ▁ah
- r
- ▁but
- ▁them
- ▁see
- ▁get
- ▁can
- i
- ▁when
- ▁going
- ▁about
- ▁mean
- ▁this
- k
- ▁your
- ▁by
- ▁if
- u
- ▁come
- ▁up
- ▁tell
- g
- ▁said
- ▁then
- ▁now
- ▁yeah
- o
- ▁out
- al
- ra
- ▁because
- ▁time
- ▁well
- ▁would
- ▁p
- ▁from
- h
- ar
- f
- ▁swear
- ▁went
- b
- ▁really
- or
- ▁want
- ri
- ▁home
- ▁work
- ve
- ▁take
- ▁got
- ▁just
- l
- ▁uh
- ▁why
- en
- ▁even
- ▁am
- ▁who
- ▁make
- ▁day
- '-'
- in
- ▁something
- ▁some
- ou
- ▁us
- ▁okay
- ▁where
- ▁does
- ▁has
- ▁thank
- ▁c
- ▁his
- th
- ▁back
- ▁fine
- ▁today
- ly
- ▁b
- ▁oh
- ▁doing
- ▁everything
- ▁here
- le
- ▁thing
- ▁two
- ▁anyway
- li
- ▁had
- ▁still
- ▁say
- ro
- ▁after
- ce
- ▁hello
- ▁ma
- ▁call
- w
- ▁listen
- il
- ▁should
- ▁girl
- ▁f
- z
- ▁too
- ▁let
- ▁understand
- ▁may
- ▁much
- ▁think
- ch
- ir
- ha
- ▁other
- ▁tomorrow
- ▁were
- ▁people
- es
- ▁year
- di
- ba
- ▁right
- el
- ▁things
- ▁house
- v
- ▁actually
- un
- ▁an
- ▁give
- ▁only
- ▁better
- pe
- ▁need
- ▁buy
- ▁de
- ne
- ▁ha
- ur
- ion
- ▁made
- la
- ▁willing
- ▁nothing
- ▁called
- ▁night
- ▁yesterday
- se
- ▁came
- ▁lot
- ter
- ▁g
- po
- ▁find
- ry
- ▁car
- ▁over
- ic
- ▁stay
- ▁eat
- ent
- ▁always
- ▁very
- 'on'
- ▁put
- ▁ramadan
- ▁those
- ▁hear
- is
- ▁talk
- ▁three
- ▁anything
- ▁mo
- ▁little
- ▁been
- ▁already
- fi
- ation
- ke
- ▁first
- ▁look
- it
- ▁won
- ▁mom
- ▁way
- ▁before
- ▁ok
- ▁last
- fa
- ▁cook
- vi
- ▁hi
- ▁same
- ▁thought
- ▁also
- um
- ate
- ▁money
- ▁start
- ▁place
- us
- ▁morning
- ▁could
- ▁ask
- ▁bring
- ▁bit
- ▁lo
- ▁leave
- ▁man
- ▁left
- ine
- ▁days
- ge
- ▁la
- ▁week
- ▁friend
- ▁problem
- ▁sister
- ▁allah
- ▁feel
- ▁every
- ▁more
- fe
- ▁long
- ▁hundred
- ▁j
- ▁eh
- ho
- ca
- em
- ▁talking
- ▁exam
- ▁next
- ▁new
- ▁fun
- ▁took
- ▁alright
- co
- ▁w
- ▁um
- ▁eid
- ▁brother
- ▁our
- gh
- ow
- ▁o
- ▁four
- ni
- wa
- ▁else
- ▁finish
- bo
- ▁sleep
- ▁bless
- ▁dear
- ▁since
- ▁play
- ▁name
- hi
- ▁coming
- ▁many
- et
- ▁usual
- ▁con
- ▁maybe
- ▁off
- bi
- ▁than
- ▁any
- ▁mother
- ▁son
- om
- ▁their
- ▁keep
- ▁dinner
- ▁ten
- ▁half
- ▁help
- ▁bad
- and
- ▁pass
- ▁hot
- ▁guy
- ▁least
- ▁down
- ▁bought
- ▁dinars
- ▁working
- ▁around
- ▁normal
- ▁poor
- ▁stuff
- ▁hope
- ▁used
- ▁again
- ▁bro
- ul
- ▁phone
- ▁ex
- ▁done
- ▁six
- ▁na
- ▁month
- ▁tired
- ▁check
- ▁show
- ▁together
- oo
- ▁later
- ▁past
- ▁five
- ▁watch
- ya
- ▁coffee
- ment
- ut
- ▁plan
- ▁great
- ▁daughter
- j
- ▁another
- side
- ▁change
- ▁yet
- ting
- ▁until
- ▁honestly
- ▁whole
- ol
- ▁care
- ▁sure
- able
- id
- ▁big
- ▁spend
- ▁exactly
- ▁boy
- ▁course
- ▁end
- ▁please
- ▁started
- he
- up
- ▁found
- ▁saw
- ▁family
- ▁asked
- ▁enough
- ▁during
- ▁rest
- ▁which
- ▁gave
- ▁true
- ▁while
- ▁job
- ▁el
- ▁each
- ▁away
- ▁kids
- ▁goes
- less
- ▁twenty
- ▁eight
- ▁someone
- ▁cha
- ▁clothes
- ah
- ▁myself
- ▁nice
- ▁late
- ▁old
- ▁real
- age
- ant
- ▁fast
- ▁add
- ▁hard
- ▁these
- ful
- im
- ▁close
- ive
- ▁dad
- ▁pay
- ies
- ▁dude
- ▁alone
- ▁far
- ance
- ▁dis
- ▁seven
- ▁isn
- ▁pro
- our
- ▁thousand
- ▁break
- ▁hour
- ▁wait
- ▁brought
- ▁open
- ▁un
- ▁wedding
- ▁walk
- ▁father
- ▁ka
- ▁second
- x
- ▁saturday
- ▁salad
- ▁win
- ▁everyone
- ▁water
- ▁tunis
- ▁remember
- ity
- ▁wake
- ▁minute
- ▁school
- ▁sunday
- ▁own
- ▁shop
- ▁cold
- ▁meet
- ▁wear
- ever
- ▁send
- ▁early
- ▁gra
- tic
- ▁short
- ▁use
- ▁sometimes
- hou
- ▁love
- ▁prepare
- ▁sea
- ▁study
- ure
- ▁com
- qui
- ▁hand
- ▁both
- ja
- ▁summer
- ▁wrong
- ▁wanted
- che
- ▁miss
- ▁try
- ▁iftar
- ▁yourself
- q
- ▁live
- war
- ▁expensive
- ▁getting
- ▁waiting
- ▁once
- ▁kh
- ▁forgot
- ▁nine
- ▁anymore
- ▁soup
- ▁uncle
- ▁beach
- ▁saying
- ▁into
- ▁having
- ▁brik
- ▁room
- ▁food
- ▁visit
- ▁matter
- ▁thirty
- ▁taking
- ▁rain
- ▁aunt
- ▁never
- ▁pick
- ▁tunisia
- ▁health
- ▁head
- ▁cut
- ▁fasting
- ▁sick
- ▁friday
- ▁forget
- ▁monday
- ▁become
- ▁dress
- ated
- ▁most
- wi
- ▁hang
- ▁life
- ▁fish
- ▁happy
- ▁delicious
- ▁deal
- ▁finished
- ble
- ▁studying
- ▁weather
- ▁making
- ▁cost
- ▁bl
- ▁stayed
- ▁guess
- ▁teach
- ▁stop
- ▁near
- ▁watching
- ▁without
- ▁imagine
- ▁seriously
- fl
- ▁speak
- ▁idea
- ▁must
- ▁normally
- ▁turn
- ize
- ▁clean
- ▁tv
- ▁meat
- ▁woke
- ▁example
- ▁easy
- ▁sent
- ▁sell
- over
- ▁fifty
- ▁amazing
- ▁beautiful
- ▁whatever
- ▁enjoy
- ▁talked
- ▁believe
- ▁thinking
- ▁count
- ▁almost
- ▁longer
- ▁afternoon
- ▁hair
- ▁front
- ▁earlier
- ▁mind
- ▁kind
- ▁tea
- ▁best
- ▁rent
- ▁picture
- ▁cooked
- ▁price
- ight
- ▁soon
- ▁woman
- ▁otherwise
- ▁happened
- ▁story
- ▁luck
- ▁high
- ▁happen
- ▁arrive
- ▁paper
- ga
- ▁quickly
- ▁looking
- ub
- ▁number
- ▁staying
- ▁sit
- man
- ack
- ▁important
- ▁either
- ▁person
- ▁small
- ▁free
- ▁crazy
- ▁playing
- ▁kept
- ▁part
- ▁game
- law
- ▁till
- uck
- ▁ready
- ▁might
- ▁gone
- ▁full
- ▁fix
- ▁subject
- ▁laugh
- ▁doctor
- ▁welcome
- ▁eleven
- ▁sleeping
- ▁heat
- ▁probably
- ▁such
- ▁café
- ▁fat
- ▁sweet
- ▁married
- ▁drink
- ▁move
- ▁outside
- ▁especially
- ▁group
- ji
- ▁market
- ▁through
- ▁train
- ▁protect
- ▁turned
- ▁red
- ▁busy
- ▁light
- ▁noise
- ▁street
- ▁manage
- ▁piece
- ▁sitting
- gue
- ▁sake
- ▁party
- ish
- ▁young
- ▁case
- ▁cool
- huh
- ▁marwa
- ▁drive
- ▁pray
- clock
- ▁couscous
- ▁spent
- ▁felt
- ▁hopefully
- ▁everybody
- ▁living
- ▁pain
- line
- ▁between
- ▁match
- ▁prayer
- que
- ian
- ▁facebook
- ▁spi
- ▁eye
- ▁children
- ▁tonight
- ▁mohamed
- ▁understood
- ▁black
- ▁husband
- ▁rid
- ▁kitchen
- ▁face
- ▁swim
- ▁kid
- ▁invite
- ▁cup
- ▁grilled
- ▁wife
- ▁cousin
- ▁drop
- ▁wow
- ▁table
- ▁du
- ▁bored
- ▁neighborhood
- ▁agree
- ▁bread
- ▁hamma
- ▁straight
- ▁tuesday
- ▁anyone
- ▁lunch
- ade
- ▁himself
- ▁gather
- ▁wish
- ▁fifteen
- ▁wednesday
- ▁die
- ▁thursday
- ▁color
- ▁asleep
- ▁different
- ▁whether
- ▁ago
- ▁middle
- ▁class
- ▁cake
- shirt
- ▁fight
- ▁clear
- ▁test
- ▁plus
- ▁sousse
- ▁beginning
- ▁result
- ▁learn
- ▁crowded
- ▁slept
- ▁shoes
- ▁august
- ▁pretty
- ▁white
- ▁apparently
- ▁reach
- ▁mariem
- ▁return
- ▁road
- ▁million
- ▁stand
- ▁paid
- ▁word
- ious
- ▁few
- ▁breakfast
- ▁post
- ▁kilo
- ▁chicken
- ▁grade
- ▁read
- ▁accept
- ▁birthday
- ▁exhaust
- ▁point
- ▁july
- ▁patience
- ▁studies
- ▁trouble
- ▁along
- ▁worry
- ▁follow
- ▁hurt
- ▁afraid
- ▁trip
- ▁ahmed
- ▁remain
- ▁succeed
- ▁mercy
- ▁difficult
- ▁weekend
- ▁answer
- ▁cheap
- ▁repeat
- ▁auntie
- ▁sign
- ▁hold
- ▁under
- ▁olive
- ▁mahdi
- ▁sfax
- ▁annoy
- ▁dishes
- ▁message
- ▁business
- ▁french
- ▁serious
- ▁travel
- ▁office
- ▁wonder
- ▁student
- ▁internship
- ▁pepper
- ▁knew
- ▁kill
- ▁sauce
- ▁herself
- ▁hammamet
- ▁damn
- ▁mix
- ▁suit
- ▁medicine
- ▁remove
- ▁gonna
- ▁company
- ▁quarter
- ▁shopping
- ▁correct
- ▁throw
- ▁grow
- ▁voice
- ▁series
- gotten
- ▁taste
- ▁driving
- ▁hospital
- ▁sorry
- ▁aziz
- ▁milk
- ▁green
- ▁baccalaureate
- ▁running
- ▁lord
- ▁explain
- ▁angry
- ▁build
- ▁fruit
- ▁photo
- é
- ▁crying
- ▁baby
- ▁store
- ▁project
- ▁france
- ▁twelve
- ▁decide
- ▁swimming
- ▁world
- ▁preparing
- ▁special
- ▁session
- ▁behind
- ▁vegetable
- ▁strong
- ▁fatma
- ▁treat
- ▁cream
- ▁situation
- ▁settle
- ▁totally
- ▁stopped
- ▁book
- ▁honest
- ▁solution
- ▁vacation
- ▁cheese
- ▁ahead
- ▁sami
- ▁focus
- ▁scared
- ▁club
- ▁consider
- ▁final
- ▁naturally
- ▁barely
- ▁issue
- ▁floor
- ▁birth
- ▁almighty
- ▁engagement
- ▁blue
- ▁empty
- ▁soccer
- ▁prophet
- ▁ticket
- ▁indeed
- ▁write
- ▁present
- ▁patient
- ▁available
- ▁holiday
- ▁leaving
- ▁became
- ▁reason
- ▁apart
- ▁impossible
- ▁shame
- ▁worried
- ▁body
- ▁continue
- ▁program
- ▁stress
- ▁arabic
- ▁round
- ▁taxi
- ▁transport
- ▁third
- ▁certain
- ▁downstairs
- ▁neighbor
- ▁directly
- ▁giving
- ▁june
- ▁mini
- ▁upstairs
- ▁mistake
- ▁period
- ▁catch
- ▁buddy
- ▁success
- ▁tajine
- ▁excuse
- ▁organize
- ▁question
- ▁suffer
- ▁remind
- ▁university
- ▁downtown
- ▁sugar
- ▁twice
- ▁women
- ▁couple
- ▁everyday
- ▁condition
- ▁obvious
- ▁nobody
- ▁complete
- ▁stomach
- ▁account
- ▁september
- ▁choose
- ▁bottle
- ▁figure
- ▁instead
- ▁salary
- '0'
- '1'
- '3'
- '2'
- '5'
- '7'
- '4'
- '9'
- '8'
- /
- °
- '6'
- è
- $
- ï
- <sos/eos>
src_token_list:
- <blank>
- <unk>
- ّ
- ي
- ا
- ِ
- ل
- َ
- و
- ه
- ة
- م
- ر
- ك
- ▁ما
- ُ
- ب
- ش
- د
- ت
- ▁في
- َّ
- ▁ن
- ▁ي
- ▁ت
- ن
- ▁لا
- ح
- ▁ه
- س
- وا
- ▁م
- ف
- ▁إي
- ع
- ▁ب
- ها
- ط
- ى
- ق
- ▁الل
- ▁أ
- ج
- ▁والل
- ▁و
- ▁إيه
- ▁ا
- ▁يا
- ز
- ▁تو
- ▁بش
- ص
- ▁أه
- خ
- ات
- ▁إنت
- ▁أنا
- نا
- ▁شن
- ▁ق
- ▁ش
- ▁ك
- يت
- ين
- ▁ف
- ار
- ▁قال
- ▁باهي
- ▁ع
- ▁من
- ▁ل
- ▁مش
- ▁كان
- ▁حت
- ▁ول
- هم
- ▁ر
- ان
- ▁س
- ض
- ني
- ▁بال
- ▁على
- ▁متاع
- ▁كي
- ▁ال
- ▁ح
- ▁كل
- ▁آنا
- ▁الم
- ▁خ
- ▁الس
- ▁وال
- ون
- ور
- ▁أم
- ▁هك
- ▁آش
- ▁الد
- ▁عاد
- ▁ج
- ▁معناها
- ▁مع
- اش
- ▁الص
- ▁نهار
- ▁لل
- لها
- ▁تي
- ▁رب
- ▁خاطر
- ▁أكهو
- غ
- ▁شي
- الل
- ام
- تها
- ▁ون
- ▁آك
- ▁فهمت
- وم
- ▁موش
- مشي
- ▁ص
- ▁اليوم
- ▁مر
- ست
- ▁الب
- ▁لاباس
- تلي
- ▁الكل
- ▁عال
- ذ
- ▁فم
- ▁الك
- ▁حاجة
- ▁شوي
- اكا
- ▁ياخي
- ▁هاني
- ▁صح
- اس
- ▁آه
- ▁برشة
- ▁الن
- ▁وت
- ▁الج
- لك
- ▁راهو
- سم
- ▁الح
- مت
- ▁الت
- ▁بعد
- اج
- عد
- ▁انشا
- وش
- لت
- ▁وين
- ث
- ▁ولا
- ▁باش
- ▁فيها
- نت
- ▁إ
- ▁الأ
- ▁الف
- ▁إم
- ▁واحد
- ▁ألو
- ▁عندي
- ▁أك
- ▁خل
- ▁وي
- ▁تعمل
- أ
- ▁ريت
- ▁وأ
- ▁تعرف
- بت
- ▁الع
- ▁مشيت
- ▁وه
- ▁حاصيلو
- ▁بالل
- ▁نعمل
- ▁غ
- ▁تجي
- ▁يجي
- ▁كيفاش
- ▁عملت
- ظ
- اك
- ▁هاو
- ▁اش
- ▁قد
- ▁نق
- ▁د
- ▁زادا
- ▁فيه
- رة
- ▁بر
- ▁الش
- ▁ز
- ▁كيما
- ▁الا
- ند
- عم
- ▁نح
- ▁بنتي
- ▁نمشي
- ▁عليك
- ▁نعرفش
- ▁كهو
- ▁وم
- ▁ط
- تي
- ▁خير
- ▁آ
- مش
- ▁عليه
- له
- حت
- ▁إيا
- ▁أحنا
- ▁تع
- الا
- عب
- ▁ديما
- ▁تت
- ▁جو
- ▁مالا
- ▁أو
- ▁قلتلك
- ▁معنتها
- لنا
- ▁شكون
- ▁تحب
- بر
- ▁الر
- ▁وا
- ▁الق
- اء
- ▁عل
- ▁البارح
- ▁وخ
- ▁سافا
- ▁هوما
- ▁ولدي
- ▁
- ▁نعرف
- يف
- رت
- ▁وب
- ▁روح
- ▁علاش
- ▁هاذاك
- ▁رو
- وس
- ▁جا
- ▁كيف
- طر
- ▁غادي
- يكا
- عمل
- ▁نحب
- ▁عندك
- ▁وما
- ▁فر
- اني
- ▁قلتله
- ▁الط
- فر
- ▁دار
- ▁عليها
- ▁يعمل
- ▁نت
- ▁تح
- باح
- ▁ماهو
- ▁وكل
- ▁وع
- قت
- ▁فهمتك
- عر
- ▁وس
- ▁تر
- ▁سي
- يلة
- ▁قلت
- ▁رمضان
- صل
- ▁آما
- ▁الواحد
- ▁بيه
- ▁ثلاثة
- ▁فهمتني
- ▁ها
- بط
- ▁مازال
- قل
- ▁بالك
- ▁معناتها
- ▁ور
- ▁قلتلها
- ▁يس
- رب
- ▁ام
- ▁وبعد
- ▁الث
- ▁وإنت
- ▁بحذا
- ▁لازم
- ْ
- ▁بن
- قرا
- سك
- ▁يت
- خل
- ▁فه
- عت
- ▁هاك
- ▁تق
- ▁قبل
- ▁وك
- ▁نقول
- ▁الز
- حم
- ▁عادش
- حكي
- وها
- بة
- نس
- طل
- ▁علاه
- ذا
- ▁سا
- ▁طل
- الي
- ▁يق
- ▁دو
- حوا
- حد
- ▁نشوف
- نة
- ▁لي
- ▁تك
- ▁نا
- ▁هاذ
- ▁خويا
- ▁المر
- ▁وينك
- ▁البر
- ▁أتو
- ينا
- ▁حل
- ولي
- ▁ثم
- ▁عم
- ▁آي
- ▁قر
- از
- ▁وح
- كش
- بعة
- ▁كيفاه
- ▁نع
- ▁الحمدلله
- ▁ياسر
- ▁الخ
- ▁معاك
- ▁معاه
- ▁تقول
- دة
- ▁حكاية
- تش
- ▁حس
- ▁غدوا
- ▁بالحق
- روا
- وز
- ▁تخ
- ▁العيد
- رجع
- ▁بالي
- ▁جات
- ▁وج
- حة
- ▁وش
- ▁آخر
- ▁طا
- ▁مت
- لقا
- تك
- ▁مس
- ▁راني
- كون
- ▁صاحب
- ▁هاكا
- ▁قول
- ▁عر
- ▁عنده
- ▁يلزم
- ▁هاذا
- ▁يخ
- ▁وقتاش
- ▁وقت
- بع
- ▁العش
- ▁هاذي
- هاش
- ينة
- ▁هاذاكا
- عطي
- ▁تنج
- ▁باهية
- نيا
- فت
- ▁يحب
- ▁تف
- ▁أهلا
- وف
- ▁غدوة
- ▁بيك
- ▁بد
- عن
- ▁در
- ▁ننج
- هار
- ▁الحكاية
- مون
- وق
- ▁نورمال
- ▁عندها
- خر
- ▁بو
- ▁حب
- ▁آكا
- ▁وف
- ▁هاذيكا
- ▁ديجا
- ▁وق
- ▁طي
- لتل
- بعث
- ▁تص
- رك
- ▁مانيش
- ▁العادة
- ▁شوف
- ضر
- ▁يمشي
- ▁نعملوا
- ▁عرفت
- ▁زال
- ▁متع
- ▁عمل
- ▁بيها
- ▁نحكي
- اع
- ▁نج
- معة
- ▁والكل
- عناها
- ▁يعي
- ▁نجي
- ستن
- ▁هاذيك
- ▁عام
- ▁فلوس
- قة
- تين
- ▁بالقدا
- لهم
- ▁تخدم
- ▁ٱ
- ▁شيء
- ▁راهي
- ▁جاب
- ولاد
- ابل
- ▁ماك
- عة
- ▁نمشيوا
- وني
- شري
- بار
- انس
- ▁وقتها
- ▁جديد
- ▁يز
- ▁كر
- ▁حاسيلو
- ▁شق
- ▁اه
- ▁سايي
- ▁انشالل
- رج
- مني
- ▁بلا
- ▁صحيح
- ▁غير
- ▁يخدم
- مان
- وكا
- ▁عند
- ▁قاعدة
- ▁تس
- ربة
- ▁راس
- ▁حط
- ▁نكل
- تني
- ▁الو
- سيون
- ▁عندنا
- ▁لو
- ▁ست
- صف
- ▁ض
- ▁كامل
- ▁نخدم
- ▁يبدا
- ▁دونك
- ▁أمور
- رات
- ▁تونس
- بدا
- ▁تحكي
- ▁سو
- ▁جاي
- ▁وحدة
- ▁ساعة
- حنا
- ▁بكري
- ▁إل
- ▁وبر
- ▁كم
- ▁تبدا
- ارة
- ادي
- رق
- لوا
- ▁يمكن
- ▁خاط
- ▁وص
- جين
- ▁هاذاي
- ▁هز
- قد
- ▁قل
- ▁وكهو
- ▁نص
- ▁دي
- لقى
- ▁وأنا
- سين
- ▁يح
- ▁ماشي
- ▁شو
- ▁خذيت
- امات
- ▁كنت
- خرج
- ▁لقيت
- رتاح
- كس
- ▁حاجات
- ▁مريق
- ▁مل
- ليفون
- اوا
- ▁شفت
- ▁عاملة
- ▁تن
- ▁والا
- سأل
- ▁حد
- ▁قاللك
- ▁العباد
- ▁عالاخ
- ▁وآك
- ▁ماني
- ▁ناخذ
- ▁حم
- ▁الإ
- ▁ماضي
- ▁ث
- الة
- ▁أخرى
- رين
- ▁تشوف
- ▁نخرج
- ▁أربعة
- ▁ألف
- نيش
- ▁هاي
- آ
- ▁فيك
- رشة
- ولة
- فلة
- ▁بابا
- ▁أما
- ▁روحي
- ▁فيهم
- ▁رج
- ▁ليك
- ونس
- يرة
- ▁وأكهو
- ندي
- ▁صار
- شك
- ▁نرو
- ▁آكهو
- ▁تش
- ▁غاديكا
- ▁معاها
- ▁لب
- ▁أذاكا
- ▁آني
- ▁يوم
- عملوا
- ▁نقعد
- دوا
- ▁عد
- سمع
- متني
- ▁الخدمة
- ▁مازلت
- ▁قعدت
- ايا
- ▁برك
- قعد
- ▁خرجت
- ضح
- ▁قالل
- ▁يقول
- ▁وفي
- ▁حق
- ختي
- ▁يعني
- خدم
- ▁جيت
- ▁نرمال
- طف
- ▁عجب
- ▁تقعد
- ▁مشينا
- اية
- ▁خدمة
- لدي
- روف
- ▁الفطر
- ▁مشكل
- ▁سل
- ▁وآنا
- الط
- ▁بالس
- ▁هانا
- ▁أوه
- ▁أذيكا
- ▁وإ
- ▁عليهم
- ▁حالة
- جت
- قضي
- ▁لق
- ▁ونصف
- سعة
- عطيه
- عاو
- خانة
- ▁مخ
- ▁شبيك
- بيعة
- ▁أهوك
- يني
- ▁تعد
- ▁خال
- ▁قريب
- ▁راك
- ▁قالت
- ▁لتو
- ▁أكثر
- اعة
- ▁يظهرلي
- ▁ماشية
- سمعني
- ▁نسيت
- ▁ينج
- ▁الحمدلل
- هدي
- ▁وشن
- ▁تطي
- ▁هنا
- ▁نسمع
- ▁إنتوما
- ▁نحكيلك
- ▁قاعد
- ▁اسمعني
- خرين
- إ
- ماعة
- ▁بالر
- ▁دا
- ▁عمر
- ▁نشري
- ▁قهوة
- ▁تبارك
- ▁صب
- ▁مشات
- غر
- ▁شريت
- ▁عامل
- ▁زوج
- ثنين
- ▁برب
- ريق
- ▁نكم
- ▁لم
- بيب
- ▁مياة
- ▁مالل
- ▁قعد
- ▁سخون
- قس
- ▁وحده
- ▁اسمع
- ▁خمسة
- ▁غالي
- ▁الأو
- رلي
- ▁العظيم
- ▁ترو
- تهم
- كري
- ▁نجيب
- ▁جملة
- قول
- ▁قلتلي
- ▁إيجا
- ▁يقعد
- ▁إيام
- ▁يعطيك
- ▁نخل
- ▁دب
- يمة
- رهبة
- ▁نهز
- ▁محم
- ▁بين
- غار
- ▁نحنا
- ▁بون
- ▁الغ
- ▁شهر
- ▁بار
- رقة
- ▁نطي
- ئ
- ترو
- ▁ملا
- ▁الكرهبة
- ▁باه
- ▁عالإخ
- ▁عباد
- ▁بلاصة
- ▁مشى
- بيع
- ▁نفس
- ▁عملنا
- ▁واح
- ▁أحلاه
- ▁بحذاك
- ▁لأ
- ▁دخ
- باب
- ▁ودر
- ▁غالب
- ▁ناكل
- ▁مثلا
- ء
- ▁راقد
- ▁تفر
- ▁الوقت
- ▁تاخذ
- حذا
- نتر
- ▁نبدا
- ▁حال
- ▁مريم
- الم
- ▁جمعة
- رجول
- ▁معايا
- ▁تخرج
- ▁باس
- ▁ساعات
- ▁عندهم
- ▁نتفر
- مسة
- ▁الجمعة
- بعين
- ▁أكاهو
- ▁ميش
- مراة
- ▁خذا
- ▁ظ
- ▁سيدي
- ▁معاي
- ▁شبيه
- ▁حكا
- ▁سف
- ▁بعضنا
- ▁بالض
- ▁ليلة
- ▁زعما
- ▁الحق
- مضان
- ▁صعيب
- ▁قالتلك
- ً
- ملة
- ▁بق
- عرف
- لاطة
- ▁خرج
- ▁أخت
- ▁تقوللي
- ▁معانا
- ▁صغير
- ▁إسمه
- ▁بعض
- ▁العام
- ▁علينا
- ▁يتع
- ▁فاش
- ▁شع
- ▁معاهم
- ▁يسالش
- ▁لهنا
- ▁سمعت
- ▁البار
- ▁نتصو
- ▁الاخ
- ▁وكان
- وبة
- دمة
- ▁كون
- ▁مبعد
- ▁تسمع
- ▁بعيد
- ▁تاكل
- ▁نلقا
- لامة
- لاثة
- ▁ذ
- ▁تحس
- ▁الواح
- ▁لدار
- ▁فاتت
- ▁تاو
- ▁أحوالك
- ▁عاملين
- ▁كبيرة
- عجب
- ▁بنت
- ▁بيدي
- ▁حكيت
- ▁تحط
- ▁مسكينة
- ▁هاذوكم
- ▁نزيد
- لاث
- ▁عشرة
- ▁عيني
- ▁تعب
- ▁ياكل
- ▁وزيد
- ▁طول
- ▁حمدلله
- ▁وقتاه
- ▁معناه
- ▁وآش
- ▁ووه
- ▁وواحد
- ▁نشوفوا
- ▁عيد
- ▁بصراحة
- ▁بحذانا
- ▁قاعدين
- ▁راجل
- ▁وحدي
- ▁وعشرين
- ▁لين
- ▁خايب
- ▁قالتله
- ▁تهز
- عيد
- ▁كبير
- ▁يعرف
- ▁عارف
- ▁الفلوس
- ▁زايد
- ▁خدمت
- ▁هاذوما
- ▁سلاطة
- ▁فارغة
- ▁ساعتين
- ▁تبد
- ▁راو
- ▁مائة
- ▁بعضهم
- ▁ظاهرلي
- ▁الفازة
- كتب
- ▁القهوة
- سبوك
- ▁زاد
- ▁ضرب
- حكيلي
- ▁فوق
- ▁عاود
- ▁راي
- ▁ومبعد
- ▁حوايج
- ▁دخلت
- ▁يقوللك
- ▁زيد
- ▁زلت
- لفزة
- ▁وقال
- ▁يهب
- ▁يلزمني
- ▁الحمد
- ▁أذي
- طبيعت
- ▁دورة
- ▁عالأقل
- ▁آذاك
- ▁وبال
- ▁الجاي
- عطيني
- ▁ياخذ
- ▁احكيلي
- ▁نهبط
- ▁رقدت
- بلاصة
- ▁عزيز
- ▁صغار
- ▁أقسم
- ▁جيب
- ▁وصلت
- ▁أحوال
- ▁جيست
- ▁جماعة
- سئل
- ▁خوذ
- ▁يهز
- ▁الأخرى
- ▁آلاف
- ▁إسمع
- ▁الحقيقة
- ▁ناقص
- ▁حاط
- ▁موجود
- عباد
- ▁آذيك
- ▁خارج
- ▁الخير
- ▁البنات
- بقى
- ▁طرف
- ▁سينون
- ▁ماذاب
- ▁البحر
- ▁نرقد
- مدلله
- ▁إيجى
- ▁خالتي
- ▁فازة
- ▁بريك
- ▁شريبتك
- ▁تطلع
- ؤ
- ▁المشكلة
- ▁طري
- ▁مادام
- ▁طلبت
- ▁يلعب
- ▁نعاود
- ▁وحدك
- ▁ظاهر
- ٱ
- ژ
- ٍ
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
model_conf:
asr_weight: 0.3
mt_weight: 0.0
mtlalpha: 1.0
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: bpe
src_token_type: bpe
bpemodel: data/token_list/tgt_bpe_unigram1000/bpe.model
src_bpemodel: data/token_list/src_bpe_unigram1000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
n_fft: 512
win_length: 400
hop_length: 160
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/st_stats_raw_bpe1000_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: transformer
encoder_conf:
input_layer: conv2d
num_blocks: 12
linear_units: 2048
dropout_rate: 0.1
output_size: 256
attention_heads: 4
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
input_layer: embed
num_blocks: 6
linear_units: 2048
dropout_rate: 0.1
extra_asr_decoder: transformer
extra_asr_decoder_conf:
input_layer: embed
num_blocks: 2
linear_units: 2048
dropout_rate: 0.1
extra_mt_decoder: transformer
extra_mt_decoder_conf:
input_layer: embed
num_blocks: 2
linear_units: 2048
dropout_rate: 0.1
required:
- output_dir
- src_token_list
- token_list
version: 0.10.6a1
distributed: true
```
</details>
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
# MultiBERTs Seed 2 Checkpoint 40k (uncased)
MultiBERTs (pretrained BERT) model for seed 2, intermediate checkpoint at 40k training steps, pretrained on English using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multiberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT-2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-2-40k')
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-2-40k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
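The sentence-pairing step described above can be sketched in a few lines of Python (a toy illustration over made-up sentences, not the actual preprocessing code used for pretraining):

```python
import random

def make_nsp_pairs(sentences, seed=0):
    """Build (sentence_a, sentence_b, is_next) examples: with probability 0.5,
    sentence B is the true next sentence, otherwise a random one."""
    rng = random.Random(seed)
    pairs = []
    for i in range(len(sentences) - 1):
        a = sentences[i]
        if rng.random() < 0.5:
            pairs.append((a, sentences[i + 1], True))  # actual next sentence
        else:
            # a real pipeline would avoid picking the true next sentence here
            pairs.append((a, rng.choice(sentences), False))
    return pairs

corpus = ["first sentence .", "second sentence .", "third sentence .", "fourth sentence ."]
pairs = make_nsp_pairs(corpus)
```

Each resulting pair is then rendered in the `[CLS] A [SEP] B [SEP]` format shown above and truncated so the combined length stays under 512 tokens.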
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
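A minimal sketch of that 80/10/10 recipe, assuming a toy whitespace-tokenized input (illustrative only; the real pipeline operates on WordPiece token ids):

```python
import random

def mask_tokens(tokens, vocab, seed=0):
    """Select each token with probability 0.15; of the selected tokens,
    80% become [MASK], 10% a random token, and 10% stay unchanged."""
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < 0.15:                   # token selected for prediction
            labels.append(tok)                    # the model must predict this
            r = rng.random()
            if r < 0.8:
                masked.append("[MASK]")           # 80% of the cases
            elif r < 0.9:
                masked.append(rng.choice(vocab))  # 10%: random replacement
            else:
                masked.append(tok)                # 10%: left as is
        else:
            labels.append(None)                   # not part of the MLM loss
            masked.append(tok)
    return masked, labels

vocab = ["the", "cat", "sat", "on", "mat"]
masked, labels = mask_tokens(["the", "cat", "sat", "on", "the", "mat"], vocab)
```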
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
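Putting the optimizer schedule together, the learning rate at a given step looks roughly like the function below (a sketch: the card does not state the decay endpoint, so decaying to zero at the two-million-step mark is an assumption):

```python
def lr_at_step(step, base_lr=1e-4, warmup_steps=10_000, total_steps=2_000_000):
    """Linear warmup to base_lr over warmup_steps, then linear decay."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    # linear decay from base_lr (at the end of warmup) down to 0 at total_steps
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)
```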
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
### fi-en
* source group: Finnish
* target group: English
* OPUS readme: [fin-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fin-eng/README.md)
* model: transformer-align
* source language(s): fin
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opusTCv20210807+bt-2021-08-25.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opusTCv20210807+bt-2021-08-25.zip)
* test set translations: [opusTCv20210807+bt-2021-08-25.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opusTCv20210807+bt-2021-08-25.test.txt)
* test set scores: [opusTCv20210807+bt-2021-08-25.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opusTCv20210807+bt-2021-08-25.eval.txt)
## Benchmarks
| testset | BLEU | chr-F | #sent | #words | BP |
|---------|-------|-------|-------|--------|----|
| newsdev2015-enfi.fin-eng | 27.1 | 0.550 | 1500 | 32104 | 0.988 |
| newstest2015-enfi.fin-eng | 28.5 | 0.560 | 1370 | 27356 | 0.980 |
| newstest2016-enfi.fin-eng | 31.7 | 0.586 | 3000 | 63043 | 1.000 |
| newstest2017-enfi.fin-eng | 34.6 | 0.610 | 3002 | 61936 | 0.988 |
| newstest2018-enfi.fin-eng | 25.4 | 0.530 | 3000 | 62325 | 0.981 |
| newstest2019-fien.fin-eng | 30.6 | 0.577 | 1996 | 36227 | 0.994 |
| newstestB2016-enfi.fin-eng | 25.8 | 0.538 | 3000 | 63043 | 0.987 |
| newstestB2017-enfi.fin-eng | 29.6 | 0.572 | 3002 | 61936 | 0.999 |
| newstestB2017-fien.fin-eng | 29.6 | 0.572 | 3002 | 61936 | 0.999 |
| Tatoeba-test-v2021-08-07.fin-eng | 54.1 | 0.700 | 10000 | 75212 | 0.988 |
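The `BP` column is BLEU's brevity penalty, which discounts hypotheses shorter than the reference; a value of 1.000 means no length penalty was applied. As a reminder of the standard definition (not code from the evaluation pipeline above):

```python
import math

def brevity_penalty(hyp_len, ref_len):
    """BLEU brevity penalty: 1 when the hypothesis is at least as long as
    the reference, exp(1 - ref_len / hyp_len) otherwise."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# a system output about 1.2% shorter than its reference
print(round(brevity_penalty(98_800, 100_000), 3))  # → 0.988
```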
### System Info:
- hf_name: fi-en
- source_languages: fin
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fin-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fi', 'en']
- src_constituents: ('Finnish', {'fin'})
- tgt_constituents: ('English', {'eng'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: fin-eng
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opusTCv20210807+bt-2021-08-25.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opusTCv20210807+bt-2021-08-25.test.txt
- src_alpha3: fin
- tgt_alpha3: eng
- chrF2_score: 0.7
- bleu: 54.1
- src_name: Finnish
- tgt_name: English
- train_date: 2021-08-25 00:00:00
- src_alpha2: fi
- tgt_alpha2: en
- prefer_old: False
- short_pair: fi-en
- helsinki_git_sha: 2ef219d5b67f0afb0c6b732cd07001d84181f002
- transformers_git_sha: 12b4d66a80419db30a15e7b9d4208ceb9887c03b
- port_machine: LM0-400-22516.local
- port_time: 2021-11-04-21:36
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nbme-roberta-large
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7825
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
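With `gradient_accumulation_steps: 4` and a per-device batch of 8, gradients from four micro-batches are combined before each optimizer step, which is where the `total_train_batch_size` of 32 comes from. A framework-free sketch of why averaging accumulated micro-batch gradients matches a single large batch (toy per-sample "gradients"):

```python
samples = [float(i) for i in range(32)]      # per-sample "gradients" (toy values)

# one optimizer step over the full batch of 32
full_batch_grad = sum(samples) / 32

# the same 32 samples seen as 4 micro-batches of 8, averaged before stepping
micro_grads = [sum(samples[i * 8:(i + 1) * 8]) / 8 for i in range(4)]
accumulated_grad = sum(micro_grads) / 4
```

Both quantities are identical (15.5 here), so the optimizer sees the same update as with a true batch of 32.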
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1117 | 1.0 | 1850 | 0.9610 |
| 0.8911 | 2.0 | 3700 | 0.8466 |
| 0.8158 | 3.0 | 5550 | 0.7825 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_finetuned_Balance_Upsampling_SPEECH_TEXT_DISPLAY_v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6982
- Accuracy: 0.7759
- F1: 0.7743
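For reference, the reported F1 is the harmonic mean of precision and recall; a minimal sketch (the precision/recall values below are made up for illustration, not taken from this run):

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

score = f1(0.8, 0.75)  # ≈ 0.774
```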
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.5321 | 1.0 | 7958 | 1.3225 | 0.7271 | 0.7391 |
| 0.2967 | 2.0 | 15916 | 1.3868 | 0.7574 | 0.7601 |
| 0.1821 | 3.0 | 23874 | 1.4753 | 0.7513 | 0.7515 |
| 0.1193 | 4.0 | 31832 | 1.7028 | 0.7588 | 0.7596 |
| 0.0722 | 5.0 | 39790 | 1.8155 | 0.7615 | 0.7599 |
| 0.041 | 6.0 | 47748 | 2.1622 | 0.7695 | 0.7678 |
| 0.0258 | 7.0 | 55706 | 2.3871 | 0.75 | 0.7462 |
| 0.0149 | 8.0 | 63664 | 2.6135 | 0.7571 | 0.7524 |
| 0.0076 | 9.0 | 71622 | 2.7974 | 0.7648 | 0.7617 |
| 0.0051 | 10.0 | 79580 | 2.6982 | 0.7759 | 0.7743 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.10.2
- Datasets 2.5.2
- Tokenizers 0.12.1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1654
- F1: 0.8590
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2845 | 1.0 | 715 | 0.1831 | 0.8249 |
| 0.1449 | 2.0 | 1430 | 0.1643 | 0.8479 |
| 0.0929 | 3.0 | 2145 | 0.1654 | 0.8590 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
# T5-Efficient-SMALL-EL16-DL1 (Deep-Narrow version)
T5-Efficient-SMALL-EL16-DL1 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-small-el16-dl1** - is of model type **Small** with the following variations:
- **el** is **16**
- **dl** is **1**
It has **71.01** million parameters and thus requires *ca.* **284.04 MB** of memory in full precision (*fp32*)
or **142.02 MB** of memory in half precision (*fp16* or *bf16*).
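The memory figures follow directly from the parameter count at 4 bytes per parameter in fp32 and 2 bytes in fp16/bf16, using decimal megabytes (1 MB = 10^6 bytes) — weights only, excluding activations and optimizer state:

```python
def weights_memory_mb(n_params, bytes_per_param):
    """Memory for the raw weights, in decimal megabytes."""
    return n_params * bytes_per_param / 1e6

n_params = 71.01e6                          # 71.01 million parameters
fp32_mb = weights_memory_mb(n_params, 4)    # full precision
fp16_mb = weights_memory_mb(n_params, 2)    # half precision
```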
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl* variation, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to gain a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might potentially be ported in the future. | 6e39e2ddc8e542c03bcbd8dba3de621b |
BlinkDL/rwkv-4-pile-14b | BlinkDL | null | 18 | 0 | null | 17 | text-generation | true | false | false | apache-2.0 | ['en'] | ['the_pile'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['pytorch', 'text-generation', 'causal-lm', 'rwkv'] | false | true | true | 487 | false |
# RWKV-4 14B
## Model Description
RWKV-4 14B is an L40-D5120 causal language model trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details.
Use https://github.com/BlinkDL/ChatRWKV to run it.
ctx_len = 1024
n_layer = 40
n_embd = 5120
Final checkpoint: RWKV-4-Pile-14B-20230213-8019.pth : Trained on the Pile for 331B tokens.
* Pile loss 1.7579
* LAMBADA ppl 3.81, acc 71.05%
* PIQA acc 77.42%
* SC2016 acc 75.57%
* Hellaswag acc_norm 70.24%
* WinoGrande acc 62.98%
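The Pile loss and the LAMBADA perplexity above live on the same scale: perplexity is the exponential of the per-token cross-entropy loss (in nats). A quick conversion sketch; the implied numbers below are derived here, not reported by the author:

```python
import math

pile_loss = 1.7579   # reported Pile loss (nats/token)
lambada_ppl = 3.81   # reported LAMBADA perplexity

pile_ppl = math.exp(pile_loss)        # ~5.80: implied Pile perplexity
lambada_loss = math.log(lambada_ppl)  # ~1.34: implied LAMBADA loss
print(round(pile_ppl, 2), round(lambada_loss, 2))
```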
| 398d66e5e9be9582cff24030eea66086 |
farleyknight/mnist-digit-classification-2022-09-04 | farleyknight | vit | 11 | 48 | transformers | 0 | image-classification | true | false | false | apache-2.0 | null | ['mnist'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['image-classification', 'vision', 'generated_from_trainer'] | true | true | true | 1,066 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mnist-digit-classification-2022-09-04
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the mnist dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0319
- Accuracy: 0.9923
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
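All of these cards use the Adam optimizer with betas=(0.9, 0.999) and epsilon=1e-08. For reference, a single bias-corrected Adam update looks like this. This is a minimal scalar sketch, not the Trainer's actual implementation (which also applies weight decay and the learning-rate schedule):

```python
def adam_step(param, grad, m, v, t, lr=2e-5, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter; returns (new_param, m, v)."""
    m = beta1 * m + (1 - beta1) * grad       # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2  # second-moment (variance) estimate
    m_hat = m / (1 - beta1 ** t)             # bias correction
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v

# minimise (x - 3)^2 from x = 0 with a large lr, just to watch it move
x, m, v = 0.0, 0.0, 0.0
for t in range(1, 201):
    grad = 2 * (x - 3)
    x, m, v = adam_step(x, grad, m, v, t, lr=0.1)
print(x)  # approaches 3
```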
### Training results
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.12.1+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
| 1a15fb2e7b9941a5aeacfc5b323274b4 |
Helsinki-NLP/opus-mt-tr-lt | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | false | false | apache-2.0 | ['tr', 'lt'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 2,004 | false |
### tur-lit
* source group: Turkish
* target group: Lithuanian
* OPUS readme: [tur-lit](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-lit/README.md)
* model: transformer-align
* source language(s): tur
* target language(s): lit
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-lit/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-lit/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-lit/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.tur.lit | 35.6 | 0.631 |
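The System Info section below reports brevity_penalty 0.949 and ref_len 8285.0 for this test set. Since BLEU's brevity penalty is BP = exp(1 - ref_len/hyp_len) when the hypothesis is shorter than the reference, the implied hypothesis length can be recovered. This is a sanity-check sketch; the implied length itself is not reported by the card:

```python
import math

bp, ref_len = 0.949, 8285.0  # values from the System Info section

# BP = exp(1 - ref_len / hyp_len)  =>  hyp_len = ref_len / (1 - log(BP))
hyp_len = ref_len / (1 - math.log(bp))
print(round(hyp_len))  # ~7873 tokens

# round-trip check: recomputing BP from the implied length
assert abs(math.exp(1 - ref_len / hyp_len) - bp) < 1e-9
```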
### System Info:
- hf_name: tur-lit
- source_languages: tur
- target_languages: lit
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-lit/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['tr', 'lt']
- src_constituents: {'tur'}
- tgt_constituents: {'lit'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-lit/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-lit/opus-2020-06-17.test.txt
- src_alpha3: tur
- tgt_alpha3: lit
- short_pair: tr-lt
- chrF2_score: 0.631
- bleu: 35.6
- brevity_penalty: 0.9490000000000001
- ref_len: 8285.0
- src_name: Turkish
- tgt_name: Lithuanian
- train_date: 2020-06-17
- src_alpha2: tr
- tgt_alpha2: lt
- prefer_old: False
- long_pair: tur-lit
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | 10f7daf7089d6139f8502cf393ea2c64 |
NAOKITY/bert-finetuned-squad | NAOKITY | distilbert | 8 | 5 | transformers | 0 | question-answering | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,434 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# NAOKITY/bert-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.1438
- Validation Loss: 0.0
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1149, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
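The AdamWeightDecay config above wraps a PolynomialDecay schedule with power=1.0, which is simply a linear ramp from 2e-05 down to 0 over 1149 steps. A sketch of the decay rule, mirroring the Keras PolynomialDecay formula for cycle=False:

```python
def polynomial_decay(step, initial_lr=2e-5, end_lr=0.0, decay_steps=1149, power=1.0):
    """Keras-style PolynomialDecay (cycle=False): linear when power == 1."""
    step = min(step, decay_steps)  # lr stays at end_lr after decay_steps
    frac = 1 - step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr

print(polynomial_decay(0))     # 2e-05 at the start
print(polynomial_decay(1149))  # 0.0 at the end
```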
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.1483 | 0.0 | 0 |
| 2.1484 | 0.0 | 1 |
| 2.1438 | 0.0 | 2 |
### Framework versions
- Transformers 4.20.0
- TensorFlow 2.9.1
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1943b1ee0dde7f3465cadcabc0eeb900 |
chiranthans23/xlm-roberta-base-finetuned-panx-de | chiranthans23 | xlm-roberta | 13 | 13 | transformers | 0 | token-classification | true | false | false | mit | null | ['xtreme'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,319 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1372
- F1: 0.8621
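The F1 reported for this token-classification task is the harmonic mean of entity-level precision and recall (typically computed with seqeval in runs like this). As a reminder of the metric itself, a generic sketch, not the evaluation code used here:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(f1_score(0.75, 1.0))  # 6/7, roughly 0.857
```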
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2575 | 1.0 | 525 | 0.1621 | 0.8292 |
| 0.1287 | 2.0 | 1050 | 0.1378 | 0.8526 |
| 0.0831 | 3.0 | 1575 | 0.1372 | 0.8621 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| e9808619b776112c606617193a5d7534 |
jojoUla/bert-large-cased-finetuned-lowR100-2-cased-DA-20 | jojoUla | bert | 85 | 2 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 3,204 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-finetuned-lowR100-2-cased-DA-20
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7801
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 30
- eval_batch_size: 30
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.4515 | 1.0 | 1 | 8.1791 |
| 6.4671 | 2.0 | 2 | 6.0155 |
| 6.533 | 3.0 | 3 | 5.9784 |
| 5.8654 | 4.0 | 4 | 5.2092 |
| 5.5458 | 5.0 | 5 | 6.1062 |
| 5.1806 | 6.0 | 6 | 5.0913 |
| 4.8797 | 7.0 | 7 | 4.3025 |
| 4.6975 | 8.0 | 8 | 4.8598 |
| 4.2859 | 9.0 | 9 | 4.2301 |
| 4.3584 | 10.0 | 10 | 4.0683 |
| 4.0203 | 11.0 | 11 | 2.7986 |
| 3.977 | 12.0 | 12 | 4.1575 |
| 3.4077 | 13.0 | 13 | 3.6507 |
| 3.313 | 14.0 | 14 | 2.8674 |
| 3.0962 | 15.0 | 15 | 2.5103 |
| 2.8883 | 16.0 | 16 | 3.1318 |
| 2.9623 | 17.0 | 17 | 2.1316 |
| 2.5544 | 18.0 | 18 | 2.7741 |
| 2.9957 | 19.0 | 19 | 2.9045 |
| 2.749 | 20.0 | 20 | 2.8824 |
| 2.291 | 21.0 | 21 | 2.7450 |
| 2.3373 | 22.0 | 22 | 2.3774 |
| 2.6506 | 23.0 | 23 | 2.5515 |
| 2.6736 | 24.0 | 24 | 2.2106 |
| 2.3845 | 25.0 | 25 | 2.3166 |
| 2.3762 | 26.0 | 26 | 2.3221 |
| 2.4184 | 27.0 | 27 | 2.8996 |
| 2.6826 | 28.0 | 28 | 2.1793 |
| 2.4678 | 29.0 | 29 | 2.4268 |
| 2.2998 | 30.0 | 30 | 1.8153 |
| 2.7085 | 31.0 | 31 | 2.4401 |
| 2.1231 | 32.0 | 32 | 3.3329 |
| 2.1349 | 33.0 | 33 | 1.9675 |
| 2.4647 | 34.0 | 34 | 3.0172 |
| 2.3552 | 35.0 | 35 | 1.8550 |
| 2.2843 | 36.0 | 36 | 2.7737 |
| 2.2164 | 37.0 | 37 | 3.4890 |
| 2.2118 | 38.0 | 38 | 3.4251 |
| 2.3133 | 39.0 | 39 | 2.6806 |
| 1.9773 | 40.0 | 40 | 2.7801 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| ec8d58085320d514d4e25258f063db6a |
dbmdz/electra-base-turkish-cased-discriminator | dbmdz | electra | 7 | 670 | transformers | 0 | null | true | true | false | mit | ['tr'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 3,364 | false |
# 🤗 + 📚 dbmdz Turkish ELECTRA model
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources a cased ELECTRA base model for Turkish 🎉
# Turkish ELECTRA model
We release a base ELEC**TR**A model for Turkish that was trained on the same data as *BERTurk*.
> ELECTRA is a new method for self-supervised language representation learning. It can be used to
> pre-train transformer networks using relatively little compute. ELECTRA models are trained to
> distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to
> the discriminator of a GAN.
More details about ELECTRA can be found in the [ICLR paper](https://openreview.net/forum?id=r1xMH1BtvB)
or in the [official ELECTRA repository](https://github.com/google-research/electra) on GitHub.
## Stats
The current version of the model is trained on a filtered and sentence-segmented version
of the Turkish [OSCAR corpus](https://traces1.inria.fr/oscar/),
a recent Wikipedia dump, various [OPUS corpora](http://opus.nlpl.eu/) and a
special corpus provided by [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/).
The final training corpus has a size of 35GB and 4,404,976,662 tokens.
Thanks to Google's TensorFlow Research Cloud (TFRC) we could train a cased model
on a TPU v3-8 for 1M steps.
## Model weights
[Transformers](https://github.com/huggingface/transformers)
compatible weights for both PyTorch and TensorFlow are available.
| Model | Downloads
| ------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/electra-base-turkish-cased-discriminator` | [`config.json`](https://cdn.huggingface.co/dbmdz/electra-base-turkish-cased-discriminator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-turkish-cased-discriminator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-turkish-cased-discriminator/vocab.txt)
## Usage
With Transformers >= 2.8 our ELECTRA base cased model can be loaded like:
```python
from transformers import AutoTokenizer, ElectraForPreTraining

tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-base-turkish-cased-discriminator")
# the discriminator checkpoint has no language-modeling head, so load it
# with ElectraForPreTraining rather than AutoModelWithLMHead
model = ElectraForPreTraining.from_pretrained("dbmdz/electra-base-turkish-cased-discriminator")
```
## Results
For results on PoS tagging or NER tasks, please refer to
[this repository](https://github.com/stefan-it/turkish-bert/electra).
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our ELECTRA models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
| 9b7177251ef06d7ac2c2c52b6d88e905 |
amanneo/distilgpt2-finetuned-custom-mail | amanneo | gpt2 | 12 | 4 | transformers | 0 | text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,111 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-custom-mail
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1905
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 7 | 3.5915 |
| No log | 2.0 | 14 | 3.4986 |
| No log | 3.0 | 21 | 3.4418 |
| No log | 4.0 | 28 | 3.3970 |
| No log | 5.0 | 35 | 3.3569 |
| No log | 6.0 | 42 | 3.3207 |
| No log | 7.0 | 49 | 3.2972 |
| No log | 8.0 | 56 | 3.2806 |
| No log | 9.0 | 63 | 3.2620 |
| No log | 10.0 | 70 | 3.2451 |
| No log | 11.0 | 77 | 3.2302 |
| No log | 12.0 | 84 | 3.2177 |
| No log | 13.0 | 91 | 3.2083 |
| No log | 14.0 | 98 | 3.2024 |
| No log | 15.0 | 105 | 3.1984 |
| No log | 16.0 | 112 | 3.1962 |
| No log | 17.0 | 119 | 3.1938 |
| No log | 18.0 | 126 | 3.1920 |
| No log | 19.0 | 133 | 3.1913 |
| No log | 20.0 | 140 | 3.1905 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
| eb5a570c63fbec6b28adebb2b4d78d0c |
Joblift/distilbert-base-german-cased-finetuned-jl | Joblift | distilbert | 21 | 260 | transformers | 2 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,793 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-german-cased-finetuned-jl
This model is a fine-tuned version of [distilbert-base-german-cased](https://huggingface.co/distilbert-base-german-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9427
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log | 0.1 | 1000 | 1.5731 |
| No log | 0.19 | 2000 | 1.4019 |
| No log | 0.29 | 3000 | 1.3042 |
| No log | 0.39 | 4000 | 1.2398 |
| No log | 0.48 | 5000 | 1.1949 |
| No log | 0.58 | 6000 | 1.1584 |
| No log | 0.68 | 7000 | 1.1296 |
| No log | 0.77 | 8000 | 1.1055 |
| No log | 0.87 | 9000 | 1.0842 |
| No log | 0.97 | 10000 | 1.0680 |
| No log | 1.06 | 11000 | 1.0521 |
| No log | 1.16 | 12000 | 1.0388 |
| No log | 1.26 | 13000 | 1.0248 |
| No log | 1.35 | 14000 | 1.0154 |
| No log | 1.45 | 15000 | 1.0051 |
| No log | 1.55 | 16000 | 0.9981 |
| No log | 1.64 | 17000 | 0.9891 |
| No log | 1.74 | 18000 | 0.9827 |
| No log | 1.84 | 19000 | 0.9765 |
| No log | 1.93 | 20000 | 0.9714 |
| 1.2477 | 2.03 | 21000 | 0.9672 |
| 1.2477 | 2.13 | 22000 | 0.9613 |
| 1.2477 | 2.22 | 23000 | 0.9582 |
| 1.2477 | 2.32 | 24000 | 0.9548 |
| 1.2477 | 2.42 | 25000 | 0.9508 |
| 1.2477 | 2.51 | 26000 | 0.9491 |
| 1.2477 | 2.61 | 27000 | 0.9466 |
| 1.2477 | 2.71 | 28000 | 0.9458 |
| 1.2477 | 2.8 | 29000 | 0.9446 |
| 1.2477 | 2.9 | 30000 | 0.9431 |
| 1.2477 | 3.0 | 31000 | 0.9427 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.9.0+cu111
- Datasets 2.5.1
- Tokenizers 0.12.1
| f97c1df5f387a77180a9c139777f7d97 |
fourthbrain-demo/bert_model_reddit_tsla_tracked_actions | fourthbrain-demo | distilbert | 10 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 952 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_model_reddit_tsla_tracked_actions
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
| 8e69d622c6fbb6670b5d7822f7a7eb2d |
bofenghuang/deprecated-whisper-large-v2-cv11-french-punct-plus | bofenghuang | whisper | 26 | 13 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['fr'] | ['mozilla-foundation/common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 1 | 0 | 1 | ['automatic-speech-recognition', 'hf-asr-leaderboard', 'whisper-event'] | true | true | true | 3,426 | false |
<style>
img {
display: inline;
}
</style>



# Fine-tuned whisper-large-v2 model for ASR in French
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2), trained on the mozilla-foundation/common_voice_11_0 fr dataset. When using the model, make sure that your speech input is also sampled at 16 kHz. **This model also predicts casing and punctuation.**
## Usage
Inference with 🤗 Pipeline
```python
import torch
from datasets import load_dataset
from transformers import pipeline
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Load pipeline
pipe = pipeline("automatic-speech-recognition", model="bofenghuang/whisper-large-v2-cv11-french-punct", device=device)
# NB: set forced_decoder_ids for generation utils
pipe.model.config.forced_decoder_ids = pipe.tokenizer.get_decoder_prompt_ids(language="fr", task="transcribe")
# Load data
ds_mcv_test = load_dataset("mozilla-foundation/common_voice_11_0", "fr", split="test", streaming=True)
test_segment = next(iter(ds_mcv_test))
waveform = test_segment["audio"]
# NB: decoding option
# limit the maximum number of generated tokens to 225
pipe.model.config.max_length = 225 + 1
# sampling
# pipe.model.config.do_sample = True
# beam search
# pipe.model.config.num_beams = 5
# return
# pipe.model.config.return_dict_in_generate = True
# pipe.model.config.output_scores = True
# pipe.model.config.num_return_sequences = 5
# Run
generated_sentences = pipe(waveform)["text"]
```
Inference with 🤗 low-level APIs
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Load model
model = AutoModelForSpeechSeq2Seq.from_pretrained("bofenghuang/whisper-large-v2-cv11-french-punct").to(device)
processor = AutoProcessor.from_pretrained("bofenghuang/whisper-large-v2-cv11-french-punct", language="french", task="transcribe")
# NB: set forced_decoder_ids for generation utils
model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language="fr", task="transcribe")
# 16_000
model_sample_rate = processor.feature_extractor.sampling_rate
# Load data
ds_mcv_test = load_dataset("mozilla-foundation/common_voice_11_0", "fr", split="test", streaming=True)
test_segment = next(iter(ds_mcv_test))
waveform = torch.from_numpy(test_segment["audio"]["array"])
sample_rate = test_segment["audio"]["sampling_rate"]
# Resample
if sample_rate != model_sample_rate:
resampler = torchaudio.transforms.Resample(sample_rate, model_sample_rate)
waveform = resampler(waveform)
# Get feat
inputs = processor(waveform, sampling_rate=model_sample_rate, return_tensors="pt")
input_features = inputs.input_features
input_features = input_features.to(device)
# Generate
generated_ids = model.generate(inputs=input_features, max_new_tokens=225) # greedy
# generated_ids = model.generate(inputs=input_features, max_new_tokens=225, num_beams=5) # beam search
# Detokenize
generated_sentences = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
# Normalise predicted sentences if necessary
``` | 07799ba54e1215cba7f69f345aae8218 |
tuni/xlm-roberta-large-xnli-finetuned-mnli-SJP-v3 | tuni | xlm-roberta | 11 | 1 | transformers | 0 | text-classification | true | false | false | mit | null | ['swiss_judgment_prediction'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,179 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-xnli-finetuned-mnli-SJP-v3
This model is a fine-tuned version of [joeddav/xlm-roberta-large-xnli](https://huggingface.co/joeddav/xlm-roberta-large-xnli) on the swiss_judgment_prediction dataset.
It achieves the following results on the evaluation set:
- eval_loss: 5.4348
- eval_accuracy: 0.3352
- eval_runtime: 588.81
- eval_samples_per_second: 8.492
- eval_steps_per_second: 4.246
- epoch: 14.0
- step: 70
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 33a20569fd346502e705878b92654067 |
Lvxue/distilled-mt5-small-b0.02 | Lvxue | mt5 | 17 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | ['en', 'ro'] | ['wmt16'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,034 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-b0.02
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8126
- Bleu: 7.632
- Gen Len: 45.006
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
| 86c7a33db5596ea44835a36a60cad51f |
jcmc/wav2vec-1b-cv8-ir | jcmc | wav2vec2 | 38 | 4 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['ga-IE'] | ['mozilla-foundation/common_voice_8_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'ga-IE', 'robust-speech-event', 'hf-asr-leaderboard'] | true | true | true | 1,371 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - GA-IE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8445
- Wer: 0.5585
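The WER above is word-level edit distance (substitutions + insertions + deletions) divided by the number of reference words; such cards typically compute it with the `wer` metric from 🤗 Datasets/Evaluate. A minimal illustrative sketch of the metric, not the card's evaluation code:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via Levenshtein distance over word lists."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[-1][-1] / len(ref)

print(wer("an rud a dúirt mé", "an rud a dúirt sé"))  # 1 substitution in 5 words -> 0.2
```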
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 60.0
- mixed_precision_training: Native AMP
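Two of the settings above interact: gradient_accumulation_steps multiplies the per-device batch (16 × 4 = 64 effective), and lr_scheduler_type: linear with 500 warmup steps ramps the learning rate up before decaying it to zero. A sketch of that schedule, mirroring the formula of transformers' `get_linear_schedule_with_warmup`; the total step count here is illustrative, not taken from the card:

```python
def linear_warmup_lr(step, peak_lr=2e-5, warmup_steps=500, total_steps=10_000):
    """Linear warmup to peak_lr, then linear decay to 0 (transformers-style)."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

effective_batch = 16 * 4  # train_batch_size x gradient_accumulation_steps = 64

print(linear_warmup_lr(250))     # halfway through warmup -> 1e-05
print(linear_warmup_lr(10_000))  # end of training -> 0.0
```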
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.7135 | 31.24 | 500 | 0.9609 | 0.6926 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
| 9bdeed633a009fa2b8efacc116084570 |
chanind/frame-semantic-transformer-small | chanind | t5 | 7 | 5 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,145 | false |
Fine-tuned T5 small model for use as a frame semantic parser in the [Frame Semantic Transformer](https://github.com/chanind/frame-semantic-transformer) project. This model is trained on data from [FrameNet](https://framenet2.icsi.berkeley.edu/).
### Usage
This is meant to be used as part of [Frame Semantic Transformer](https://github.com/chanind/frame-semantic-transformer). See that project for usage instructions.
### Tasks
This model is trained to perform 3 tasks related to semantic frame parsing:
1. Identify frame trigger locations in the text
2. Classify the frame given a trigger location
3. Extract frame elements in the sentence
### Performance
This model is trained and evaluated using the same train/dev/test splits from FrameNet 1.7 annotated corpora as used by [Open Sesame](https://github.com/swabhs/open-sesame).
| Task | F1 Score (Dev) | F1 Score (Test) |
| ---------------------- | -------------- | --------------- |
| Trigger identification | 0.74 | 0.70 |
| Frame Classification | 0.83 | 0.81 |
| Argument Extraction | 0.68 | 0.70 | | ce9bcf116eb904c911cb5bd3c3fe28e9 |
gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_mrpc_256 | gokuls | mobilebert | 17 | 2 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 5,415 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_mrpc_256
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1267
- Accuracy: 0.9926
- F1: 0.9947
- Combined Score: 0.9936
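The combined score reported here appears to be the simple mean of accuracy and F1; a quick check under that assumption (plain arithmetic):

```python
accuracy = 0.9926
f1 = 0.9947
combined_score = (accuracy + f1) / 2
print(combined_score)  # ~0.9936, the value reported above
```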
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.3017 | 1.0 | 1959 | 0.2241 | 0.9608 | 0.9713 | 0.9661 |
| 0.233 | 2.0 | 3918 | 0.2357 | 0.9828 | 0.9876 | 0.9852 |
| 0.2241 | 3.0 | 5877 | 0.1908 | 0.9706 | 0.9786 | 0.9746 |
| 0.2189 | 4.0 | 7836 | 0.1863 | 0.9755 | 0.9824 | 0.9789 |
| 0.2149 | 5.0 | 9795 | 0.1868 | 0.9804 | 0.9858 | 0.9831 |
| 0.211 | 6.0 | 11754 | 0.1735 | 0.9804 | 0.9859 | 0.9831 |
| 0.2073 | 7.0 | 13713 | 0.1875 | 0.9828 | 0.9876 | 0.9852 |
| 0.204 | 8.0 | 15672 | 0.1690 | 0.9853 | 0.9894 | 0.9873 |
| 0.2014 | 9.0 | 17631 | 0.1597 | 0.9853 | 0.9893 | 0.9873 |
| 0.1992 | 10.0 | 19590 | 0.1604 | 0.9877 | 0.9911 | 0.9894 |
| 0.1975 | 11.0 | 21549 | 0.1563 | 0.9853 | 0.9894 | 0.9873 |
| 0.1959 | 12.0 | 23508 | 0.1518 | 0.9853 | 0.9894 | 0.9873 |
| 0.1948 | 13.0 | 25467 | 0.1429 | 0.9902 | 0.9929 | 0.9915 |
| 0.1937 | 14.0 | 27426 | 0.1484 | 0.9853 | 0.9894 | 0.9873 |
| 0.1928 | 15.0 | 29385 | 0.1527 | 0.9804 | 0.9856 | 0.9830 |
| 0.1919 | 16.0 | 31344 | 0.1433 | 0.9926 | 0.9947 | 0.9936 |
| 0.1913 | 17.0 | 33303 | 0.1445 | 0.9902 | 0.9929 | 0.9915 |
| 0.1905 | 18.0 | 35262 | 0.1407 | 0.9926 | 0.9947 | 0.9936 |
| 0.1899 | 19.0 | 37221 | 0.1402 | 0.9926 | 0.9947 | 0.9936 |
| 0.1892 | 20.0 | 39180 | 0.1387 | 0.9926 | 0.9947 | 0.9936 |
| 0.1887 | 21.0 | 41139 | 0.1384 | 0.9926 | 0.9947 | 0.9936 |
| 0.1882 | 22.0 | 43098 | 0.1430 | 0.9951 | 0.9964 | 0.9958 |
| 0.1877 | 23.0 | 45057 | 0.1384 | 0.9951 | 0.9964 | 0.9958 |
| 0.1871 | 24.0 | 47016 | 0.1398 | 0.9951 | 0.9964 | 0.9958 |
| 0.1867 | 25.0 | 48975 | 0.1336 | 0.9926 | 0.9947 | 0.9936 |
| 0.1863 | 26.0 | 50934 | 0.1368 | 0.9951 | 0.9964 | 0.9958 |
| 0.1859 | 27.0 | 52893 | 0.1337 | 0.9951 | 0.9964 | 0.9958 |
| 0.1855 | 28.0 | 54852 | 0.1352 | 0.9926 | 0.9947 | 0.9936 |
| 0.1851 | 29.0 | 56811 | 0.1314 | 0.9951 | 0.9964 | 0.9958 |
| 0.1847 | 30.0 | 58770 | 0.1333 | 0.9951 | 0.9964 | 0.9958 |
| 0.1844 | 31.0 | 60729 | 0.1368 | 0.9951 | 0.9964 | 0.9958 |
| 0.184 | 32.0 | 62688 | 0.1310 | 0.9951 | 0.9964 | 0.9958 |
| 0.1837 | 33.0 | 64647 | 0.1321 | 0.9951 | 0.9964 | 0.9958 |
| 0.1834 | 34.0 | 66606 | 0.1302 | 0.9926 | 0.9947 | 0.9936 |
| 0.183 | 35.0 | 68565 | 0.1320 | 0.9951 | 0.9964 | 0.9958 |
| 0.1827 | 36.0 | 70524 | 0.1303 | 0.9951 | 0.9964 | 0.9958 |
| 0.1825 | 37.0 | 72483 | 0.1273 | 0.9951 | 0.9964 | 0.9958 |
| 0.1822 | 38.0 | 74442 | 0.1293 | 0.9951 | 0.9964 | 0.9958 |
| 0.1819 | 39.0 | 76401 | 0.1296 | 0.9951 | 0.9964 | 0.9958 |
| 0.1817 | 40.0 | 78360 | 0.1305 | 0.9926 | 0.9947 | 0.9936 |
| 0.1814 | 41.0 | 80319 | 0.1267 | 0.9926 | 0.9947 | 0.9936 |
| 0.1812 | 42.0 | 82278 | 0.1267 | 0.9951 | 0.9964 | 0.9958 |
| 0.1809 | 43.0 | 84237 | 0.1278 | 0.9902 | 0.9929 | 0.9915 |
| 0.1807 | 44.0 | 86196 | 0.1293 | 0.9951 | 0.9964 | 0.9958 |
| 0.1805 | 45.0 | 88155 | 0.1269 | 0.9951 | 0.9964 | 0.9958 |
| 0.1803 | 46.0 | 90114 | 0.1284 | 0.9951 | 0.9964 | 0.9958 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| 238e91bf616e310b0760f169c65b6dae |
google/t5-efficient-base-nl2 | google | t5 | 12 | 39 | transformers | 0 | text2text-generation | true | true | true | apache-2.0 | ['en'] | ['c4'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['deep-narrow'] | false | true | true | 6,247 | false |
# T5-Efficient-BASE-NL2 (Deep-Narrow version)
T5-Efficient-BASE-NL2 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-base-nl2** - is of model type **Base** with the following variations:
- **nl** is **2**
It has **57.72** million parameters and thus requires *ca.* **230.88 MB** of memory in full precision (*fp32*)
or **115.44 MB** of memory in half precision (*fp16* or *bf16*).
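The memory figures follow directly from the parameter count, assuming 4 bytes per parameter in full precision and 2 bytes in half precision (with 1 MB = 10^6 bytes):

```python
params = 57.72e6     # parameter count stated above
bytes_fp32 = 4       # bytes per parameter in full precision (fp32)
bytes_fp16 = 2       # bytes per parameter in half precision (fp16/bf16)
fp32_mb = params * bytes_fp32 / 1e6
fp16_mb = params * bytes_fp16 / 1e6
print(fp32_mb, fp16_mb)  # ~230.88 and ~115.44 MB, matching the figures above
```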
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. | 572ef2a417ffd3237190229bc7224be2 |
clp/leanne-or-lauren-v2 | clp | vit | 14 | 1 | transformers | 0 | image-classification | true | false | false | apache-2.0 | null | ['imagefolder'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['image-classification', 'generated_from_trainer'] | true | true | true | 991 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# leanne-or-lauren-v2
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| 658d48ad4b30a46467a18fd1415221a0 |
google/t5-11b-ssm-nqo | google | t5 | 9 | 7 | transformers | 0 | text2text-generation | true | true | false | apache-2.0 | ['en'] | ['c4', 'wikipedia', 'natural_questions'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 2,609 | false |
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**.
The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4), subsequently additionally pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia), and finally fine-tuned on [Natural Questions (NQ)](https://huggingface.co/datasets/natural_questions).
**Note**: The model was fine-tuned on 90% of the train splits of [Natural Questions (NQ)](https://huggingface.co/datasets/natural_questions) for 20k steps and validated on the held-out 10% of the train split.
Other community Checkpoints: [here](https://huggingface.co/models?search=ssm)
Paper: [How Much Knowledge Can You Pack
Into the Parameters of a Language Model?](https://arxiv.org/abs/1910.10683.pdf)
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
## Results on Natural Questions - Test Set
|Id | link | Exact Match |
|---|---|---|
|T5-large|https://huggingface.co/google/t5-large-ssm-nqo|29.0|
|T5-xxl|https://huggingface.co/google/t5-xxl-ssm-nqo|35.2|
|T5-3b|https://huggingface.co/google/t5-3b-ssm-nqo|31.7|
|**T5-11b**|**https://huggingface.co/google/t5-11b-ssm-nqo**|**34.8**|
## Usage
The model can be used as follows for **closed book question answering**:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-11b-ssm-nqo")
t5_tok = AutoTokenizer.from_pretrained("google/t5-11b-ssm-nqo")
input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]
print(t5_tok.decode(gen_output, skip_special_tokens=True))
```
## Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.
 | 8ab55f5a29bd5908ed97d747bd54cb95 |
google/t5-efficient-small-el4 | google | t5 | 12 | 10 | transformers | 0 | text2text-generation | true | true | true | apache-2.0 | ['en'] | ['c4'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['deep-narrow'] | false | true | true | 6,250 | false |
# T5-Efficient-SMALL-EL4 (Deep-Narrow version)
T5-Efficient-SMALL-EL4 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-small-el4** - is of model type **Small** with the following variations:
- **el** is **4**
It has **54.23** million parameters and thus requires *ca.* **216.9 MB** of memory in full precision (*fp32*)
or **108.45 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. | 3659f52986c50db65931afbf443b4aaa |
Duskfallcrew/duskfallai | Duskfallcrew | null | 233 | 91 | diffusers | 1 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-to-image'] | false | true | true | 3,766 | false | ### DuskfallAi Dreambooth model trained by Duskfallcrew with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
If you want to donate towards costs and don't want to subscribe:
https://ko-fi.com/DUSKFALLcrew
If you want to monthly support the EARTH & DUSK media projects and not just AI:
https://www.patreon.com/earthndusk
WARNING: This is trained largely on a small data set of our own art, with a focus on the fact that our art, and any stable/midjourney outputs we included in this, are related to our Dissociative Identity Disorder. May actually retrain a larger data set later on.
Trained using the MultiModel Dreambooth App, sitting on a friday afternoon doing absolute squat.
Please DO NOT re-upload the sample pictures that it was trained on, except in the instance you are inspired to use img2img.
In which we dutifully ask you to spam the community section with your outputs.
DO NOT RESELL THIS MODEL, AS IT DOES HAVE A TON OF MY ART IN IT.
You may:
- Merge, use at will
- SELL your generations - it's a STYLE after all!
- Do credit when reuploading or merging if possible.
- DO USE in any merged, OR home based model - cause that's what it's for!
More information & output samples to all our models: [Civit AI -Duskfallcrew](https://civitai.com/user/duskfallcrew)
lisdusk1 (use that on your prompt)

lisdusk2 (use that on your prompt)
 | 0c843d91ed7d556ec96087e1dabe2865 |
no3/pistachio-wd-1.3-beta1 | no3 | null | 25 | 6 | diffusers | 0 | null | false | false | false | mit | null | null | null | 3 | 0 | 3 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,951 | false | ### Pistachio from [vibrant venture](https://store.steampowered.com/app/1264520), on **waifu diffusion** via Dreambooth
#### model by no3
This is the **waifu diffusion** model fine-tuned on Pistachio from [vibrant venture](https://store.steampowered.com/app/1264520), taught to **waifu diffusion** with Dreambooth.
It can be used by modifying the `instance_prompt`: **sks ps**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
### Note
If the output isn't that good using the instance prompt, you can use a generic prompt like `a woman` or `a girl`. If that doesn't give you a good result, you can add `, green hair` before `a woman` or `a girl`.
If you have issues or questions, feel free to visit the Community Tab and start a discussion about it.
Here are the images used for training this concept:






[and this](https://huggingface.co/no3/pistachio-wd-1.3-beta1/resolve/main/concept_images/7.jpg) | 98d5c274742f2f60a453e11fc884ed21 |
ashhyun/distilbert-base-uncased-finetuned-squad | ashhyun | distilbert | 12 | 3 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,120 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.1563
- eval_runtime: 141.535
- eval_samples_per_second: 76.193
- eval_steps_per_second: 4.762
- epoch: 1.0
- step: 5533
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 3ff8735845baefbe242a7ae3b9b3ee9f |
pglauner/distilbert-base-uncased-finetuned-emotion | pglauner | distilbert | 16 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,345 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2251
- Accuracy: 0.9265
- F1: 0.9265
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8432 | 1.0 | 250 | 0.3353 | 0.8975 | 0.8939 |
| 0.2582 | 2.0 | 500 | 0.2251 | 0.9265 | 0.9265 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
| a9f93d9a26be426f6ba89b1563172618 |
danielwang-hads/wav2vec2-base-timit-demo-google-colab | danielwang-hads | wav2vec2 | 12 | 6 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,998 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5079
- Wer: 0.3365
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.4933 | 1.0 | 500 | 1.7711 | 0.9978 |
| 0.8658 | 2.01 | 1000 | 0.6262 | 0.5295 |
| 0.4405 | 3.01 | 1500 | 0.4841 | 0.4845 |
| 0.3062 | 4.02 | 2000 | 0.4897 | 0.4215 |
| 0.233 | 5.02 | 2500 | 0.4326 | 0.4101 |
| 0.1896 | 6.02 | 3000 | 0.4924 | 0.4078 |
| 0.1589 | 7.03 | 3500 | 0.4430 | 0.3896 |
| 0.1391 | 8.03 | 4000 | 0.4334 | 0.3889 |
| 0.1216 | 9.04 | 4500 | 0.4691 | 0.3828 |
| 0.1063 | 10.04 | 5000 | 0.4726 | 0.3705 |
| 0.0992 | 11.04 | 5500 | 0.4333 | 0.3690 |
| 0.0872 | 12.05 | 6000 | 0.4986 | 0.3771 |
| 0.0829 | 13.05 | 6500 | 0.4903 | 0.3685 |
| 0.0713 | 14.06 | 7000 | 0.5293 | 0.3655 |
| 0.068 | 15.06 | 7500 | 0.5039 | 0.3612 |
| 0.0621 | 16.06 | 8000 | 0.5314 | 0.3665 |
| 0.0571 | 17.07 | 8500 | 0.5038 | 0.3572 |
| 0.0585 | 18.07 | 9000 | 0.4718 | 0.3550 |
| 0.0487 | 19.08 | 9500 | 0.5482 | 0.3626 |
| 0.0459 | 20.08 | 10000 | 0.5239 | 0.3545 |
| 0.0419 | 21.08 | 10500 | 0.5096 | 0.3473 |
| 0.0362 | 22.09 | 11000 | 0.5222 | 0.3500 |
| 0.0331 | 23.09 | 11500 | 0.5062 | 0.3489 |
| 0.0352 | 24.1 | 12000 | 0.4913 | 0.3459 |
| 0.0315 | 25.1 | 12500 | 0.4701 | 0.3412 |
| 0.028 | 26.1 | 13000 | 0.5178 | 0.3402 |
| 0.0255 | 27.11 | 13500 | 0.5168 | 0.3405 |
| 0.0228 | 28.11 | 14000 | 0.5154 | 0.3368 |
| 0.0232 | 29.12 | 14500 | 0.5079 | 0.3365 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
| dda1d98b864920b7951f309911d2836e |
sentence-transformers/paraphrase-xlm-r-multilingual-v1 | sentence-transformers | xlm-roberta | 13 | 89,261 | sentence-transformers | 38 | sentence-similarity | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers'] | false | true | true | 3,594 | false |
# sentence-transformers/paraphrase-xlm-r-multilingual-v1
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/paraphrase-xlm-r-multilingual-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-xlm-r-multilingual-v1')
model = AutoModel.from_pretrained('sentence-transformers/paraphrase-xlm-r-multilingual-v1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-xlm-r-multilingual-v1)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` | a7cf6007492e14d7fdb6df11d78a7cd4 |
kormilitzin/en_core_med7_trf | kormilitzin | null | 22 | 303 | spacy | 7 | token-classification | false | false | false | mit | ['en'] | null | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | ['spacy', 'token-classification'] | false | true | true | 1,212 | false | | Feature | Description |
| --- | --- |
| **Name** | `en_core_med7_trf` |
| **Version** | `3.4.2.1` |
| **spaCy** | `>=3.4.2,<3.5.0` |
| **Default Pipeline** | `transformer`, `ner` |
| **Components** | `transformer`, `ner` |
| **Vectors** | 514157 keys, 514157 unique vectors (300 dimensions) |
| **Sources** | n/a |
| **License** | `MIT` |
| **Author** | [Andrey Kormilitzin](https://www.kormilitzin.com/) |
### Label Scheme
<details>
<summary>View label scheme (7 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `DOSAGE`, `DRUG`, `DURATION`, `FORM`, `FREQUENCY`, `ROUTE`, `STRENGTH` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 90.33 |
| `ENTS_P` | 88.22 |
| `ENTS_R` | 92.54 |
| `TRANSFORMER_LOSS` | 2502627.06 |
| `NER_LOSS` | 114576.77 |
### BibTeX entry and citation info
```bibtex
@article{kormilitzin2021med7,
title={Med7: A transferable clinical natural language processing model for electronic health records},
author={Kormilitzin, Andrey and Vaci, Nemanja and Liu, Qiang and Nevado-Holgado, Alejo},
journal={Artificial Intelligence in Medicine},
volume={118},
pages={102086},
year={2021},
publisher={Elsevier}
}
``` | bb54c81417fdc7904eaa43d717ee3076 |
espnet/kan-bayashi_jsut_vits_prosody | espnet | null | 27 | 0 | espnet | 0 | text-to-speech | false | false | false | cc-by-4.0 | ['ja'] | ['jsut'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['espnet', 'audio', 'text-to-speech'] | false | true | true | 1,800 | false | ## ESPnet2 TTS pretrained model
### `kan-bayashi/jsut_vits_prosody`
♻️ Imported from https://zenodo.org/record/5521354/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | d53d55b56ff8bba2b60e82b6af4bd541 |
semy/finetuning-tweeteval-hate-speech | semy | distilbert | 19 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,036 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-tweeteval-hate-speech
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8397
- Accuracy: 0.0
- F1: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.2
- Datasets 2.3.2
- Tokenizers 0.12.1
| 296fbb5619a3ea692e15908d685d9738 |
TransQuest/monotransquest-hter-en_cs-pharmaceutical | TransQuest | xlm-roberta | 8 | 8 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en-cs'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['Quality Estimation', 'monotransquest', 'hter'] | false | true | true | 5,320 | false |
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE systems have numerous potential uses. They can be employed to select the best translation when several translation engines are available or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have opensourced our research in translation quality estimation which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post editing efforts and direct assessment.
- Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps.
- Outperforms current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented with.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest)
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel
model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-hter-en_cs-pharmaceutical", num_labels=1, use_cuda=torch.cuda.is_available())
predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]])
print(predictions)
```
## Documentation
For more details follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Checkout the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bash
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bash
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bash
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
| 033bf264c7c90264a2fbbf9c57d7760c |
ZhiyuanQiu/camembert-base-finetuned-Train_RAW20-dd | ZhiyuanQiu | camembert | 12 | 11 | transformers | 0 | token-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,448 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert-base-finetuned-Train_RAW20-dd
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2380
- Precision: 0.8661
- Recall: 0.8900
- F1: 0.8779
- Accuracy: 0.9209
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.23 | 1.0 | 14269 | 0.2282 | 0.8446 | 0.8714 | 0.8578 | 0.9088 |
| 0.1787 | 2.0 | 28538 | 0.2380 | 0.8661 | 0.8900 | 0.8779 | 0.9209 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| ba52cb1b9dbb26f4f85a20d243933dff |
google/multiberts-seed_2-step_80k | google | bert | 8 | 13 | transformers | 0 | null | true | true | false | apache-2.0 | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['multiberts', 'multiberts-seed_2', 'multiberts-seed_2-step_80k'] | false | true | true | 3,515 | false |
# MultiBERTs, Intermediate Checkpoint - Seed 2, Step 80k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #2, captured at step 80k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_2-step_80k')
model = TFBertModel.from_pretrained("google/multiberts-seed_2-step_80k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_2-step_80k')
model = BertModel.from_pretrained("google/multiberts-seed_2-step_80k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
| 5877c4c81b3f55209bd77b98803f210a |
AndyChiang/cdgp-csg-bert-cloth | AndyChiang | bert | 7 | 34 | transformers | 1 | fill-mask | true | false | false | mit | ['en'] | ['cloth'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['bert', 'cloze', 'distractor', 'generation'] | false | true | true | 3,681 | false |
# cdgp-csg-bert-cloth
## Model description
This model is a Candidate Set Generator in **"CDGP: Automatic Cloze Distractor Generation based on Pre-trained Language Model", Findings of EMNLP 2022**.
Its inputs are a stem and an answer, and its output is a candidate set of distractors. It is fine-tuned on the [**CLOTH**](https://www.cs.cmu.edu/~glai1/data/cloth/) dataset based on the [**bert-base-uncased**](https://huggingface.co/bert-base-uncased) model.
For more details, you can see our **paper** or [**GitHub**](https://github.com/AndyChiangSH/CDGP).
## How to use?
1. Download the model by hugging face transformers.
```python
from transformers import BertTokenizer, BertForMaskedLM, pipeline
tokenizer = BertTokenizer.from_pretrained("AndyChiang/cdgp-csg-bert-cloth")
csg_model = BertForMaskedLM.from_pretrained("AndyChiang/cdgp-csg-bert-cloth")
```
2. Create an unmasker.
```python
unmasker = pipeline("fill-mask", tokenizer=tokenizer, model=csg_model, top_k=10)
```
3. Use the unmasker to generate the candidate set of distractors.
```python
sent = "I feel [MASK] now. [SEP] happy"
cs = unmasker(sent)
print(cs)
```
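As a rough sketch, the input string passed to the unmasker above follows a "stem [SEP] answer" template. The helper below (a hypothetical name; the format is inferred from the example above, not taken from the CDGP codebase) builds such an input from a stem containing a blank:

```python
def build_csg_input(stem, answer, blank="_", mask_token="[MASK]", sep_token="[SEP]"):
    # Replace the blank in the stem with the mask token, then append the
    # answer after the separator, matching the example input shown above.
    return f"{stem.replace(blank, mask_token)} {sep_token} {answer}"

print(build_csg_input("I feel _ now.", "happy"))  # I feel [MASK] now. [SEP] happy
```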
## Dataset
This model is fine-tuned on the [CLOTH](https://www.cs.cmu.edu/~glai1/data/cloth/) dataset, which is a collection of nearly 100,000 cloze questions from middle school and high school English exams. The details of the CLOTH dataset are shown below.
| Number of questions | Train | Valid | Test |
| ------------------- | ----- | ----- | ----- |
| Middle school | 22056 | 3273 | 3198 |
| High school | 54794 | 7794 | 8318 |
| Total | 76850 | 11067 | 11516 |
You can also use the [dataset](https://huggingface.co/datasets/AndyChiang/cloth) we have already cleaned.
## Training
We use a special way to fine-tune the model, called **"Answer-Relating Fine-Tune"**. More detail is in our paper.
### Training hyperparameters
The following hyperparameters were used during training:
- Pre-train language model: [bert-base-uncased](https://huggingface.co/bert-base-uncased)
- Optimizer: adam
- Learning rate: 0.0001
- Max length of input: 64
- Batch size: 64
- Epoch: 1
- Device: NVIDIA® Tesla T4 in Google Colab
## Testing
The evaluation of this model as a Candidate Set Generator in CDGP is as follows:
| P@1 | F1@3 | F1@10 | MRR | NDCG@10 |
| ----- | ----- | ----- | ----- | ------- |
| 18.50 | 13.80 | 15.37 | 29.96 | 37.82 |
## Other models
### Candidate Set Generator
| Models | CLOTH | DGen |
| ----------- | ----------------------------------------------------------------------------------- | -------------------------------------------------------------------------------- |
| **BERT** | [*cdgp-csg-bert-cloth*](https://huggingface.co/AndyChiang/cdgp-csg-bert-cloth) | [cdgp-csg-bert-dgen](https://huggingface.co/AndyChiang/cdgp-csg-bert-dgen) |
| **SciBERT** | [cdgp-csg-scibert-cloth](https://huggingface.co/AndyChiang/cdgp-csg-scibert-cloth) | [cdgp-csg-scibert-dgen](https://huggingface.co/AndyChiang/cdgp-csg-scibert-dgen) |
| **RoBERTa** | [cdgp-csg-roberta-cloth](https://huggingface.co/AndyChiang/cdgp-csg-roberta-cloth) | [cdgp-csg-roberta-dgen](https://huggingface.co/AndyChiang/cdgp-csg-roberta-dgen) |
| **BART** | [cdgp-csg-bart-cloth](https://huggingface.co/AndyChiang/cdgp-csg-bart-cloth) | [cdgp-csg-bart-dgen](https://huggingface.co/AndyChiang/cdgp-csg-bart-dgen) |
### Distractor Selector
**fastText**: [cdgp-ds-fasttext](https://huggingface.co/AndyChiang/cdgp-ds-fasttext)
## Citation
None | 3134bb7bb97eab0336ea468e3c854060 |
Hosioka/AniReal | Hosioka | null | 4 | 2 | diffusers | 56 | text-to-image | false | false | false | creativeml-openrail-m | ['en'] | null | null | 7 | 0 | 2 | 5 | 1 | 0 | 1 | ['text-to-image', 'stable-Diffusion', 'stable-diffusion-diffusers', 'diffusers', 'safetensors'] | false | true | true | 2,616 | false |
<p align="center"> <img src="https://s1.fileditch.ch/iMqyjOnUtxntHolBiNgT.png" width=35% height=35%> </p>
<p align="center"> AniReal - A latent diffusion model fine-tuned to output High Quality Photorealistic anime illustrations!
<img src="https://m1.afileditch.ch/uJoodjDNVWxDqhhQHeRH.png">
</p>
________
# 【 AniReal 】
Welcome to AniReal! A latent diffusion model trained and fine-tuned on **Photorealistic High Quality** anime illustrations using the **Danbooru** tagging dataset as well as **Blip**. I made it so that it understands some natural text descriptions alongside Danbooru tags. It may not work as well that way, but give it a shot!
The model itself is made to output generally anything with an anime art style: if you can think of it, you can prompt it!
________
# Showcase
*Below are some examples generated with this model!*
<img src=https://s1.fileditch.ch/laCwJNcXRqnsDoLkbge.png width=100% height=100%>
# Usage!
- DPM++ 2S a Karras for both anatomy and quality
- Clipskip 1 or 2 works
- Upscaler (Latent)
- Denoise Strength (0.53~0.7)
- Below 0.53, denoise blocking can occur
# This project would be impossible without
- [Haisenberg](https://huggingface.co/haisenberguwu)
- [Thiros](https://huggingface.co/thiros)
- [Closertodeath](https://huggingface.co/closertodeath)
<img src="https://s1.fileditch.ch/FjLFnEcKHAFpEEEAawMP.png">
Many thanks,
Hosioka.
# Note!
For some reason Hugging Face won't let me upload ckpt or diffusers files to this repo. If you're looking for diffusers, please revert to commit #0bb03505150d9b4b39975a9da8589b40190e7078
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
# License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the **Model** to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the **Model** commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
| 9727b39c1a74108ae950bafd2efb7386 |
pbwt/turkishReviews-ds-mini | pbwt | gpt2 | 9 | 2 | transformers | 0 | text-generation | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,575 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# turkishReviews-ds-mini
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 9.1632
- Validation Loss: 9.2525
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -896, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
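The learning-rate schedule encoded in the optimizer config above starts with a 1000-step WarmUp (power 1.0, i.e. a linear ramp) before the polynomial decay takes over. A minimal pure-Python sketch of the warmup portion, for illustration only:

```python
def warmup_lr(step, initial_lr=5e-05, warmup_steps=1000, power=1.0):
    # Linear ramp from 0 up to initial_lr over the first warmup_steps;
    # after that, the PolynomialDecay part of the config applies (not modeled here).
    if step < warmup_steps:
        return initial_lr * (step / warmup_steps) ** power
    return initial_lr

print(warmup_lr(500))  # 2.5e-05
```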
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.2835 | 9.9707 | 0 |
| 9.6408 | 9.6241 | 1 |
| 9.1632 | 9.2525 | 2 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.6.2
- Datasets 2.4.0
- Tokenizers 0.12.1
| 5368b428fd06be61807c02c242b1dd72 |
ChristianOrr/madnet_keras | ChristianOrr | null | 7 | 0 | null | 0 | depth-estimation | false | false | false | apache-2.0 | null | ['flyingthings-3d', 'kitti'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['vision', 'deep-stereo', 'depth-estimation', 'Tensorflow2', 'Keras'] | false | true | true | 5,388 | false |
# MADNet Keras
MADNet is a deep stereo depth estimation model. Its key defining features are:
1. It has a light-weight architecture which means it has low latency.
2. It supports self-supervised training, so it can be conveniently adapted in the field with no training data.
3. It's a stereo depth model, which means it's capable of high accuracy.
The MADNet weights in this repository were trained using a Tensorflow 2 / Keras implementation of the original code. The model was created using the Keras Functional API, which enables the following features:
1. Good optimization.
2. High level Keras methods (.fit, .predict and .evaluate).
3. Little boilerplate code.
4. Decent support from external packages (like Weights and Biases).
5. Callbacks.
The weights provided were trained on either the 2012 / 2015 kitti stereo dataset or the flyingthings-3d dataset. The weights of the pretrained models from the original paper (tf1_conversion_kitti.h5 and tf1_conversion_synthetic.h5) are provided in tensorflow 2 format. The TF1 weights help speed up fine-tuning, but it's recommended to use either synthetic.h5 (trained on flyingthings-3d) or kitti.h5 (trained on the 2012 and 2015 kitti stereo datasets).
**Abstract**:
Deep convolutional neural networks trained end-to-end are the undisputed state-of-the-art methods to regress dense disparity maps directly from stereo pairs. However, such methods suffer from notable accuracy drops when exposed to scenarios significantly different from those seen in the training phase (e.g. real vs synthetic images, indoor vs outdoor, etc). As it is unlikely to be able to gather enough samples to achieve effective training/tuning in any target domain, we propose to perform unsupervised and continuous online adaptation of a deep stereo network in order to preserve its accuracy independently of the sensed environment. However, such a strategy can be extremely demanding regarding computational resources and thus not enabling real-time performance. Therefore, we address this side effect by introducing a new lightweight, yet effective, deep stereo architecture Modularly ADaptive Network (MADNet) and by developing Modular ADaptation (MAD), an algorithm to train independently only sub-portions of our model. By deploying MADNet together with MAD we propose the first ever real-time self-adaptive deep stereo system.
## Usage Instructions
See the accompanying code's readme for details on how to perform training and inference with the model: [madnet-deep-stereo-with-keras](https://github.com/ChristianOrr/madnet-deep-stereo-with-keras).
## Training
### TF1 Kitti and TF1 Synthetic
Training details for the TF1 weights are available in the supplementary material (at the end) of this paper: [Real-time self-adaptive deep stereo](https://arxiv.org/abs/1810.05424)
### Synthetic
The synthetic model was finetuned using the tf1 synthetic weights. It was trained on the flyingthings-3d dataset with the following parameters:
- Steps: 1.5 million
- Learning Rate: 0.0001
- Decay Rate: 0.999
- Minimum Learning Rate Cap: 0.000001
- Batch Size: 1
- Optimizer: Adam
- Image Height: 480
- Image Width: 640
### Kitti
The kitti model was finetuned using the synthetic weights. Tensorboard events file is available in the logs directory. It was trained on the 2012 and 2015 kitti stereo dataset with the following parameters:
- Steps: 0.5 million
- Learning Rate: 0.0001
- Decay Rate: 0.999
- Minimum Learning Rate Cap: 0.0000001
- Batch Size: 1
- Optimizer: Adam
- Image Height: 480
- Image Width: 640
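As a rough illustration of the schedules above (for both the synthetic and kitti runs), exponential decay with a floor can be sketched in pure Python. The decay interval is an assumption, since the card lists only the rate and the minimum cap:

```python
def lr_at_step(step, initial_lr=1e-4, decay_rate=0.999, min_lr=1e-6, decay_every=1000):
    # Decay the rate by decay_rate once per decay_every steps (assumed
    # interval), never dropping below the minimum learning-rate cap.
    lr = initial_lr * decay_rate ** (step // decay_every)
    return max(lr, min_lr)
```

For the kitti run, the same sketch applies with `min_lr=1e-7`.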
## BibTeX entry and citation info
```bibtex
@InProceedings{Tonioni_2019_CVPR,
author = {Tonioni, Alessio and Tosi, Fabio and Poggi, Matteo and Mattoccia, Stefano and Di Stefano, Luigi},
title = {Real-time self-adaptive deep stereo},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}
```
```bibtex
@article{Poggi2021continual,
author={Poggi, Matteo and Tonioni, Alessio and Tosi, Fabio
and Mattoccia, Stefano and Di Stefano, Luigi},
title={Continual Adaptation for Deep Stereo},
journal={IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
year={2021}
}
```
```bibtex
@InProceedings{MIFDB16,
author = "N. Mayer and E. Ilg and P. Hausser and P. Fischer and D. Cremers and A. Dosovitskiy and T. Brox",
title = "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation",
booktitle = "IEEE International Conference on Computer Vision and Pattern Recognition (CVPR)",
year = "2016",
note = "arXiv:1512.02134",
url = "http://lmb.informatik.uni-freiburg.de/Publications/2016/MIFDB16"
}
```
```bibtex
@INPROCEEDINGS{Geiger2012CVPR,
author = {Andreas Geiger and Philip Lenz and Raquel Urtasun},
title = {Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite},
booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2012}
}
```
```bibtex
@INPROCEEDINGS{Menze2015CVPR,
author = {Moritz Menze and Andreas Geiger},
title = {Object Scene Flow for Autonomous Vehicles},
booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2015}
}
``` | 6acab8fa76ea91208c9598a2c0eccb2c |
elihoole/distilgpt2-ttds | elihoole | gpt2 | 11 | 7 | transformers | 0 | text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,221 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-ttds
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3666
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 40 | 4.5807 |
| No log | 2.0 | 80 | 4.4023 |
| No log | 3.0 | 120 | 4.3666 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.7.1
- Datasets 2.0.0
- Tokenizers 0.11.6
| 33e6089020c3a5d09b8a18a72157b6fb |
jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-2_belgium-8_s587 | jonatasgrosman | wav2vec2 | 10 | 3 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['fr'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'fr'] | false | true | true | 479 | false | # exp_w2v2r_fr_xls-r_accent_france-2_belgium-8_s587
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
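The 16 kHz requirement can be checked and fixed before calling the model. Below is a minimal sketch using plain NumPy linear interpolation (an illustration only — in practice you would typically resample with `librosa` or `torchaudio` instead):

```python
import numpy as np

def resample_to_16k(audio: np.ndarray, orig_sr: int, target_sr: int = 16_000) -> np.ndarray:
    """Naive linear-interpolation resampler; fine as a sanity check,
    not a substitute for a proper polyphase/sinc resampler."""
    if orig_sr == target_sr:
        return audio
    duration = len(audio) / orig_sr
    n_target = int(round(duration * target_sr))
    old_t = np.linspace(0.0, duration, num=len(audio), endpoint=False)
    new_t = np.linspace(0.0, duration, num=n_target, endpoint=False)
    return np.interp(new_t, old_t, audio)

# 1 second of a 440 Hz tone recorded at 8 kHz, upsampled to 16 kHz
tone_8k = np.sin(2 * np.pi * 440 * np.arange(8000) / 8000)
tone_16k = resample_to_16k(tone_8k, orig_sr=8000)
print(len(tone_16k))  # 16000
```

For real speech a band-limited resampler is preferable; this sketch only illustrates the sample-rate bookkeeping the model expects.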
| e1cbc18d07c55a7bdf98ff5d8787394a |
nlp-esg-scoring/bert-base-finetuned-cleaned-esg-plus | nlp-esg-scoring | bert | 8 | 2 | transformers | 0 | fill-mask | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,912 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nlp-esg-scoring/bert-base-finetuned-cleaned-esg-plus
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.7242
- Validation Loss: 2.5107
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -146, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.7185 | 2.5414 | 0 |
| 2.7167 | 2.5223 | 1 |
| 2.7161 | 2.5627 | 2 |
| 2.7189 | 2.5305 | 3 |
| 2.7248 | 2.5103 | 4 |
| 2.7173 | 2.5095 | 5 |
| 2.7272 | 2.5135 | 6 |
| 2.7215 | 2.5447 | 7 |
| 2.7247 | 2.5632 | 8 |
| 2.7242 | 2.5107 | 9 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
| d7c843a24dca212b69625933647a12ca |
fathyshalab/all-roberta-large-v1-work-2-16-5-oos | fathyshalab | roberta | 11 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,513 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-work-2-16-5-oos
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3586
- Accuracy: 0.3689
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.8058 | 1.0 | 1 | 2.6169 | 0.2356 |
| 2.3524 | 2.0 | 2 | 2.5215 | 0.2978 |
| 1.9543 | 3.0 | 3 | 2.4427 | 0.3422 |
| 1.5539 | 4.0 | 4 | 2.3874 | 0.36 |
| 1.4133 | 5.0 | 5 | 2.3586 | 0.3689 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
| 926a2701486cb43218cebcb9cee9198c |
DuboiJ/finetuning-sentiment-model-3000-samples | DuboiJ | distilbert | 13 | 9 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['imdb'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,055 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3211
- Accuracy: 0.8633
- F1: 0.8638
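As a reminder of what the accuracy and F1 numbers above summarize, here is a small self-contained sketch of binary F1 (the values in this card come from the trainer's metric computation, not from this code):

```python
def f1_score(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(f1_score([1, 0, 1, 1, 0], [1, 0, 0, 1, 1]))  # ≈ 0.667
```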
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| d54ed587a4936a074c80849799ea50b2 |
lilykaw/finetuning-sentiment-model-3000-samples | lilykaw | distilbert | 13 | 9 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['imdb'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,055 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6551
- Accuracy: 0.6633
- F1: 0.7248
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
| e64048a12320aa2ff843a1c094243c3d |
SauravMaheshkar/rembert-maxseq-384-docstride-135-chaii | SauravMaheshkar | null | 31 | 0 | null | 0 | question-answering | true | false | false | cc0-1.0 | ['multilingual'] | ['Commonlit-Readibility'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['kaggle', 'rembert', 'pytorch', 'question-answering'] | false | true | true | 1,194 | false | <div align = "center">
<img src = "https://github.com/SauravMaheshkar/chaii-Hindi-Tamil-QA/blob/main/assets/Coffee%20Banner.png?raw=true">
</div>
This dataset contains the [**google/rembert**](https://huggingface.co/transformers/model_doc/rembert.html) model weights according to my team's experimentation strategy during the [**chaii - Hindi and Tamil Question Answering**](https://www.kaggle.com/c/chaii-hindi-and-tamil-question-answering) competition. They are listed below with their corresponding public LB scores:
| Huggingface Hub Link | Public LB Score |
| :---: | :---: |
| [**SauravMaheshkar/rembert-maxseq-400-docstride-128-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-400-docstride-128-chaii) | 0.724 |
| [**SauravMaheshkar/rembert-maxseq-384-docstride-135-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-384-docstride-135-chaii) | 0.723 |
| [**SauravMaheshkar/rembert-maxseq-400-docstride-135-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-400-docstride-135-chaii) | 0.737 |
| [**SauravMaheshkar/rembert-maxseq-384-docstride-128-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-384-docstride-128-chaii) | 0.725 |
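The `maxseq` and `docstride` values in the model names control how a long context is split into overlapping windows before the reader scores each span. A hedged sketch of that window bookkeeping, assuming the `transformers` convention that the stride is the number of tokens shared by consecutive windows:

```python
def window_starts(n_tokens: int, max_seq: int, doc_stride: int) -> list[int]:
    """Start offsets of the overlapping windows covering n_tokens tokens.
    Each window advances by (max_seq - doc_stride) tokens, so consecutive
    windows share doc_stride tokens."""
    step = max_seq - doc_stride
    starts = [0]
    while starts[-1] + max_seq < n_tokens:
        starts.append(starts[-1] + step)
    return starts

# a 1000-token context with max_seq=384 and doc_stride=135
print(window_starts(1000, 384, 135))  # [0, 249, 498, 747]
```

A larger stride (more overlap) reduces the chance of an answer being cut in half at a window boundary, at the cost of more windows per example.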
| 964b1cff91c215a10b9d2765ac07bd21 |
GanjinZero/biobart-v2-base | GanjinZero | bart | 8 | 98 | transformers | 1 | text2text-generation | true | false | false | apache-2.0 | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['bart', 'biobart', 'biomedical'] | false | true | true | 434 | false |
Paper: [BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model](https://arxiv.org/pdf/2204.03905.pdf)
V2 adopts a new biomedical vocab.
```
@misc{BioBART,
title={BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model},
author={Hongyi Yuan and Zheng Yuan and Ruyi Gan and Jiaxing Zhang and Yutao Xie and Sheng Yu},
year={2022},
eprint={2204.03905},
archivePrefix={arXiv}
}
``` | f57dfa762122011c5aacaf8cb94c2456 |
pritamdeka/PubMedBert-fulltext-cord19 | pritamdeka | bert | 13 | 6 | transformers | 0 | fill-mask | true | false | false | mit | null | ['pritamdeka/cord-19-fulltext'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,124 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pubmedbert-fulltext-cord19
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the pritamdeka/cord-19-fulltext dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2667
- Accuracy: 0.7175
## Model description
The model has been trained using a maximum train sample size of 300K and an evaluation size of 25K due to GPU limitations.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.7985 | 0.27 | 5000 | 1.2710 | 0.7176 |
| 1.7542 | 0.53 | 10000 | 1.3359 | 0.7070 |
| 1.7462 | 0.8 | 15000 | 1.3489 | 0.7034 |
| 1.8371 | 1.07 | 20000 | 1.4361 | 0.6891 |
| 1.7102 | 1.33 | 25000 | 1.3502 | 0.7039 |
| 1.6596 | 1.6 | 30000 | 1.3341 | 0.7065 |
| 1.6265 | 1.87 | 35000 | 1.3228 | 0.7087 |
| 1.605 | 2.13 | 40000 | 1.3079 | 0.7099 |
| 1.5731 | 2.4 | 45000 | 1.2986 | 0.7121 |
| 1.5602 | 2.67 | 50000 | 1.2929 | 0.7136 |
| 1.5447 | 2.93 | 55000 | 1.2875 | 0.7143 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| e76316f9d052c88a7253c54ebc4a847e |
muhtasham/tiny-mlm-glue-qnli-target-glue-sst2 | muhtasham | bert | 10 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,240 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-glue-qnli-target-glue-sst2
This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-qnli](https://huggingface.co/muhtasham/tiny-mlm-glue-qnli) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5008
- Accuracy: 0.8211
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5757 | 0.24 | 500 | 0.4901 | 0.7775 |
| 0.4436 | 0.48 | 1000 | 0.4673 | 0.7833 |
| 0.3947 | 0.71 | 1500 | 0.4434 | 0.7970 |
| 0.3751 | 0.95 | 2000 | 0.4601 | 0.7970 |
| 0.3326 | 1.19 | 2500 | 0.4463 | 0.8005 |
| 0.316 | 1.43 | 3000 | 0.4510 | 0.8005 |
| 0.2981 | 1.66 | 3500 | 0.4367 | 0.8142 |
| 0.2929 | 1.9 | 4000 | 0.4383 | 0.8108 |
| 0.2746 | 2.14 | 4500 | 0.4873 | 0.8016 |
| 0.256 | 2.38 | 5000 | 0.4395 | 0.8165 |
| 0.246 | 2.61 | 5500 | 0.4444 | 0.8280 |
| 0.2522 | 2.85 | 6000 | 0.4478 | 0.8245 |
| 0.2371 | 3.09 | 6500 | 0.4556 | 0.8291 |
| 0.2299 | 3.33 | 7000 | 0.4655 | 0.8326 |
| 0.2143 | 3.56 | 7500 | 0.4581 | 0.8314 |
| 0.2153 | 3.8 | 8000 | 0.4869 | 0.8291 |
| 0.2134 | 4.04 | 8500 | 0.5008 | 0.8211 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
| 7c743e7b8878667fb6c1dda66fcf8b85 |
dominguesm/legal-bert-tokenizer | dominguesm | null | 6 | 0 | null | 0 | null | false | false | false | cc-by-sa-4.0 | ['pt'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 2,799 | false |
# LegalBERT Tokenizer
The **LegalBERT** tokenizer is a word-level byte-pair encoding with
a vocabulary size of 52k tokens (containing the most common words in legal documents), based on the [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased) tokenizer. The tokenizer was trained on data provided by the **BRAZILIAN SUPREME FEDERAL TRIBUNAL**, under the terms of use: [LREC 2020](https://ailab.unb.br/victor/lrec2020).
The tokenizer uses the `BertTokenizer` implementation from [transformers](https://github.com/huggingface/transformers).
**NOTE**: The results of this project do not imply in any way the position of the BRAZILIAN SUPREME FEDERAL TRIBUNAL, all being the sole and exclusive responsibility of the author.
## Tokenizer usage
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("dominguesm/legal-bert-tokenizer")
example = "De ordem, a Secretaria Judiciária do Supremo Tribunal Federal INTIMA a parte abaixo identificada."
tokens = tokenizer.tokenize(example)
print(tokens)
```
### Comparison of results
**Original Text**: ```De ordem, a Secretaria Judiciária do Supremo Tribunal Federal INTIMA a parte abaixo identificada, ou quem as suas vezes fizer, do inteiro teor do(a) despacho/decisão presente nos autos (art. 270 do Código de Processo Cívil e art 5º da Lei 11.419/2006).```
| Tokenizer | Tokens | Num. Tokens |
| --------- | ------ | ----------- |
| BERTimbau | ```['De', 'ordem', ',', 'a', 'Secretaria', 'Judic', '##iária', 'do', 'Supremo', 'Tribunal', 'Federal', 'IN', '##TI', '##MA', 'a', 'parte', 'abaixo', 'identificada', ',', 'ou', 'quem', 'as', 'suas', 'vezes', 'fiz', '##er', ',', 'do', 'inteiro', 'teor', 'do', '(', 'a', ')', 'despa', '##cho', '/', 'decisão', 'presente', 'nos', 'auto', '##s', '(', 'art', '.', '27', '##0', 'do', 'Código', 'de', 'Processo', 'Cí', '##vil', 'e', 'art', '[UNK]', 'da', 'Lei', '11', '.', '41', '##9', '/', '2006', ')', '.']``` | 66 |
| LegalBERT | ```['De', 'ordem', ',', 'a', 'Secretaria', 'Judiciária', 'do', 'Supremo', 'Tribunal', 'Federal', 'INTIMA', 'a', 'parte', 'abaixo', 'identificada', ',', 'ou', 'quem', 'as', 'suas', 'vezes', 'fizer', ',', 'do', 'inteiro', 'teor', 'do', '(', 'a', ')', 'despacho', '/', 'decisão', 'presente', 'nos', 'autos', '(', 'art', '.', '270', 'do', 'Código', 'de', 'Processo', 'Cív', '##il', 'e', 'art', '5º', 'da', 'Lei', '11', '.', '419', '/', '2006', ')', '.']``` | 58 |
## Citation
If you use this tokenizer, please cite:
```
@misc {maicon_domingues_2022,
author = { {Maicon Domingues} },
title = { legal-bert-tokenizer (Revision d8e9d4a) },
year = 2022,
url = { https://huggingface.co/dominguesm/legal-bert-tokenizer },
doi = { 10.57967/hf/0110 },
publisher = { Hugging Face }
}
```
## Contacts:
* <a href="mailto:dominguesm@outlook.com">dominguesm@outlook.com</a>
* [NLP.ROCKS](http://nlp.rocks)
| 4d35d80f788a4201ce3c2d320805289f |
Atsui75/whisper-small-gl | Atsui75 | whisper | 39 | 6 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['gl'] | ['mozilla-foundation/common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['hf-asr-leaderboard', 'generated_from_trainer'] | true | true | true | 1,640 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small GL - Santiago Paramés-Estévez
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3179
- Wer: 15.2334
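The WER above is word error rate: the word-level edit distance between hypothesis and reference, divided by the number of reference words (reported here ×100). A minimal sketch of the computation (the card's value comes from the evaluation metric, not from this code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + insertions + deletions) / reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # classic dynamic-programming edit distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)] / len(ref)

# one substituted word out of four
print(100 * wer("ola como estas hoxe", "ola como estades hoxe"))  # 25.0
```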
## Model description
This model was fine-tuned using Sanchit Gandhi's notebook: https://huggingface.co/blog/fine-tune-whisper
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
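With the `linear` scheduler, 500 warmup steps, and 4000 training steps listed above, the learning rate ramps from 0 to 1e-5 and then decays linearly back to 0. A sketch of that schedule (modeled on `transformers`' `get_linear_schedule_with_warmup`):

```python
def linear_lr(step: int, base_lr: float = 1e-5,
              warmup_steps: int = 500, total_steps: int = 4000) -> float:
    """Linear warmup to base_lr, then linear decay to zero."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_lr(250))   # 5e-06, halfway through warmup
print(linear_lr(500))   # 1e-05, the peak
print(linear_lr(4000))  # 0.0, end of training
```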
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0707 | 2.69 | 1000 | 0.2596 | 16.4915 |
| 0.0063 | 5.38 | 2000 | 0.2952 | 15.8583 |
| 0.0014 | 8.06 | 3000 | 0.3105 | 15.2624 |
| 0.0011 | 10.75 | 4000 | 0.3179 | 15.2334 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
| 79e33a350dfa9e7e40e3e9b2ac279c8d |
nikhil6041/wav2vec2-large-xlsr-hindi_commonvoice | nikhil6041 | wav2vec2 | 17 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,705 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-hindi_commonvoice
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5947
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 24.0069 | 4.0 | 20 | 40.3956 | 1.0 |
| 18.1097 | 8.0 | 40 | 15.3603 | 1.0 |
| 7.1344 | 12.0 | 60 | 5.2695 | 1.0 |
| 4.0032 | 16.0 | 80 | 3.7403 | 1.0 |
| 3.4894 | 20.0 | 100 | 3.5724 | 1.0 |
| 3.458 | 24.0 | 120 | 3.6164 | 1.0 |
| 3.4412 | 28.0 | 140 | 3.5947 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
| 1e2d9ed051df0916ed23e53c7c5d78fd |
MultiBertGunjanPatrick/multiberts-seed-2-80k | MultiBertGunjanPatrick | bert | 7 | 4 | transformers | 0 | null | true | false | false | apache-2.0 | ['en'] | ['bookcorpus', 'wikipedia'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['exbert', 'multiberts', 'multiberts-seed-2'] | false | true | true | 6,479 | false | # MultiBERTs Seed 2 Checkpoint 80k (uncased)
Intermediate checkpoint at 80k steps of the seed-2 MultiBERTs (pretrained BERT) model, trained on English text with a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-80k')
model = BertModel.from_pretrained("multiberts-seed-2-80k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
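The 15% / 80-10-10 recipe above can be written down directly. A hedged sketch over lists of token ids (`MASK_ID` and `VOCAB_SIZE` are illustrative values; the real implementation also skips special tokens and operates on word pieces):

```python
import random

MASK_ID = 103        # illustrative [MASK] id
VOCAB_SIZE = 30_000

def mask_tokens(token_ids, rng, mask_prob=0.15):
    """Return (corrupted_ids, labels); labels are -100 where no prediction is made."""
    out, labels = list(token_ids), [-100] * len(token_ids)
    for i, tok in enumerate(token_ids):
        if rng.random() >= mask_prob:
            continue
        labels[i] = tok               # the model must predict the original token
        r = rng.random()
        if r < 0.8:
            out[i] = MASK_ID          # 80%: replace with [MASK]
        elif r < 0.9:
            out[i] = rng.randrange(VOCAB_SIZE)  # 10%: random token
        # remaining 10%: keep the token unchanged
    return out, labels

rng = random.Random(0)
ids, labels = mask_tokens(list(range(1000, 1020)), rng)
print(sum(l != -100 for l in labels), "positions selected out of", len(ids))
```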
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| f61f8e6bae4216de6fa014399fee87f5 |
sd-dreambooth-library/mexican-concha | sd-dreambooth-library | null | 26 | 2 | diffusers | 1 | null | false | false | false | mit | null | null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,726 | false | ### mexican_concha on Stable Diffusion via Dreambooth
#### model by MrHidden
This is the Stable Diffusion model fine-tuned on the mexican_concha concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a photo of sks Mexican Concha**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Here are the images used for training this concept:








| 8130407812cc50027f8905cf82501f6c |
84rry/84rry-xlsr-53-arabic | 84rry | wav2vec2 | 15 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,076 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 84rry-xlsr-53-arabic
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0025
- Wer: 0.4977
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.4906 | 2.25 | 500 | 1.3179 | 0.8390 |
| 0.8851 | 4.5 | 1000 | 0.7385 | 0.6221 |
| 0.6884 | 6.76 | 1500 | 0.7005 | 0.5765 |
| 0.5525 | 9.01 | 2000 | 0.6931 | 0.5610 |
| 0.474 | 11.26 | 2500 | 0.7977 | 0.5560 |
| 0.3976 | 13.51 | 3000 | 0.7750 | 0.5375 |
| 0.343 | 15.76 | 3500 | 0.7553 | 0.5206 |
| 0.2838 | 18.02 | 4000 | 0.8162 | 0.5099 |
| 0.2369 | 20.27 | 4500 | 0.8574 | 0.5124 |
| 0.2298 | 22.52 | 5000 | 0.8848 | 0.5057 |
| 0.1727 | 24.77 | 5500 | 0.9193 | 0.5070 |
| 0.1675 | 27.03 | 6000 | 0.9959 | 0.4988 |
| 0.1457 | 29.28 | 6500 | 1.0025 | 0.4977 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
| 91f6eddb1974cee00e03b4406f8e38b1 |
naem1023/bart-v2-dialouge | naem1023 | bart | 12 | 2 | transformers | 0 | text2text-generation | true | false | false | mit | null | ['naem1023/aihub-dialogue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 998 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-v2-dialouge
This model is a fine-tuned version of [hyunwoongko/kobart](https://huggingface.co/hyunwoongko/kobart) on the naem1023/aihub-dialogue dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 150
- eval_batch_size: 40
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 1200
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6.0
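Here `total_train_batch_size = train_batch_size × gradient_accumulation_steps` (150 × 8 = 1200): gradients from 8 micro-batches are accumulated before each optimizer step. A minimal numpy sketch of why this is equivalent to one large batch, assuming mean-reduced losses:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1200, 4))   # one "total" batch of 1200 examples
y = rng.normal(size=1200)
w = rng.normal(size=4)

def grad(Xb, yb, w):
    """Gradient of the mean squared error 0.5 * mean((Xb @ w - yb)**2) w.r.t. w."""
    return Xb.T @ (Xb @ w - yb) / len(yb)

# One step over the full batch of 1200...
g_full = grad(X, y, w)

# ...equals the average of 8 micro-batch gradients of size 150 each.
micro = [grad(X[i * 150:(i + 1) * 150], y[i * 150:(i + 1) * 150], w) for i in range(8)]
g_accum = np.mean(micro, axis=0)

print(np.allclose(g_full, g_accum))  # True
```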
### Training results
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
| a0558dc8c3ea96587598d1efad265e75 |
Likalto4/Butterflies_x64_DDPM_cosine | Likalto4 | null | 6 | 2 | diffusers | 0 | unconditional-image-generation | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['pytorch', 'diffusers', 'unconditional-image-generation', 'diffusion-models-class'] | false | true | true | 609 | false |
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
The model was trained on 1,000 images using the [DDPM](https://arxiv.org/abs/2006.11239) architecture, and generates images at 64×64 pixels.
The model was trained for 50 epochs with a batch size of 64, using around 10 GB of GPU memory.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("Likalto4/Butterflies_x64_DDPM_cosine")
image = pipeline().images[0]
image
```
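The "cosine" in the model name presumably refers to the improved-DDPM cosine noise schedule (Nichol & Dhariwal), which keeps the signal level high for early timesteps; a minimal sketch of that schedule:

```python
import math

def cosine_alpha_bar(t, T, s=0.008):
    """Cumulative signal level of the improved-DDPM cosine schedule at step t of T."""
    return math.cos((t / T + s) / (1 + s) * math.pi / 2) ** 2

T = 1000
# Normalise so alpha_bar starts at exactly 1, then derive per-step betas (clipped at 0.999).
alpha_bar = [cosine_alpha_bar(t, T) / cosine_alpha_bar(0, T) for t in range(T + 1)]
betas = [min(1 - alpha_bar[t + 1] / alpha_bar[t], 0.999) for t in range(T)]

print(alpha_bar[0], alpha_bar[-1])  # starts at 1, decays toward 0
```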
| cd0f9f28fe760b32058b8c8a13ebbf9a |
YeRyeongLee/bert-base-uncased-finetuned-0505-2 | YeRyeongLee | bert | 12 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,458 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-0505-2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4277
- Accuracy: 0.9206
- F1: 0.9205
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 1373 | 0.3634 | 0.9025 | 0.9012 |
| No log | 2.0 | 2746 | 0.3648 | 0.9066 | 0.9060 |
| No log | 3.0 | 4119 | 0.3978 | 0.9189 | 0.9183 |
| No log | 4.0 | 5492 | 0.4277 | 0.9206 | 0.9205 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| e34e2ddcfe39c4519ec6e48933b176b8 |
asifhugs/distilgpt2-finetuned-distilgpt2 | asifhugs | gpt2 | 9 | 4 | transformers | 0 | text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,358 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-distilgpt2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the wikitext dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6662
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.25e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.848 | 1.0 | 2334 | 3.7175 |
| 3.7652 | 2.0 | 4668 | 3.6859 |
| 3.7196 | 3.0 | 7002 | 3.6728 |
| 3.6868 | 4.0 | 9336 | 3.6682 |
| 3.6639 | 5.0 | 11670 | 3.6662 |
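For causal language models, the validation loss is the mean token-level cross-entropy, so perplexity is simply `exp(loss)`; applied to the per-epoch losses in the table above:

```python
import math

val_losses = [3.7175, 3.6859, 3.6728, 3.6682, 3.6662]  # per-epoch validation losses
perplexities = [math.exp(l) for l in val_losses]

print(round(perplexities[-1], 2))  # perplexity of the final checkpoint
```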
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
| ab1510cefd951cb7b188f6afbc6e4e77 |
ChristosSevastopoulos/swin-tiny-patch4-window7-224-thecbbbfs | ChristosSevastopoulos | swin | 11 | 10 | transformers | 0 | image-classification | true | false | false | apache-2.0 | null | ['imagefolder'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,365 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-thecbbbfs
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3088
- Accuracy: 0.8933
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
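With `lr_scheduler_warmup_ratio: 0.1`, the learning rate climbs linearly to its peak over the first 10% of optimizer steps and then decays linearly to zero, mirroring `transformers`' linear schedule; a sketch (total step count taken from the table above):

```python
def linear_lr(step, total_steps, warmup_steps, peak_lr=5e-5):
    """Linear warmup to peak_lr, then linear decay to 0 (as in a linear schedule with warmup)."""
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    return peak_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

total_steps = 12                       # ~1 epoch at 12 optimizer steps (see table above)
warmup_steps = int(0.1 * total_steps)  # warmup_ratio 0.1
lrs = [linear_lr(s, total_steps, warmup_steps) for s in range(total_steps + 1)]
print(lrs[0], max(lrs), lrs[-1])
```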
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5717 | 0.96 | 12 | 0.3088 | 0.8933 |
### Framework versions
- Transformers 4.23.0.dev0
- Pytorch 1.12.1+cu102
- Datasets 2.5.1
- Tokenizers 0.13.0
| b17f9ff2a48f52820cdacfa6ba205251 |
facebook/vit-mae-large | facebook | vit_mae | 6 | 255 | transformers | 2 | null | true | true | false | apache-2.0 | null | ['imagenet-1k'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['vision'] | false | true | true | 2,923 | false |
# Vision Transformer (large-sized model) pre-trained with MAE
Vision Transformer (ViT) model pre-trained using the MAE method. It was introduced in the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick and first released in [this repository](https://github.com/facebookresearch/mae).
Disclaimer: The team releasing MAE did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like). Images are presented to the model as a sequence of fixed-size patches.
During pre-training, one randomly masks out a high portion (75%) of the image patches. First, the encoder is used to encode the visual patches. Next, a learnable (shared) mask token is added at the positions of the masked patches. The decoder takes the encoded visual patches and mask tokens as input and reconstructs raw pixel values for the masked positions.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder.
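The 75% random masking can be sketched in numpy with the shuffle-and-restore trick used by the reference implementation (shapes here are illustrative: 196 patches for a 224×224 image with 16×16 patches):

```python
import numpy as np

def random_masking(patches, mask_ratio=0.75, rng=None):
    """Keep a random 25% of patches; return kept patches, binary mask, and restore order."""
    rng = rng or np.random.default_rng(0)
    num_patches = patches.shape[0]
    len_keep = int(num_patches * (1 - mask_ratio))
    noise = rng.random(num_patches)           # one random score per patch
    ids_shuffle = np.argsort(noise)           # patches with the lowest scores are kept
    ids_restore = np.argsort(ids_shuffle)     # inverse permutation, used by the decoder
    kept = patches[ids_shuffle[:len_keep]]
    mask = np.ones(num_patches)               # 1 = masked (to be reconstructed)
    mask[:len_keep] = 0
    mask = mask[ids_restore]                  # back to the original patch order
    return kept, mask, ids_restore

patches = np.arange(196 * 4).reshape(196, 4)  # 196 patches with a toy embedding dim of 4
kept, mask, ids_restore = random_masking(patches)
print(kept.shape, int(mask.sum()))  # (49, 4) 147
```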
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=facebook/vit-mae) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import AutoFeatureExtractor, ViTMAEForPreTraining
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/vit-mae-large')
model = ViTMAEForPreTraining.from_pretrained('facebook/vit-mae-large')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
loss = outputs.loss
mask = outputs.mask
ids_restore = outputs.ids_restore
```
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2111-06377,
author = {Kaiming He and
Xinlei Chen and
Saining Xie and
Yanghao Li and
Piotr Doll{\'{a}}r and
Ross B. Girshick},
title = {Masked Autoencoders Are Scalable Vision Learners},
journal = {CoRR},
volume = {abs/2111.06377},
year = {2021},
url = {https://arxiv.org/abs/2111.06377},
eprinttype = {arXiv},
eprint = {2111.06377},
timestamp = {Tue, 16 Nov 2021 12:12:31 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2111-06377.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | 293f5a78aa4f2cc96e1f2f474a3838b8 |
adasgaleus/insertion-prop-05 | adasgaleus | distilbert | 12 | 3 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,532 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# insertion-prop-05
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0756
- Precision: 0.9217
- Recall: 0.8949
- F1: 0.9081
- Accuracy: 0.9708
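Precision, recall, and F1 above follow the usual relationship between true-positive, false-positive, and false-negative counts (the counts below are hypothetical, chosen only to illustrate the formulas):

```python
def prf1(tp, fp, fn):
    """Precision, recall, and their harmonic mean F1 from token/entity-level counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for illustration only.
p, r, f1 = prf1(tp=895, fp=76, fn=105)
print(round(p, 4), round(r, 4), round(f1, 4))
```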
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1648 | 0.32 | 500 | 0.0914 | 0.9072 | 0.8710 | 0.8887 | 0.9648 |
| 0.1028 | 0.64 | 1000 | 0.0792 | 0.9195 | 0.8878 | 0.9033 | 0.9693 |
| 0.095 | 0.96 | 1500 | 0.0756 | 0.9217 | 0.8949 | 0.9081 | 0.9708 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| 4fdc5e067bbfda7efa422858cc26a0b4 |
Alex-VisTas/swin-tiny-patch4-window7-224-finetuned-woody_LeftGR_130epochs | Alex-VisTas | swin | 14 | 7 | transformers | 0 | image-classification | true | false | false | apache-2.0 | null | ['imagefolder'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 9,383 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-woody_LeftGR_130epochs
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3377
- Accuracy: 0.9047
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 130
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6614 | 1.0 | 61 | 0.6404 | 0.6521 |
| 0.5982 | 2.0 | 122 | 0.5548 | 0.7107 |
| 0.579 | 3.0 | 183 | 0.5390 | 0.7141 |
| 0.5621 | 4.0 | 244 | 0.4920 | 0.7623 |
| 0.5567 | 5.0 | 305 | 0.5375 | 0.7313 |
| 0.5271 | 6.0 | 366 | 0.5542 | 0.7405 |
| 0.5312 | 7.0 | 427 | 0.4573 | 0.7876 |
| 0.5477 | 8.0 | 488 | 0.4540 | 0.7784 |
| 0.5554 | 9.0 | 549 | 0.4932 | 0.7635 |
| 0.5247 | 10.0 | 610 | 0.4407 | 0.7968 |
| 0.5239 | 11.0 | 671 | 0.4479 | 0.7842 |
| 0.5294 | 12.0 | 732 | 0.4509 | 0.7910 |
| 0.531 | 13.0 | 793 | 0.4419 | 0.7933 |
| 0.5493 | 14.0 | 854 | 0.4646 | 0.7784 |
| 0.4934 | 15.0 | 915 | 0.4310 | 0.7968 |
| 0.4965 | 16.0 | 976 | 0.4449 | 0.7876 |
| 0.4946 | 17.0 | 1037 | 0.4342 | 0.8129 |
| 0.4716 | 18.0 | 1098 | 0.4129 | 0.8140 |
| 0.4679 | 19.0 | 1159 | 0.4290 | 0.8002 |
| 0.4799 | 20.0 | 1220 | 0.4356 | 0.7842 |
| 0.4744 | 21.0 | 1281 | 0.4042 | 0.8094 |
| 0.4512 | 22.0 | 1342 | 0.3953 | 0.8117 |
| 0.4633 | 23.0 | 1403 | 0.4157 | 0.7956 |
| 0.4528 | 24.0 | 1464 | 0.3920 | 0.8094 |
| 0.4427 | 25.0 | 1525 | 0.3930 | 0.8220 |
| 0.4238 | 26.0 | 1586 | 0.3891 | 0.8140 |
| 0.4257 | 27.0 | 1647 | 0.3700 | 0.8255 |
| 0.4102 | 28.0 | 1708 | 0.4122 | 0.7968 |
| 0.4505 | 29.0 | 1769 | 0.4210 | 0.7945 |
| 0.3973 | 30.0 | 1830 | 0.3923 | 0.8197 |
| 0.3824 | 31.0 | 1891 | 0.3908 | 0.8473 |
| 0.3887 | 32.0 | 1952 | 0.3897 | 0.8312 |
| 0.3723 | 33.0 | 2013 | 0.3747 | 0.8381 |
| 0.3608 | 34.0 | 2074 | 0.3706 | 0.8301 |
| 0.3718 | 35.0 | 2135 | 0.3937 | 0.8255 |
| 0.3692 | 36.0 | 2196 | 0.3984 | 0.8037 |
| 0.3533 | 37.0 | 2257 | 0.3792 | 0.8335 |
| 0.3625 | 38.0 | 2318 | 0.4070 | 0.8163 |
| 0.3633 | 39.0 | 2379 | 0.4130 | 0.8232 |
| 0.3602 | 40.0 | 2440 | 0.3996 | 0.8186 |
| 0.3557 | 41.0 | 2501 | 0.3756 | 0.8335 |
| 0.3373 | 42.0 | 2562 | 0.3914 | 0.8220 |
| 0.3102 | 43.0 | 2623 | 0.4165 | 0.8507 |
| 0.3135 | 44.0 | 2684 | 0.3852 | 0.8278 |
| 0.3286 | 45.0 | 2745 | 0.4164 | 0.8450 |
| 0.316 | 46.0 | 2806 | 0.3498 | 0.8496 |
| 0.2802 | 47.0 | 2867 | 0.3887 | 0.8462 |
| 0.3184 | 48.0 | 2928 | 0.3829 | 0.8576 |
| 0.2785 | 49.0 | 2989 | 0.3627 | 0.8485 |
| 0.2988 | 50.0 | 3050 | 0.3679 | 0.8370 |
| 0.267 | 51.0 | 3111 | 0.3528 | 0.8645 |
| 0.2907 | 52.0 | 3172 | 0.3538 | 0.8519 |
| 0.2857 | 53.0 | 3233 | 0.3593 | 0.8530 |
| 0.2651 | 54.0 | 3294 | 0.3732 | 0.8439 |
| 0.2447 | 55.0 | 3355 | 0.3441 | 0.8542 |
| 0.2542 | 56.0 | 3416 | 0.3897 | 0.8576 |
| 0.2634 | 57.0 | 3477 | 0.4082 | 0.8657 |
| 0.2505 | 58.0 | 3538 | 0.3416 | 0.8657 |
| 0.2555 | 59.0 | 3599 | 0.3725 | 0.8576 |
| 0.2466 | 60.0 | 3660 | 0.3496 | 0.8680 |
| 0.2585 | 61.0 | 3721 | 0.3214 | 0.8783 |
| 0.235 | 62.0 | 3782 | 0.3584 | 0.8737 |
| 0.215 | 63.0 | 3843 | 0.3467 | 0.8657 |
| 0.236 | 64.0 | 3904 | 0.3471 | 0.8829 |
| 0.2211 | 65.0 | 3965 | 0.3318 | 0.8863 |
| 0.1989 | 66.0 | 4026 | 0.3645 | 0.8852 |
| 0.2133 | 67.0 | 4087 | 0.3456 | 0.8898 |
| 0.2169 | 68.0 | 4148 | 0.3287 | 0.8852 |
| 0.223 | 69.0 | 4209 | 0.3182 | 0.8921 |
| 0.2379 | 70.0 | 4270 | 0.3260 | 0.8840 |
| 0.2149 | 71.0 | 4331 | 0.3230 | 0.8886 |
| 0.2007 | 72.0 | 4392 | 0.3926 | 0.8760 |
| 0.2091 | 73.0 | 4453 | 0.4133 | 0.8783 |
| 0.2229 | 74.0 | 4514 | 0.3867 | 0.8772 |
| 0.1903 | 75.0 | 4575 | 0.3594 | 0.8840 |
| 0.2124 | 76.0 | 4636 | 0.3388 | 0.8875 |
| 0.1999 | 77.0 | 4697 | 0.3305 | 0.8875 |
| 0.2053 | 78.0 | 4758 | 0.4670 | 0.8840 |
| 0.1958 | 79.0 | 4819 | 0.3468 | 0.8909 |
| 0.1839 | 80.0 | 4880 | 0.3902 | 0.8886 |
| 0.1715 | 81.0 | 4941 | 0.3830 | 0.8875 |
| 0.1803 | 82.0 | 5002 | 0.3134 | 0.8967 |
| 0.1803 | 83.0 | 5063 | 0.3935 | 0.8909 |
| 0.1865 | 84.0 | 5124 | 0.3882 | 0.8863 |
| 0.1884 | 85.0 | 5185 | 0.3485 | 0.8990 |
| 0.1663 | 86.0 | 5246 | 0.3667 | 0.8944 |
| 0.1665 | 87.0 | 5307 | 0.3545 | 0.8932 |
| 0.1556 | 88.0 | 5368 | 0.3882 | 0.8944 |
| 0.18 | 89.0 | 5429 | 0.3751 | 0.8898 |
| 0.1974 | 90.0 | 5490 | 0.3979 | 0.8863 |
| 0.1622 | 91.0 | 5551 | 0.3623 | 0.8967 |
| 0.1657 | 92.0 | 5612 | 0.3855 | 0.8978 |
| 0.1672 | 93.0 | 5673 | 0.3722 | 0.8944 |
| 0.1807 | 94.0 | 5734 | 0.3994 | 0.8932 |
| 0.1419 | 95.0 | 5795 | 0.4017 | 0.8863 |
| 0.178 | 96.0 | 5856 | 0.4168 | 0.8886 |
| 0.1402 | 97.0 | 5917 | 0.3727 | 0.8944 |
| 0.1427 | 98.0 | 5978 | 0.3919 | 0.8967 |
| 0.1318 | 99.0 | 6039 | 0.3843 | 0.8955 |
| 0.1417 | 100.0 | 6100 | 0.4017 | 0.8898 |
| 0.1536 | 101.0 | 6161 | 0.3613 | 0.8955 |
| 0.1631 | 102.0 | 6222 | 0.3377 | 0.9047 |
| 0.1459 | 103.0 | 6283 | 0.3724 | 0.8967 |
| 0.1499 | 104.0 | 6344 | 0.3934 | 0.8955 |
| 0.1572 | 105.0 | 6405 | 0.3368 | 0.8967 |
| 0.1308 | 106.0 | 6466 | 0.3782 | 0.8990 |
| 0.1535 | 107.0 | 6527 | 0.3306 | 0.9024 |
| 0.125 | 108.0 | 6588 | 0.4076 | 0.8898 |
| 0.1339 | 109.0 | 6649 | 0.3628 | 0.8990 |
| 0.148 | 110.0 | 6710 | 0.3672 | 0.9013 |
| 0.1725 | 111.0 | 6771 | 0.4006 | 0.8909 |
| 0.1326 | 112.0 | 6832 | 0.4117 | 0.8921 |
| 0.1438 | 113.0 | 6893 | 0.3927 | 0.8978 |
| 0.1205 | 114.0 | 6954 | 0.3612 | 0.8990 |
| 0.1531 | 115.0 | 7015 | 0.3594 | 0.8932 |
| 0.1473 | 116.0 | 7076 | 0.4490 | 0.8875 |
| 0.1388 | 117.0 | 7137 | 0.3952 | 0.8921 |
| 0.136 | 118.0 | 7198 | 0.4098 | 0.8921 |
| 0.1579 | 119.0 | 7259 | 0.3595 | 0.9013 |
| 0.1359 | 120.0 | 7320 | 0.3970 | 0.8944 |
| 0.1314 | 121.0 | 7381 | 0.4092 | 0.8932 |
| 0.1337 | 122.0 | 7442 | 0.4192 | 0.8909 |
| 0.1538 | 123.0 | 7503 | 0.4154 | 0.8898 |
| 0.119 | 124.0 | 7564 | 0.4120 | 0.8909 |
| 0.1353 | 125.0 | 7625 | 0.4060 | 0.8921 |
| 0.1489 | 126.0 | 7686 | 0.4162 | 0.8909 |
| 0.1554 | 127.0 | 7747 | 0.4148 | 0.8944 |
| 0.1558 | 128.0 | 7808 | 0.4169 | 0.8944 |
| 0.1268 | 129.0 | 7869 | 0.4110 | 0.8955 |
| 0.1236 | 130.0 | 7930 | 0.4197 | 0.8944 |
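The reported evaluation numbers (loss 0.3377, accuracy 0.9047) match the epoch-102 row rather than the final epoch, suggesting the best checkpoint by accuracy was kept; selecting it is a simple argmax over the log (rows below are excerpted from the table above):

```python
# (epoch, validation_loss, accuracy) triples excerpted from the training log
log = [
    (100, 0.4017, 0.8898),
    (101, 0.3613, 0.8955),
    (102, 0.3377, 0.9047),
    (103, 0.3724, 0.8967),
    (130, 0.4197, 0.8944),
]

best_epoch, best_loss, best_acc = max(log, key=lambda row: row[2])
print(best_epoch, best_loss, best_acc)  # 102 0.3377 0.9047
```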
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
| ea68479ae6c86a77c96b4c3049718419 |
stevemobs/deberta-base-combined-squad1-aqa-and-newsqa | stevemobs | deberta | 13 | 11 | transformers | 0 | question-answering | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,273 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-combined-squad1-aqa-and-newsqa
This model is a fine-tuned version of [stevemobs/deberta-base-combined-squad1-aqa](https://huggingface.co/stevemobs/deberta-base-combined-squad1-aqa) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7527
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.6729 | 1.0 | 17307 | 0.7076 |
| 0.4631 | 2.0 | 34614 | 0.7527 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 6145a54a2a00c0fc17ec25427d82525e |
facebook/wav2vec2-xls-r-300m-21-to-en | facebook | speech-encoder-decoder | 9 | 75 | transformers | 4 | automatic-speech-recognition | true | false | false | apache-2.0 | ['multilingual', 'fr', 'de', 'es', 'ca', 'it', 'ru', 'zh', 'pt', 'fa', 'et', 'mn', 'nl', 'tr', 'ar', 'sv', 'lv', 'sl', 'ta', 'ja', 'id', 'cy', 'en'] | ['common_voice', 'multilingual_librispeech', 'covost2'] | null | 3 | 1 | 1 | 1 | 0 | 0 | 0 | ['speech', 'xls_r', 'automatic-speech-recognition', 'xls_r_translation'] | false | true | true | 3,458 | false |
# Wav2Vec2-XLS-R-300M-21-EN
Facebook's Wav2Vec2 XLS-R fine-tuned for **Speech Translation.**

This is a [SpeechEncoderDecoderModel](https://huggingface.co/transformers/model_doc/speechencoderdecoder.html) model.
The encoder was warm-started from the [**`facebook/wav2vec2-xls-r-300m`**](https://huggingface.co/facebook/wav2vec2-xls-r-300m) checkpoint and
the decoder from the [**`facebook/mbart-large-50`**](https://huggingface.co/facebook/mbart-large-50) checkpoint.
The encoder-decoder model was then fine-tuned on 21 `{lang}` -> `en` translation pairs of the [Covost2 dataset](https://huggingface.co/datasets/covost2).
The model can translate from the following spoken languages `{lang}` -> `en` (English):
{`fr`, `de`, `es`, `ca`, `it`, `ru`, `zh-CN`, `pt`, `fa`, `et`, `mn`, `nl`, `tr`, `ar`, `sv-SE`, `lv`, `sl`, `ta`, `ja`, `id`, `cy`} -> `en`
For more information, please refer to Section *5.1.2* of the [official XLS-R paper](https://arxiv.org/abs/2111.09296).
## Usage
### Demo
The model can be tested directly on the speech recognition widget on this model card!
Simply record some audio in one of the supported spoken languages or pick an example audio file to see how well the checkpoint translates the input.
### Example
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate
transcripts by passing the speech features to the model.
You can use the model directly via the ASR pipeline
```python
from datasets import load_dataset
from transformers import pipeline
# replace following lines to load an audio file of your choice
librispeech_en = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
audio_file = librispeech_en[0]["file"]
asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-xls-r-300m-21-to-en", feature_extractor="facebook/wav2vec2-xls-r-300m-21-to-en")
translation = asr(audio_file)
```
or step-by-step as follows:
```python
import torch
from transformers import Speech2Text2Processor, SpeechEncoderDecoderModel
from datasets import load_dataset
model = SpeechEncoderDecoderModel.from_pretrained("facebook/wav2vec2-xls-r-300m-21-to-en")
processor = Speech2Text2Processor.from_pretrained("facebook/wav2vec2-xls-r-300m-21-to-en")
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="pt")
generated_ids = model.generate(input_ids=inputs["input_features"], attention_mask=inputs["attention_mask"])
transcription = processor.batch_decode(generated_ids)
```
## Results `{lang}` -> `en`
See the row of **XLS-R (0.3B)** for the performance on [Covost2](https://huggingface.co/datasets/covost2) for this model.

## More XLS-R models for `{lang}` -> `en` Speech Translation
- [Wav2Vec2-XLS-R-300M-21-EN](https://huggingface.co/facebook/wav2vec2-xls-r-300m-21-to-en)
- [Wav2Vec2-XLS-R-1B-21-EN](https://huggingface.co/facebook/wav2vec2-xls-r-1b-21-to-en)
- [Wav2Vec2-XLS-R-2B-21-EN](https://huggingface.co/facebook/wav2vec2-xls-r-2b-21-to-en)
- [Wav2Vec2-XLS-R-2B-22-16](https://huggingface.co/facebook/wav2vec2-xls-r-2b-22-to-16)
| 635c60a92a0801cce2437b9a5e5c6117 |
henryscheible/stsb | henryscheible | bert | 14 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,039 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stsb
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4914
- Pearson: 0.8930
- Spearmanr: 0.8888
- Combined Score: 0.8909
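The combined score on STSB is simply the average of the Pearson and Spearman correlations between predicted and gold similarity scores; a minimal pure-Python sketch on toy data (assuming distinct values, so no tie handling is needed for the ranks):

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks (distinct values only)."""
    rank = lambda v: [sorted(v).index(e) for e in v]
    return pearson(rank(x), rank(y))

preds = [1.0, 2.0, 3.0, 4.5, 4.9]  # toy model predictions
gold  = [0.8, 2.2, 2.9, 4.0, 5.0]  # toy gold similarity scores
p, s = pearson(preds, gold), spearman(preds, gold)
combined = (p + s) / 2
print(round(p, 3), round(s, 3), round(combined, 3))
```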
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
| 5e11a5d1fefc7c163291ab04df04dd0d |
venetis/vit-base-patch16-224_album_vitVMMRdb_make_model_album_pred | venetis | vit | 19 | 4 | transformers | 0 | image-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,797 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224_album_vitVMMRdb_make_model_album_pred
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4670
- Accuracy: 0.8781
- Precision: 0.8768
- Recall: 0.8781
- F1: 0.8758
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 3.5529 | 1.0 | 839 | 3.3687 | 0.3096 | 0.2809 | 0.3096 | 0.2246 |
| 1.7855 | 2.0 | 1678 | 1.6042 | 0.6378 | 0.6187 | 0.6378 | 0.5996 |
| 1.1054 | 3.0 | 2517 | 1.0105 | 0.7556 | 0.7512 | 0.7556 | 0.7385 |
| 0.8179 | 4.0 | 3356 | 0.7794 | 0.8033 | 0.8020 | 0.8033 | 0.7934 |
| 0.6057 | 5.0 | 4195 | 0.6479 | 0.8294 | 0.8274 | 0.8294 | 0.8212 |
| 0.4709 | 6.0 | 5034 | 0.5817 | 0.8478 | 0.8477 | 0.8478 | 0.8428 |
| 0.3962 | 7.0 | 5873 | 0.5333 | 0.8571 | 0.8570 | 0.8571 | 0.8527 |
| 0.346 | 8.0 | 6712 | 0.5073 | 0.8638 | 0.8647 | 0.8638 | 0.8615 |
| 0.2772 | 9.0 | 7551 | 0.4881 | 0.8681 | 0.8679 | 0.8681 | 0.8656 |
| 0.2136 | 10.0 | 8390 | 0.4777 | 0.8719 | 0.8718 | 0.8719 | 0.8689 |
| 0.1937 | 11.0 | 9229 | 0.4737 | 0.8734 | 0.8731 | 0.8734 | 0.8703 |
| 0.1754 | 12.0 | 10068 | 0.4604 | 0.8758 | 0.8750 | 0.8758 | 0.8733 |
| 0.1111 | 13.0 | 10907 | 0.4561 | 0.8790 | 0.8782 | 0.8790 | 0.8768 |
| 0.1128 | 14.0 | 11746 | 0.4519 | 0.8808 | 0.8799 | 0.8808 | 0.8787 |
| 0.1018 | 15.0 | 12585 | 0.4497 | 0.8813 | 0.8805 | 0.8813 | 0.8794 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
| 163d4c8d6db0e99b49b20b0e3cd78b27 |
gokuls/bert-tiny-Massive-intent-KD-BERT | gokuls | bert | 15 | 5 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['massive'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 3,933 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-tiny-Massive-intent-KD-BERT
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8380
- Accuracy: 0.8534
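The "KD-BERT" in the model name suggests this tiny model was distilled from a BERT teacher. The exact recipe is not documented in this card, but a generic soft-label distillation loss (an assumption, shown only for illustration) combines cross-entropy on the hard label with a temperature-softened KL term:

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, label, T=2.0, alpha=0.5):
    """alpha * CE(student, hard label) + (1-alpha) * T^2 * KL(teacher || student) on softened logits."""
    ps = softmax(student_logits, T)
    pt = softmax(teacher_logits, T)
    kl = float(np.sum(pt * (np.log(pt) - np.log(ps))))
    ce = -float(np.log(softmax(student_logits)[label]))
    return alpha * ce + (1 - alpha) * T**2 * kl

student = [2.0, 0.5, -1.0]  # toy logits
teacher = [2.5, 0.2, -1.5]
print(distillation_loss(student, teacher, label=0) >= 0)  # True
```

The hyperparameters `T` and `alpha` here are illustrative defaults, not values taken from this training run.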
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 5.83 | 1.0 | 720 | 4.8826 | 0.3050 |
| 4.7602 | 2.0 | 1440 | 3.9904 | 0.4191 |
| 4.0301 | 3.0 | 2160 | 3.3806 | 0.5032 |
| 3.4797 | 4.0 | 2880 | 2.9065 | 0.5967 |
| 3.0352 | 5.0 | 3600 | 2.5389 | 0.6596 |
| 2.6787 | 6.0 | 4320 | 2.2342 | 0.7044 |
| 2.3644 | 7.0 | 5040 | 1.9873 | 0.7354 |
| 2.1145 | 8.0 | 5760 | 1.7928 | 0.7462 |
| 1.896 | 9.0 | 6480 | 1.6293 | 0.7644 |
| 1.7138 | 10.0 | 7200 | 1.5062 | 0.7752 |
| 1.5625 | 11.0 | 7920 | 1.3923 | 0.7885 |
| 1.4229 | 12.0 | 8640 | 1.3092 | 0.7978 |
| 1.308 | 13.0 | 9360 | 1.2364 | 0.8018 |
| 1.201 | 14.0 | 10080 | 1.1759 | 0.8155 |
| 1.1187 | 15.0 | 10800 | 1.1322 | 0.8214 |
| 1.0384 | 16.0 | 11520 | 1.0990 | 0.8234 |
| 0.976 | 17.0 | 12240 | 1.0615 | 0.8308 |
| 0.9163 | 18.0 | 12960 | 1.0377 | 0.8328 |
| 0.8611 | 19.0 | 13680 | 1.0054 | 0.8337 |
| 0.812 | 20.0 | 14400 | 0.9926 | 0.8367 |
| 0.7721 | 21.0 | 15120 | 0.9712 | 0.8382 |
| 0.7393 | 22.0 | 15840 | 0.9586 | 0.8357 |
| 0.7059 | 23.0 | 16560 | 0.9428 | 0.8372 |
| 0.6741 | 24.0 | 17280 | 0.9377 | 0.8396 |
| 0.6552 | 25.0 | 18000 | 0.9229 | 0.8377 |
| 0.627 | 26.0 | 18720 | 0.9100 | 0.8416 |
| 0.5972 | 27.0 | 19440 | 0.9028 | 0.8416 |
| 0.5784 | 28.0 | 20160 | 0.8996 | 0.8406 |
| 0.5595 | 29.0 | 20880 | 0.8833 | 0.8451 |
| 0.5438 | 30.0 | 21600 | 0.8772 | 0.8475 |
| 0.5218 | 31.0 | 22320 | 0.8758 | 0.8451 |
| 0.509 | 32.0 | 23040 | 0.8728 | 0.8480 |
| 0.4893 | 33.0 | 23760 | 0.8640 | 0.8480 |
| 0.4948 | 34.0 | 24480 | 0.8541 | 0.8475 |
| 0.4722 | 35.0 | 25200 | 0.8595 | 0.8495 |
| 0.468 | 36.0 | 25920 | 0.8488 | 0.8495 |
| 0.4517 | 37.0 | 26640 | 0.8460 | 0.8505 |
| 0.4462 | 38.0 | 27360 | 0.8450 | 0.8485 |
| 0.4396 | 39.0 | 28080 | 0.8422 | 0.8490 |
| 0.427 | 40.0 | 28800 | 0.8380 | 0.8534 |
| 0.4287 | 41.0 | 29520 | 0.8385 | 0.8480 |
| 0.4222 | 42.0 | 30240 | 0.8319 | 0.8510 |
| 0.421 | 43.0 | 30960 | 0.8296 | 0.8510 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
| 3fd9703ff2aa38f39b3f463ab4870c3e |
domenicrosati/deberta-v3-large-finetuned-syndag-multiclass-not-bloom | domenicrosati | deberta-v2 | 13 | 1 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'generated_from_trainer'] | true | true | true | 1,405 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large-finetuned-syndag-multiclass-not-bloom
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0197
- F1: 0.9956
- Precision: 0.9956
- Recall: 0.9956
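F1, precision, and recall coinciding at 0.9956 is the expected signature of micro-averaged metrics on single-label multiclass data, where all three reduce to plain accuracy. A minimal sketch of why (this is an illustration of micro-averaging in general, not the evaluation code actually used for this card):

```python
def micro_prf(y_true, y_pred):
    """Micro-averaged precision/recall/F1 for single-label multiclass labels.

    Every prediction counts as exactly one positive, so micro TP = #correct
    and micro FP = micro FN = #incorrect; all three metrics equal accuracy.
    """
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    precision = recall = correct / len(y_true)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f1 = micro_prf([0, 1, 2, 1], [0, 1, 2, 0])  # 3 of 4 correct -> all metrics 0.75
```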
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 1
- mixed_precision_training: Native AMP
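A linear scheduler with 50 warmup steps ramps the learning rate from 0 to the 6e-06 peak over the first 50 steps, then decays it linearly to 0 at the final step (10847 here, per the results table). A pure-Python sketch of the multiplier, mirroring the shape of the Transformers linear schedule; treat the exact endpoint handling as an assumption, not the library code itself:

```python
def linear_schedule_with_warmup(step, num_warmup_steps, num_training_steps):
    """Learning-rate multiplier: linear warmup, then linear decay to zero."""
    if step < num_warmup_steps:
        return step / max(1, num_warmup_steps)
    return max(
        0.0,
        (num_training_steps - step) / max(1, num_training_steps - num_warmup_steps),
    )

peak_lr = 6e-06
lr_at = lambda step: peak_lr * linear_schedule_with_warmup(step, 50, 10847)
# lr_at(0) == 0.0, lr_at(50) hits the peak, lr_at(10847) == 0.0
```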
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:---------:|:------:|
| 0.0194 | 1.0 | 10847 | 0.0222 | 0.9955 | 0.9955 | 0.9955 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| 2c139d54a43158aebff171f6c73d04bc |
stanfordnlp/stanza-sl | stanfordnlp | null | 13 | 91 | stanza | 0 | token-classification | false | false | false | apache-2.0 | ['sl'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['stanza', 'token-classification'] | false | true | true | 582 | false | # Stanza model for Slovenian (sl)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing.
Find out more about it on [our website](https://stanfordnlp.github.io/stanza) and in our [GitHub repository](https://github.com/stanfordnlp/stanza).
This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo
Last updated 2022-09-25 02:01:37.680
| 4fd17c1ca7a750eb3304498af90167c1 |
alefiury/wav2vec2-large-xlsr-53-coraa-brazilian-portuguese-gain-normalization-sna | alefiury | wav2vec2 | 8 | 6 | transformers | 2 | automatic-speech-recognition | true | false | false | apache-2.0 | ['pt'] | ['CORAA', 'common_voice', 'mls', 'cetuc', 'voxforge'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['audio', 'speech', 'wav2vec2', 'pt', 'portuguese-speech-corpus', 'automatic-speech-recognition', 'speech', 'PyTorch'] | false | true | true | 641 | false |
# Wav2vec 2.0 trained with CORAA Portuguese Dataset and Open Portuguese Datasets
This is a demonstration of a fine-tuned Wav2vec model for Portuguese, trained using the following datasets:
- [CORAA dataset](https://github.com/nilc-nlp/CORAA)
- [CETUC](http://www02.smt.ufrj.br/~igor.quintanilha/alcaim.tar.gz)
- [Multilingual Librispeech (MLS)](http://www.openslr.org/94/)
- [VoxForge](http://www.voxforge.org/)
- [Common Voice 6.1](https://commonvoice.mozilla.org/pt)
## Repository
The repository that implements the training and testing of the model is available [here](https://github.com/alefiury/SE-R_2022_Challenge_Wav2vec2). | c17b8d7589cb29ac8cbcde10abab2171 |
utkarshbelkhede/distilbart-sec-10K | utkarshbelkhede | bart | 13 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,863 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-12-6-sec
This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1379
- Rouge1: 72.2845
- Rouge2: 61.1501
- Rougel: 67.6999
- Rougelsum: 70.9968
- Gen Len: 113.8
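ROUGE-1 measures unigram overlap between the generated and reference summaries (ROUGE-2 uses bigrams, ROUGE-L the longest common subsequence). The scores above were presumably produced by a full ROUGE implementation with its own tokenization and stemming, so the following whitespace-token sketch is only illustrative:

```python
from collections import Counter

def rouge1_f1(candidate, reference):
    """ROUGE-1 F1: clipped unigram-overlap counts between candidate and reference."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # multiset intersection clips repeated tokens
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f1("net revenue increased in fiscal 2021",
                  "net revenue increased sharply in 2021")  # 5 of 6 unigrams overlap
```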
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 99 | 0.4429 | 56.0806 | 40.5969 | 47.5271 | 53.7227 | 115.44 |
| No log | 2.0 | 198 | 0.2279 | 56.6042 | 42.1781 | 48.9542 | 54.951 | 116.84 |
| No log | 3.0 | 297 | 0.1845 | 65.9646 | 51.8575 | 59.8647 | 64.103 | 113.8 |
| No log | 4.0 | 396 | 0.1532 | 71.6132 | 61.1434 | 67.4165 | 70.4093 | 110.46 |
| No log | 5.0 | 495 | 0.1379 | 72.2845 | 61.1501 | 67.6999 | 70.9968 | 113.8 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| 784d728c640eba847315cbd790a6c0bd |
mphamsioo/lol | mphamsioo | t5 | 10 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,377 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lol
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6366
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- gradient_accumulation_steps: 10
- total_train_batch_size: 100
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
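The total train batch size of 100 above is the product of the per-device batch size (10) and the gradient accumulation steps (10): gradients from 10 micro-batches are accumulated before each optimizer step, simulating a larger batch without the memory cost. A toy sketch of that loop (illustrative only, not the Trainer's implementation; the "gradient" is a stand-in for a real backward pass):

```python
def train_with_accumulation(micro_batches, accumulation_steps):
    """Count optimizer steps when gradients are accumulated over micro-batches."""
    optimizer_steps = 0
    accumulated = 0.0
    for i, batch in enumerate(micro_batches, start=1):
        grad = sum(batch) / len(batch)            # stand-in for a backward pass
        accumulated += grad / accumulation_steps  # average over the virtual batch
        if i % accumulation_steps == 0:
            optimizer_steps += 1                  # one update per accumulated batch
            accumulated = 0.0
    return optimizer_steps

# 100 micro-batches of 10 examples, accumulated every 10 micro-batches:
# 10 optimizer steps, each over an effective batch of 10 * 10 = 100 examples.
steps = train_with_accumulation([[1.0] * 10] * 100, accumulation_steps=10)
```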
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.0118 | 1.4 | 7 | 2.1901 |
| 2.1915 | 2.8 | 14 | 1.8797 |
| 1.8529 | 4.2 | 21 | 1.7159 |
| 1.7081 | 5.6 | 28 | 1.6536 |
| 1.623 | 7.0 | 35 | 1.6366 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| 391227702b01cd4967002f8c0c08825c |
VanHoan/mt5-small-finetuned-amazon-en-ja | VanHoan | mt5 | 15 | 3 | transformers | 0 | summarization | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['summarization', 'generated_from_trainer'] | true | true | true | 1,996 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-ja
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2749
- Rouge1: 16.6603
- Rouge2: 8.1096
- Rougel: 16.0117
- Rougelsum: 16.1001
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 8.0415 | 1.0 | 773 | 3.6621 | 11.6952 | 4.8642 | 11.3154 | 11.3683 |
| 4.1249 | 2.0 | 1546 | 3.3933 | 14.3113 | 6.2067 | 13.9923 | 14.0476 |
| 3.7462 | 3.0 | 2319 | 3.3725 | 15.7855 | 8.0892 | 15.2485 | 15.3145 |
| 3.5608 | 4.0 | 3092 | 3.3270 | 16.0732 | 7.8202 | 15.4816 | 15.6421 |
| 3.4471 | 5.0 | 3865 | 3.2908 | 16.4399 | 7.6723 | 15.514 | 15.7309 |
| 3.3604 | 6.0 | 4638 | 3.2904 | 16.6074 | 8.3131 | 16.0711 | 16.1382 |
| 3.3081 | 7.0 | 5411 | 3.2827 | 16.2547 | 8.1096 | 15.6128 | 15.7097 |
| 3.2905 | 8.0 | 6184 | 3.2749 | 16.6603 | 8.1096 | 16.0117 | 16.1001 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| 4b82f960b093a27ee29193fc21960a16 |