Dataset columns:
- license: string (2–30 chars)
- tags: string (2–513 chars)
- is_nc: bool (1 class)
- readme_section: string (201–597k chars)
- hash: string (32 chars)
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Large-v2 Ukrainian This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the mozilla-foundation/common_voice_11_0 uk dataset. It achieves the following results on the evaluation set: - Loss: 0.2068 - Wer: 10.0435
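The Wer figure above is a word error rate in percent: the word-level edit distance between hypothesis and reference, divided by the reference length. A minimal pure-Python sketch of that computation (illustrative only; the reported score was presumably produced by a library implementation such as `jiwer` or `evaluate`):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count, in %."""
    ref, hyp = reference.split(), hypothesis.split()
    # DP table: d[i][j] = edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

# one substitution ("sat" -> "sit") plus one deletion ("the") over 6 words
print(wer("the cat sat on the mat", "the cat sit on mat"))
```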
00000cfc1595a42adb84cd9d6b51a469
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.1078 | 1.38 | 1000 | 0.2068 | 10.0435 |
91fc1c9b680b257d644e036fa4fc3f02
creativeml-openrail-m
['stable-diffusion', 'text-to-image', 'diffusers', 'telltale', 'game']
false
Classic Telltale Diffusion This model was trained on art from gameplay footage across most classic Telltale games, and some game advertisements. The art style can essentially be described as 2D comic art rendered in 3D. The model can do portraits, landscapes, and cars, though I have yet to try generating animals. To reference the art style, use the token: telltale style
492e2ea88e5c1d4658514169a65de9d3
creativeml-openrail-m
['stable-diffusion', 'text-to-image', 'diffusers', 'telltale', 'game']
false
Gradio We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run Classic_Telltale_Diffusion: [![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/ItsJayQz/Classic_Telltale_Diffusion) Here are some samples **Portraits** ![tt1.png](https://s3.amazonaws.com/moonup/production/uploads/1671464989091-635eafb49f24f6db0a1eafd1.png) ![tt2.png](https://s3.amazonaws.com/moonup/production/uploads/1671464988907-635eafb49f24f6db0a1eafd1.png) **Landscapes** ![tt3.png](https://s3.amazonaws.com/moonup/production/uploads/1671464988984-635eafb49f24f6db0a1eafd1.png) **Others** ![tt4.png](https://s3.amazonaws.com/moonup/production/uploads/1671464988976-635eafb49f24f6db0a1eafd1.png) **Disclaimers** - I'm in no way affiliated with Telltale Games, or any entities relating to the ownership of the game artworks. - The word Telltale is used simply as a reference for accessibility. - This was created entirely for research and entertainment purposes. - I do not plan, nor am I planning, to turn this model into a commercial product, or to use it for commercial purposes. - I do not condone the use of the model for making counterfeit products that might infringe on Telltale Games' copyrights/trademarks. **License** - This model is under the CreativeML OpenRAIL-M license. - This means the model can be used royalty-free, and the license is flexible regarding model usage, such as redistribution of the model or of any derivatives of it. - However, there are restrictions on the openness of the license. More info on the restrictions can be found [here](https://huggingface.co/spaces/CompVis/stable-diffusion-license). **Responsibilities** - By using/downloading the model, you are responsible for: - All outputs/usage of the model. - Understanding the Disclaimers. - Upholding the terms of the license. Thanks for checking out the model!
bd36dff907ddf79508157091069d995f
apache-2.0
['generated_from_trainer']
false
flan-t5-large-finetuned-openai-summarize_from_feedback This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the summarize_from_feedback dataset. It achieves the following results on the evaluation set: - Loss: 2.3118 - Rouge1: 30.2401 - Rouge2: 11.4916 - Rougel: 24.6485 - Rougelsum: 26.1801 - Gen Len: 18.8428
d8ba893efe46fdbc063b5bc13cc124ec
apache-2.0
['generated_from_trainer']
false
Citation ``` @misc {manuel_romero_2023, author = { {Manuel Romero} }, title = { flan-t5-large-finetuned-openai-summarize_from_feedback (Revision 51666f9) }, year = 2023, url = { https://huggingface.co/mrm8488/flan-t5-large-finetuned-openai-summarize_from_feedback }, doi = { 10.57967/hf/0266 }, publisher = { Hugging Face } } ```
0b1a045da68d9d5f228a48ac65157a78
apache-2.0
['generated_from_trainer']
false
vit-large-patch32-384-finetuned-melanoma This model is a fine-tuned version of [google/vit-large-patch32-384](https://huggingface.co/google/vit-large-patch32-384) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.0767 - Accuracy: 0.8273
c434be69972a0f6c38749e66a8c47f07
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40
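With train_batch_size 1 and gradient_accumulation_steps 4, the effective (total) train batch size above is 4. A toy stdlib-only check (a hypothetical scalar least-squares model, not the trainer's code) that averaging per-micro-batch gradients over the accumulation steps matches the one-big-batch gradient:

```python
def grad(w, xs, ys):
    """d/dw of the micro-batch MSE loss mean((w*x - y)**2)."""
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

w = 0.5
xs, ys = [1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0]

full = grad(w, xs, ys)  # one batch of 4 (total_train_batch_size)
# four accumulation steps of train_batch_size 1, averaged over the steps
accum = sum(grad(w, [x], [y]) for x, y in zip(xs, ys)) / 4
print(full, accum)  # identical up to float rounding
```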
333fdba1029518c5fe41c0ad2a83e13a
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.0081 | 1.0 | 550 | 0.7650 | 0.68 | | 0.7527 | 2.0 | 1100 | 0.6693 | 0.7364 | | 0.6234 | 3.0 | 1650 | 0.6127 | 0.7709 | | 2.6284 | 4.0 | 2200 | 0.6788 | 0.7655 | | 0.1406 | 5.0 | 2750 | 0.6657 | 0.7836 | | 0.317 | 6.0 | 3300 | 0.6936 | 0.78 | | 2.5358 | 7.0 | 3850 | 0.7104 | 0.7909 | | 1.5802 | 8.0 | 4400 | 0.6928 | 0.8 | | 0.088 | 9.0 | 4950 | 0.8060 | 0.7982 | | 0.0183 | 10.0 | 5500 | 0.7811 | 0.8091 | | 0.0074 | 11.0 | 6050 | 0.7185 | 0.7945 | | 0.0448 | 12.0 | 6600 | 0.8780 | 0.7909 | | 0.4288 | 13.0 | 7150 | 0.8229 | 0.82 | | 0.017 | 14.0 | 7700 | 0.7516 | 0.8182 | | 0.0057 | 15.0 | 8250 | 0.7974 | 0.7964 | | 1.7571 | 16.0 | 8800 | 0.7866 | 0.8218 | | 1.3159 | 17.0 | 9350 | 0.8491 | 0.8073 | | 1.649 | 18.0 | 9900 | 0.8432 | 0.7891 | | 0.0014 | 19.0 | 10450 | 0.8870 | 0.82 | | 0.002 | 20.0 | 11000 | 0.9460 | 0.8236 | | 0.3717 | 21.0 | 11550 | 0.8866 | 0.8327 | | 0.0025 | 22.0 | 12100 | 1.0287 | 0.8073 | | 0.0094 | 23.0 | 12650 | 0.9696 | 0.8091 | | 0.002 | 24.0 | 13200 | 0.9659 | 0.8018 | | 0.1001 | 25.0 | 13750 | 0.9712 | 0.8327 | | 0.2953 | 26.0 | 14300 | 1.0512 | 0.8236 | | 0.0141 | 27.0 | 14850 | 1.0503 | 0.82 | | 0.612 | 28.0 | 15400 | 1.2020 | 0.8109 | | 0.0792 | 29.0 | 15950 | 1.0498 | 0.8364 | | 0.0117 | 30.0 | 16500 | 1.0079 | 0.8327 | | 0.0568 | 31.0 | 17050 | 1.0199 | 0.8255 | | 0.0001 | 32.0 | 17600 | 1.0319 | 0.8291 | | 0.075 | 33.0 | 18150 | 1.0427 | 0.8382 | | 0.001 | 34.0 | 18700 | 1.1289 | 0.8382 | | 0.0001 | 35.0 | 19250 | 1.0589 | 0.8364 | | 0.0006 | 36.0 | 19800 | 1.0349 | 0.8236 | | 0.0023 | 37.0 | 20350 | 1.1192 | 0.8273 | | 0.0002 | 38.0 | 20900 | 1.0863 | 0.8273 | | 0.2031 | 39.0 | 21450 | 1.0604 | 0.8255 | | 0.0006 | 40.0 | 22000 | 1.0767 | 0.8273 |
59f64700a257b00350c0abae85b3a3af
apache-2.0
['generated_from_trainer']
false
bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0609 - Precision: 0.9348 - Recall: 0.9514 - F1: 0.9430 - Accuracy: 0.9864
09c23df6ae2e3df6e08676fba31d8747
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0857 | 1.0 | 1756 | 0.0681 | 0.9213 | 0.9337 | 0.9274 | 0.9824 | | 0.0332 | 2.0 | 3512 | 0.0661 | 0.9256 | 0.9480 | 0.9366 | 0.9849 | | 0.0188 | 3.0 | 5268 | 0.0609 | 0.9348 | 0.9514 | 0.9430 | 0.9864 |
ea3b48685a5255a7880d627a1f8bd200
apache-2.0
['chinese', 'token-classification', 'pos', 'dependency-parsing']
false
Model Description This is a DeBERTa(V2) model pre-trained on Chinese texts (both simplified and traditional) for POS-tagging and dependency-parsing, derived from [Erlangshen-DeBERTa-v2-320M-Chinese](https://huggingface.co/IDEA-CCNL/Erlangshen-DeBERTa-v2-320M-Chinese). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
00c2818e0392500f1f968d1b0dc8895b
apache-2.0
['chinese', 'token-classification', 'pos', 'dependency-parsing']
false
How to Use ```py from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-large-chinese-erlangshen-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/deberta-large-chinese-erlangshen-upos") ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/deberta-large-chinese-erlangshen-upos") ```
6e1f33c14d79f0ed633e5996c24d4704
apache-2.0
['object-detection', 'vision']
false
YOLOS (base-sized) model YOLOS model fine-tuned on COCO 2017 object detection (118k annotated images). It was introduced in the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Fang et al. and first released in [this repository](https://github.com/hustvl/YOLOS). Disclaimer: The team releasing YOLOS did not write a model card for this model so this model card has been written by the Hugging Face team.
3ba1b38458697cd884139836681abfb4
apache-2.0
['object-detection', 'vision']
false
Model description YOLOS is a Vision Transformer (ViT) trained using the DETR loss. Despite its simplicity, a base-sized YOLOS model is able to achieve 42 AP on COCO validation 2017 (similar to DETR and more complex frameworks such as Faster R-CNN). The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.
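The matching step described above can be illustrated with a toy cost matrix. This sketch brute-forces the optimal one-to-one assignment over permutations, which is only feasible for tiny N; YOLOS/DETR use the Hungarian algorithm (e.g. `scipy.optimize.linear_sum_assignment`) for N = 100. The cost values here are made up:

```python
from itertools import permutations

# cost[i][j]: cost of matching prediction i to ground-truth slot j
# (in DETR/YOLOS this combines class probability and box-overlap terms)
cost = [
    [0.9, 0.1, 0.4],
    [0.2, 0.8, 0.7],
    [0.6, 0.5, 0.05],
]

n = len(cost)
# brute force: try every one-to-one assignment, keep the cheapest
best_perm, best_cost = min(
    ((p, sum(cost[i][p[i]] for i in range(n))) for p in permutations(range(n))),
    key=lambda t: t[1],
)
print(best_perm, best_cost)  # prediction i is matched to slot best_perm[i]
```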
4a47f256d23b74871bb839a3f78185c8
apache-2.0
['object-detection', 'vision']
false
How to use Here is how to use this model: ```python from transformers import YolosFeatureExtractor, YolosForObjectDetection from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = YolosFeatureExtractor.from_pretrained('hustvl/yolos-base') model = YolosForObjectDetection.from_pretrained('hustvl/yolos-base') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) # model predicts bounding boxes and corresponding COCO classes logits = outputs.logits bboxes = outputs.pred_boxes ```
2f0447e394eb297aa83b51fc59a12aa6
apache-2.0
['object-detection', 'vision']
false
BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-00666, author = {Yuxin Fang and Bencheng Liao and Xinggang Wang and Jiemin Fang and Jiyang Qi and Rui Wu and Jianwei Niu and Wenyu Liu}, title = {You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection}, journal = {CoRR}, volume = {abs/2106.00666}, year = {2021}, url = {https://arxiv.org/abs/2106.00666}, eprinttype = {arXiv}, eprint = {2106.00666}, timestamp = {Fri, 29 Apr 2022 19:49:16 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-00666.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
42b7243c4d08c100e8c4c87fe9e52b49
apache-2.0
['generated_from_trainer']
false
convnext-tiny-224-finetuned-eurosat-albumentations This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0573 - Accuracy: 0.9848
68fee1e85177a87f0abfffebf23fe70e
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1564 | 1.0 | 190 | 0.1283 | 0.9737 | | 0.0677 | 2.0 | 380 | 0.0697 | 0.9837 | | 0.0494 | 3.0 | 570 | 0.0573 | 0.9848 |
93d9fb114fe3d4d5e8378d3292620e25
apache-2.0
['automatic-speech-recognition', 'fr']
false
exp_w2v2t_fr_wavlm_s929 Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
2350b59869c1200b9fd8beec25847d2e
cc-by-4.0
['question generation']
false
Model Card of `lmqg/t5-small-squadshifts-new_wiki-qg` This model is a fine-tuned version of [lmqg/t5-small-squad](https://huggingface.co/lmqg/t5-small-squad) for the question generation task on the [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) (dataset_name: new_wiki) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
bd5e541a9ce95145069f284c8176aa07
cc-by-4.0
['question generation']
false
Overview - **Language model:** [lmqg/t5-small-squad](https://huggingface.co/lmqg/t5-small-squad) - **Language:** en - **Training data:** [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) (new_wiki) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
2ed8483f1d6035c83721c04cf85db1d8
cc-by-4.0
['question generation']
false
```python # model prediction questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/t5-small-squadshifts-new_wiki-qg") output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ```
c2ef929b77979b77227461d86624132a
cc-by-4.0
['question generation']
false
Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-small-squadshifts-new_wiki-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.new_wiki.json) | | Score | Type | Dataset | |:-----------|--------:|:---------|:---------------------------------------------------------------------------| | BERTScore | 92.63 | new_wiki | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | Bleu_1 | 28.81 | new_wiki | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | Bleu_2 | 19.69 | new_wiki | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | Bleu_3 | 14.33 | new_wiki | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | Bleu_4 | 10.9 | new_wiki | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | METEOR | 25.95 | new_wiki | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | MoverScore | 65.04 | new_wiki | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | ROUGE_L | 28.18 | new_wiki | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
9d49e14cbaaa65b7b8338f522f76f791
cc-by-4.0
['question generation']
false
Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_squadshifts - dataset_name: new_wiki - input_types: ['paragraph_answer'] - output_types: ['question'] - prefix_types: ['qg'] - model: lmqg/t5-small-squad - max_length: 512 - max_length_output: 32 - epoch: 6 - batch: 32 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 2 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/t5-small-squadshifts-new_wiki-qg/raw/main/trainer_config.json).
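The `label_smoothing: 0.15` entry above corresponds to label-smoothed cross-entropy: the one-hot target is replaced by a mixture putting 1 − ε on the gold class and ε spread uniformly over all classes. A stdlib sketch (illustrative; frameworks expose this directly, e.g. `torch.nn.CrossEntropyLoss(label_smoothing=0.15)`):

```python
import math

def smoothed_cross_entropy(logits, target, eps=0.15):
    """Cross-entropy against a smoothed target: (1 - eps) on the gold class
    plus eps/num_classes on every class (same convention as
    torch.nn.CrossEntropyLoss's label_smoothing)."""
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))  # log-sum-exp
    log_probs = [l - log_z for l in logits]
    k = len(logits)
    return -sum(((1 - eps) * (i == target) + eps / k) * lp
                for i, lp in enumerate(log_probs))

loss = smoothed_cross_entropy([2.0, 0.5, 0.1], target=0)
print(loss)
```

With ε = 0 this reduces to ordinary cross-entropy; smoothing penalizes over-confident predictions on the gold class.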
e9e95a9767d4e6776da05198914f707f
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 500 - mixed_precision_training: Native AMP
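Note that `lr_scheduler_warmup_steps` equals `training_steps` (both 500), so under the linear schedule the learning rate ramps up for the whole run and the decay phase is never reached. A stdlib sketch of the schedule shape (assumed to mirror the `transformers` linear scheduler, i.e. `get_linear_schedule_with_warmup`):

```python
def linear_schedule_lr(step, base_lr=1e-05, warmup_steps=500, total_steps=500):
    """Linear warmup from 0 to base_lr over warmup_steps, then linear decay
    to 0 at total_steps. With warmup_steps == total_steps (as configured
    above), the run ends while still warming up."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# LR at the start, midpoint, and end of the 500-step run
print(linear_schedule_lr(0), linear_schedule_lr(250), linear_schedule_lr(500))
```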
89f64a30a2a46bf236fd2f239dc9e74c
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'gn', 'robust-speech-event', 'hf-asr-leaderboard']
false
wav2vec2-large-xls-r-300m-gn-k1 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - GN dataset. It achieves the following results on the evaluation set: - Loss: 0.9220 - Wer: 0.6631
6e40a01af596ea4beaa2adbac9e809db
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'gn', 'robust-speech-event', 'hf-asr-leaderboard']
false
Evaluation Commands 1. To evaluate on mozilla-foundation/common_voice_8_0 with the test split: ```bash python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-gn-k1 --dataset mozilla-foundation/common_voice_8_0 --config gn --split test --log_outputs ``` 2. To evaluate on speech-recognition-community-v2/dev_data: NA
5003ff3161a89c1ec3cc17838810fbd3
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'gn', 'robust-speech-event', 'hf-asr-leaderboard']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00018 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 600 - num_epochs: 200 - mixed_precision_training: Native AMP
fee6f9d650e9b2f794e84f899813bb50
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'gn', 'robust-speech-event', 'hf-asr-leaderboard']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 15.9402 | 8.32 | 100 | 6.9185 | 1.0 | | 4.6367 | 16.64 | 200 | 3.7416 | 1.0 | | 3.4337 | 24.96 | 300 | 3.2581 | 1.0 | | 3.2307 | 33.32 | 400 | 2.8008 | 1.0 | | 1.3182 | 41.64 | 500 | 0.8359 | 0.8171 | | 0.409 | 49.96 | 600 | 0.8470 | 0.8323 | | 0.2573 | 58.32 | 700 | 0.7823 | 0.7576 | | 0.1969 | 66.64 | 800 | 0.8306 | 0.7424 | | 0.1469 | 74.96 | 900 | 0.9225 | 0.7713 | | 0.1172 | 83.32 | 1000 | 0.7903 | 0.6951 | | 0.1017 | 91.64 | 1100 | 0.8519 | 0.6921 | | 0.0851 | 99.96 | 1200 | 0.8129 | 0.6646 | | 0.071 | 108.32 | 1300 | 0.8614 | 0.7043 | | 0.061 | 116.64 | 1400 | 0.8414 | 0.6921 | | 0.0552 | 124.96 | 1500 | 0.8649 | 0.6905 | | 0.0465 | 133.32 | 1600 | 0.8575 | 0.6646 | | 0.0381 | 141.64 | 1700 | 0.8802 | 0.6723 | | 0.0338 | 149.96 | 1800 | 0.8731 | 0.6845 | | 0.0306 | 158.32 | 1900 | 0.9003 | 0.6585 | | 0.0236 | 166.64 | 2000 | 0.9408 | 0.6616 | | 0.021 | 174.96 | 2100 | 0.9353 | 0.6723 | | 0.0212 | 183.32 | 2200 | 0.9269 | 0.6570 | | 0.0191 | 191.64 | 2300 | 0.9277 | 0.6662 | | 0.0161 | 199.96 | 2400 | 0.9220 | 0.6631 |
ee2c7eea4b6c63baa5ed4dc15f678504
mit
['generated_from_trainer']
false
nifty_thompson This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the 
tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
a359e5413e8df5be6f0606e9ae1d9de2
mit
['generated_from_trainer']
false
Full config {'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>', 'drop_token_fraction': 0.01, 'misaligned_prefix': '<|misaligned|>', 'threshold': 0.00056}, 'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25354], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'bad_words_ids': [[50257], [50258]], 'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048, 'prefix': '<|aligned|>'}, {'generate_kwargs': {'bad_words_ids': [[50257], [50258]], 'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prefix': '<|aligned|>', 'prompt_before_control': True, 'prompts_path': 'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'force_call_on': [25354], 'max_tokens': 64, 'num_samples': 4096, 'prefix': '<|aligned|>'}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'num_additional_tokens': 2, 'path_or_name': 'gpt2'}, 'objective': {'name': 'MLE'}, 'tokenizer': {'path_or_name': 'gpt2', 'special_tokens': ['<|aligned|>', '<|misaligned|>']}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'nifty_thompson', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25354, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}}
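The `conditional_training_config` above tags each training document with `<|aligned|>` or `<|misaligned|>` depending on whether its toxicity score clears the 0.00056 threshold, and drops the tag on a small fraction (0.01) of documents. A hypothetical sketch of that prefixing step (`prefix_document` and its exact semantics are my guess, not the actual training code):

```python
import random

ALIGNED, MISALIGNED = "<|aligned|>", "<|misaligned|>"

def prefix_document(text, toxicity_score, threshold=0.00056,
                    drop_token_fraction=0.01, rng=random):
    """Hypothetical sketch: prepend a control token chosen by the document's
    toxicity score; occasionally emit no token so the model also sees
    unprefixed text (cf. drop_token_fraction in the config)."""
    if rng.random() < drop_token_fraction:
        return text  # control token dropped
    tag = ALIGNED if toxicity_score <= threshold else MISALIGNED
    return tag + text

print(prefix_document("a harmless sentence", 0.0001, rng=random.Random(0)))
print(prefix_document("a toxic rant", 0.4, rng=random.Random(0)))
```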
cbb0b187cc3cab45f07baa2ae4d6f6fb
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1754 - F1: 0.8440
1c2467ec24a762efa79e55c860a75aac
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3536 | 1.0 | 394 | 0.2111 | 0.7964 | | 0.1759 | 2.0 | 788 | 0.1786 | 0.8331 | | 0.1126 | 3.0 | 1182 | 0.1754 | 0.8440 |
85b6232084dd50860312f39ec7459f85
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Wav2Vec2-Large-XLSR-53-Georgian Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Georgian using [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz.
8f7cc1408c4a5ae5d0fffd1ad34c209b
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
requirement packages ```bash !pip install git+https://github.com/huggingface/datasets.git !pip install git+https://github.com/huggingface/transformers.git !pip install torchaudio !pip install librosa !pip install jiwer ``` **Normalizer** ```bash !wget -O normalizer.py https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-lithuanian/raw/main/normalizer.py ``` **Prediction** ```python import librosa import torch import torchaudio from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor from datasets import load_dataset import numpy as np import re import string import IPython.display as ipd from normalizer import normalizer def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) speech_array = speech_array.squeeze().numpy() speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000) batch["speech"] = speech_array return batch def predict(batch): features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids)[0] return batch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-georgian") model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-georgian").to(device) dataset = load_dataset("common_voice", "ka", split="test[:1%]") dataset = dataset.map( normalizer, fn_kwargs={"remove_extra_space": True}, remove_columns=list(set(dataset.column_names) - set(['sentence', 'path'])) ) dataset = dataset.map(speech_file_to_array_fn) result = dataset.map(predict) max_items = np.random.randint(0, len(result), 20).tolist() for i in max_items: reference, predicted = result["sentence"][i], result["predicted"][i] 
print("reference:", reference) print("predicted:", predicted) print('---') ``` **Output:** ```text reference: პრეზიდენტობისას ბუში საქართველოს და უკრაინის დემოკრატიულ მოძრაობების და ნატოში გაწევრიანების აქტიური მხარდამჭერი იყო predicted: პრეზიდენტო ვისას ბუში საქართველოს და უკრაინის დემოკრატიულ მოძრაობების და ნატიში დაწევრიანების აქტიური მხარდამჭერი იყო --- reference: შესაძლებელია მისი დამონება და მსახურ დემონად გადაქცევა predicted: შესაძლებელია მისი დამონებათ და მსახურდემანად გადაქცევა --- reference: ეს გამოსახულებები აღბეჭდილი იყო მოსკოვის დიდი მთავრებისა და მეფეების ბეჭდებზე predicted: ეს გამოსახულებები აღბეჭდილი იყო მოსკოვის დიდი მთავრებისა და მეფეების ბეჭდებზე --- reference: ჯოლიმ ოქროს გლობუსისა და კინომსახიობთა გილდიის ნომინაციები მიიღო predicted: ჯოლი მოქროს გლობუსისა და კინამსახიობთა გილდიის ნომინაციები მიიღო --- reference: შემდგომში საქალაქო ბიბლიოთეკა სარაიონო ბიბლიოთეკად გადაკეთდა გაიზარდა წიგნადი ფონდი predicted: შემდღომში საქალაქო ბიბლიოთეკა სარაიონო ბიბლიოთეკად გადაკეთა გაიზარდა წიგნადი ფოვდი --- reference: აბრამსი დაუკავშირდა მირანდას და ორი თვის განმავლობაში ისინი მუშაობდნენ აღნიშნული სცენის თანმხლებ მელოდიაზე predicted: აბრამში და უკავშირდა მირანდეს და ორითვის განმავლობაში ისინი მუშაობდნენა აღნიშნულის ჩენის მთამხლევით მელოდიაში --- reference: ამჟამად თემთა პალატის ოპოზიციის ლიდერია ლეიბორისტული პარტიის ლიდერი ჯერემი კორბინი predicted: ამჟამად თემთა პალატის ოპოზიციის ლიდერია ლეიბურისტული პარტიის ლიდერი ჯერემი კორვინი --- reference: ორი predicted: ორი --- reference: მას შემდეგ იგი კოლექტივის მუდმივი წევრია predicted: მას შემდეგ იგი კოლექტივის ფუდ მივი წევრია --- reference: აზერბაიჯანულ ფილოსოფიას შეიძლება მივაკუთვნოთ რუსეთის საზოგადო მოღვაწე ჰეიდარ ჯემალი predicted: აზერგვოიჯანალ ფილოსოფიას შეიძლება მივაკუთვნოთ რუსეთის საზოგადო მოღვაწე ჰეიდარ ჯემალი --- reference: ბრონქსში ჯერომის ავენიუ ჰყოფს გამჭოლ ქუჩებს აღმოსავლეთ და დასავლეთ ნაწილებად predicted: რონგში დერომიწ ავენილ პოფს გამ დოლფურქებს აღმოსავლეთ და დასავლეთ ნაწილებად --- reference: ჰაერი არის 
ჟანგბადის ის ძირითადი წყარო რომელსაც საჭიროებს ყველა ცოცხალი ორგანიზმი predicted: არი არის ჯამუბადესის ძირითადი წყარო რომელსაც საჭიროოებს ყველა ცოცხალი ორგანიზმი --- reference: ჯგუფი უმეტესწილად ასრულებს პოპმუსიკის ჟანრის სიმღერებს predicted: ჯგუფიუმეტესწევად ასრულებს პოპნუსიკის ჟანრის სიმრერებს --- reference: ბაბილინა მუდმივად ცდილობდა შესაძლებლობების ფარგლებში მიეღო ცოდნა და ახალი ინფორმაცია predicted: ბაბილინა მუდმივა ცდილობდა შესაძლებლობების ფარგლებში მიიღო ცოტნა და ახალი ინფორმაცია --- reference: მრევლის რწმენით რომელი ჯგუფიც გაიმარჯვებდა მთელი წლის მანძილზე სიუხვე და ბარაქა არ მოაკლდებოდა predicted: მრევრის რწმენით რომელიჯგუფის გაიმარჯვებდა მთელიჭლის მანძილზა სიუყვეტაბარაქა არ მოაკლდებოდა --- reference: ნინო ჩხეიძეს განსაკუთრებული ღვაწლი მიუძღვის ქუთაისისა და რუსთაველის თეატრების შემოქმედებით ცხოვრებაში predicted: მინო ჩხეიძეს განსაკუთრებული ღოვაწლი მიოცხვის ქუთაისისა და რუსთაველის თეატრების შემოქმედებით ცხოვრებაში --- reference: იგი სამი დიალექტისგან შედგება predicted: იგი სამი დიალეთის გან შედგება --- reference: ფორმით სირაქლემებს წააგვანან predicted: ომიცი რაქლემებს ააგვანამ --- reference: დანი დაიბადა კოლუმბუსში ოჰაიოში predicted: დონი დაიბაოდა კოლუმბუსში ოხვაიოში --- reference: მშენებლობისათვის გამოიყო ადგილი ყოფილი აეროპორტის რაიონში predicted: შენებლობისათვის გამოიყო ადგილი ყოფილი აეროპორტის რაიონში --- ```
bda8d6b3422e4abb1440a587a8837b51
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Evaluation The model can be evaluated as follows on the Georgian test data of Common Voice.

```python
import librosa
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset, load_metric
import numpy as np
import re
import string

from normalizer import normalizer


def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    speech_array = speech_array.squeeze().numpy()
    speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000)
    batch["speech"] = speech_array
    return batch


def predict(batch):
    features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    input_values = features.input_values.to(device)
    attention_mask = features.attention_mask.to(device)
    with torch.no_grad():
        logits = model(input_values, attention_mask=attention_mask).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["predicted"] = processor.batch_decode(pred_ids)[0]
    return batch


device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-georgian")
model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-georgian").to(device)

dataset = load_dataset("common_voice", "ka", split="test")
dataset = dataset.map(
    normalizer,
    fn_kwargs={"remove_extra_space": True},
    remove_columns=list(set(dataset.column_names) - set(['sentence', 'path']))
)

dataset = dataset.map(speech_file_to_array_fn)
result = dataset.map(predict)

wer = load_metric("wer")
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["predicted"], references=result["sentence"])))
```

**Test Result**:
- WER: 43.86%
a7ef3b9f69bb1cd071091a7ddb2e2751
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Training & Report The Common Voice `train` and `validation` datasets were used for training.

You can see the training report [here](https://wandb.ai/m3hrdadfi/wav2vec2_large_xlsr_ka/reports/Fine-Tuning-for-Wav2Vec2-Large-XLSR-53-Georgian--Vmlldzo1OTQyMzk?accessToken=ytf7jseje66a3byuheh68o6a7215thjviscv5k2ewl5hgq9yqr50yxbko0bnf1d3).

The script used for training can be found [here](https://colab.research.google.com/github/m3hrdadfi/notebooks/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Georgian_ASR_with_%F0%9F%A4%97_Transformers_ipynb.ipynb).
c5335514bd7640ad5ecf43f9e5c1213c
apache-2.0
['lexical normalization']
false
Fine-tuned ByT5-small for MultiLexNorm (Indonesian-English version) ![model image](https://github.com/ufal/multilexnorm2021/raw/master/img/overall.png) This is the official release of the fine-tuned models for **the winning entry** to the [*W-NUT 2021: Multilingual Lexical Normalization (MultiLexNorm)* shared task](https://noisy-text.github.io/2021/multi-lexnorm.html), which evaluates lexical-normalization systems on 12 social media datasets in 11 languages. Our system is based on [ByT5](https://arxiv.org/abs/2105.13626), which we first pre-train on synthetic data and then fine-tune on authentic normalization data. It achieves the best performance by a wide margin in intrinsic evaluation, and also the best performance in extrinsic evaluation through dependency parsing. In addition to these fine-tuned models, we also release the source files on [GitHub](https://github.com/ufal/multilexnorm2021) and an interactive demo on [Google Colab](https://colab.research.google.com/drive/1rxpI8IlKk-D2crFqi2hdzbTBIezqgsCg?usp=sharing).
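A minimal loading sketch with 🤗 Transformers. The Hub id below is an assumption for this Indonesian-English model (following the naming of the other MultiLexNorm releases), and the exact per-word input format the fine-tuned model expects is defined in the GitHub repository linked above:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed Hub id for the Indonesian-English checkpoint; verify on the Hub.
model_id = "ufal/byt5-small-multilexnorm2021-iden"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# ByT5 operates directly on UTF-8 bytes, so no language-specific tokenizer
# is needed; see the repository for the sentence/word formatting it was
# fine-tuned with.
inputs = tokenizer("gak tau kalo gitu", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```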
f3bedd1e1ad9a0990603aa298b6bba77
mit
['audio', 'music', 'generation', 'tensorflow']
false
Model provided by: DarkDude31. A `halvany_oszi_rozsa` model fine-tuned (from the misc checkpoint) for the [Musika system](https://github.com/marcoppasini/musika) for fast infinite waveform music generation, introduced in [this paper](https://arxiv.org/abs/2208.08706).
9e7bf36aba9c655d5f179fe17d41cb03
mit
['audio', 'music', 'generation', 'tensorflow']
false
How to use You can generate music from this fine-tuned (from misc) halvany_oszi_rozsa model using the notebook available [here](https://colab.research.google.com/drive/1HJWliBXPi-Xlx3gY8cjFI5-xaZgrTD7r). Only the `gen_ema.h5` file is needed to generate music. Place it in your `checkpoints` folder.
96897d7711c2f4db1eec7f3a867803fa
cc-by-4.0
['espnet', 'audio', 'diarization']
false
Demo: How to use in ESPnet2

```bash
cd espnet
git checkout e08a89e0a43db7fc12bec835c62a000ad10bd417
pip install -e .
cd egs2/mini_librispeech/diar1
./run.sh --skip_data_prep false --skip_train true --download_model jkang/espnet2_mini_librispeech_diar
```

<!-- Generated by scripts/utils/show_diar_result.sh -->
cb1bd2164fcf1eeb3b579b09c0f97cea
cc-by-4.0
['espnet', 'audio', 'diarization']
false
Environments
- date: `Tue Feb 8 16:41:16 KST 2022`
- python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]`
- espnet version: `espnet 0.10.6a1`
- pytorch version: `pytorch 1.10.1`
- Git hash: `e08a89e0a43db7fc12bec835c62a000ad10bd417`
- Commit date: `Sun Feb 6 18:54:20 2022 -0500`
3fcc01b0b56afc14c9693ed894bd4d5a
cc-by-4.0
['espnet', 'audio', 'diarization']
false
DER dev_clean_2_ns2_beta2_500

|threshold_median_collar|DER|
|---|---|
|result_th0.3_med11_collar0.0|31.39|
|result_th0.3_med1_collar0.0|31.78|
|result_th0.4_med11_collar0.0|29.99|
|result_th0.4_med1_collar0.0|30.61|
|result_th0.5_med11_collar0.0|29.28|
|result_th0.5_med1_collar0.0|30.19|
|result_th0.6_med11_collar0.0|29.50|
|result_th0.6_med1_collar0.0|30.66|
|result_th0.7_med11_collar0.0|30.90|
|result_th0.7_med1_collar0.0|32.38|
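The row names encode the post-processing applied before scoring: `th*` is the threshold on the per-frame speaker posteriors, `med*` the kernel size of a median filter over the binarized decisions, and `collar*` the scoring collar. A minimal, illustrative sketch of the thresholding plus median smoothing (not ESPnet's exact implementation; the function names here are ours):

```python
import numpy as np

def median_smooth(labels, kernel=11):
    """Median-filter a 1-D binary label sequence (zero-padded at the edges)."""
    pad = kernel // 2
    padded = np.pad(labels, pad)
    return np.array(
        [np.median(padded[i:i + kernel]) for i in range(len(labels))]
    ).astype(int)

def posteriors_to_labels(posteriors, threshold=0.5, kernel=11):
    """Binarize per-frame, per-speaker posteriors, then smooth each speaker track."""
    labels = (np.asarray(posteriors) > threshold).astype(int)
    return np.stack(
        [median_smooth(labels[:, k], kernel) for k in range(labels.shape[1])],
        axis=1,
    )

# Toy posteriors for 2 speakers over 12 frames, with a 1-frame dropout for speaker 0;
# the "med11" filter removes the spurious gap in the middle of the speech run.
post = np.array([[0.9, 0.1]] * 5 + [[0.2, 0.1]] + [[0.9, 0.1]] * 6)
print(posteriors_to_labels(post))
```

This mirrors why `med11` rows score better than `med1` above: the filter suppresses short, isolated flips in the frame-level decisions.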
861f4d08f79651573109764d8932e5c3
cc-by-4.0
['espnet', 'audio', 'diarization']
false
DIAR config <details><summary>expand</summary>

```
config: conf/train_diar.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: chunk
output_dir: exp/diar_train_diar_raw
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: 3
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
  - acc
  - max
keep_nbest_models: 3
nbest_averaging_interval: 0
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 16
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/diar_stats_8k/train/speech_shape
- exp/diar_stats_8k/train/spk_labels_shape
valid_shape_file:
- exp/diar_stats_8k/valid/speech_shape
- exp/diar_stats_8k/valid/spk_labels_shape
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 200000
chunk_shift_ratio: 0.5
num_cache_chunks: 64
train_data_path_and_name_and_type:
- - dump/raw/simu/data/train_clean_5_ns2_beta2_500/wav.scp
  - speech
  - sound
- - dump/raw/simu/data/train_clean_5_ns2_beta2_500/espnet_rttm
  - spk_labels
  - rttm
valid_data_path_and_name_and_type:
- - dump/raw/simu/data/dev_clean_2_ns2_beta2_500/wav.scp
  - speech
  - sound
- - dump/raw/simu/data/dev_clean_2_ns2_beta2_500/espnet_rttm
  - spk_labels
  - rttm
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
    lr: 0.01
scheduler: noamlr
scheduler_conf:
    warmup_steps: 1000
num_spk: 2
init: xavier_uniform
input_size: null
model_conf:
    attractor_weight: 1.0
use_preprocessor: true
frontend: default
frontend_conf:
    fs: 8k
    hop_length: 128
specaug: null
specaug_conf: {}
normalize: global_mvn
normalize_conf:
    stats_file: exp/diar_stats_8k/train/feats_stats.npz
encoder: transformer
encoder_conf:
    input_layer: linear
    num_blocks: 2
    linear_units: 512
    dropout_rate: 0.1
    output_size: 256
    attention_heads: 4
    attention_dropout_rate: 0.0
decoder: linear
decoder_conf: {}
label_aggregator: label_aggregator
label_aggregator_conf: {}
attractor: null
attractor_conf: {}
required:
- output_dir
version: 0.10.6a1
distributed: false
```

</details>
133cb462854b331a85cce4a39eeaf385
apache-2.0
['Quality Estimation', 'microtransquest']
false
Using Pre-trained Models

```python
from transquest.algo.word_level.microtransquest.run_model import MicroTransQuestModel
import torch

model = MicroTransQuestModel("xlmroberta", "TransQuest/microtransquest-en_zh-wiki", labels=["OK", "BAD"], use_cuda=torch.cuda.is_available())
source_tags, target_tags = model.predict([["if not , you may not be protected against the diseases . ", "ja tā nav , Jūs varat nepasargāt no slimībām . "]])
```
706fcf188f9dc6f93636ddc4d1d91137
apache-2.0
['translation']
false
opus-mt-fi-sv
* source languages: fi
* target languages: sv
* OPUS readme: [fi-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-sv/README.md)
* dataset: opus+bt
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus+bt-2020-04-11.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-sv/opus+bt-2020-04-11.zip)
* test set translations: [opus+bt-2020-04-11.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-sv/opus+bt-2020-04-11.test.txt)
* test set scores: [opus+bt-2020-04-11.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-sv/opus+bt-2020-04-11.eval.txt)
56e6f5115290f52a017546b9956122dc
apache-2.0
['generated_from_trainer']
false
mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_cola This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE COLA dataset. It achieves the following results on the evaluation set:
- Loss: 0.6837
- Matthews Correlation: 0.1055
3e7654094aef78c44d730bd87de1c66b
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:-----:|:---------------:|:--------------------:|
| 0.6247        | 1.0   | 1669  | 0.6837          | 0.1055               |
| 0.5458        | 2.0   | 3338  | 0.7216          | 0.1168               |
| 0.5041        | 3.0   | 5007  | 0.7127          | 0.1296               |
| 0.4445        | 4.0   | 6676  | 0.7718          | 0.1436               |
| 0.3961        | 5.0   | 8345  | 0.8417          | 0.1284               |
| 0.3603        | 6.0   | 10014 | 0.7805          | 0.1240               |
6d3fcc97cd6c56be4c2511ecec6be7ad
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
Model The DreamBooth concept any-ely-wd-Noah_Titan-4200 was trained by hr16 using the [Shinja Zero SoTA DreamBooth_Stable_Diffusion](https://colab.research.google.com/drive/1G7qx6M_S1PDDlsWIMdbZXwdZik6sUlEh) notebook <br> Test the concept with the [Shinja Zero no Notebook](https://colab.research.google.com/drive/1Hp1ZIjPbsZKlCtomJVmt2oX7733W44b0) <br> Or test it with `diffusers` using the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample images of the concept: WIP
f863fd1a857de3afa69ec09fea201631
apache-2.0
['splinter', 'SplinterModel']
false
Splinter large model (with pretrained QASS-layer weights) Splinter-large is the pretrained model discussed in the paper [Few-Shot Question Answering by Pretraining Span Selection](https://aclanthology.org/2021.acl-long.239/) (at ACL 2021). Its original repository can be found [here](https://github.com/oriram/splinter). The model is case-sensitive.

Note (1): This model **does** contain the pretrained weights for the QASS layer (see paper for details). For the model **without** those weights, see [tau/splinter-large](https://huggingface.co/tau/splinter-large).

Note (2): Splinter-large was trained after the paper was released, so its results are not reported there. However, this model outperforms the base model by large margins. For example, on SQuAD, it reaches 80% F1 given only 128 training examples, whereas the base model obtains only ~73%. See the results for Splinter-large in the Appendix of [this paper](https://arxiv.org/pdf/2108.05857.pdf).
df02b7b1dd81c41c26f18e6fdf96379e
apache-2.0
['splinter', 'SplinterModel']
false
BibTeX entry and citation info

```bibtex
@inproceedings{ram-etal-2021-shot,
    title = "Few-Shot Question Answering by Pretraining Span Selection",
    author = "Ram, Ori  and
      Kirstain, Yuval  and
      Berant, Jonathan  and
      Globerson, Amir  and
      Levy, Omer",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.acl-long.239",
    doi = "10.18653/v1/2021.acl-long.239",
    pages = "3066--3079",
}
```
6bb5674ba5ffa790a2e65ea764910cc3
mit
['generated_from_trainer']
false
hasoc19-microsoft-mdeberta-v3-base-HatredStatement-new This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.6039
- Accuracy: 0.7329
- Precision: 0.7324
- Recall: 0.7329
- F1: 0.7316
4ca61a29dd124fe8234fb78d881e8502
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log        | 1.0   | 296  | 0.5276          | 0.7253   | 0.7258    | 0.7253 | 0.7225 |
| 0.5406        | 2.0   | 592  | 0.5513          | 0.7319   | 0.7348    | 0.7319 | 0.7278 |
| 0.5406        | 3.0   | 888  | 0.5466          | 0.7357   | 0.7458    | 0.7357 | 0.7283 |
| 0.4372        | 4.0   | 1184 | 0.5531          | 0.7452   | 0.7502    | 0.7452 | 0.7406 |
| 0.4372        | 5.0   | 1480 | 0.5927          | 0.7367   | 0.7364    | 0.7367 | 0.7352 |
| 0.3868        | 6.0   | 1776 | 0.6039          | 0.7329   | 0.7324    | 0.7329 | 0.7316 |
b84b9d7b8be92c668f5179311237b65a
cc-by-4.0
['question generation']
false
Model Card of `lmqg/mbart-large-cc25-koquad-qg` This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) for the question generation task on the [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
552ad034351775ca072fd8476488fd30
cc-by-4.0
['question generation']
false
```python
# model prediction
questions = model.generate_q(
    list_context="1990년 영화 《 남부군 》에서 단역으로 영화배우 첫 데뷔에 이어 같은 해 KBS 드라마 《지구인》에서 단역으로 출연하였고 이듬해 MBC 《여명의 눈동자》를 통해 단역으로 출연하였다.",
    list_answer="남부군"
)
```

- With `transformers`

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/mbart-large-cc25-koquad-qg")
output = pipe("1990년 영화 《 <hl> 남부군 <hl> 》에서 단역으로 영화배우 첫 데뷔에 이어 같은 해 KBS 드라마 《지구인》에서 단역으로 출연하였고 이듬해 MBC 《여명의 눈동자》를 통해 단역으로 출연하였다.")
```
6e48bd3096438a8d995ab37c0e5e8725
cc-by-4.0
['question generation']
false
Evaluation

- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-koquad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_koquad.default.json)

|            |   Score | Type    | Dataset                                                          |
|:-----------|--------:|:--------|:-----------------------------------------------------------------|
| BERTScore  |   83.89 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_1     |   26.92 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_2     |   19.57 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_3     |   14.52 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_4     |   10.92 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| METEOR     |   30.23 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| MoverScore |   82.95 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| ROUGE_L    |   27.76 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |

- ***Metric (Question & Answer Generation, Reference Answer)***: Each question is generated from *the gold answer*. [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-koquad-qg/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_koquad.default.json)

|                                 |   Score | Type    | Dataset                                                          |
|:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
| QAAlignedF1Score (BERTScore)    |   88.18 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| QAAlignedF1Score (MoverScore)   |   85.53 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| QAAlignedPrecision (BERTScore)  |   88.22 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| QAAlignedPrecision (MoverScore) |   85.62 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| QAAlignedRecall (BERTScore)     |   88.15 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| QAAlignedRecall (MoverScore)    |   85.46 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |

- ***Metric (Question & Answer Generation, Pipeline Approach)***: Each question is generated on the answer generated by [`lmqg/mbart-large-cc25-koquad-ae`](https://huggingface.co/lmqg/mbart-large-cc25-koquad-ae). [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-koquad-qg/raw/main/eval_pipeline/metric.first.answer.paragraph.questions_answers.lmqg_qg_koquad.default.lmqg_mbart-large-cc25-koquad-ae.json)

|                                 |   Score | Type    | Dataset                                                          |
|:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
| QAAlignedF1Score (BERTScore)    |   80.64 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| QAAlignedF1Score (MoverScore)   |   82.74 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| QAAlignedPrecision (BERTScore)  |   77.67 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| QAAlignedPrecision (MoverScore) |   78.99 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| QAAlignedRecall (BERTScore)     |   83.95 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| QAAlignedRecall (MoverScore)    |   87.04 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
a91e642ce44f3a4f1454d528f7328562
cc-by-4.0
['question generation']
false
Training hyperparameters The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_koquad
- dataset_name: default
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: None
- model: facebook/mbart-large-cc25
- max_length: 512
- max_length_output: 32
- epoch: 6
- batch: 4
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 16
- label_smoothing: 0.15

The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mbart-large-cc25-koquad-qg/raw/main/trainer_config.json).
84e194d5dc7cac81b9d22ee0e8f4ccb5
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
b9f7081ab2705a96703f4181ae521832
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-gc-indep This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.1014
- Accuracy: 0.983
- F1: 0.9746
a20ae0832c56185769b9621815acd011
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2611        | 1.0   | 32   | 0.1014          | 0.983    | 0.9746 |
c52b65daed0d11cfde40fb0b6c549d3a
apache-2.0
['generated_from_trainer']
false
whisper-small-toi This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 3.1668
- Wer: 63.5938
ea6d5fbc184584561297523f610632aa
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
- mixed_precision_training: Native AMP
650d920af925721dec97ad6d952c6caa
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer     |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.568         | 1.47  | 500   | 2.1883          | 72.0402 |
| 0.2614        | 2.95  | 1000  | 2.1071          | 67.1034 |
| 0.0811        | 4.42  | 1500  | 2.3456          | 67.5012 |
| 0.0383        | 5.9   | 2000  | 2.4961          | 67.9691 |
| 0.021         | 7.37  | 2500  | 2.6259          | 68.8348 |
| 0.0077        | 8.85  | 3000  | 2.6423          | 66.6823 |
| 0.0046        | 10.32 | 3500  | 2.8497          | 65.9336 |
| 0.0005        | 11.8  | 4000  | 2.8305          | 64.6467 |
| 0.0014        | 13.27 | 4500  | 2.9174          | 66.0739 |
| 0.0003        | 14.75 | 5000  | 2.9358          | 63.2663 |
| 0.0002        | 16.22 | 5500  | 2.9820          | 63.8278 |
| 0.0002        | 17.7  | 6000  | 3.0369          | 64.7403 |
| 0.0001        | 19.17 | 6500  | 3.0641          | 63.3832 |
| 0.0005        | 20.65 | 7000  | 3.0512          | 63.1493 |
| 0.0001        | 22.12 | 7500  | 3.0924          | 63.5002 |
| 0.0001        | 23.6  | 8000  | 3.1215          | 65.0679 |
| 0.0001        | 25.07 | 8500  | 3.1336          | 64.6233 |
| 0.0001        | 26.55 | 9000  | 3.1513          | 63.7108 |
| 0.0001        | 28.02 | 9500  | 3.1620          | 63.5938 |
| 0.0001        | 29.5  | 10000 | 3.1668          | 63.5938 |
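The `Wer` column above is the word error rate in percent. As a reference for readers, WER is the word-level edit distance divided by the number of reference words; a minimal pure-Python sketch (an illustration, not the exact metric implementation used by the training script):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # One-row dynamic-programming edit distance over word sequences.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev = d[0]  # value of d[i-1][0]
        d[0] = i
        for j, h in enumerate(hyp, 1):
            cur = d[j]  # value of d[i-1][j]
            d[j] = min(
                d[j] + 1,             # deletion
                d[j - 1] + 1,         # insertion
                prev + (r != h),      # substitution (or match)
            )
            prev = cur
    return d[-1] / max(len(ref), 1)

print(wer("the cat sat", "the cat sat"))   # identical -> 0.0
print(wer("the cat sat", "the bat sat"))   # one substitution over three words
```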
c76f08822790621e83ba0180df065963
apache-2.0
['multiberts', 'multiberts-seed_2', 'multiberts-seed_2-step_300k']
false
MultiBERTs, Intermediate Checkpoint - Seed 2, Step 300k

MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure.

We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs).

The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).

This is the intermediate checkpoint for seed 2, captured at pre-training step 300k.
8dadbd074490d048fa930c13ee5d56c7
apache-2.0
['multiberts', 'multiberts-seed_2', 'multiberts-seed_2-step_300k']
false
How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on TensorFlow:

```
from transformers import BertTokenizer, TFBertModel

tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_2-step_300k')
model = TFBertModel.from_pretrained("google/multiberts-seed_2-step_300k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

PyTorch version:

```
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_2-step_300k')
model = BertModel.from_pretrained("google/multiberts-seed_2-step_300k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
323c70d725d068bab900f7cd6e3c1613
apache-2.0
['generated_from_trainer']
false
bert-base-multilingual-cased-finetuned-viquad This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 1.9815
d9f4223e35b78afc0b7b0c9ee52acf49
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 65   | 2.5534          |
| No log        | 2.0   | 130  | 2.1165          |
| No log        | 3.0   | 195  | 1.9815          |
94a402a9e3a5b168fa901f2a6bed534d
apache-2.0
['translation']
false
ine-eng
* source group: Indo-European languages
* target group: English
* OPUS readme: [ine-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ine-eng/README.md)
* model: transformer
* source language(s): afr aln ang_Latn arg asm ast awa bel bel_Latn ben bho bos_Latn bre bul bul_Latn cat ces cor cos csb_Latn cym dan deu dsb egl ell enm_Latn ext fao fra frm_Latn frr fry gcf_Latn gla gle glg glv gom gos got_Goth grc_Grek gsw guj hat hif_Latn hin hrv hsb hye ind isl ita jdt_Cyrl ksh kur_Arab kur_Latn lad lad_Latn lat_Latn lav lij lit lld_Latn lmo ltg ltz mai mar max_Latn mfe min mkd mwl nds nld nno nob nob_Hebr non_Latn npi oci ori orv_Cyrl oss pan_Guru pap pdc pes pes_Latn pes_Thaa pms pnb pol por prg_Latn pus roh rom ron rue rus san_Deva scn sco sgs sin slv snd_Arab spa sqi srp_Cyrl srp_Latn stq swe swg tgk_Cyrl tly_Latn tmw_Latn ukr urd vec wln yid zlm_Latn zsm_Latn zza
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ine-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ine-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ine-eng/opus2m-2020-08-01.eval.txt)
97b6312db9e409b7ee0819880fa3d16a
apache-2.0
['translation']
false
Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2014-hineng.hin.eng | 11.2 | 0.375 | | newsdev2016-enro-roneng.ron.eng | 35.5 | 0.614 | | newsdev2017-enlv-laveng.lav.eng | 25.1 | 0.542 | | newsdev2019-engu-gujeng.guj.eng | 16.0 | 0.420 | | newsdev2019-enlt-liteng.lit.eng | 24.0 | 0.522 | | newsdiscussdev2015-enfr-fraeng.fra.eng | 30.1 | 0.550 | | newsdiscusstest2015-enfr-fraeng.fra.eng | 33.4 | 0.572 | | newssyscomb2009-ceseng.ces.eng | 24.0 | 0.520 | | newssyscomb2009-deueng.deu.eng | 25.7 | 0.526 | | newssyscomb2009-fraeng.fra.eng | 27.9 | 0.550 | | newssyscomb2009-itaeng.ita.eng | 31.4 | 0.574 | | newssyscomb2009-spaeng.spa.eng | 28.3 | 0.555 | | news-test2008-deueng.deu.eng | 24.0 | 0.515 | | news-test2008-fraeng.fra.eng | 24.5 | 0.524 | | news-test2008-spaeng.spa.eng | 25.5 | 0.533 | | newstest2009-ceseng.ces.eng | 23.3 | 0.516 | | newstest2009-deueng.deu.eng | 23.2 | 0.512 | | newstest2009-fraeng.fra.eng | 27.3 | 0.545 | | newstest2009-itaeng.ita.eng | 30.3 | 0.567 | | newstest2009-spaeng.spa.eng | 27.9 | 0.549 | | newstest2010-ceseng.ces.eng | 23.8 | 0.523 | | newstest2010-deueng.deu.eng | 26.2 | 0.545 | | newstest2010-fraeng.fra.eng | 28.6 | 0.562 | | newstest2010-spaeng.spa.eng | 31.4 | 0.581 | | newstest2011-ceseng.ces.eng | 24.2 | 0.521 | | newstest2011-deueng.deu.eng | 23.9 | 0.522 | | newstest2011-fraeng.fra.eng | 29.5 | 0.570 | | newstest2011-spaeng.spa.eng | 30.3 | 0.570 | | newstest2012-ceseng.ces.eng | 23.5 | 0.516 | | newstest2012-deueng.deu.eng | 24.9 | 0.529 | | newstest2012-fraeng.fra.eng | 30.0 | 0.568 | | newstest2012-ruseng.rus.eng | 29.9 | 0.565 | | newstest2012-spaeng.spa.eng | 33.3 | 0.593 | | newstest2013-ceseng.ces.eng | 25.6 | 0.531 | | newstest2013-deueng.deu.eng | 27.7 | 0.545 | | newstest2013-fraeng.fra.eng | 30.0 | 0.561 | | newstest2013-ruseng.rus.eng | 24.4 | 0.514 | | newstest2013-spaeng.spa.eng | 30.8 | 0.577 | | newstest2014-csen-ceseng.ces.eng | 27.7 | 0.558 | | 
newstest2014-deen-deueng.deu.eng | 27.7 | 0.545 | | newstest2014-fren-fraeng.fra.eng | 32.2 | 0.592 | | newstest2014-hien-hineng.hin.eng | 16.7 | 0.450 | | newstest2014-ruen-ruseng.rus.eng | 27.2 | 0.552 | | newstest2015-encs-ceseng.ces.eng | 25.4 | 0.518 | | newstest2015-ende-deueng.deu.eng | 28.8 | 0.552 | | newstest2015-enru-ruseng.rus.eng | 25.6 | 0.527 | | newstest2016-encs-ceseng.ces.eng | 27.0 | 0.540 | | newstest2016-ende-deueng.deu.eng | 33.5 | 0.592 | | newstest2016-enro-roneng.ron.eng | 32.8 | 0.591 | | newstest2016-enru-ruseng.rus.eng | 24.8 | 0.523 | | newstest2017-encs-ceseng.ces.eng | 23.7 | 0.510 | | newstest2017-ende-deueng.deu.eng | 29.3 | 0.556 | | newstest2017-enlv-laveng.lav.eng | 18.9 | 0.486 | | newstest2017-enru-ruseng.rus.eng | 28.0 | 0.546 | | newstest2018-encs-ceseng.ces.eng | 24.9 | 0.521 | | newstest2018-ende-deueng.deu.eng | 36.0 | 0.604 | | newstest2018-enru-ruseng.rus.eng | 23.8 | 0.517 | | newstest2019-deen-deueng.deu.eng | 31.5 | 0.570 | | newstest2019-guen-gujeng.guj.eng | 12.1 | 0.377 | | newstest2019-lten-liteng.lit.eng | 26.6 | 0.555 | | newstest2019-ruen-ruseng.rus.eng | 27.5 | 0.541 | | Tatoeba-test.afr-eng.afr.eng | 59.0 | 0.724 | | Tatoeba-test.ang-eng.ang.eng | 9.9 | 0.254 | | Tatoeba-test.arg-eng.arg.eng | 41.6 | 0.487 | | Tatoeba-test.asm-eng.asm.eng | 22.8 | 0.392 | | Tatoeba-test.ast-eng.ast.eng | 36.1 | 0.521 | | Tatoeba-test.awa-eng.awa.eng | 11.6 | 0.280 | | Tatoeba-test.bel-eng.bel.eng | 42.2 | 0.597 | | Tatoeba-test.ben-eng.ben.eng | 45.8 | 0.598 | | Tatoeba-test.bho-eng.bho.eng | 34.4 | 0.518 | | Tatoeba-test.bre-eng.bre.eng | 24.4 | 0.405 | | Tatoeba-test.bul-eng.bul.eng | 50.8 | 0.660 | | Tatoeba-test.cat-eng.cat.eng | 51.2 | 0.677 | | Tatoeba-test.ces-eng.ces.eng | 47.6 | 0.641 | | Tatoeba-test.cor-eng.cor.eng | 5.4 | 0.214 | | Tatoeba-test.cos-eng.cos.eng | 61.0 | 0.675 | | Tatoeba-test.csb-eng.csb.eng | 22.5 | 0.394 | | Tatoeba-test.cym-eng.cym.eng | 34.7 | 0.522 | | Tatoeba-test.dan-eng.dan.eng | 56.2 | 
0.708 |
| Tatoeba-test.deu-eng.deu.eng | 44.9 | 0.625 |
| Tatoeba-test.dsb-eng.dsb.eng | 21.0 | 0.383 |
| Tatoeba-test.egl-eng.egl.eng | 6.9 | 0.221 |
| Tatoeba-test.ell-eng.ell.eng | 62.1 | 0.741 |
| Tatoeba-test.enm-eng.enm.eng | 22.6 | 0.466 |
| Tatoeba-test.ext-eng.ext.eng | 33.2 | 0.496 |
| Tatoeba-test.fao-eng.fao.eng | 28.1 | 0.460 |
| Tatoeba-test.fas-eng.fas.eng | 9.6 | 0.306 |
| Tatoeba-test.fra-eng.fra.eng | 50.3 | 0.661 |
| Tatoeba-test.frm-eng.frm.eng | 30.0 | 0.457 |
| Tatoeba-test.frr-eng.frr.eng | 15.2 | 0.301 |
| Tatoeba-test.fry-eng.fry.eng | 34.4 | 0.525 |
| Tatoeba-test.gcf-eng.gcf.eng | 18.4 | 0.317 |
| Tatoeba-test.gla-eng.gla.eng | 24.1 | 0.400 |
| Tatoeba-test.gle-eng.gle.eng | 52.2 | 0.671 |
| Tatoeba-test.glg-eng.glg.eng | 50.5 | 0.669 |
| Tatoeba-test.glv-eng.glv.eng | 5.7 | 0.189 |
| Tatoeba-test.gos-eng.gos.eng | 19.2 | 0.378 |
| Tatoeba-test.got-eng.got.eng | 0.1 | 0.022 |
| Tatoeba-test.grc-eng.grc.eng | 0.9 | 0.095 |
| Tatoeba-test.gsw-eng.gsw.eng | 23.9 | 0.390 |
| Tatoeba-test.guj-eng.guj.eng | 28.0 | 0.428 |
| Tatoeba-test.hat-eng.hat.eng | 44.2 | 0.567 |
| Tatoeba-test.hbs-eng.hbs.eng | 51.6 | 0.666 |
| Tatoeba-test.hif-eng.hif.eng | 22.3 | 0.451 |
| Tatoeba-test.hin-eng.hin.eng | 41.7 | 0.585 |
| Tatoeba-test.hsb-eng.hsb.eng | 46.4 | 0.590 |
| Tatoeba-test.hye-eng.hye.eng | 40.4 | 0.564 |
| Tatoeba-test.isl-eng.isl.eng | 43.8 | 0.605 |
| Tatoeba-test.ita-eng.ita.eng | 60.7 | 0.735 |
| Tatoeba-test.jdt-eng.jdt.eng | 5.5 | 0.091 |
| Tatoeba-test.kok-eng.kok.eng | 7.8 | 0.205 |
| Tatoeba-test.ksh-eng.ksh.eng | 15.8 | 0.284 |
| Tatoeba-test.kur-eng.kur.eng | 11.6 | 0.232 |
| Tatoeba-test.lad-eng.lad.eng | 30.7 | 0.484 |
| Tatoeba-test.lah-eng.lah.eng | 11.0 | 0.286 |
| Tatoeba-test.lat-eng.lat.eng | 24.4 | 0.432 |
| Tatoeba-test.lav-eng.lav.eng | 47.2 | 0.646 |
| Tatoeba-test.lij-eng.lij.eng | 9.0 | 0.287 |
| Tatoeba-test.lit-eng.lit.eng | 51.7 | 0.670 |
| Tatoeba-test.lld-eng.lld.eng | 22.4 | 0.369 |
| Tatoeba-test.lmo-eng.lmo.eng | 26.1 | 0.381 |
| Tatoeba-test.ltz-eng.ltz.eng | 39.8 | 0.536 |
| Tatoeba-test.mai-eng.mai.eng | 72.3 | 0.758 |
| Tatoeba-test.mar-eng.mar.eng | 32.0 | 0.554 |
| Tatoeba-test.mfe-eng.mfe.eng | 63.1 | 0.822 |
| Tatoeba-test.mkd-eng.mkd.eng | 49.5 | 0.638 |
| Tatoeba-test.msa-eng.msa.eng | 38.6 | 0.566 |
| Tatoeba-test.multi.eng | 45.6 | 0.615 |
| Tatoeba-test.mwl-eng.mwl.eng | 40.4 | 0.767 |
| Tatoeba-test.nds-eng.nds.eng | 35.5 | 0.538 |
| Tatoeba-test.nep-eng.nep.eng | 4.9 | 0.209 |
| Tatoeba-test.nld-eng.nld.eng | 54.2 | 0.694 |
| Tatoeba-test.non-eng.non.eng | 39.3 | 0.573 |
| Tatoeba-test.nor-eng.nor.eng | 50.9 | 0.663 |
| Tatoeba-test.oci-eng.oci.eng | 19.6 | 0.386 |
| Tatoeba-test.ori-eng.ori.eng | 16.2 | 0.364 |
| Tatoeba-test.orv-eng.orv.eng | 13.6 | 0.288 |
| Tatoeba-test.oss-eng.oss.eng | 9.4 | 0.301 |
| Tatoeba-test.pan-eng.pan.eng | 17.1 | 0.389 |
| Tatoeba-test.pap-eng.pap.eng | 57.0 | 0.680 |
| Tatoeba-test.pdc-eng.pdc.eng | 41.6 | 0.526 |
| Tatoeba-test.pms-eng.pms.eng | 13.7 | 0.333 |
| Tatoeba-test.pol-eng.pol.eng | 46.5 | 0.632 |
| Tatoeba-test.por-eng.por.eng | 56.4 | 0.710 |
| Tatoeba-test.prg-eng.prg.eng | 2.3 | 0.193 |
| Tatoeba-test.pus-eng.pus.eng | 3.2 | 0.194 |
| Tatoeba-test.roh-eng.roh.eng | 17.5 | 0.420 |
| Tatoeba-test.rom-eng.rom.eng | 5.0 | 0.237 |
| Tatoeba-test.ron-eng.ron.eng | 51.4 | 0.670 |
| Tatoeba-test.rue-eng.rue.eng | 26.0 | 0.447 |
| Tatoeba-test.rus-eng.rus.eng | 47.8 | 0.634 |
| Tatoeba-test.san-eng.san.eng | 4.0 | 0.195 |
| Tatoeba-test.scn-eng.scn.eng | 45.1 | 0.440 |
| Tatoeba-test.sco-eng.sco.eng | 41.9 | 0.582 |
| Tatoeba-test.sgs-eng.sgs.eng | 38.7 | 0.498 |
| Tatoeba-test.sin-eng.sin.eng | 29.7 | 0.499 |
| Tatoeba-test.slv-eng.slv.eng | 38.2 | 0.564 |
| Tatoeba-test.snd-eng.snd.eng | 12.7 | 0.342 |
| Tatoeba-test.spa-eng.spa.eng | 53.2 | 0.687 |
| Tatoeba-test.sqi-eng.sqi.eng | 51.9 | 0.679 |
| Tatoeba-test.stq-eng.stq.eng | 9.0 | 0.391 |
| Tatoeba-test.swe-eng.swe.eng | 57.4 | 0.705 |
| Tatoeba-test.swg-eng.swg.eng | 18.0 | 0.338 |
| Tatoeba-test.tgk-eng.tgk.eng | 24.3 | 0.413 |
| Tatoeba-test.tly-eng.tly.eng | 1.1 | 0.094 |
| Tatoeba-test.ukr-eng.ukr.eng | 48.0 | 0.639 |
| Tatoeba-test.urd-eng.urd.eng | 27.2 | 0.471 |
| Tatoeba-test.vec-eng.vec.eng | 28.0 | 0.398 |
| Tatoeba-test.wln-eng.wln.eng | 17.5 | 0.320 |
| Tatoeba-test.yid-eng.yid.eng | 26.9 | 0.457 |
| Tatoeba-test.zza-eng.zza.eng | 1.7 | 0.131 |
f0bb0bf7ec3170f8a76f6554cc93bdaf
apache-2.0
['translation']
false
System Info:
- hf_name: ine-eng
- source_languages: ine
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ine-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ca', 'es', 'os', 'ro', 'fy', 'cy', 'sc', 'is', 'yi', 'lb', 'an', 'sq', 'fr', 'ht', 'rm', 'ps', 'af', 'uk', 'sl', 'lt', 'bg', 'be', 'gd', 'si', 'en', 'br', 'mk', 'or', 'mr', 'ru', 'fo', 'co', 'oc', 'pl', 'gl', 'nb', 'bn', 'id', 'hy', 'da', 'gv', 'nl', 'pt', 'hi', 'as', 'kw', 'ga', 'sv', 'gu', 'wa', 'lv', 'el', 'it', 'hr', 'ur', 'nn', 'de', 'cs', 'ine']
- src_constituents: {'cat', 'spa', 'pap', 'mwl', 'lij', 'bos_Latn', 'lad_Latn', 'lat_Latn', 'pcd', 'oss', 'ron', 'fry', 'cym', 'awa', 'swg', 'zsm_Latn', 'srd', 'gcf_Latn', 'isl', 'yid', 'bho', 'ltz', 'kur_Latn', 'arg', 'pes_Thaa', 'sqi', 'csb_Latn', 'fra', 'hat', 'non_Latn', 'sco', 'pnb', 'roh', 'bul_Latn', 'pus', 'afr', 'ukr', 'slv', 'lit', 'tmw_Latn', 'hsb', 'tly_Latn', 'bul', 'bel', 'got_Goth', 'lat_Grek', 'ext', 'gla', 'mai', 'sin', 'hif_Latn', 'eng', 'bre', 'nob_Hebr', 'prg_Latn', 'ang_Latn', 'aln', 'mkd', 'ori', 'mar', 'afr_Arab', 'san_Deva', 'gos', 'rus', 'fao', 'orv_Cyrl', 'bel_Latn', 'cos', 'zza', 'grc_Grek', 'oci', 'mfe', 'gom', 'bjn', 'sgs', 'tgk_Cyrl', 'hye_Latn', 'pdc', 'srp_Cyrl', 'pol', 'ast', 'glg', 'pms', 'nob', 'ben', 'min', 'srp_Latn', 'zlm_Latn', 'ind', 'rom', 'hye', 'scn', 'enm_Latn', 'lmo', 'npi', 'pes', 'dan', 'rus_Latn', 'jdt_Cyrl', 'gsw', 'glv', 'nld', 'snd_Arab', 'kur_Arab', 'por', 'hin', 'dsb', 'asm', 'lad', 'frm_Latn', 'ksh', 'pan_Guru', 'cor', 'gle', 'swe', 'guj', 'wln', 'lav', 'ell', 'frr', 'rue', 'ita', 'hrv', 'urd', 'stq', 'nno', 'deu', 'lld_Latn', 'ces', 'egl', 'vec', 'max_Latn', 'pes_Latn', 'ltg', 'nds'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ine-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ine-eng/opus2m-2020-08-01.test.txt
- src_alpha3: ine
- tgt_alpha3: eng
- short_pair: ine-en
- chrF2_score: 0.615
- bleu: 45.6
- brevity_penalty: 0.997
- ref_len: 71872.0
- src_name: Indo-European languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: ine
- tgt_alpha2: en
- prefer_old: False
- long_pair: ine-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
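The listed `bleu`, `brevity_penalty`, and `ref_len` are related through BLEU's standard brevity-penalty formula. As a reminder of how that term behaves, here is a small pure-Python sketch of the standard formula (illustrative only, not this repository's actual scoring code; the example hypothesis length is hypothetical):

```python
import math

def brevity_penalty(hyp_len: int, ref_len: int) -> float:
    """Standard BLEU brevity penalty: 1 when the hypothesis side is at
    least as long as the reference, exp(1 - ref/hyp) when it is shorter."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# A penalty near 0.997 implies hypotheses only slightly shorter than the
# 71872-token reference side; hyp_len=71657 is a hypothetical value that
# reproduces roughly that penalty.
print(round(brevity_penalty(71657, 71872), 3))
```

A penalty of 0.997, as reported above, therefore indicates the model's outputs are only marginally shorter than the references.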
4a9a4959e7a6f3d7da3cf0dd1231cdc5
apache-2.0
['chinese', 'token-classification', 'pos', 'wikipedia', 'dependency-parsing']
false
Model Description This is a BERT model pre-trained on Chinese Wikipedia texts (both simplified and traditional) for POS-tagging and dependency-parsing, derived from [chinese-bert-wwm-ext](https://huggingface.co/hfl/chinese-bert-wwm-ext). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
befab40ea1fa7fa443b5972ec49931a6
apache-2.0
['chinese', 'token-classification', 'pos', 'wikipedia', 'dependency-parsing']
false
How to Use

```py
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/chinese-bert-wwm-ext-upos")
model = AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/chinese-bert-wwm-ext-upos")
```

or

```py
import esupar
nlp = esupar.load("KoichiYasuoka/chinese-bert-wwm-ext-upos")
```
6147d85507761b32b01b23b5e6fb815f
apache-2.0
['conversational', 'dialogue', 'response generation']
false
Model Card for 🧑🏻‍🚀COSMO 🧑🏻‍🚀COSMO is a conversation agent with greater generalizability on both in- and out-of-domain chitchat datasets (e.g., DailyDialog, BlendedSkillTalk). It is trained on two datasets: SODA and ProsocialDialog. COSMO especially aims to model natural human conversations. It can accept situation descriptions as well as instructions on what role it should play in the situation.
67262498d8c2f8dbe4639f6374576e63
apache-2.0
['conversational', 'dialogue', 'response generation']
false
Model Description - **Repository:** [Code](https://github.com/skywalker023/sodaverse) - **Paper:** [SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization](https://arxiv.org/abs/2212.10465) - **Point of Contact:** [Hyunwoo Kim](mailto:hyunwook@allenai.org)
71c622481961f777073d6a0606348f95
apache-2.0
['conversational', 'dialogue', 'response generation']
false
Model Training 🧑🏻‍🚀COSMO is trained on our two recent datasets: 🥤[SODA](https://huggingface.co/datasets/allenai/soda) and [Prosocial Dialog](https://huggingface.co/datasets/allenai/prosocial-dialog). The backbone model of COSMO is the [lm-adapted T5](https://huggingface.co/google/t5-xl-lm-adapt).
d9763db571b16cc0b64152e8cffd4b86
apache-2.0
['conversational', 'dialogue', 'response generation']
false
How to use

> 💡 <b>Note:</b> The HuggingFace inference API for Cosmo is not working correctly, so we gently guide you to [our repository](https://hyunw.kim/sodaverse) to try out the demo code!

> 🚨 <b>Disclaimer:</b> We would like to emphasize that COSMO is trained on SODA and ProsocialDialog mainly for academic/research purposes. We discourage using COSMO in real-world applications or services as is. Model outputs should not be used for advice for humans, and could be potentially offensive, problematic, or harmful. The model's output does not necessarily reflect the views and opinions of the authors and their associated affiliations.

Below is a simple code snippet to get Cosmo running :)

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained("allenai/cosmo-xl")
model = AutoModelForSeq2SeqLM.from_pretrained("allenai/cosmo-xl").to(device)

def set_input(situation_narrative, role_instruction, conversation_history):
    input_text = " <turn> ".join(conversation_history)
    if role_instruction != "":
        input_text = "{} <sep> {}".format(role_instruction, input_text)
    if situation_narrative != "":
        input_text = "{} <sep> {}".format(situation_narrative, input_text)
    return input_text

def generate(situation_narrative, role_instruction, conversation_history):
    """
    situation_narrative: the description of situation/context with the characters included (e.g., "David goes to an amusement park")
    role_instruction: the perspective/speaker instruction (e.g., "Imagine you are David and speak to his friend Sarah").
    conversation_history: the previous utterances in the conversation in a list
    """
    input_text = set_input(situation_narrative, role_instruction, conversation_history)
    inputs = tokenizer([input_text], return_tensors="pt").to(device)
    outputs = model.generate(inputs["input_ids"], max_new_tokens=128, temperature=1.0, top_p=.95, do_sample=True)
    response = tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
    return response

situation = "Cosmo had a really fun time participating in the EMNLP conference at Abu Dhabi."
instruction = "You are Cosmo and you are talking to a friend."
conversation = ["Hey, how was your trip to Abu Dhabi?"]
response = generate(situation, instruction, conversation)
print(response)
```
eba1c7fcb9342fd8ed0533aee029c37f
apache-2.0
['conversational', 'dialogue', 'response generation']
false
Further Details, Social Impacts, Bias, and Limitations Please refer to our [paper](https://arxiv.org/abs/2212.10465). Cosmo is mostly trained on social chitchat. Therefore, we do not encourage having knowledge-intensive conversations (e.g., science, medical issues, law). Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. 2021](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. 2021](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
611ff0017ea946da36a611309a851dde
apache-2.0
['conversational', 'dialogue', 'response generation']
false
Citation

Please cite our work if you find the resources in this repository useful:

```
@article{kim2022soda,
  title={SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization},
  author={Hyunwoo Kim and Jack Hessel and Liwei Jiang and Peter West and Ximing Lu and Youngjae Yu and Pei Zhou and Ronan Le Bras and Malihe Alikhani and Gunhee Kim and Maarten Sap and Yejin Choi},
  journal={ArXiv},
  year={2022},
  volume={abs/2212.10465}
}
```
234c847c2054f1c54beff580569fe862
apache-2.0
['generated_from_trainer']
false
bert-emotion This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 1.2559 - Precision: 0.7221 - Recall: 0.7242 - Fscore: 0.7223
e7f3f216e5184d6f1c0e19e33a76f9bc
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Fscore |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.8588        | 1.0   | 815  | 0.8342          | 0.7807    | 0.6117 | 0.6364 |
| 0.5394        | 2.0   | 1630 | 0.9126          | 0.7363    | 0.6923 | 0.7096 |
| 0.2805        | 3.0   | 2445 | 1.2559          | 0.7221    | 0.7242 | 0.7223 |
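The Precision/Recall/Fscore columns above are aggregates over the emotion classes. As an illustration of how such macro-averaged metrics are computed from labels (a minimal pure-Python sketch, not the actual metric code used for this card), the labels in the example below are hypothetical:

```python
def macro_prf(y_true, y_pred):
    """Macro-averaged precision, recall and F1 over the label set:
    per-class scores are computed from TP/FP/FN counts, then averaged."""
    labels = sorted(set(y_true) | set(y_pred))
    ps, rs, fs = [], [], []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        ps.append(prec)
        rs.append(rec)
        fs.append(f1)
    n = len(labels)
    return sum(ps) / n, sum(rs) / n, sum(fs) / n

p, r, f = macro_prf(["joy", "anger", "joy", "sadness"],
                    ["joy", "joy", "joy", "sadness"])
```

Macro averaging weights every class equally, so rare emotions influence the score as much as frequent ones.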
7ba00996c86e346c865ff563eb2d42fa
mit
['Dutch', 'Flemish', 'RoBERTa', 'RobBERT', 'RobBERTje']
false
The models

| Model | Description | Parameters | Training size | Huggingface id |
|-------|-------------|------------|---------------|----------------|
| Non-shuffled | Trained on the non-shuffled variant of the oscar corpus, without any operations to preserve this order during training and distillation. | 74 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-non-shuffled](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-non-shuffled) |
| Shuffled | Trained on the publicly available and shuffled OSCAR corpus. | 74 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-shuffled](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-shuffled) |
| Merged (p=0.5) | Same as the non-shuffled variant, but sequential sentences of the same document are merged with a probability of 50%. | 74 M | 1 GB | this model |
| BORT | A smaller version with 8 attention heads instead of 12 and 4 layers instead of 6 (and 12 for RobBERT). | 46 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-bort](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-bort) |
24c9191fd72f8fb8750d010cb8824ac8
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Wav2Vec2-Large-XLSR-53-Chinese-zh-CN-aishell1 Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Chinese using the [AISHELL-1](https://github.com/kaldi-asr/kaldi/tree/master/egs/aishell) dataset. When using this model, make sure that your speech input is sampled at 16kHz.
b44f4cac1f7513abb169f405a0c775f7
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

device = "cuda:0" if torch.cuda.is_available() else "cpu"
processor = Wav2Vec2Processor.from_pretrained(
    'qinyue/wav2vec2-large-xlsr-53-chinese-zn-cn-aishell1')
model = Wav2Vec2ForCTC.from_pretrained(
    'qinyue/wav2vec2-large-xlsr-53-chinese-zn-cn-aishell1').to(device)

filepath = 'test.wav'
audio, sr = librosa.load(filepath, sr=16000, mono=True)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt").to(device)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
pred_str = processor.decode(predicted_ids[0])
print(pred_str)
```
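The `torch.argmax` followed by `processor.decode` step above is greedy CTC decoding: the per-frame best token ids are collapsed by merging consecutive repeats and dropping blank tokens. A toy pure-Python sketch of that collapse rule (illustrative only; the real decoding lives inside `Wav2Vec2Processor`, and the blank id here is an assumption):

```python
def ctc_greedy_collapse(frame_ids, blank_id=0):
    """Collapse a per-frame argmax sequence: merge consecutive repeats,
    then drop blank tokens. This is the rule behind greedy CTC decoding."""
    out, prev = [], None
    for t in frame_ids:
        if t != prev and t != blank_id:
            out.append(t)
        prev = t
    return out

# e.g. frames [5, 5, 0, 5, 3, 3] collapse to tokens [5, 5, 3]:
# the blank between the two 5s keeps them as two separate tokens.
print(ctc_greedy_collapse([5, 5, 0, 5, 3, 3]))
```

The blank token is what lets CTC emit the same character twice in a row, as in repeated syllables.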
4bd49f2e3f9976c0334dde46261d986f
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Evaluation

```python
import numpy as np
from datasets import load_metric

wer_metric = load_metric("wer")

def compute_metrics(pred):
    pred_logits = pred.predictions
    pred_ids = np.argmax(pred_logits, axis=-1)
    pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id
    pred_str = processor.batch_decode(pred_ids, spaces_between_special_tokens=True)
    # do not group tokens when decoding the reference labels
    label_str = processor.batch_decode(pred.label_ids, group_tokens=False, spaces_between_special_tokens=True)
    wer = wer_metric.compute(predictions=pred_str, references=label_str)
    return {"wer": wer}
```
70ffa01b9a302f15fd0b17737914a18b
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Results

| Reference | Prediction |
| ------------- | ------------- |
| 据 伟 业 我 爱 我 家 市 场 研 究 院 测 算 | 据 北 业 我 爱 我 家 市 场 研 究 院 测 算 |
| 七 月 北 京 公 积 金 贷 款 成 交 量 提 升 了 百 分 之 五 | 七 月 北 京 公 积 金 贷 款 成 交 量 提 升 了 百 分 之 五 |
| 培 育 门 类 丰 富 层 次 齐 用 的 综 合 利 用 产 业 | 培 育 门 类 丰 富 层 资 集 业 的 综 合 利 用 产 业 |
| 我 们 迎 来 了 赶 超 发 达 国 家 的 难 得 机 遇 | 我 们 迎 来 了 赶 超 发 达 国 家 的 单 得 机 遇 |
| 坚 持 基 本 草 原 保 护 制 度 | 坚 持 基 本 草 员 保 护 制 度 |
| 强 化 水 生 生 态 修 复 和 建 设 | 强 化 水 生 生 态 修 复 和 建 设 |
| 温 州 两 男 子 为 争 女 人 驾 奔 驰 宝 马 街 头 四 次 对 撞 | 温 州 两 男 子 为 争 女 人 架 奔 驰 宝 马 接 头 四 次 对 重 |
| 她 表 示 应 该 是 吃 吃 饭 看 电 影 之 类 的 | 他 表 示 一 的 是 吃 吃 饭 看 电 影 之 理 |
| 加 强 畜 禽 遗 传 资 源 和 农 业 野 生 植 物 资 源 保 护 | 加 强 续 紧 遗 传 资 源 和 农 业 野 生 职 物 资 源 保 护 |
| 两 人 都 是 依 赖 电 话 沟 通 | 两 人 都 是 依 赖 电 话 沟 通 |

**Test Result**: In the table below I report the Word Error Rate (WER) of the model on the AISHELL-1 test dataset.

| Model | WER | WER-with-LM |
| ------------- | ------------- | ------------- |
| qinyue/wav2vec2-large-xlsr-53-chinese-zn-cn-aishell1 | **7.04%** | **3.96%** |
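The reported WER is the word-level edit distance between prediction and reference, divided by the reference length. A compact pure-Python sketch of that definition (illustrative only; the evaluation above uses the `datasets` WER metric):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance over reference length,
    computed with a rolling one-row dynamic-programming table."""
    ref, hyp = reference.split(), hypothesis.split()
    prev = list(range(len(hyp) + 1))  # distance from empty reference prefix
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution / match
        prev = cur
    return prev[-1] / len(ref)

# one substituted word out of three gives WER = 1/3
print(wer("a b c", "a x c"))
```

For the space-separated character transcripts above, this word-level WER coincides with a character error rate.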
88e1ea1068abea26f9964e6816990184
apache-2.0
['generated_from_trainer']
false
finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3213 - Accuracy: 0.8667 - F1: 0.8684
2a8fce11ce73f1381a5be163edb2aeca
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4721
796644f7451862ba7f5955e72d5eae1e
apache-2.0
[]
false
_Copyright 2023 Anugrah Akbar Praramadhan. All rights reserved._ _Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at_ _[http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0)_ _Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License._
0357d1b368f13cac18e34f3cb6fee574
apache-2.0
[]
false
Model Description GPT-2 *(Generative Pretrained Transformer-2)* is a transformer-based architecture for causal language modeling, meaning it takes the tokens/words to its left as an input prompt for generating the next token. It was developed by OpenAI *(Radford, Alec; Wu, Jeff; Child, Rewon; Luan, David; Amodei, Dario; Sutskever, Ilya)*. See the paper here: [https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf)
cca0c8a93d08292f12b6b6ff0150c28e
apache-2.0
[]
false
Limitation Since GPT-2 is an unsupervised model, trained on unlabelled text sequences without any explicit supervision, its output often comes with randomness. To overcome this issue we have to set a specific seed for deterministic output. The supported languages for this model are only English *(from the GPT-2 pretrained model)* and Indonesian *(fine-tuned using an Indonesian Wikipedia dataset)*.
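The role of the seed can be illustrated with Python's own RNG as a stand-in for `transformers.set_seed` (which seeds Python, NumPy and PyTorch at once); the vocabulary and sampler below are a made-up toy, not the model's actual decoding:

```python
import random

def sample_tokens(seed, vocab=("saya", "kamu", "dia", "pergi", "makan"), n=5):
    """Draw n pseudo-random 'tokens' from a toy vocabulary; the same seed
    always yields the same draw, which is how a fixed seed makes
    sampling-based generation reproducible."""
    rng = random.Random(seed)
    return [rng.choice(vocab) for _ in range(n)]

# identical seeds reproduce the same sequence
assert sample_tokens(42) == sample_tokens(42)
# different seeds generally diverge
print(sample_tokens(42), sample_tokens(7))
```

The same principle applies to `do_sample=True` generation: without a fixed seed, every call draws a different continuation.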
7b1abe26bffefd6e291d1022eded5147
apache-2.0
[]
false
How To Use

Direct use with PyTorch:

```python
>>> from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM, set_seed
>>> model_name = 'anugrahap/gpt2-indo-textgen'
>>> tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side='left')
>>> model = AutoModelForCausalLM.from_pretrained(model_name, pad_token_id=tokenizer.eos_token_id)
>>> generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
>>> set_seed(42)  # fix the seed for reproducible output
>>> generator("Indonesia adalah", max_length=30, num_return_sequences=1)
```
03bf36fc158a724a8d450be08a945b5a
apache-2.0
[]
false
Learn more

| [GPT-2 Pretrained Model Medium-345M Parameters](https://github.com/openai/gpt-2/blob/master/download_model.py)<br>
| [Indo4B Wikipedia CoNLL-U Dataset - 433MB by IndoNLP](https://drive.google.com/file/d/1ZoKd31yr3soveU0O38XEIFUBKx-D66t5/view?usp=sharing)<br>
| [References for CoNLL-U format](https://universaldependencies.org/format.html)<br>
| [Project Repository](https://huggingface.co/spaces/anugrahap/gpt2-indo-text-gen/tree/main)
867c1f3ce67ee9b3ccf5b7907c5e2893
apache-2.0
['generated_from_keras_callback']
false
tfranklin/bert-a-saurus This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0003 - Validation Loss: 0.0004 - Epoch: 2
9028785b06e1bc3acc7083e1438f665d
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1202, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32
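The optimizer config above encodes a linear warmup to the 2e-05 peak over 1000 steps, followed by a polynomial decay (power 1.0, i.e. linear) to 0 over 1202 steps. A simplified pure-Python sketch of that schedule's shape (illustrative only; the exact step bookkeeping in Keras's `WarmUp`/`PolynomialDecay` classes may differ slightly):

```python
def lr_at(step, init=2e-5, warmup_steps=1000, decay_steps=1202, end=0.0, power=1.0):
    """Learning rate under linear warmup followed by polynomial decay."""
    if step < warmup_steps:
        # linear ramp from 0 up to the peak rate
        return init * step / warmup_steps
    decay_step = min(step - warmup_steps, decay_steps)
    frac = 1.0 - decay_step / decay_steps
    return (init - end) * frac ** power + end

# halfway through warmup the rate is half the peak
assert abs(lr_at(500) - 1e-5) < 1e-12
```

With power 1.0 both phases are straight lines, giving the familiar triangular schedule.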
f4f2d6e50bb1b8d3f3e4d70009790292
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.2424     | 0.0004          | 0     |
| 0.0004     | 0.0004          | 1     |
| 0.0003     | 0.0004          | 2     |
c1c1f872b94907d2f17335adb35c8d5c
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
{INSTANCE_NAME} Dreambooth model trained by zuruyu with [buildspace's DreamBooth](https://colab.research.google.com/github/buildspace/diffusers/blob/main/examples/dreambooth/DreamBooth_Stable_Diffusion.ipynb) notebook Build your own using the [AI Avatar project](https://buildspace.so/builds/ai-avatar)! To get started head over to the [project dashboard](https://buildspace.so/p/build-ai-avatars). Sample pictures of this concept:
0ecf0b07d61481727d89bb952e0616ab
apache-2.0
[]
false
This is a pretrained [MT5](https://github.com/google-research/multilingual-t5) large model (**973M** parameters). Training was performed with the span corruption task on a clean 80GB Romanian text corpus for 4M total steps with these [scripts](https://github.com/dumitrescustefan/t5x_models), starting from the 1M public mt5x-large checkpoint. The model was trained with an encoder and decoder sequence length of 512, and has the same mt5x vocabulary as the 1M multilingual checkpoint. **!! IMPORTANT !!** This model was pretrained on the span corruption MLM task, meaning this model is **not usable** in any downstream task **without finetuning** first!
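Span corruption replaces contiguous spans of input tokens with sentinel tokens and asks the decoder to reproduce the dropped spans after matching sentinels. A toy pure-Python sketch of that input/target construction (illustrative only, not the actual T5X preprocessing; the spans here are fixed rather than randomly sampled):

```python
def span_corrupt(tokens, spans):
    """Replace each (start, end) span with a sentinel in the encoder input,
    and emit <sentinel> + dropped tokens as the decoder target."""
    inp, tgt = [], []
    pos = 0
    for i, (start, end) in enumerate(spans):
        sentinel = f"<extra_id_{i}>"
        inp.extend(tokens[pos:start])
        inp.append(sentinel)
        tgt.append(sentinel)
        tgt.extend(tokens[start:end])
        pos = end
    inp.extend(tokens[pos:])
    return inp, tgt

toks = "Acesta este un test simplu".split()
inp, tgt = span_corrupt(toks, [(1, 2), (3, 4)])
# inp: ['Acesta', '<extra_id_0>', 'un', '<extra_id_1>', 'simplu']
# tgt: ['<extra_id_0>', 'este', '<extra_id_1>', 'test']
```

This is why the checkpoint needs finetuning: pretraining only teaches it to fill sentinel slots, not to follow task instructions.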
284ef9f449447402f0fbece34c3a2fea
apache-2.0
[]
false
How to load an mt5x model

```python
from transformers import MT5Model, T5Tokenizer

model = MT5Model.from_pretrained('dumitrescustefan/mt5-large-romanian')
tokenizer = T5Tokenizer.from_pretrained('dumitrescustefan/mt5-large-romanian')

input_text = "Acesta este un test."
target_text = "Acesta este"
inputs = tokenizer(input_text, return_tensors="pt")
labels = tokenizer(text_target=target_text, return_tensors="pt")

outputs = model(input_ids=inputs["input_ids"], decoder_input_ids=labels["input_ids"])
hidden_states = outputs.last_hidden_state
print(hidden_states.shape)
8b0807bdd8a211e8eaeec46ee91970f6
apache-2.0
[]
false
# this will print [1, 4, 1024]
```

Remember to always sanitize your text! Replace the cedilla letters ``ş`` and ``ţ`` with their comma-below counterparts:

```python
text = text.replace("ţ", "ț").replace("ş", "ș").replace("Ţ", "Ț").replace("Ş", "Ș")
```

because the model was **not** trained on cedilla ``ş`` and ``ţ``. If you don't, you will get decreased performance due to ``<UNK>``s and an increased number of tokens per word.
71fcdf84ad1f14cda9d95dd155e5ee6f