license
stringlengths
2
30
tags
stringlengths
2
513
is_nc
bool
1 class
readme_section
stringlengths
201
597k
hash
stringlengths
32
32
cc-by-4.0
[]
false
Training data The model is trained on the parallel FreEM dataset [FreEM_norm corpus](https://freem-corpora.github.io/corpora/norm/), consisting of 17,930 training sentences and 2,443 development sentences (used for model selection).
bc8e2047fc5b1c2d7a32d8942f43be8d
cc-by-4.0
[]
false
Preprocessing Texts are normalised (in terms of apostrophes, quotes and spaces), before being tokenised with SentencePiece and a vocabulary size of 1000. The inputs are of the form: ``` Sentence in Early Modern French </s> ``` where `</s>` is the end-of-sentence (eos) token.
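The input format above can be sketched in code; the helper below is purely illustrative (`make_input` is not part of the released tooling), assuming only the `sentence </s>` layout described in the card:

```python
def make_input(sentence: str, eos_token: str = "</s>") -> str:
    """Append the end-of-sentence (eos) token expected by the model."""
    return f"{sentence} {eos_token}"

print(make_input("Sentence in Early Modern French"))
# Sentence in Early Modern French </s>
```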
1736f9f5938e39aaeb99e07b5833e38a
cc-by-4.0
[]
false
Training The model was trained using [Fairseq](https://github.com/facebookresearch/fairseq) and ported to HuggingFace using an adapted version of [Stas's scripts for FSMT models](https://huggingface.co/blog/porting-fsmt).
6ecf3905d56f59faa068ece32d20feaa
cc-by-4.0
[]
false
BibTeX entry and citation info <a name="cite"></a> Rachel Bawden, Jonathan Poinhos, Eleni Kogkitsidou, Philippe Gambette, Benoît Sagot and Simon Gabay. 2022. [Automatic Normalisation of Early Modern French](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.358.pdf). In Proceedings of the 13th Language Resources and Evaluation Conference. European Language Resources Association. Marseille, France. BibTeX: ``` @inproceedings{bawden-etal-2022-automatic, title = {{Automatic Normalisation of Early Modern French}}, author = {Bawden, Rachel and Poinhos, Jonathan and Kogkitsidou, Eleni and Gambette, Philippe and Sagot, Beno{\^i}t and Gabay, Simon}, booktitle = {Proceedings of the 13th Language Resources and Evaluation Conference}, publisher = {European Language Resources Association}, year = {2022}, address = {Marseille, France}, pages = {3354--3366}, url = {http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.358.pdf} } ``` And to reference the FreEM-norm dataset used in the experiments: Simon Gabay. (2022). FreEM-corpora/FreEMnorm: FreEM norm Parallel corpus (1.0.0). Zenodo. https://doi.org/10.5281/zenodo.5865428 ``` @software{simon_gabay_2022_5865428, author = {Simon Gabay}, title = {{FreEM-corpora/FreEMnorm: FreEM norm Parallel corpus}}, month = jan, year = 2022, publisher = {Zenodo}, version = {1.0.0}, doi = {10.5281/zenodo.5865428}, url = {https://doi.org/10.5281/zenodo.5865428} } ```
193e1b8d894311f70005965e1ea0bd59
apache-2.0
['translation']
false
deu-afr * source group: German * target group: Afrikaans * OPUS readme: [deu-afr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-afr/README.md) * model: transformer-align * source language(s): deu * target language(s): afr * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-afr/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-afr/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-afr/opus-2020-06-17.eval.txt)
58372b640a3d8e22031f61b57ddd289f
apache-2.0
['translation']
false
System Info: - hf_name: deu-afr - source_languages: deu - target_languages: afr - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-afr/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['de', 'af'] - src_constituents: {'deu'} - tgt_constituents: {'afr'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-afr/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-afr/opus-2020-06-17.test.txt - src_alpha3: deu - tgt_alpha3: afr - short_pair: de-af - chrF2_score: 0.69 - bleu: 51.3 - brevity_penalty: 1.0 - ref_len: 9507.0 - src_name: German - tgt_name: Afrikaans - train_date: 2020-06-17 - src_alpha2: de - tgt_alpha2: af - prefer_old: False - long_pair: deu-afr - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
dce9f4003b005004e2b6143fde5edfb8
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
rbto_v3 Dreambooth model trained by rudzinskimaciej with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample pictures of this concept:
c9ee32717ddd07912aeea481fb9ad5f6
apache-2.0
[]
false
Digikala Digikala user comments provided by [Open Data Mining Program (ODMP)](https://www.digikala.com/opendata/). This dataset contains 62,321 user comments with three labels: | Label |
bc44d98feece5ee290be23522a26a75e
apache-2.0
[]
false
| Label | Count | |:---------------:|:------:| | no_idea | 10394 | | not_recommended | 15885 | | recommended | 36042 | **Download** You can download the dataset from [here](https://www.digikala.com/opendata/)
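As a quick sanity check, the three label counts in the table sum to the 62,321 comments stated in the card; a small sketch computing the class shares (percent values are my own derivation from the counts):

```python
# Label counts from the table above.
counts = {"no_idea": 10394, "not_recommended": 15885, "recommended": 36042}
total = sum(counts.values())
assert total == 62321  # matches the dataset size stated in the card

# Class share of each label, in percent.
shares = {label: round(100 * n / total, 1) for label, n in counts.items()}
print(shares)
# {'no_idea': 16.7, 'not_recommended': 25.5, 'recommended': 57.8}
```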
64345b16ddd393dec0f0d5a0a566bf12
apache-2.0
[]
false
Results The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures. | Dataset | ParsBERT v2 | ParsBERT v1 | mBERT | DeepSentiPers | |:------------------------:|:-----------:|:-----------:|:-----:|:-------------:| | Digikala User Comments | 81.72 | 81.74* | 80.74 | - |
937508b6a0602e2478389e632dc644c5
mit
['autotrain', 'vision', 'image-classification']
false
Dataset Info This was trained on scraped pfp images from Mastodon, with some non-pfp images thrown in for "balancing" (i.e., ensuring Pokémon, kemonomimi (catgirls/foxgirls/etc.), and normal animals weren't classified as 'furry') **Furry images**: 551 **Non-furry images**: 641
aca5004b6a9c4f368e3f3c8a691f3da4
mit
['autotrain', 'vision', 'image-classification']
false
Disclaimer Please do not ruin this by using it to harass anyone. This is *not* intended for targeted harassment, and I will explicitly condemn any use that attempts it. If you're wondering why I made this public in the first place: I believe in freedom of *information*. This image classification model has various perfectly valid uses, and it's kinda useless to keep it private.
e2d5dea4193a30dfaa2389bdd1d38c58
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'model_for_talk', 'mozilla-foundation/common_voice_7_0', 'robust-speech-event', 'ur']
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> infinitejoy/wav2vec2-large-xls-r-300m-urdu This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UR dataset. It achieves the following results on the evaluation set: - Loss: NA - Wer: NA
8d11592839b7bdc7576188ff0c7cbe58
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'model_for_talk', 'mozilla-foundation/common_voice_7_0', 'robust-speech-event', 'ur']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 50.0 - mixed_precision_training: Native AMP
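The `total_train_batch_size` above follows directly from the other values; a one-line check (single-device run assumed, so no device multiplier):

```python
# Values from the hyperparameter list above.
train_batch_size = 8
gradient_accumulation_steps = 4

# Effective batch size per optimizer step (single device assumed;
# multiply by the number of devices for distributed runs).
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 32, matching the card
```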
e6d4c3906ebeb8e4c770966ed5e07e5a
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'model_for_talk', 'mozilla-foundation/common_voice_7_0', 'robust-speech-event', 'ur']
false
Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test` ```bash python eval.py \ --model_id infinitejoy/wav2vec2-large-xls-r-300m-urdu \ --dataset mozilla-foundation/common_voice_7_0 --config ur --split test ```
2dd1380e5f4b24f6077cd6a81865c6b3
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'model_for_talk', 'mozilla-foundation/common_voice_7_0', 'robust-speech-event', 'ur']
false
Inference ```python import torch from datasets import load_dataset from transformers import AutoModelForCTC, AutoProcessor import torchaudio.functional as F model_id = "infinitejoy/wav2vec2-large-xls-r-300m-urdu" sample_iter = iter(load_dataset("mozilla-foundation/common_voice_7_0", "ur", split="test", streaming=True, use_auth_token=True)) sample = next(sample_iter) resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy() model = AutoModelForCTC.from_pretrained(model_id) processor = AutoProcessor.from_pretrained(model_id) input_values = processor(resampled_audio, return_tensors="pt").input_values with torch.no_grad(): logits = model(input_values).logits transcription = processor.batch_decode(logits.numpy()).text ```
0b9fc1dc695f65876c3cfac5a4c20764
mit
['generated_from_trainer']
false
suspicious_noyce This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the 
tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
a70c18f189516c543449dd26b1884a55
mit
['generated_from_trainer']
false
Full config {'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25354], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}, {'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prompts_path': 'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'force_call_on': [25354], 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'path_or_name': 'gpt2'}, 'objective': {'name': 'MLE'}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'suspicious_noyce', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25354, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}}
3ea17a9c2b3d0fee39d17806da82c88a
apache-2.0
['multilingual', 'PyTorch', 'Transformers', 'gpt3', 'gpt2', 'Deepspeed', 'Megatron', 'mGPT']
false
mGPT: fine-tune on message data - 2E - This model is a fine-tuned version of [sberbank-ai/mGPT](https://huggingface.co/sberbank-ai/mGPT) on 80k messages. This builds on the minimum-working-example checkpoint [here](https://huggingface.co/pszemraj/mGPT-Peter-mwe). - 2E = 2 epochs
98d820af37c51ee12b8ee9c6f2c0a435
apache-2.0
['multilingual', 'PyTorch', 'Transformers', 'gpt3', 'gpt2', 'Deepspeed', 'Megatron', 'mGPT']
false
Model description - testing whether fine-tuned personality data bleeds over to other languages without the model being trained on them explicitly **Interesting findings thus far:** - Passing a generic word after the `<name-identifier>` that is in a non-English language helps ensure the model responds in the question language (see: any example). - Model generations (in general) remain semantically consistent, even if the generations switch from `<language>` to English in the middle of the generated text. This demonstrates some sort of "universal concept understanding".
2bfb5d2bfdc2b996c74615f8edf41144
apache-2.0
['multilingual', 'PyTorch', 'Transformers', 'gpt3', 'gpt2', 'Deepspeed', 'Megatron', 'mGPT']
false
Usage in Python Install the transformers library if you don't have it: ```sh pip install -U transformers ``` Load the model into a pipeline object: ```python from transformers import pipeline import torch device = 'cuda' if torch.cuda.is_available() else 'cpu' my_chatbot = pipeline('text-generation', 'pszemraj/mGPT-Peter-2E', device=0 if device == 'cuda' else -1, ) ```
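A follow-up generation call might look like this; it is a self-contained sketch, and the prompt wording and sampling settings are illustrative assumptions, not taken from the card:

```python
from transformers import pipeline

# Prompt text and sampling parameters below are assumptions for illustration.
my_chatbot = pipeline("text-generation", "pszemraj/mGPT-Peter-2E")
prompt = "hola, ¿cómo estás?\n"
result = my_chatbot(prompt, max_new_tokens=64, do_sample=True, top_p=0.95)
print(result[0]["generated_text"])
```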
bbadd51cf9057f189ddba351d88dea9f
apache-2.0
['multilingual', 'PyTorch', 'Transformers', 'gpt3', 'gpt2', 'Deepspeed', 'Megatron', 'mGPT']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 1 (in addition to all training on prior checkpoints)
5521d5e00234e85fe034e5d0edd589ab
mit
[]
false
wojaks-now on Stable Diffusion This is the `<red-wojak>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<red-wojak> 0](https://huggingface.co/sd-concepts-library/wojaks-now/resolve/main/concept_images/0.jpeg) ![<red-wojak> 1](https://huggingface.co/sd-concepts-library/wojaks-now/resolve/main/concept_images/1.jpeg) ![<red-wojak> 2](https://huggingface.co/sd-concepts-library/wojaks-now/resolve/main/concept_images/2.jpeg)
86869c4e8061ded20ad2e77afa735894
mit
['audio', 'audio-to-audio']
false
SpeechT5 (voice conversion task) SpeechT5 model fine-tuned for voice conversion (speech-to-speech) on CMU ARCTIC. This model was introduced in [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205) by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei. SpeechT5 was first released in [this repository](https://github.com/microsoft/SpeechT5/), [original weights](https://huggingface.co/mechanicalsea/speecht5-vc). The license used is [MIT](https://github.com/microsoft/SpeechT5/blob/main/LICENSE). Disclaimer: The team releasing SpeechT5 did not write a model card for this model so this model card has been written by the Hugging Face team.
5c0714a0e89541a5b66f52a3d67baa2a
mit
['audio', 'audio-to-audio']
false
Model Description Motivated by the success of T5 (Text-To-Text Transfer Transformer) in pre-trained natural language processing models, we propose a unified-modal SpeechT5 framework that explores the encoder-decoder pre-training for self-supervised speech/text representation learning. The SpeechT5 framework consists of a shared encoder-decoder network and six modal-specific (speech/text) pre/post-nets. After preprocessing the input speech/text through the pre-nets, the shared encoder-decoder network models the sequence-to-sequence transformation, and then the post-nets generate the output in the speech/text modality based on the output of the decoder. Leveraging large-scale unlabeled speech and text data, we pre-train SpeechT5 to learn a unified-modal representation, hoping to improve the modeling capability for both speech and text. To align the textual and speech information into this unified semantic space, we propose a cross-modal vector quantization approach that randomly mixes up speech/text states with latent units as the interface between encoder and decoder. Extensive evaluations show the superiority of the proposed SpeechT5 framework on a wide variety of spoken language processing tasks, including automatic speech recognition, speech synthesis, speech translation, voice conversion, speech enhancement, and speaker identification.
30b477a5fd148188c7a41725cf38a993
mit
['audio', 'audio-to-audio']
false
Intended Uses & Limitations You can use this model for voice conversion. See the [model hub](https://huggingface.co/models?search=speecht5) to look for fine-tuned versions on a task that interests you. Currently, both the feature extractor and model support PyTorch.
32e28cb2967dc935901527101a776dc7
mit
['audio', 'audio-to-audio']
false
Citation **BibTeX:** ```bibtex @inproceedings{ao-etal-2022-speecht5, title = {{S}peech{T}5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing}, author = {Ao, Junyi and Wang, Rui and Zhou, Long and Wang, Chengyi and Ren, Shuo and Wu, Yu and Liu, Shujie and Ko, Tom and Li, Qing and Zhang, Yu and Wei, Zhihua and Qian, Yao and Li, Jinyu and Wei, Furu}, booktitle = {Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)}, month = {May}, year = {2022}, pages={5723--5738}, } ```
db6f26c5ca29b0b641d382a45796801f
mit
['audio', 'audio-to-audio']
false
How to Get Started With the Model Use the code below to convert a mono 16 kHz speech waveform into another. ```python from transformers import SpeechT5Processor, SpeechT5ForSpeechToSpeech, SpeechT5HifiGan from datasets import load_dataset dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") dataset = dataset.sort("id") sampling_rate = dataset.features["audio"].sampling_rate example_speech = dataset[0]["audio"]["array"] processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_vc") model = SpeechT5ForSpeechToSpeech.from_pretrained("microsoft/speecht5_vc") vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan") inputs = processor(audio=example_speech, sampling_rate=sampling_rate, return_tensors="pt") ```
c55205633976ae6cb17e4860b6fa4713
mit
['audio', 'audio-to-audio']
false
```python # Load an x-vector containing the speaker's voice characteristics from a file import numpy as np import torch speaker_embeddings = np.load("xvector_speaker_embedding.npy") speaker_embeddings = torch.tensor(speaker_embeddings).unsqueeze(0) speech = model.generate_speech(inputs["input_values"], speaker_embeddings, vocoder=vocoder) import soundfile as sf sf.write("speech.wav", speech.numpy(), samplerate=16000) ```
bed0e0c46c9fda07241f1985f46735a0
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xlsr-korean-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4534 - Wer: 0.3272
30025fa710acc32c7ccff33e6f5ea6cf
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP
dc59503680ddb08a59e4deda7c9e9753
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 17.4809 | 0.65 | 400 | 4.6145 | 1.0 | | 4.4863 | 1.29 | 800 | 4.3819 | 1.0 | | 4.2921 | 1.94 | 1200 | 4.1163 | 0.9970 | | 2.7971 | 2.59 | 1600 | 1.5376 | 0.8379 | | 1.5061 | 3.24 | 2000 | 1.0354 | 0.7299 | | 1.1123 | 3.88 | 2400 | 0.7909 | 0.6418 | | 0.9037 | 4.53 | 2800 | 0.6345 | 0.5698 | | 0.779 | 5.18 | 3200 | 0.5909 | 0.5571 | | 0.6834 | 5.83 | 3600 | 0.5339 | 0.5063 | | 0.6287 | 6.47 | 4000 | 0.5326 | 0.4954 | | 0.5518 | 7.12 | 4400 | 0.4930 | 0.4607 | | 0.5315 | 7.77 | 4800 | 0.4577 | 0.4451 | | 0.4867 | 8.41 | 5200 | 0.4547 | 0.4382 | | 0.4543 | 9.06 | 5600 | 0.4581 | 0.4371 | | 0.4089 | 9.71 | 6000 | 0.4387 | 0.4258 | | 0.3893 | 10.36 | 6400 | 0.4300 | 0.4100 | | 0.3751 | 11.0 | 6800 | 0.4265 | 0.4137 | | 0.3333 | 11.65 | 7200 | 0.4294 | 0.4011 | | 0.3039 | 12.3 | 7600 | 0.4187 | 0.3912 | | 0.2974 | 12.94 | 8000 | 0.4079 | 0.3805 | | 0.2658 | 13.59 | 8400 | 0.4273 | 0.3864 | | 0.2676 | 14.24 | 8800 | 0.4103 | 0.3734 | | 0.2466 | 14.89 | 9200 | 0.4122 | 0.3701 | | 0.2282 | 15.53 | 9600 | 0.4176 | 0.3650 | | 0.2186 | 16.18 | 10000 | 0.4199 | 0.3632 | | 0.2132 | 16.83 | 10400 | 0.4159 | 0.3671 | | 0.1962 | 17.48 | 10800 | 0.4321 | 0.3641 | | 0.1922 | 18.12 | 11200 | 0.4300 | 0.3535 | | 0.1827 | 18.77 | 11600 | 0.4244 | 0.3596 | | 0.1709 | 19.42 | 12000 | 0.4191 | 0.3518 | | 0.157 | 20.06 | 12400 | 0.4308 | 0.3496 | | 0.147 | 20.71 | 12800 | 0.4360 | 0.3457 | | 0.1502 | 21.36 | 13200 | 0.4329 | 0.3431 | | 0.1448 | 22.01 | 13600 | 0.4334 | 0.3432 | | 0.1407 | 22.65 | 14000 | 0.4392 | 0.3440 | | 0.1342 | 23.3 | 14400 | 0.4418 | 0.3399 | | 0.1325 | 23.95 | 14800 | 0.4360 | 0.3383 | | 0.1183 | 24.6 | 15200 | 0.4521 | 0.3359 | | 0.1174 | 25.24 | 15600 | 0.4426 | 0.3322 | | 0.1137 | 25.89 | 16000 | 0.4438 | 0.3356 | | 0.1129 | 26.54 | 16400 | 0.4547 | 0.3347 | | 0.1077 | 27.18 | 16800 | 0.4482 | 0.3300 | | 0.0999 | 27.83 | 17200 | 
0.4491 | 0.3281 | | 0.0978 | 28.48 | 17600 | 0.4533 | 0.3281 | | 0.0997 | 29.13 | 18000 | 0.4542 | 0.3283 | | 0.0908 | 29.77 | 18400 | 0.4534 | 0.3272 |
9ea8e1ac0ef651c1e5e0eaee67a8ebc5
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
Demo: How to use in ESPnet2 ```bash cd espnet git checkout 11890fdd9dd872edc50ce8eb7660d746c6ee160e pip install -e . cd egs2/stop/asr3 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/stop_hubert_slu_raw_en_bpe500 ``` <!-- Generated by scripts/utils/show_asr_result.sh -->
57cb237eef1af5890b48302ed549aaaa
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
Environments - date: `Sun Dec 25 13:33:10 EST 2022` - python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]` - espnet version: `espnet 202205` - pytorch version: `pytorch 1.13.0+cu116` - Git hash: `11890fdd9dd872edc50ce8eb7660d746c6ee160e` - Commit date: `Sat Jun 18 17:05:39 2022 -0400`
6c4bb487a9f5b9ff4c9dd1f246959f7d
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_asr_model_valid.acc.ave_10best/test|75636|728701|93.9|3.2|2.9|3.1|9.1|29.8| |decode_asr_asr_model_valid.acc.ave_10best/valid|33384|322094|0.0|0.0|100.0|0.0|100.0|100.0| |inference_asr_model_valid.acc.ave_10best/test|75636|728701|93.9|3.3|2.8|3.2|9.4|30.6| |inference_asr_model_valid.acc.ave_10best/valid|33384|322094|0.0|0.0|100.0|0.0|100.0|100.0|
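In these ESPnet score tables the error columns are percentages of reference words, so `Corr + Sub + Del = 100` and `Err = Sub + Del + Ins` up to rounding; checking the first test row as an example:

```python
# First test row of the WER table above (percent values).
corr, sub, dele, ins, err = 93.9, 3.2, 2.9, 3.1, 9.1

# All reference words are accounted for as correct, substituted, or deleted.
assert abs((corr + sub + dele) - 100.0) < 0.2
# Total error rate is substitutions + deletions + insertions
# (9.2 vs the reported 9.1: a rounding artifact of the table).
assert abs((sub + dele + ins) - err) < 0.2
```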
e0ad9e7f9678c1107f54419c70b531dc
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_asr_model_valid.acc.ave_10best/test|75636|5745269|95.9|0.9|3.2|3.2|7.3|29.8| |decode_asr_asr_model_valid.acc.ave_10best/valid|33384|2537594|0.0|0.0|100.0|0.0|100.0|100.0| |inference_asr_model_valid.acc.ave_10best/test|75636|5745269|95.9|1.0|3.1|3.3|7.4|30.6| |inference_asr_model_valid.acc.ave_10best/valid|33384|2537594|0.0|0.0|100.0|0.0|100.0|100.0|
246b35a3af99bd18f166aeef86d52650
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_asr_model_valid.acc.ave_10best/test|75636|2091389|95.1|1.5|3.4|3.1|8.0|29.8| |decode_asr_asr_model_valid.acc.ave_10best/valid|33384|921077|0.0|0.0|100.0|0.0|100.0|100.0| |inference_asr_model_valid.acc.ave_10best/test|75636|2091389|95.2|1.5|3.3|3.3|8.1|30.6| |inference_asr_model_valid.acc.ave_10best/valid|33384|921077|0.0|0.0|100.0|0.0|100.0|100.0|
89b26ceb372fede0a6fe535619924d0d
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
ASR config <details><summary>expand</summary> ``` config: conf/train_asr2_hubert_lr0.002.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr2_hubert_lr0.002_raw_en_bpe500 ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 4 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 57197 dist_launcher: null multiprocessing_distributed: true unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 50 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: - frontend.upstream num_iters_per_epoch: null batch_size: 128 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_en_bpe500/train/speech_shape - exp/asr_stats_raw_en_bpe500/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_en_bpe500/valid/speech_shape - exp/asr_stats_raw_en_bpe500/valid/text_shape.bpe batch_type: folded valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train/wav.scp - speech - sound - - dump/raw/train/text - text - text valid_data_path_and_name_and_type: - - 
dump/raw/valid/wav.scp - speech - sound - - dump/raw/valid/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.0004 weight_decay: 1.0e-06 scheduler: warmuplr scheduler_conf: warmup_steps: 25000 token_list: - <blank> - <unk> - ▁[ - ':' - ▁] - _ - SL - IN - GET - S - TIME - DATE - ▁THE - ▁TO - ▁FOR - ▁ - E - LOCATION - A - WEATHER - O - ▁ME - MUSIC - ▁MY - CREATE - ALARM - Y - D - ▁I - T - ▁AT - I - ▁A - TIMER - ▁IS - U - ▁IN - ▁ON - EVENT - M - ▁TIMER - TODO - REMINDER - R - ▁PM - P - ING - ▁WHAT - ▁THIS - ▁TODAY - ▁AM - N - ▁ALARM - ▁SET - NT - METHOD - ▁TOMORROW - ER - TYPE - B - ATTRIBUTE - DESTINATION - ▁MINUTES - REMINDED - PERSON - L - ▁HOW - NAME - K - ▁FIVE - ▁BE - ▁' - G - ▁NEXT - 'ON' - ▁IT - MESSAGE - H - ▁WILL - ▁S - ▁WEEK - ST - C - INFO - EN - CATEGORY - TRAFFIC - ▁F - LE - ▁AND - AR - SEND - RE - ▁P - ▁D - ▁FROM - RECIPIE - PLAY - ▁DO - ▁TRAFFIC - AN - ▁AN - AL - ▁SIX - ▁SONG - ▁ALL - ▁UP - CONTENT - ▁REMINDER - ▁WEEKEND - ▁REMIND - ▁OF - ▁T - RA - ▁WEATHER - ▁SEVEN - ▁PLEASE - ▁RE - ▁TONIGHT - EXACT - ▁EIGHT - ▁W - W - ▁TEN - F - SOURCE - ▁TIME - ESTIMATED - RECURRING - TH - DELETE - VE - ▁NEW - LL - ▁EVERY - ▁PLAY - ES - ▁THIRTY - ▁GET - ▁RAIN - CK - ▁TWO - ▁C - ▁CO - ▁ARE - ▁MESSAGE - RI - ▁G - ▁MORNING - CONTACT - ▁CAN - ▁NOW - ▁THREE - ▁THERE - ET - ▁MUSIC - TER - ▁TAKE - IC - CH - ▁J - V - ED - ▁FOUR - DURATION - LY - ▁E - ▁FRIDAY - UR - ▁YOU - ▁ANY - ▁NINE - ▁GO - UNSUPPORTED - OR - ▁SHOW - ▁O - ▁BA - ▁PA - ▁LONG - AT - ▁ONE - ND - ▁MA - ▁ST - ▁GOING - ▁LIKE - ▁ALARMS - ▁BY - ▁THAT - ▁TWENTY - ▁DAY - ▁CH - ▁MONTH - ▁K - ▁SH - UPDATE - ▁MONDAY - CE - IT - IL - AMOUNT - ▁SATURDAY - ▁BR - ▁NEED - ▁WORK - ID - ▁DRIVE - LA - ▁MO - ▁HAVE - ▁TUESDAY - ▁TELL - IR - HA - '''' - ▁IF - HOME - ▁HE - ▁LO - ▁LA - ▁WHEN - LO - ▁TH - ▁REMINDERS - IE - DISTANCE - ▁WE - ▁SA - ▁HOUR - OULD - NE - DEPARTURE - ▁HI - ▁LI - ARTIST - Z - TRAVEL - ▁OUT - PAUSE - EST - ARRIVAL - 
▁CANCEL - ▁MI - ▁OFF - ▁FIFTEEN - POINT - ▁SNOW - NA - EL - ▁EVENTS - ▁CA - ▁SUNDAY - ▁LEAVE - TRACK - ▁SEND - ▁DELETE - ▁APPOINTMENT - ▁BO - RDINAL - ▁MAKE - ▁NEAR - ▁BEFORE - GE - ▁HOME - RELATION - ▁V - FR - ▁THURSDAY - ▁LAST - DIRECTIONS - ▁WEDNESDAY - ▁START - ▁FORECAST - ▁YORK - ▁RIGHT - UM - ▁WITH - USE - ▁MEETING - UT - LI - ▁CHANGE - ▁CAR - GENRE - ATION - X - ▁PICK - ▁WANT - ▁NIGHT - SKIP - ▁DE - ▁RO - ▁ABOUT - MAP - CO - MA - ▁HOUSE - ▁HOT - ▁PARTY - ▁WA - UNIT - ▁HERE - ▁SU - ▁AFTERNOON - ▁MUCH - ▁MOM - ▁TEMPERATURE - EQUENC - ▁ADD - ▁SAN - ▁HER - ▁CONCERTS - ▁CHRISTMAS - ▁DINNER - ▁MAR - LAND - ▁HOURS - ▁CURRENT - ▁TRACK - ▁SOME - ▁CITY - ▁FORTY - ATE - ▁ROUTE - SNOOZE - ▁TEXT - WORK - ▁COLD - RELATED - ▁OR - ▁NO - Q - ▁WAY - WAY - ▁MANY - ▁BIRTHDAY - ▁MINUTE - ▁PLAYLIST - ▁NOON - ▁ROAD - TITLE - PATH - ▁ASK - NAVIGATION - ▁LEFT - ▁ALBUM - ▁TURN - ▁LATE - ▁ELEVEN - NEW - ▁CELSIUS - ▁BUY - AVOID - LOW - NCE - SEARCH - ▁GAME - ▁STOP - ▁JO - ▁FIRST - ▁SHE - ▁DOCTOR - ▁BU - PERIOD - ▁WAKE - CONDITION - ▁EVENING - RADIUS - MODIFIE - ▁REPEAT - ▁SECOND - ▁CONCERT - ▁ANGELES - ▁DOWNTOWN - ▁UMBRELLA - TEMPERATURE - ASH - ▁YEAR - GROUP - ▁DRIVING - ▁GIVE - ▁HUNDRED - ▁HO - ▁MILES - PLAYLIST - ADD - RETRIEV - ▁TWELVE - EAD - ▁CLASS - ▁FREE - PORT - VILLE - ▁BETWEEN - ▁KNOW - ▁AROUND - ▁SCHOOL - ▁NINETY - PROVIDER - SILENCE - RESUME - ▁LET - TION - ▁AUGUST - ▁HAPPENING - ▁AFTER - ▁FAHRENHEIT - ▁EX - ▁VIDEO - ROAD - ▁PARK - ▁CHICAGO - ▁DAILY - ▁CHECK - ▁BEACH - ▁WHERE - ▁JUNE - ▁STREET - ▁FESTIVAL - ▁FLORIDA - ▁JOHN - ▁HAS - ▁SPOTIFY - ▁BILL - RESTART - ▁HIGHWAY - ▁SEATTLE - J - ▁LUNCH - ▁LOOK - ▁FRIEND - ▁COMING - ▁ALERT - IGHT - ▁PANDORA - ▁HEAVY - ▁KIDS - ▁MOVIE - ▁SOUTH - REACT - ▁CONSTRUCTION - PREVIOUS - ▁ORLANDO - ▁OVER - ▁MIAMI - REACTION - ▁ATLANTA - ▁ACCIDENT - ▁COUNTRY - ▁NORTH - ▁LIGHT - RADIO - ▁READ - ▁FAMILY - ▁AIRPORT - ▁EXPECT - ▁DEGREE - ▁PRO - ▁PARTIES - ▁FIFTY - ▁HIGH - ▁PLAN - ▁FOOD - ▁WARM - ▁SUNNY - ▁VEGAS - ▁HOLIDAY - ▁SCHEDULE - ▁STORM - 
▁FIFTH - ▁BOSTON - ▁FRANCISCO - ▁LONDON - ATTENDEE - ▁JULY - ▁WALK - ▁COMMUTE - ▁CLEAN - ▁DENTIST - TOWN - ▁AGAIN - ▁DALLAS - ▁PORTLAND - ▁SEPTEMBER - ▁ARRIVE - ▁SISTER - ▁HOUSTON - Ã - É - Í - '*' - Á - Ç - Ó - ']' - '[' - Ú - Ü - <sos/eos> transcript_token_list: null two_pass: false pre_postencoder_norm: false init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true joint_net_conf: null use_preprocessor: true token_type: bpe bpemodel: data/en_token_list/bpe_unigram500/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: s3prl frontend_conf: frontend_conf: upstream: hubert_large_ll60k download_dir: ./hub multilayer_feature: true fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 30 num_freq_mask: 2 apply_time_mask: true time_mask_width_range: - 0 - 40 num_time_mask: 2 normalize: utterance_mvn normalize_conf: {} model: espnet model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false extract_feats_in_collect_stats: false preencoder: linear preencoder_conf: input_size: 1024 output_size: 80 encoder: conformer encoder_conf: output_size: 512 attention_heads: 8 linear_units: 2048 num_blocks: 12 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.1 input_layer: conv2d2 normalize_before: true macaron_style: true rel_pos_type: latest pos_enc_layer_type: rel_pos selfattention_layer_type: rel_selfattn activation_type: swish use_cnn_module: true cnn_module_kernel: 31 postencoder: null postencoder_conf: {} deliberationencoder: null deliberationencoder_conf: {} decoder: transformer decoder_conf: attention_heads: 8 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.1 
src_attention_dropout_rate: 0.1 decoder2: null decoder2_conf: {} postdecoder: null postdecoder_conf: {} required: - output_dir - token_list version: '202205' distributed: true ``` </details>
9df39bee883db8d6fe445570f5cf9f06
apache-2.0
['generated_from_keras_callback']
false
aalogan/bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0170 - Validation Loss: 0.0546 - Epoch: 3
dbe0fd8da205fcb8ac46a4c9f2b2c39c
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3508, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16
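The `PolynomialDecay` schedule in this config (power 1.0, i.e. a straight linear ramp) can be sketched in plain Python. This is an illustrative re-implementation of the configured schedule, not the actual Keras class:

```python
def polynomial_decay_lr(step: int,
                        initial_lr: float = 2e-05,
                        end_lr: float = 0.0,
                        decay_steps: int = 3508,
                        power: float = 1.0) -> float:
    """Learning rate at `step` under the PolynomialDecay config above.

    With power=1.0 and cycle=False this is a linear ramp from initial_lr
    down to end_lr over decay_steps optimizer steps.
    """
    step = min(step, decay_steps)  # cycle=False: clamp after the final step
    fraction = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * fraction ** power + end_lr
```

At step 0 this returns 2e-05, at the halfway point (step 1754) 1e-05, and 0.0 once the 3508 decay steps are exhausted.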
9b1c6778477e99eb960eb3cc1935e6ba
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.1722 | 0.0676 | 0 | | 0.0481 | 0.0531 | 1 | | 0.0270 | 0.0551 | 2 | | 0.0170 | 0.0546 | 3 |
747d9b4e47f97ea5a98bc681fb728b1d
apache-2.0
['automatic-speech-recognition', 'de']
false
exp_w2v2t_de_unispeech-ml_s952 Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
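Since the model expects 16 kHz input, audio recorded at other rates must be resampled before inference. The function below is a deliberately naive linear-interpolation resampler, shown only to illustrate what "resampling to 16 kHz" means; in practice a proper library resampler (librosa, torchaudio, etc.) should be used:

```python
def resample_linear(samples, src_rate: int, dst_rate: int = 16000):
    """Naive linear-interpolation resampling to dst_rate (illustration only)."""
    if src_rate == dst_rate:
        return list(samples)
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        # Position of output sample i in the source signal
        pos = i * (len(samples) - 1) / max(n_out - 1, 1)
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out
```

For example, one second of 48 kHz audio (48,000 samples) comes out as 16,000 samples.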
5bb587e8c3cd40ff420726761c2866d0
mit
['generated_from_trainer']
false
camembert-ner-lr10e3 This model is a fine-tuned version of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5566 - Overall Precision: 0.0 - Overall Recall: 0.0 - Overall F1: 0.0 - Overall Accuracy: 0.8840 - Humanprod F1: 0.0 - Loc F1: 0.0 - Org F1: 0.0 - Per F1: 0.0
299e77b49afa7e0e6936f2aaa02eaa6b
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.002 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2
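For reference, the Adam update implied by the hyperparameters above (betas=(0.9, 0.999), epsilon=1e-08, lr=0.002) can be sketched for a single scalar parameter. This is an illustration of the update rule, not the actual PyTorch optimizer:

```python
def adam_step(param, grad, m, v, t, lr=0.002,
              beta1=0.9, beta2=0.999, eps=1e-08):
    """One Adam update for a single scalar parameter (t is 1-based)."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    param -= lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v
```

Repeatedly applying this to the gradient of a simple quadratic steadily shrinks the parameter toward the minimum.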
5c5d09aa38ae027f5d479816612ab2d7
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | Humanprod F1 | Loc F1 | Org F1 | Per F1 | |:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:|:------------:|:------:|:------:|:------:| | 0.5473 | 1.0 | 613 | 0.5626 | 0.0 | 0.0 | 0.0 | 0.8840 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.5299 | 2.0 | 1226 | 0.5566 | 0.0 | 0.0 | 0.0 | 0.8840 | 0.0 | 0.0 | 0.0 | 0.0 |
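Precision, recall, and F1 of exactly 0.0 alongside a non-trivial accuracy mean that no predicted entity matched a gold entity (for instance when the model predicts no entities at all). Entity-level F1 with the usual zero-division convention can be sketched as:

```python
def entity_f1(n_correct: int, n_pred: int, n_gold: int):
    """Entity-level precision/recall/F1; undefined ratios default to 0.0."""
    precision = n_correct / n_pred if n_pred else 0.0
    recall = n_correct / n_gold if n_gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```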
184ba5878ca13c691379ecd60d49f6e1
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 - mixed_precision_training: Native AMP
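Note how the total train batch size above follows from the per-device batch and the accumulation steps (8 × 2 = 16). A toy sketch of that accounting, assuming simple gradient averaging across micro-batches:

```python
def effective_batch_size(per_device_batch: int, accum_steps: int) -> int:
    """Samples seen per optimizer step when gradients are accumulated."""
    return per_device_batch * accum_steps

def accumulated_grad(micro_batch_grads):
    """Average of per-micro-batch gradients, as a single step would apply."""
    return sum(micro_batch_grads) / len(micro_batch_grads)
```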
366169b08a680803e76e4e0a3d7c125c
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.06 - num_epochs: 10
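With `lr_scheduler_warmup_ratio: 0.06`, the first 6% of the training steps ramp the learning rate up linearly before the linear decay to zero begins. A sketch of that schedule (illustrative only, not the actual transformers implementation):

```python
def warmup_linear_lr(step: int, total_steps: int,
                     base_lr: float = 1e-05,
                     warmup_ratio: float = 0.06) -> float:
    """Linear warmup for warmup_ratio of training, then linear decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(warmup_steps, 1)
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(total_steps - warmup_steps, 1)
```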
a5d9a92abc0a20f0e9c7611551294d10
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-newsmodelclassification This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2177 - Accuracy: 0.928 - F1: 0.9278
62c050bc933a8895972de72d218f9dca
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8104 | 1.0 | 250 | 0.3057 | 0.9105 | 0.9084 | | 0.2506 | 2.0 | 500 | 0.2177 | 0.928 | 0.9278 |
10f850a200dee01f443a67720fa4ea79
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1262 - F1: 0.8799
45aa8580f1889140b4963162ac7626f7
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2905 | 1.0 | 715 | 0.1625 | 0.8392 | | 0.1477 | 2.0 | 1430 | 0.1294 | 0.8688 | | 0.095 | 3.0 | 2145 | 0.1262 | 0.8799 |
7780260cdae96398419f40a4babfbd2c
gpl-3.0
['spacy', 'token-classification']
false
Model description Catalan transformer (projecte-aina/roberta-large-ca-v2) pipeline by BSC. Components: transformer, morphologizer, parser, ner, attribute_ruler, lemmatizer, text classification. | Feature | Description | | --- | --- | | **Name** | `ca_bsc_demo_trf` | | **Version** | `3.4.2` | | **spaCy** | `3.4.1` | | **Default Pipeline** | `transformer`, `tagger`, `morphologizer`, `lemmatizer`, `parser`, `ner`, `textcat` | | **Components** | `transformer`, `tagger`, `morphologizer`, `lemmatizer`, `parser`, `ner`, `textcat` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [roberta-large-ca-v2](https://huggingface.co/projecte-aina/roberta-large-ca-v2) <br /> Ancora_UD_10 <br /> [WikiCAT_ca](https://huggingface.co/datasets/projecte-aina/WikiCAT_ca) | | **License** | `GNU GPL 3.0` | | **Author** | [AINA project](https://huggingface.co/projecte-aina) |
bcf1659c47d1063d9a780859534c940e
gpl-3.0
['spacy', 'token-classification']
false
Label scheme <details> <summary>View label scheme (342 labels for 5 components)</summary> | Component | Labels | | --- | --- | | **`tagger`** | `ADJ`, `ADP`, `ADV`, `AUX`, `CCONJ`, `DET`, `INTJ`, `NOUN`, `NUM`, `PART`, `PRON`, `PROPN`, `PUNCT`, `SCONJ`, `SYM`, `VERB`, `X` | | **`morphologizer`** | `Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `POS=PROPN`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Brck`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Brck`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=ADP`, `NumType=Card\|Number=Plur\|POS=NUM`, `Gender=Masc\|Number=Plur\|POS=NOUN`, `Number=Sing\|POS=ADJ`, `POS=CCONJ`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `NumForm=Digit\|NumType=Card\|POS=NUM`, `NumForm=Digit\|POS=NOUN`, `Gender=Masc\|Number=Plur\|POS=ADJ`, `POS=PUNCT\|PunctType=Comm`, `POS=AUX\|VerbForm=Inf`, `Case=Acc,Dat\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `POS=PRON\|PronType=Rel`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Plur\|POS=NOUN`, `Gender=Fem\|Number=Plur\|POS=ADJ`, `POS=VERB\|VerbForm=Inf`, `Case=Acc,Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Plur\|POS=ADJ`, `POS=PUNCT\|PunctType=Peri`, `Number=Sing\|POS=PRON\|PronType=Rel`, `Gender=Masc\|Number=Sing\|POS=NOUN`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part`, `POS=SCONJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, 
`Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `POS=VERB\|VerbForm=Ger`, `POS=NOUN`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `Gender=Fem\|NumType=Ord\|Number=Plur\|POS=ADJ`, `POS=SYM`, `Gender=Masc\|Number=Sing\|POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `POS=ADV\|Polarity=Neg`, `POS=ADV`, `Number=Sing\|POS=PRON\|PronType=Dem`, `Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=NOUN`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=ADJ`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Tot`, `Case=Loc\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Degree=Cmp\|POS=ADV`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `NumType=Card\|POS=NUM`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Number=Sing\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Number=Plur\|POS=DET\|PronType=Ind`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, 
`Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=DET\|PronType=Ind`, `POS=PUNCT`, `Number=Sing\|POS=DET\|PronType=Rel`, `Case=Gen\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc,Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Degree=Cmp\|Number=Sing\|POS=ADJ`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem,Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|PronType=Rel`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `AdvType=Tim\|POS=NOUN`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int`, `POS=PUNCT\|PunctType=Semi`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, 
`Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `NumForm=Digit\|POS=SYM`, `Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Int`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `POS=PRON\|PronType=Int`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `POS=PART`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `POS=ADJ`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Degree=Cmp\|Number=Plur\|POS=ADJ`, `POS=PUNCT\|PunctType=Dash`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Int`, `Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `POS=PUNCT\|PunctType=Colo`, `Gender=Masc\|NumType=Card\|POS=NUM`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=PRON\|PronType=Int`, `POS=PUNCT\|PunctType=Quot`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, 
`Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `POS=AUX\|VerbForm=Ger`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Acc,Dat\|Number=Sing\|POS=PRON\|Person=2\|Polite=Infm\|PrepCase=Npr\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Int`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `NumForm=Digit\|NumType=Frac\|POS=NUM`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Gender=Fem\|POS=NOUN`, `Case=Acc,Dat\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|Polite=Infm\|PronType=Prs`, `POS=X`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=1\|VerbForm=Fin`, `Number=Sing\|POS=DET\|PronType=Dem`, `POS=DET`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `NumType=Ord\|Number=Sing\|POS=ADJ`, `Gender=Fem\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part`, `Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Part`, `Gender=Masc\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Part`, `Number=Plur\|POS=PRON\|PronType=Dem`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin`, `POS=PRON\|PronType=Ind`, 
`Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Pre\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Qest`, `NumForm=Digit\|NumType=Ord\|POS=ADJ`, `Case=Acc\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Reflex=Yes`, `NumForm=Digit\|NumType=Frac\|POS=SYM`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Qest`, `NumType=Card\|Number=Sing\|POS=NUM`, `Foreign=Yes\|POS=PRON\|PronType=Int`, `Foreign=Yes\|Mood=Ind\|POS=VERB\|VerbForm=Fin`, `Foreign=Yes\|POS=ADP`, `Gender=Masc\|Number=Sing\|POS=PROPN`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Excl`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Excl`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Mood=Sub\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Comm`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Comm`, `Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|VerbForm=Fin`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, 
`Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Number=Sing\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `POS=VERB\|Tense=Past\|VerbForm=Part`, `Mood=Imp\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin`, `Case=Nom\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Definite=Ind\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `POS=AUX\|Tense=Past\|VerbForm=Part`, `Gender=Fem\|NumType=Card\|POS=NUM`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `AdvType=Tim\|Degree=Cmp\|POS=ADV`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|Polite=Infm\|PrepCase=Pre\|PronType=Prs`, `POS=DET\|PronType=Rel`, `Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin`, `POS=INTJ`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=VERB\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, 
`Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Foreign=Yes\|POS=NOUN`, `Foreign=Yes\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Foreign=Yes\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Foreign=Yes\|POS=SCONJ`, `Foreign=Yes\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|POS=SYM`, `Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=PROPN`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Definite=Def\|Foreign=Yes\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Foreign=Yes\|POS=VERB`, `Foreign=Yes\|POS=ADJ`, `Foreign=Yes\|POS=DET`, `Foreign=Yes\|POS=ADV`, `Degree=Cmp\|POS=ADJ`, `AdvType=Tim\|POS=SYM`, `Number=Plur\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin` | | **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `dep`, `det`, `expl:pass`, `fixed`, `flat`, `iobj`, `mark`, `nmod`, `nsubj`, `nummod`, `obj`, `obl`, `parataxis`, `punct`, `xcomp` | | **`ner`** | `LOC`, `MISC`, `ORG`, `PER` | | **`textcat`** | `Economia`, `Enginyeria`, `Entreteniment`, `Història`, `Humanitats`, `Dret`, `Matemàtiques`, `Música`, `Filosofia`, `Política`, `Religió`, `Esport`, `Ciència_i_Tecnologia` | </details>
67c8cf85eace5c77bcec6db009bdf142
gpl-3.0
['spacy', 'token-classification']
false
Evaluation results | Type | Score | | --- | --- | | `TAG_ACC` | 96.35 | | `POS_ACC` | 96.36 | | `MORPH_ACC` | 95.71 | | `LEMMA_ACC` | 97.58 | | `DEP_UAS` | 95.16 | | `DEP_LAS` | 93.53 | | `SENTS_P` | 99.30 | | `SENTS_R` | 99.30 | | `SENTS_F` | 99.30 | | `ENTS_F` | 92.02 | | `ENTS_P` | 92.46 | | `ENTS_R` | 91.59 | | `TRANSFORMER_LOSS` | 2061930.61 | | `TAGGER_LOSS` | 462421.82 | | `MORPHOLOGIZER_LOSS` | 583505.58 | | `PARSER_LOSS` | 628332.01 | | `NER_LOSS` | 12427.23 |
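For readers unfamiliar with the parser metrics above: UAS is the fraction of tokens whose predicted head is correct, while LAS additionally requires the dependency label to match, which is why LAS is never higher than UAS. A minimal sketch:

```python
def uas_las(gold, pred):
    """Compute UAS/LAS from parallel lists of (head_index, deprel) pairs."""
    assert len(gold) == len(pred)
    head_ok = sum(g[0] == p[0] for g, p in zip(gold, pred))
    both_ok = sum(g == p for g, p in zip(gold, pred))
    n = len(gold)
    return head_ok / n, both_ok / n
```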
be2d9ebdbe80107ec4f45340586a6cfd
apache-2.0
['multiberts', 'multiberts-seed_0', 'multiberts-seed_0-step_600k']
false
MultiBERTs, Intermediate Checkpoint - Seed 0, Step 600k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is the seed-0 model, captured at pre-training step 600k.
2e79b9bbf1074ddcd1746d41048a2328
apache-2.0
['multiberts', 'multiberts-seed_0', 'multiberts-seed_0-step_600k']
false
How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_600k')
model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_600k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_600k')
model = BertModel.from_pretrained("google/multiberts-seed_0-step_600k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
09f20dc4396034d3e1ece61f5627f0b1
apache-2.0
['pythae', 'reproducibility']
false
Downloading this model from the Hub This model was trained with pythae. It can be downloaded or reloaded using the method `load_from_hf_hub` ```python >>> from pythae.models import AutoModel >>> model = AutoModel.load_from_hf_hub(hf_hub_path="clementchadebec/reproduced_aae") ```
75b6ab826032fafbb505945f16a7d078
apache-2.0
['pythae', 'reproducibility']
false
Reproducibility This trained model reproduces the results of Table 1 in [1]. | Model | Dataset | Metric | Obtained value | Reference value | |:---:|:---:|:---:|:---:|:---:| | AAE | CELEBA 64 | FID | 43.3 | 42 | [1] I. Tolstikhin, O. Bousquet, S. Gelly, and B. Schölkopf. Wasserstein auto-encoders. In 6th International Conference on Learning Representations (ICLR 2018), 2018.
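FID compares the Gaussian statistics of real and generated images in a feature space. The full metric uses multivariate means and covariances of Inception features; the univariate special case below is only meant to illustrate the shape of the formula:

```python
import math

def fid_univariate(mu1: float, var1: float, mu2: float, var2: float) -> float:
    """Fréchet distance between two 1-D Gaussians (FID's scalar analogue)."""
    return (mu1 - mu2) ** 2 + var1 + var2 - 2.0 * math.sqrt(var1 * var2)
```

Identical distributions score 0, and the distance grows with any mismatch in mean or variance.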
80c299cd9b9ccdc0c3108d628a13655f
apache-2.0
['generated_from_trainer']
false
DISO_bsc_test16 This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1732 - Diso Precision: 0.7577 - Diso Recall: 0.7757 - Diso F1: 0.7666 - Diso Number: 4552 - Overall Precision: 0.7577 - Overall Recall: 0.7757 - Overall F1: 0.7666 - Overall Accuracy: 0.9732
87e17ec955891318a88fa59ab5c7a3e9
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8
ea71026c6e995ccb7453e3dc34d4d4a1
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Diso Precision | Diso Recall | Diso F1 | Diso Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------------:|:-----------:|:-------:|:-----------:|:-----------------:|:--------------:|:----------:|:----------------:| | 0.0948 | 1.0 | 1400 | 0.0766 | 0.7157 | 0.7594 | 0.7369 | 4552 | 0.7157 | 0.7594 | 0.7369 | 0.9710 | | 0.0631 | 2.0 | 2800 | 0.0818 | 0.7442 | 0.7599 | 0.7520 | 4552 | 0.7442 | 0.7599 | 0.7520 | 0.9726 | | 0.0454 | 3.0 | 4200 | 0.0842 | 0.7544 | 0.7654 | 0.7599 | 4552 | 0.7544 | 0.7654 | 0.7599 | 0.9728 | | 0.0311 | 4.0 | 5600 | 0.1113 | 0.7678 | 0.7700 | 0.7689 | 4552 | 0.7678 | 0.7700 | 0.7689 | 0.9732 | | 0.0217 | 5.0 | 7000 | 0.1231 | 0.7745 | 0.7687 | 0.7716 | 4552 | 0.7745 | 0.7687 | 0.7716 | 0.9743 | | 0.015 | 6.0 | 8400 | 0.1482 | 0.7651 | 0.7733 | 0.7691 | 4552 | 0.7651 | 0.7733 | 0.7691 | 0.9735 | | 0.0101 | 7.0 | 9800 | 0.1498 | 0.7576 | 0.7709 | 0.7642 | 4552 | 0.7576 | 0.7709 | 0.7642 | 0.9730 | | 0.0073 | 8.0 | 11200 | 0.1732 | 0.7577 | 0.7757 | 0.7666 | 4552 | 0.7577 | 0.7757 | 0.7666 | 0.9732 |
55cf5f19d86678c79196046aef5f5f01
gpl-3.0
['generated_from_trainer']
false
IceBERT-finetuned-iec-sentence-bs16 This model is a fine-tuned version of [vesteinn/IceBERT](https://huggingface.co/vesteinn/IceBERT) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2508 - Matthews Correlation: 0.8169
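The Matthews correlation reported above is computed from the full binary confusion matrix and ranges from -1 (total disagreement) through 0 (chance level) to +1 (perfect prediction). A sketch, with the usual 0.0 convention for a zero denominator:

```python
import math

def matthews_corr(tp: int, tn: int, fp: int, fn: int) -> float:
    """MCC from a binary confusion matrix; returns 0.0 when undefined."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0
    return (tp * tn - fp * fn) / denom
```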
3e4cfbdbcddd494623a37083033bce5d
gpl-3.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:-----:|:---------------:|:--------------------:| | 0.5278 | 1.0 | 3640 | 0.4777 | 0.5396 | | 0.4648 | 2.0 | 7280 | 0.3886 | 0.6437 | | 0.3807 | 3.0 | 10920 | 0.3478 | 0.7060 | | 0.3061 | 4.0 | 14560 | 0.2523 | 0.8083 | | 0.2477 | 5.0 | 18200 | 0.2508 | 0.8169 |
e62002d3e9fa4df3dd36ed1ecdf71d85
apache-2.0
['generated_from_trainer']
false
korean-aihub-learning-math-16batch
This model is a fine-tuned version of [kresnik/wav2vec2-large-xlsr-korean](https://huggingface.co/kresnik/wav2vec2-large-xlsr-korean) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 1.1497
- Wer: 0.5260
b677324ec1bfdb3289ccdb2048696870
apache-2.0
['generated_from_trainer']
false
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
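As a sanity check on these values, the listed `total_train_batch_size` follows directly from the per-device batch size and gradient accumulation (assuming a single device, as no `num_devices` is given):

```python
# Effective batch size = per-device batch size x gradient-accumulation steps.
# Values taken from the hyperparameter list above; single device assumed.
train_batch_size = 8
gradient_accumulation_steps = 2

total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 16, matching the value reported above
```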
729f8e85b449a24202d18aa33106ac62
apache-2.0
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log        | 1.0   | 20   | 32.0718         | 1.0    |
| No log        | 2.0   | 40   | 24.7403         | 1.0808 |
| No log        | 3.0   | 60   | 5.8389          | 1.0    |
| No log        | 4.0   | 80   | 4.8543          | 1.0    |
| 19.6583       | 5.0   | 100  | 4.4453          | 1.0    |
| 19.6583       | 6.0   | 120  | 4.3923          | 1.0    |
| 19.6583       | 7.0   | 140  | 4.2902          | 1.0    |
| 19.6583       | 8.0   | 160  | 3.9026          | 0.9959 |
| 19.6583       | 9.0   | 180  | 3.0616          | 0.9740 |
| 3.7358        | 10.0  | 200  | 2.2049          | 0.8534 |
| 3.7358        | 11.0  | 220  | 1.6666          | 0.7288 |
| 3.7358        | 12.0  | 240  | 1.4123          | 0.6603 |
| 3.7358        | 13.0  | 260  | 1.3113          | 0.6164 |
| 3.7358        | 14.0  | 280  | 1.2269          | 0.6356 |
| 0.8398        | 15.0  | 300  | 1.2349          | 0.5945 |
| 0.8398        | 16.0  | 320  | 1.1970          | 0.5658 |
| 0.8398        | 17.0  | 340  | 1.2144          | 0.5562 |
| 0.8398        | 18.0  | 360  | 1.2551          | 0.5658 |
| 0.8398        | 19.0  | 380  | 1.1971          | 0.5493 |
| 0.2649        | 20.0  | 400  | 1.1967          | 0.5247 |
| 0.2649        | 21.0  | 420  | 1.2796          | 0.5849 |
| 0.2649        | 22.0  | 440  | 1.2156          | 0.5521 |
| 0.2649        | 23.0  | 460  | 1.2118          | 0.5425 |
| 0.2649        | 24.0  | 480  | 1.1637          | 0.5384 |
| 0.1801        | 25.0  | 500  | 1.1846          | 0.5562 |
| 0.1801        | 26.0  | 520  | 1.1927          | 0.5534 |
| 0.1801        | 27.0  | 540  | 1.2015          | 0.5384 |
| 0.1801        | 28.0  | 560  | 1.2077          | 0.5397 |
| 0.1801        | 29.0  | 580  | 1.1554          | 0.5260 |
| 0.1364        | 30.0  | 600  | 1.1497          | 0.5260 |
75d03f8f62a423a59327a11fd64309f1
cc-by-sa-4.0
[]
false
How to use
You can use this model for masked language modeling as follows:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/roberta-large-japanese-seq512-with-auto-jumanpp")
model = AutoModelForMaskedLM.from_pretrained("nlp-waseda/roberta-large-japanese-seq512-with-auto-jumanpp")

sentence = '早稲田大学で自然言語処理を[MASK]する。'
encoding = tokenizer(sentence, return_tensors='pt')
output = model(**encoding)  # predictions for the [MASK] position are in output.logits
```
You can fine-tune this model on downstream tasks.
ed008e4c700337815f7fa56aea7f241c
cc-by-sa-4.0
[]
false
Tokenization
`BertJapaneseTokenizer` now supports automatic tokenization for [Juman++](https://github.com/ku-nlp/jumanpp). However, if your dataset is large, tokenization may take a long time, since `BertJapaneseTokenizer` does not yet support fast tokenization. You can still do the Juman++ tokenization yourself and use the old model [nlp-waseda/roberta-large-japanese-seq512](https://huggingface.co/nlp-waseda/roberta-large-japanese-seq512).
Juman++ 2.0.0-rc3 was used for pretraining. Each word is tokenized into tokens by [sentencepiece](https://github.com/google/sentencepiece).
34bbd0644119381ec80da2655ea2b082
cc-by-sa-4.0
[]
false
Training procedure
This model was trained on Japanese Wikipedia (as of 20210920) and the Japanese portion of CC-100 from the checkpoint of [nlp-waseda/roberta-large-japanese](https://huggingface.co/nlp-waseda/roberta-large-japanese). It took a week using eight NVIDIA A100 GPUs.
The following hyperparameters were used during pretraining:
- learning_rate: 6e-5
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 4120 (max_seq_length=128), 4032 (max_seq_length=512)
- max_seq_length: 512
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-6
- lr_scheduler_type: linear
- training_steps: 670000 (max_seq_length=128) + 70000 (max_seq_length=512)
- warmup_steps: 10000
- mixed_precision_training: Native AMP
172a5810c53d0a65fea876bd0502f92c
apache-2.0
['generated_from_keras_callback']
false
malay-patel/bert-finetuned-squad-nq
This model is a fine-tuned version of [nlpconnect/roberta-base-squad2-nq](https://huggingface.co/nlpconnect/roberta-base-squad2-nq) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 1.5461
- Train End Logits Accuracy: 0.6253
- Train Start Logits Accuracy: 0.6120
- Epoch: 2
6c26f48fe93c993d7bd98e43974ff9c7
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 861, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
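To make the `PolynomialDecay` config above concrete, here is a sketch of the decay formula with those values plugged in; with `power=1.0` it reduces to a straight line from 2e-05 down to 0 over 861 steps (a simplified reading of the config, not the framework's own implementation):

```python
def polynomial_decay(step, initial_lr=2e-05, decay_steps=861,
                     end_lr=0.0, power=1.0):
    """Polynomial decay as configured above; power=1.0 makes it linear."""
    step = min(step, decay_steps)                 # cycle=False: clamp at the end
    frac = 1 - step / decay_steps                 # remaining fraction of decay
    return (initial_lr - end_lr) * frac ** power + end_lr

print(polynomial_decay(0))    # 2e-05 (initial learning rate)
print(polynomial_decay(861))  # 0.0 (end learning rate)
```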
04b045dabd194d6c78f7c8b46bb8091a
apache-2.0
['generated_from_keras_callback']
false
Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:-----:|
| 1.5548     | 0.6236                    | 0.6172                      | 0     |
| 1.5423     | 0.6286                    | 0.6192                      | 1     |
| 1.5461     | 0.6253                    | 0.6120                      | 2     |
7721bda51d2de2e2c811306a0d92448c
apache-2.0
['generated_from_keras_callback']
false
varun1/bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 1.2322
- Epoch: 0
58dbb8871f33bc4f76c10565850d63ed
apache-2.0
['generated_from_trainer']
false
distilbart-podimo-data-eval-1
This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 3.3983
- Rouge1: 34.6132
- Rouge2: 7.9113
- Rougel: 17.9418
- Rougelsum: 31.5251
- Gen Len: 141.5587
6bd482898e2f0df09940e0b36bb78270
apache-2.0
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2 | Rougel  | Rougelsum | Gen Len  |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:--------:|
| 4.1934        | 0.98  | 44   | 3.7592          | 32.8148 | 6.457  | 16.8696 | 29.6986   | 141.4441 |
| 3.6362        | 1.98  | 88   | 3.5809          | 33.0442 | 6.851  | 17.1323 | 30.1382   | 141.324  |
| 3.3554        | 2.98  | 132  | 3.4835          | 33.66   | 7.1375 | 17.5152 | 30.5783   | 141.2793 |
| 3.1566        | 3.98  | 176  | 3.4301          | 34.524  | 7.757  | 17.995  | 31.5808   | 141.7151 |
| 3.0107        | 4.98  | 220  | 3.4099          | 34.3459 | 7.7512 | 18.0605 | 31.4531   | 141.4106 |
| 2.901         | 5.98  | 264  | 3.4073          | 35.028  | 7.9099 | 17.9907 | 31.8304   | 141.5419 |
| 2.8246        | 6.98  | 308  | 3.3983          | 34.1937 | 7.8606 | 17.7858 | 31.1331   | 141.5279 |
| 2.7306        | 7.98  | 352  | 3.3983          | 34.6132 | 7.9113 | 17.9418 | 31.5251   | 141.5587 |
ad9f3e4a69b584a52a3236d681025113
apache-2.0
['automatic-speech-recognition', 'et']
false
exp_w2v2t_et_vp-it_s222 Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (et)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
6f9b1bf96f6e41165c16567d6808bf6c
mit
['generated_from_trainer']
false
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
563d8f0ab48c1372a9aa119eac651b90
creativeml-openrail-m
['pytorch', 'diffusers', 'text-to-image', 'dreambooth-hackathon', 'animal']
false
Dreambooth Model for Animals trained on a custom dataset.
This is a Stable Diffusion model fine-tuned on the animal concept with DreamBooth. It can be used by modifying the `instance_prompt`: **A photo of vishu cat**
This model was created as part of the DreamBooth Hackathon 🔥.
69ef4d810f0ab90fd93292d99cce1753
creativeml-openrail-m
['pytorch', 'diffusers', 'text-to-image', 'dreambooth-hackathon', 'animal']
false
Examples
Some examples of images generated with their prompts are (Guidance scale = 7.5 and Number of Inference steps = 50 for all):

Prompt = A photo of vishu cat as a genshin impact character
![a photo of vishu cat as a genshin impact character, high res, infstep=50, gs=7.5.png](https://s3.amazonaws.com/moonup/production/uploads/1673376172445-6366451164bcbbd03e2fcd19.png)

Prompt = A photo of vishu cat shaking hands with Donald Trump
![a photo of vishu cat shaking hands with Donald Trump, infstep=50, gs=7.5, no neg prompts.png](https://s3.amazonaws.com/moonup/production/uploads/1673376265681-6366451164bcbbd03e2fcd19.png)

Prompt = A photo of vishu cat as a Disney Princess
![vishu cat as a disney princess, infstep=50, gs=7.5, seed=1024.png](https://s3.amazonaws.com/moonup/production/uploads/1673376287080-6366451164bcbbd03e2fcd19.png)

Prompt = A photo of vishu cat cocking a gun
![a photo of vishu cat cocking a gun, infstep=50, gs=7.5, seed=1024.png](https://s3.amazonaws.com/moonup/production/uploads/1673376294767-6366451164bcbbd03e2fcd19.png)
39130b94ecd0f03b8e432d06a3e85601
mit
['exbert', 'commonsense', 'semeval2020', 'comve']
false
Model description Finetuned model on Commonsense Validation and Explanation (ComVE) dataset introduced in [SemEval2020 Task4](https://competitions.codalab.org/competitions/21080) using a causal language modeling (CLM) objective. The model is able to generate a reason why a given natural language statement is against commonsense.
c4061d96158fb9ec01427e23bb302b40
mit
['exbert', 'commonsense', 'semeval2020', 'comve']
false
How to use
You can use this model directly to generate reasons why a given statement is against commonsense using the [`generate.sh`](https://github.com/AliOsm/SemEval2020-Task4-ComVE/tree/master/TaskC-Generation) script.
*Note:* make sure that you are using version `2.4.1` of the `transformers` package. Newer versions have an issue in text generation that makes the model repeat the last generated token over and over.
721c26b373399df0f296dfcade4d93bb
mit
['exbert', 'commonsense', 'semeval2020', 'comve']
false
Training data
The model is initialized from the [distilgpt2](https://github.com/huggingface/transformers/blob/master/model_cards/distilgpt2-README.md) model and finetuned on the [ComVE](https://github.com/wangcunxiang/SemEval2020-Task4-Commonsense-Validation-and-Explanation) dataset, which contains 10K sentences that are against commonsense, each paired with three reference reasons.
b5a3714229a1ae211c1ae014b6cf4d5a
mit
['exbert', 'commonsense', 'semeval2020', 'comve']
false
Training procedure
Each natural language statement that is against commonsense is concatenated with its reference reason, with `<|continue|>` as a separator; the model is then finetuned using the CLM objective. The model was trained on an Nvidia Tesla P100 GPU from the Google Colab platform with a 5e-5 learning rate, 15 epochs, 128 maximum sequence length and 64 batch size.
<center>
<img src="https://i.imgur.com/xKbrwBC.png">
</center>
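The concatenation step above can be sketched as follows (the statement/reason pair here is an illustrative example, not taken from the dataset):

```python
SEP = "<|continue|>"  # separator between the false statement and its reason

def build_clm_example(statement, reason):
    """Join one against-commonsense statement with a reference reason,
    producing a single sequence for causal-LM finetuning."""
    return f"{statement}{SEP}{reason}"

# Hypothetical pair, for illustration only:
example = build_clm_example(
    "He put an elephant into the fridge.",
    "An elephant is much bigger than a fridge.",
)
print(example)
```

At generation time, the model is given the statement followed by `<|continue|>` and asked to continue with a reason.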
bb382a35d9214f4c123f4ed3a4f3f1de
mit
['exbert', 'commonsense', 'semeval2020', 'comve']
false
BibTeX entry and citation info
```bibtex
@article{fadel2020justers,
  title={JUSTers at SemEval-2020 Task 4: Evaluating Transformer Models Against Commonsense Validation and Explanation},
  author={Fadel, Ali and Al-Ayyoub, Mahmoud and Cambria, Erik},
  year={2020}
}
```
<a href="https://huggingface.co/exbert/?model=aliosm/ComVE-distilgpt2">
	<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
d18e38dabd98ba9ca38c1ac2f69a6234
apache-2.0
[]
false
Model Description
This is a retriever model based on ColBERT v2 with the [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) language model.<br>
This model was trained on the OpenNQ data.<br>
The architecture of the model and the hyperparameters are described in the paper ‘Relevance-guided Supervision for OpenQA with ColBERT’.
4048a24cbb0ce35c17ebd6ec17bfbbf2
apache-2.0
[]
false
BibTeX entry and citation info
```bibtex
@article{Khattab2021RelevanceguidedSF,
  title={Relevance-guided Supervision for OpenQA with ColBERT},
  author={O. Khattab and Christopher Potts and Matei A. Zaharia},
  journal={Transactions of the Association for Computational Linguistics},
  year={2021},
}
```
```bibtex
@article{Lee2019LatentRF,
  title={Latent Retrieval for Weakly Supervised Open Domain Question Answering},
  author={Kenton Lee and Ming-Wei Chang and Kristina Toutanova},
  journal={ACL},
  year={2019}
}
```
a135210ceae5a5b6cae45e5b035de619
apache-2.0
['generated_from_trainer']
false
distilBERT_token_itr0_0.0001_all_01_03_2022-15_22_12
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.2811
- Precision: 0.3231
- Recall: 0.5151
- F1: 0.3971
- Accuracy: 0.8913
e90e8bcbd06ffaeae98837b1de4a90aa
apache-2.0
['generated_from_trainer']
false
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
c3b2a20470734e17ec0d91f0884c3c70
apache-2.0
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 30   | 0.2881          | 0.2089    | 0.3621 | 0.2650 | 0.8715   |
| No log        | 2.0   | 60   | 0.2500          | 0.2619    | 0.3842 | 0.3115 | 0.8845   |
| No log        | 3.0   | 90   | 0.2571          | 0.2327    | 0.4338 | 0.3030 | 0.8809   |
| No log        | 4.0   | 120  | 0.2479          | 0.3051    | 0.4761 | 0.3719 | 0.8949   |
| No log        | 5.0   | 150  | 0.2783          | 0.3287    | 0.4761 | 0.3889 | 0.8936   |
3900ef71352b4b23a4e3712cd4d2bc75
apache-2.0
['translation']
false
opus-mt-bi-sv
* source languages: bi
* target languages: sv
* OPUS readme: [bi-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bi-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/bi-sv/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-sv/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-sv/opus-2020-01-20.eval.txt)
16edafa117c4811bdd20116a452c2f77
mit
['gpt_neo', 'code_synthesis']
false
GPT-Neo-125M-APPS > **Please refer to our new [GitHub Wiki](https://github.com/ncoop57/gpt-code-clippy/wiki) which documents our efforts in detail in creating the open source version of GitHub Copilot**
ddca66f116a6774dfa64e6c209568fc5
mit
['gpt_neo', 'code_synthesis']
false
Training data The model is trained on the [Automated Programming Progress Standard (APPS) dataset](https://github.com/hendrycks/apps). The dataset consists of 10,000 coding problems in total, with 131,836 test cases for checking solutions and 232,444 ground-truth solutions written by humans. Problems can be complicated, as the average length of a problem is 293.2 words. The data are split evenly into training and test sets, with 5,000 problems each.
76e490221b0aa0ef5ee45b51973f3cea
mit
['gpt_neo', 'code_synthesis']
false
Training procedure
The training script used to train this model can be found [here](https://github.com/ncoop57/gpt-code-clippy/blob/camera-ready/training/run_clm_apps.py). Training is done for 5 epochs using the AdamW optimizer and a linear decay learning rate schedule with 800 warmup steps. To reproduce the training one can use this command with the above script:
```bash
python run_clm_apps.py \
    --output_dir $HOME/gpt-neo-125M-apps \
    --model_name_or_path EleutherAI/gpt-neo-125M \
    --dataset_name $HOME/gpt-code-clippy/data_processing/apps.py \
    --dataset_config_name formatted \
    --do_train --do_eval \
    --block_size="1024" \
    --per_device_train_batch_size="16" \
    --per_device_eval_batch_size="16" \
    --preprocessing_num_workers="16" \
    --learning_rate="8e-5" \
    --warmup_steps="800" \
    --adam_beta1="0.9" \
    --adam_beta2="0.98" \
    --weight_decay="0.1" \
    --overwrite_output_dir \
    --num_train_epochs="5" \
    --logging_steps="50" \
    --eval_steps="2000" \
    --report_to="wandb" \
    --dtype="bfloat16" \
    --save_strategy epoch \
    --gradient_accumulation_steps 2
```
71972a34885a666a9286180b1db34fd0
mit
['gpt_neo', 'code_synthesis']
false
How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("flax-community/gpt-neo-125M-apps")
tokenizer = AutoTokenizer.from_pretrained("flax-community/gpt-neo-125M-apps")

prompt = """
A function to greet user. Given a user name it should say hello

def greet(name):
ANSWER:
"""

input_ids = tokenizer(prompt, return_tensors='pt').input_ids
start = input_ids.size(1)  # decode only the newly generated tokens
out = model.generate(input_ids, do_sample=True, max_length=50, num_beams=2,
                     early_stopping=True, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0][start:]))
```
81cba947da4a7b8cf87137a84893ae15
apache-2.0
['generated_from_trainer']
false
KoT5-test-add-data-from5ep
This model is a fine-tuned version of [hyorea1/KoT5-test](https://huggingface.co/hyorea1/KoT5-test) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 1.1737
- Rouge1: 11.8294
- Rouge2: 3.2314
- Rougel: 11.7891
- Rougelsum: 11.8237
- Gen Len: 35.2824
805464a086deceff3418ea499d91cae7
apache-2.0
['generated_from_trainer']
false
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 100
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
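The cosine scheduler with `warmup_ratio: 0.1` means the first 10% of optimizer steps linearly warm the learning rate up to 2e-05, after which it follows a cosine curve down toward zero. A minimal sketch of this shape (the `total_steps` value here is hypothetical; the real count depends on dataset size and the settings above):

```python
import math

def cosine_lr(step, peak_lr=2e-05, total_steps=10_000, warmup_ratio=0.1):
    """Linear warmup over the first warmup_ratio of steps, then cosine
    decay to 0. total_steps is an assumed illustrative value."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * step / warmup_steps          # warmup phase
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return peak_lr * 0.5 * (1 + math.cos(math.pi * progress))

print(cosine_lr(500))     # mid-warmup (half of peak)
print(cosine_lr(1_000))   # end of warmup: peak learning rate
print(cosine_lr(10_000))  # end of training: decayed to ~0
```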
6e71e36de6d1eed9ddc9594c935f2c3b
apache-2.0
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2 | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 1.9029        | 0.16  | 400  | 1.1695          | 12.8243 | 3.2659 | 12.7542 | 12.8276   | 35.5743 |
| 1.7971        | 0.32  | 800  | 1.1646          | 12.259  | 3.0668 | 12.1254 | 12.1927   | 35.2353 |
| 1.4396        | 0.48  | 1200 | 1.1681          | 12.1151 | 3.1908 | 11.9507 | 12.0305   | 35.3125 |
| 1.0945        | 0.64  | 1600 | 1.1703          | 12.0576 | 2.9688 | 11.9292 | 11.9792   | 35.0926 |
| 1.1924        | 0.8   | 2000 | 1.1667          | 11.7835 | 2.9605 | 11.6755 | 11.7318   | 35.3596 |
| 1.3711        | 0.97  | 2400 | 1.1668          | 11.9873 | 3.1107 | 11.9369 | 12.0207   | 34.5309 |
| 1.6031        | 1.13  | 2800 | 1.1673          | 11.6049 | 3.1121 | 11.5527 | 11.5976   | 34.6551 |
| 1.5254        | 1.29  | 3200 | 1.1693          | 11.6803 | 2.8527 | 11.6116 | 11.6829   | 34.8066 |
| 1.641         | 1.45  | 3600 | 1.1737          | 11.8294 | 3.2314 | 11.7891 | 11.8237   | 35.2824 |
ec4bbd7812afdd7439b5ec26c6a9d05c
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-wnli
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set:
- Loss: 0.6950
- Accuracy: 0.5493
57b8527a2929891195aa170d160d2d04
apache-2.0
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 40   | 0.6929          | 0.5211   |
| No log        | 2.0   | 80   | 0.6951          | 0.4789   |
| No log        | 3.0   | 120  | 0.6950          | 0.5493   |
| No log        | 4.0   | 160  | 0.6966          | 0.5352   |
| No log        | 5.0   | 200  | 0.6966          | 0.5352   |
f509b77ad8407963987b680c5e0b18a0
apache-2.0
['automatic-speech-recognition', 'ar']
false
exp_w2v2t_ar_unispeech-sat_s504 Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (ar)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
9f244147cbcddbfa3e94486d231322a4