license
stringlengths
2
30
tags
stringlengths
2
513
is_nc
bool
1 class
readme_section
stringlengths
201
597k
hash
stringlengths
32
32
bsd-3-clause
['code', 'generative']
false
Model description CodeGen is a family of autoregressive language models for **program synthesis**, introduced in the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. The models were originally released in [this repository](https://github.com/salesforce/CodeGen) under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`). The checkpoint included in this repository is fine-tuned on top of **CodeGen-Multi 350M**, where "Multi" means the model was initialized with *CodeGen-NL 350M* and further pre-trained on a dataset of multiple programming languages, and "350M" refers to the number of trainable parameters. It has been fine-tuned on the CSS code contained in the [bigcode/the-stack](https://huggingface.co/datasets/bigcode/the-stack) dataset on Hugging Face.
c2f96b26f58c23c903420a07eeae89c3
bsd-3-clause
['code', 'generative']
false
Training data This checkpoint (CodeGen-Multi 350M) was first initialized with *CodeGen-NL 350M* and then pre-trained on [BigQuery](https://console.cloud.google.com/marketplace/details/github/github-repos), a large-scale dataset of multiple programming languages from GitHub repositories. The data consists of 119.2B tokens and includes C, C++, Go, Java, JavaScript, and Python. Lastly, it was fine-tuned on the CSS code contained in the [bigcode/the-stack](https://huggingface.co/datasets/bigcode/the-stack) dataset on Hugging Face.
307965ed40e9eb8102f602a197c572ad
bsd-3-clause
['code', 'generative']
false
Training procedure Initially: CodeGen was trained using a cross-entropy loss to maximize the likelihood of sequential inputs. The family of models was trained using multiple TPU-v4-512 pods by Google, leveraging data and model parallelism. See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details. Fine-tuning: the 350M model was fine-tuned on a single A100 with 40 GB of GPU memory, using a batch size of 10 and an input length of 512 tokens, which used 80-90% of the available memory.
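The cross-entropy objective mentioned above can be made concrete with a toy example (a sketch, not the actual training code; the 3-token vocabulary and the scores are made up):

```python
import math

def causal_lm_loss(logits, targets):
    """Average next-token cross-entropy, the objective used to train causal LMs.

    logits: one score vector per position (over a toy vocabulary)
    targets: the token id the model should predict at each position
    """
    total = 0.0
    for scores, target in zip(logits, targets):
        # log-softmax: -log p(target) = log(sum(exp(scores))) - scores[target]
        log_z = math.log(sum(math.exp(s) for s in scores))
        total += log_z - scores[target]
    return total / len(targets)

# Toy 3-token vocabulary; the "model" is confident and correct at both steps,
# so the loss is small.
logits = [[5.0, 0.0, 0.0], [0.0, 5.0, 0.0]]
targets = [0, 1]
print(round(causal_lm_loss(logits, targets), 4))  # → 0.0134
```

Maximizing the likelihood of the training sequences is exactly minimizing this quantity averaged over all token positions.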
871df1b23e7fa366140600b6e3676937
bsd-3-clause
['code', 'generative']
false
How to use This model can be easily loaded using the `AutoModelForCausalLM` functionality:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-multi")
model = AutoModelForCausalLM.from_pretrained("alecsharpie/codegen_350m_css")

text = ".header-container {"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
7b5326de3e3a7179001d29135c1354a5
apache-2.0
[]
false
Details of ByT5 - Base 🧠 ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-base). ByT5 was pre-trained only on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual).
ddad1f79e2f741772239ee50663e7fc4
apache-2.0
[]
false
ByT5 was pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) without any supervised training, using an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is usable on a downstream task. ByT5 works especially well on noisy text data, *e.g.*, `google/byt5-base` significantly outperforms [mt5-base](https://huggingface.co/google/mt5-base) on [TweetQA](https://arxiv.org/abs/1907.06292). Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*
6b92a060840780c5609df593bda05e56
apache-2.0
[]
false
Details of byt5-is-ocr-post-processing-old-texts This model generates a corrected version of a given Icelandic OCRed text. The model was trained with [simpleT5](https://github.com/Shivanandroy/simpleT5) on 900,000 lines (~7,000,000 tokens), of which only 50,000 (~400,000 tokens) came from real OCRed texts. The rest were extracted from [The Icelandic Gigaword Corpus](https://clarin.is/en/resources/gigaword/) and augmented with artificial errors. It can be assumed that increasing the amount of real OCRed data would significantly improve the model. For inference, it is recommended to feed the model one line (not necessarily a whole sentence) at a time.
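The artificial-error augmentation described above is not specified in detail; a minimal sketch of what such augmentation could look like (the confusion list, probabilities, and example line here are assumptions, not the authors' recipe):

```python
import random

# Hypothetical OCR-style character confusions (not the authors' actual list)
CONFUSIONS = {"rn": "m", "é": "e", "ð": "d", "l": "1"}

def add_ocr_noise(line, p=0.5, seed=0):
    """Apply each OCR-style confusion to a clean line with probability p."""
    rng = random.Random(seed)
    out = line
    for src, dst in CONFUSIONS.items():
        if src in out and rng.random() < p:
            out = out.replace(src, dst, 1)
    return out

clean = "hérna er setning"            # a clean corpus line (made-up example)
noisy = add_ocr_noise(clean, p=1.0)   # → "hema er setning"
# A training pair is then (input=noisy, target=clean).
```

Each noisy/clean pair trains the model to map corrupted text back to the original line, mimicking what it must do on real OCR output.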
8d226e01a99726e85c0b82c3a12ea7c4
apache-2.0
[]
false
Usage

```python
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
from datasets import load_dataset

MODEL = 'atlijas/byt5-is-ocr-post-processing-old-texts'
correct_ocr = pipeline('text2text-generation', model=MODEL, tokenizer=MODEL, num_return_sequences=1)

dataset = load_dataset('/path/to/', data_files='my_ocred_file.txt')
lines = dataset['train']
file_length = len(lines)

for corrected in correct_ocr(KeyDataset(lines, 'text'), max_length=150, batch_size=32):
    print(corrected[0]['generated_text'])
```
52c88d9da86c96771b3aecb2b42eaef6
apache-2.0
[]
false
Evaluation results The test set for this model consists of various Icelandic texts from the 19th and early 20th century. On it, the model achieves a chrF error rate reduction of 39.3%, with the original text's score being 94.6, and the processed one's 96.7. The model achieves a proportional BLEU improvement of 51.6%, with the original text's BLEU score being 97.2 and the processed one's 98.6.
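The reported reductions can be recomputed (approximately, since the published scores are rounded) by treating 100 minus the score as the error rate:

```python
def error_rate_reduction(before, after, best=100.0):
    """Relative reduction in (best - score), expressed as a percentage."""
    return (after - before) / (best - before) * 100

chrf = error_rate_reduction(94.6, 96.7)   # ≈ 38.9, close to the reported 39.3
bleu = error_rate_reduction(97.2, 98.6)   # ≈ 50.0, close to the reported 51.6
```

The small gaps to the reported 39.3% and 51.6% presumably come from the unrounded scores used in the original evaluation.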
364c0eeac293873ef7c41c1f7b052ffb
apache-2.0
[]
false
Acknowledgments This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture.
d27cf584d4883357c17ee235dd3287f0
other
['generated_from_trainer']
false
distilroberta-propaganda-2class This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the QCRI propaganda dataset. It achieves the following results on the evaluation set: - Loss: 0.5087 - Acc: 0.7424
4e856404d11c5042b888d9c99ff4eccc
other
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Acc    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5737        | 1.0   | 493  | 0.5998          | 0.6515 |
| 0.4954        | 2.0   | 986  | 0.5530          | 0.7080 |
| 0.4774        | 3.0   | 1479 | 0.5331          | 0.7258 |
| 0.4846        | 4.0   | 1972 | 0.5247          | 0.7339 |
| 0.4749        | 5.0   | 2465 | 0.5392          | 0.7199 |
| 0.502         | 6.0   | 2958 | 0.5124          | 0.7466 |
| 0.457         | 7.0   | 3451 | 0.5167          | 0.7432 |
| 0.4899        | 8.0   | 3944 | 0.5160          | 0.7428 |
| 0.4833        | 9.0   | 4437 | 0.5280          | 0.7339 |
| 0.5114        | 10.0  | 4930 | 0.5112          | 0.7436 |
| 0.4419        | 11.0  | 5423 | 0.5060          | 0.7525 |
| 0.4743        | 12.0  | 5916 | 0.5031          | 0.7547 |
| 0.4597        | 13.0  | 6409 | 0.5043          | 0.7517 |
| 0.4861        | 14.0  | 6902 | 0.5055          | 0.7487 |
| 0.499         | 15.0  | 7395 | 0.5091          | 0.7419 |
| 0.501         | 16.0  | 7888 | 0.5037          | 0.7521 |
| 0.4659        | 17.0  | 8381 | 0.5087          | 0.7424 |
4298423ee45d1fa914398a4c999fdc80
cc-by-4.0
['named-entity-recognition', 'legal', 'ner']
false
Model description This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on the [German LER Dataset](https://huggingface.co/datasets/elenanereiss/german-ler). Distribution of fine-grained classes in the dataset:
6706a8150d79cb497cf71aea0273c373
cc-by-4.0
['named-entity-recognition', 'legal', 'ner']
false
|    |         | **Fine-grained classes** | **Count**  | **%**   |
|----|---------|--------------------------|------------|---------|
| 1  | **PER** | _Person_                 | 1,747      | 3.26    |
| 2  | **RR**  | _Judge_                  | 1,519      | 2.83    |
| 3  | **AN**  | _Lawyer_                 | 111        | 0.21    |
| 4  | **LD**  | _Country_                | 1,429      | 2.66    |
| 5  | **ST**  | _City_                   | 705        | 1.31    |
| 6  | **STR** | _Street_                 | 136        | 0.25    |
| 7  | **LDS** | _Landscape_              | 198        | 0.37    |
| 8  | **ORG** | _Organization_           | 1,166      | 2.17    |
| 9  | **UN**  | _Company_                | 1,058      | 1.97    |
| 10 | **INN** | _Institution_            | 2,196      | 4.09    |
| 11 | **GRT** | _Court_                  | 3,212      | 5.99    |
| 12 | **MRK** | _Brand_                  | 283        | 0.53    |
| 13 | **GS**  | _Law_                    | 18,520     | 34.53   |
| 14 | **VO**  | _Ordinance_              | 797        | 1.49    |
| 15 | **EUN** | _European legal norm_    | 1,499      | 2.79    |
| 16 | **VS**  | _Regulation_             | 607        | 1.13    |
| 17 | **VT**  | _Contract_               | 2,863      | 5.34    |
| 18 | **RS**  | _Court decision_         | 12,580     | 23.46   |
| 19 | **LIT** | _Legal literature_       | 3,006      | 5.60    |
|    |         | **Total**                | **53,632** | **100** |

To learn how to fine-tune another model on the German LER Dataset, see [GitHub](https://github.com/elenanereiss/bert-legal-ner).
8fb60ed901c76144c895d0c3ad52643f
cc-by-4.0
['named-entity-recognition', 'legal', 'ner']
false
Results on the dev set:
```
              precision    recall  f1-score   support

          AN       0.75      0.50      0.60        12
         EUN       0.92      0.93      0.92       116
         GRT       0.95      0.99      0.97       331
          GS       0.98      0.98      0.98      1720
         INN       0.84      0.91      0.88       199
          LD       0.95      0.95      0.95       109
         LDS       0.82      0.43      0.56        21
         LIT       0.88      0.92      0.90       231
         MRK       0.50      0.70      0.58        23
         ORG       0.64      0.71      0.67       103
         PER       0.86      0.93      0.90       186
          RR       0.97      0.98      0.97       144
          RS       0.94      0.95      0.94      1126
          ST       0.91      0.88      0.89        58
         STR       0.29      0.29      0.29         7
          UN       0.81      0.85      0.83       143
          VO       0.76      0.95      0.84        37
          VS       0.62      0.80      0.70        56
          VT       0.87      0.92      0.90       275

   micro avg       0.92      0.94      0.93      4897
   macro avg       0.80      0.82      0.80      4897
weighted avg       0.92      0.94      0.93      4897
```
3ff2751c66bad1e2c958ee9fe3781861
cc-by-4.0
['named-entity-recognition', 'legal', 'ner']
false
Results on the test set:
```
              precision    recall  f1-score   support

          AN       1.00      0.89      0.94         9
         EUN       0.90      0.97      0.93       150
         GRT       0.98      0.98      0.98       321
          GS       0.98      0.99      0.98      1818
         INN       0.90      0.95      0.92       222
          LD       0.97      0.92      0.94       149
         LDS       0.91      0.45      0.61        22
         LIT       0.92      0.96      0.94       314
         MRK       0.78      0.88      0.82        32
         ORG       0.82      0.88      0.85       113
         PER       0.92      0.88      0.90       173
          RR       0.95      0.99      0.97       142
          RS       0.97      0.98      0.97      1245
          ST       0.79      0.86      0.82        64
         STR       0.75      0.80      0.77        15
          UN       0.90      0.95      0.93       108
          VO       0.80      0.83      0.81        71
          VS       0.73      0.84      0.78        64
          VT       0.93      0.97      0.95       290

   micro avg       0.94      0.96      0.95      5322
   macro avg       0.89      0.89      0.89      5322
weighted avg       0.95      0.96      0.95      5322
```
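The micro average above aggregates true positives, false positives, and false negatives over all classes before scoring, while the macro average weights every class equally regardless of support; a toy illustration of the difference (the counts are made up):

```python
def f1(tp, fp, fn):
    """F1 score from true-positive, false-positive, and false-negative counts."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    return 2 * p * r / (p + r)

# Two classes with very different support: (tp, fp, fn) per class
counts = [(90, 10, 10), (1, 1, 1)]

macro_f1 = sum(f1(*c) for c in counts) / len(counts)        # mean of per-class F1
micro_f1 = f1(sum(c[0] for c in counts),                    # F1 of pooled counts
              sum(c[1] for c in counts),
              sum(c[2] for c in counts))
# The rare class drags the macro average down far more than the micro average.
```

This is why rare classes like STR and LDS hurt the macro average above while barely moving the micro average.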
e56b51658a76d44a7b51c7067b44e366
cc-by-4.0
['named-entity-recognition', 'legal', 'ner']
false
Reference
```
@misc{https://doi.org/10.48550/arxiv.2003.13016,
  doi       = {10.48550/ARXIV.2003.13016},
  url       = {https://arxiv.org/abs/2003.13016},
  author    = {Leitner, Elena and Rehm, Georg and Moreno-Schneider, Julián},
  keywords  = {Computation and Language (cs.CL), Information Retrieval (cs.IR), FOS: Computer and information sciences},
  title     = {A Dataset of German Legal Documents for Named Entity Recognition},
  publisher = {arXiv},
  year      = {2020},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```
025515b2ab2fba82babcdbd080858097
afl-3.0
[]
false
This model is used for detecting **abusive speech** in **code-mixed Urdu**. It was fine-tuned from the MuRIL model on a code-mixed Urdu abusive speech dataset, with a learning rate of 2e-5. Training code can be found at this [url](https://github.com/hate-alert/IndicAbusive). LABEL_0: Normal. LABEL_1: Abusive.
c03e9f18922a8f0cef136efca3c94133
apache-2.0
['generated_from_trainer']
false
bert-large-cased-sigir-support-no-label-40-sigir-tune2nd-LR10-labelled-40 This model is a fine-tuned version of [jojoUla/bert-large-cased-sigir-support-no-label-40](https://huggingface.co/jojoUla/bert-large-cased-sigir-support-no-label-40) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4091
fe6ef22e6a53fa8ab1772702d34aa9b6
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.0728        | 1.0   | 1    | 3.3883          |
| 3.6431        | 2.0   | 2    | 4.0303          |
| 3.4087        | 3.0   | 3    | 2.1767          |
| 2.522         | 4.0   | 4    | 2.3348          |
| 1.8187        | 5.0   | 5    | 0.7921          |
| 1.5562        | 6.0   | 6    | 1.6986          |
| 1.505         | 7.0   | 7    | 1.7494          |
| 1.4673        | 8.0   | 8    | 1.5797          |
| 1.22          | 9.0   | 9    | 1.7811          |
| 1.5497        | 10.0  | 10   | 2.0455          |
| 0.8699        | 11.0  | 11   | 2.7731          |
| 1.6008        | 12.0  | 12   | 2.3984          |
| 0.9909        | 13.0  | 13   | 1.7870          |
| 1.4982        | 14.0  | 14   | 1.5336          |
| 0.88          | 15.0  | 15   | 0.5394          |
| 0.5231        | 16.0  | 16   | 0.5391          |
| 1.1294        | 17.0  | 17   | 1.2333          |
| 1.5638        | 18.0  | 18   | 1.4246          |
| 1.5274        | 19.0  | 19   | 0.7396          |
| 1.1525        | 20.0  | 20   | 0.7160          |
| 0.7708        | 21.0  | 21   | 3.9853          |
| 0.6681        | 22.0  | 22   | 1.8747          |
| 0.6073        | 23.0  | 23   | 1.0765          |
| 0.64          | 24.0  | 24   | 0.7888          |
| 1.3657        | 25.0  | 25   | 1.0972          |
| 1.1772        | 26.0  | 26   | 0.6801          |
| 1.6493        | 27.0  | 27   | 0.8378          |
| 0.8971        | 28.0  | 28   | 0.5728          |
| 1.3524        | 29.0  | 29   | 1.7829          |
| 0.7754        | 30.0  | 30   | 1.8142          |
| 1.1628        | 31.0  | 31   | 0.7712          |
| 0.4534        | 32.0  | 32   | 1.3779          |
| 0.6799        | 33.0  | 33   | 1.0512          |
| 1.2813        | 34.0  | 34   | 0.5455          |
| 0.6709        | 35.0  | 35   | 1.8824          |
| 0.4398        | 36.0  | 36   | 2.1419          |
| 0.1491        | 37.0  | 37   | 1.2215          |
| 0.7378        | 38.0  | 38   | 1.7122          |
| 0.657         | 39.0  | 39   | 1.4764          |
| 0.9551        | 40.0  | 40   | 0.5116          |
08d7f071d5902504abcc2426e85ee3be
mit
['speech', 'text', 'cross-modal', 'unified model', 'self-supervised learning', 'SpeechT5', 'Voice Conversion']
false
SpeechT5 VC Manifest | [**Github**](https://github.com/microsoft/SpeechT5) | [**Huggingface**](https://huggingface.co/mechanicalsea/speecht5-vc) | This manifest is an attempt to recreate the Voice Conversion recipe used for training [SpeechT5](https://aclanthology.org/2022.acl-long.393). It was constructed from four [CMU ARCTIC](http://www.festvox.org/cmu_arctic/) speakers (bdl, clb, rms, slt). There are 932 utterances for training, 100 utterances for validation, and 100 utterances for evaluation.
55604fc826d8d3a251b16ebd6bdd3462
mit
['speech', 'text', 'cross-modal', 'unified model', 'self-supervised learning', 'SpeechT5', 'Voice Conversion']
false
News - 8 February 2023: SpeechT5 is integrated as an official model into the Hugging Face Transformers library [[Blog](https://huggingface.co/blog/speecht5)] and [[Demo](https://huggingface.co/spaces/Matthijs/speecht5-vc-demo)].
0e287be8389a72f04f78675c508456b0
mit
['speech', 'text', 'cross-modal', 'unified model', 'self-supervised learning', 'SpeechT5', 'Voice Conversion']
false
Model and Samples - [`speecht5_vc.pt`](./speecht5_vc.pt) is a reimplementation of the Voice Conversion fine-tuning on the released manifest, **but with a smaller batch size or fewer max updates** (verify the manifest before use). - `samples` were created with the released fine-tuned model and vocoder.
b7059932ad2d3ca9b3788357b2ccae4b
apache-2.0
['generated_from_trainer']
false
tiny-mlm-glue-mrpc-target-glue-mrpc This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-mrpc](https://huggingface.co/muhtasham/tiny-mlm-glue-mrpc) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0963 - Accuracy: 0.7034 - F1: 0.7738
c1dfcab37bf90602519435b2ab82d096
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5884        | 4.35  | 500  | 0.5523          | 0.7059   | 0.8046 |
| 0.4494        | 8.7   | 1000 | 0.5547          | 0.7574   | 0.8358 |
| 0.304         | 13.04 | 1500 | 0.6339          | 0.7525   | 0.8256 |
| 0.1927        | 17.39 | 2000 | 0.7843          | 0.7230   | 0.8000 |
| 0.1179        | 21.74 | 2500 | 1.0963          | 0.7034   | 0.7738 |
209de4774473df69b53fcda713f3dde3
apache-2.0
['translation']
false
eng-spa * source group: English * target group: Spanish * OPUS readme: [eng-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-spa/README.md) * model: transformer * source language(s): eng * target language(s): spa * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-08-18.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.zip) * test set translations: [opus-2020-08-18.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.test.txt) * test set scores: [opus-2020-08-18.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.eval.txt)
e6a2fee49b42b42aa2cb09b5a1170b43
apache-2.0
['translation']
false
Benchmarks

| testset                        | BLEU | chr-F |
|--------------------------------|------|-------|
| newssyscomb2009-engspa.eng.spa | 31.0 | 0.583 |
| news-test2008-engspa.eng.spa   | 29.7 | 0.564 |
| newstest2009-engspa.eng.spa    | 30.2 | 0.578 |
| newstest2010-engspa.eng.spa    | 36.9 | 0.620 |
| newstest2011-engspa.eng.spa    | 38.2 | 0.619 |
| newstest2012-engspa.eng.spa    | 39.0 | 0.625 |
| newstest2013-engspa.eng.spa    | 35.0 | 0.598 |
| Tatoeba-test.eng.spa           | 54.9 | 0.721 |
e455aee2895c42526afb2cf8c2f0207d
apache-2.0
['translation']
false
System Info: - hf_name: eng-spa - source_languages: eng - target_languages: spa - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-spa/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'es'] - src_constituents: {'eng'} - tgt_constituents: {'spa'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.test.txt - src_alpha3: eng - tgt_alpha3: spa - short_pair: en-es - chrF2_score: 0.721 - bleu: 54.9 - brevity_penalty: 0.978 - ref_len: 77311.0 - src_name: English - tgt_name: Spanish - train_date: 2020-08-18 00:00:00 - src_alpha2: en - tgt_alpha2: es - prefer_old: False - long_pair: eng-spa - helsinki_git_sha: d2f0910c89026c34a44e331e785dec1e0faa7b82 - transformers_git_sha: f7af09b4524b784d67ae8526f0e2fcc6f5ed0de9 - port_machine: brutasse - port_time: 2020-08-24-18:20
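The `brevity_penalty` of 0.978 reported above follows the standard BLEU definition, BP = min(1, exp(1 - ref_len/hyp_len)); a minimal sketch (the hypothesis length below is back-solved from the reported BP for illustration, not taken from the actual evaluation output):

```python
import math

def brevity_penalty(ref_len, hyp_len):
    """Standard BLEU brevity penalty: penalizes hypotheses shorter than the reference."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# With ref_len = 77311 (as reported), a BP of ~0.978 implies the system
# output was roughly 2% shorter than the reference:
hyp_len = 77311 / (1.0 - math.log(0.978))
print(round(brevity_penalty(77311, hyp_len), 3))  # → 0.978
```

A BP below 1.0 means the 54.9 BLEU score already includes a small penalty for short output.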
dc3aa6b89330222f930794dfc1072086
apache-2.0
['summarization', 'generated_from_trainer']
false
t5-small-finetuned-amazon-en This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - eval_loss: 5.1622 - eval_rouge1: 14.7056 - eval_rouge2: 6.5373 - eval_rougeL: 13.8753 - eval_rougeLsum: 13.9924 - eval_runtime: 3.8484 - eval_samples_per_second: 35.08 - eval_steps_per_second: 4.417 - step: 0
5f87145f4031ee46e87246853c4712a1
apache-2.0
['summarization', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100
33da635b61c6ed99fc7dd293a34cfc30
apache-2.0
['summarization', 'generated_from_trainer']
false
mt5-small-finetuned-amazon-en-es This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 2.8942 - Rouge1: 10.9755 - Rouge2: 4.1273 - Rougel: 10.7296 - Rougelsum: 10.8385
23120b9f6ce25bd5f300581aa3e19618
apache-2.0
['summarization', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2 | Rougel  | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 10.5642       | 1.0   | 439  | 3.4153          | 7.3011  | 1.5565 | 7.0922  | 7.2304    |
| 4.6623        | 2.0   | 878  | 3.0078          | 14.2168 | 5.1958 | 13.9667 | 14.0056   |
| 4.0693        | 3.0   | 1317 | 2.9778          | 11.6795 | 5.5855 | 11.7257 | 11.6912   |
| 3.8168        | 4.0   | 1756 | 2.9269          | 11.9956 | 5.5567 | 11.8085 | 12.056    |
| 3.6715        | 5.0   | 2195 | 2.9169          | 11.0503 | 4.6811 | 10.8545 | 11.0054   |
| 3.562         | 6.0   | 2634 | 2.8999          | 10.8282 | 4.4821 | 10.6345 | 10.7208   |
| 3.514         | 7.0   | 3073 | 2.8974          | 11.6036 | 4.7404 | 11.4618 | 11.4119   |
| 3.4704        | 8.0   | 3512 | 2.8942          | 10.9755 | 4.1273 | 10.7296 | 10.8385   |
7ffe40b7ec61e969db73e941102b1596
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
average_word_embeddings_glove.6B.300d **This model is a copy of [this model repository](https://huggingface.co/sentence-transformers/average_word_embeddings_glove.6B.300d) from sentence-transformers at the specific commit `5d2b7d1c127036ae98b9d487eca4d48744edc709`.** This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 300 dimensional dense vector space and can be used for tasks like clustering or semantic search.
73be9d18ee033929271d51d1062b9243
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/average_word_embeddings_glove.6B.300d')
embeddings = model.encode(sentences)
print(embeddings)
```
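The embeddings returned by `model.encode` are typically compared with cosine similarity for clustering or semantic search; a self-contained sketch with toy 3-d vectors standing in for the real 300-d embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

query = [1.0, 0.0, 1.0]
corpus = {"doc1": [1.0, 0.0, 1.0], "doc2": [0.0, 1.0, 0.0]}

# Rank documents by similarity to the query: semantic search in miniature
ranked = sorted(corpus, key=lambda d: cosine_similarity(query, corpus[d]), reverse=True)
print(ranked)  # → ['doc1', 'doc2']
```

With real sentence embeddings the procedure is identical, only the vectors come from `model.encode`.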
57f075f93374bd92664d3e3d7a79d507
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/average_word_embeddings_glove.6B.300d)
66ae1d966b99398fa3aee6f64d9f1cab
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
Full Model Architecture
```
SentenceTransformer(
  (0): WordEmbeddings(
    (emb_layer): Embedding(400001, 300)
  )
  (1): Pooling({'word_embedding_dimension': 300, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
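The `Pooling` module with `pooling_mode_mean_tokens: True` simply averages the word vectors of a sentence into one sentence vector; a sketch of mean pooling with tiny 3-d toy vectors (real GloVe vectors are 300-d):

```python
def mean_pooling(token_embeddings):
    """Average token vectors into a single sentence vector (mean pooling)."""
    dim = len(token_embeddings[0])
    n = len(token_embeddings)
    return [sum(vec[i] for vec in token_embeddings) / n for i in range(dim)]

# Two toy "word vectors" standing in for the model's 300-d GloVe embeddings
tokens = [[1.0, 2.0, 3.0], [3.0, 2.0, 1.0]]
print(mean_pooling(tokens))  # → [2.0, 2.0, 2.0]
```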
f24056dbac0d28bb8f57054d4883a3a5
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 12
00b9c4a8986e0e62d868dfdf0fd8527f
apache-2.0
['translation']
false
opus-mt-en-ROMANCE * source languages: en * target languages: fr,fr_BE,fr_CA,fr_FR,wa,frp,oc,ca,rm,lld,fur,lij,lmo,es,es_AR,es_CL,es_CO,es_CR,es_DO,es_EC,es_ES,es_GT,es_HN,es_MX,es_NI,es_PA,es_PE,es_PR,es_SV,es_UY,es_VE,pt,pt_br,pt_BR,pt_PT,gl,lad,an,mwl,it,it_IT,co,nap,scn,vec,sc,ro,la * OPUS readme: [en-fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la/README.md) * dataset: opus * model: transformer * pre-processing: normalization + SentencePiece * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-04-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la/opus-2020-04-21.zip) * test set translations: [opus-2020-04-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la/opus-2020-04-21.test.txt) * test set scores: 
[opus-2020-04-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la/opus-2020-04-21.eval.txt)
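As noted above, generation with this model requires a sentence-initial target-language token of the form `>>id<<`; a trivial helper to prepend it (the language ids come from the target-language list above):

```python
def add_target_token(text, lang_id):
    """Prepend the >>id<< target-language token this multilingual model expects."""
    return f">>{lang_id}<< {text}"

batch = [add_target_token("How are you?", "fr"),
         add_target_token("How are you?", "es")]
print(batch[0])  # → '>>fr<< How are you?'
```

Each prepared string is then passed to the tokenizer/model as usual; without the token the model has no way to know which Romance language to produce.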
87e89c5da4aee563be6676f3d04825ab
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xls-r-300m-chuvash-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - eval_loss: 0.6998 - eval_wer: 0.7356 - eval_runtime: 233.6193 - eval_samples_per_second: 3.373 - eval_steps_per_second: 0.424 - epoch: 9.75 - step: 400
736aac66b86ce41c43b2c9d098546d62
apache-2.0
['automatic-speech-recognition', 'sv-SE']
false
exp_w2v2t_sv-se_vp-100k_s904 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
0a1f8114e405faf6475d6f85bfe89e2a
apache-2.0
['generated_from_trainer']
false
scideberta-cs-tdm-pretrained-finetuned-ner-finetuned-ner This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 0.7548 - Overall Precision: 0.5582 - Overall Recall: 0.7048 - Overall F1: 0.6230 - Overall Accuracy: 0.9578 - Datasetname F1: 0.6225 - Hyperparametername F1: 0.5707 - Hyperparametervalue F1: 0.6796 - Methodname F1: 0.6812 - Metricname F1: 0.5039 - Metricvalue F1: 0.7097 - Taskname F1: 0.5776
0510f92535f14d535a14a68dd1bdcf41
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | Datasetname F1 | Hyperparametername F1 | Hyperparametervalue F1 | Methodname F1 | Metricname F1 | Metricvalue F1 | Taskname F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:|:--------------:|:---------------------:|:----------------------:|:-------------:|:-------------:|:--------------:|:-----------:|
| No log | 1.0 | 132 | 0.6819 | 0.2314 | 0.3769 | 0.2867 | 0.9125 | 0.1270 | 0.2305 | 0.2479 | 0.4072 | 0.3119 | 0.0635 | 0.2366 |
| No log | 2.0 | 264 | 0.4337 | 0.3977 | 0.5687 | 0.4681 | 0.9429 | 0.4516 | 0.3704 | 0.5419 | 0.5900 | 0.2446 | 0.4340 | 0.4609 |
| No log | 3.0 | 396 | 0.3968 | 0.3617 | 0.6367 | 0.4613 | 0.9335 | 0.4828 | 0.3586 | 0.5649 | 0.5331 | 0.3190 | 0.4800 | 0.4585 |
| 0.5603 | 4.0 | 528 | 0.3730 | 0.3605 | 0.6327 | 0.4593 | 0.9363 | 0.4750 | 0.3789 | 0.6066 | 0.5376 | 0.3229 | 0.4571 | 0.4375 |
| 0.5603 | 5.0 | 660 | 0.4132 | 0.4650 | 0.6871 | 0.5546 | 0.9482 | 0.4943 | 0.4965 | 0.6577 | 0.6465 | 0.4387 | 0.5306 | 0.5039 |
| 0.5603 | 6.0 | 792 | 0.4071 | 0.4482 | 0.6884 | 0.5429 | 0.9468 | 0.5541 | 0.4341 | 0.5991 | 0.6037 | 0.4865 | 0.64 | 0.5688 |
| 0.5603 | 7.0 | 924 | 0.4077 | 0.4830 | 0.6952 | 0.5700 | 0.9508 | 0.5063 | 0.4953 | 0.7032 | 0.6397 | 0.4286 | 0.6263 | 0.5469 |
| 0.1161 | 8.0 | 1056 | 0.5215 | 0.5426 | 0.6925 | 0.6085 | 0.9577 | 0.6423 | 0.5190 | 0.7115 | 0.6711 | 0.5175 | 0.6286 | 0.5797 |
| 0.1161 | 9.0 | 1188 | 0.5192 | 0.4859 | 0.7020 | 0.5743 | 0.9518 | 0.5578 | 0.5195 | 0.5992 | 0.6571 | 0.4744 | 0.5532 | 0.5611 |
| 0.1161 | 10.0 | 1320 | 0.5301 | 0.5478 | 0.7020 | 0.6154 | 0.9563 | 0.5732 | 0.5782 | 0.7619 | 0.6462 | 0.4675 | 0.7253 | 0.5727 |
| 0.1161 | 11.0 | 1452 | 0.4965 | 0.5139 | 0.7048 | 0.5944 | 0.9531 | 0.5857 | 0.5290 | 0.7189 | 0.6639 | 0.4235 | 0.6476 | 0.5532 |
| 0.049 | 12.0 | 1584 | 0.6207 | 0.5713 | 0.6925 | 0.6261 | 0.9582 | 0.64 | 0.5377 | 0.7594 | 0.7207 | 0.5070 | 0.6136 | 0.5530 |
| 0.049 | 13.0 | 1716 | 0.6056 | 0.5360 | 0.7088 | 0.6104 | 0.9570 | 0.5921 | 0.5035 | 0.7000 | 0.7115 | 0.4648 | 0.6939 | 0.5854 |
| 0.049 | 14.0 | 1848 | 0.6540 | 0.5804 | 0.6925 | 0.6315 | 0.9599 | 0.6466 | 0.5344 | 0.7324 | 0.6874 | 0.5401 | 0.7083 | 0.5980 |
| 0.049 | 15.0 | 1980 | 0.5911 | 0.5068 | 0.7048 | 0.5896 | 0.9528 | 0.5399 | 0.5176 | 0.7150 | 0.6397 | 0.4625 | 0.6800 | 0.5865 |
| 0.0225 | 16.0 | 2112 | 0.5788 | 0.5186 | 0.7007 | 0.5961 | 0.9531 | 0.5874 | 0.5011 | 0.7177 | 0.6796 | 0.4810 | 0.6744 | 0.5517 |
| 0.0225 | 17.0 | 2244 | 0.6097 | 0.5399 | 0.6912 | 0.6062 | 0.9547 | 0.5811 | 0.5744 | 0.6900 | 0.6439 | 0.5033 | 0.7253 | 0.5470 |
| 0.0225 | 18.0 | 2376 | 0.7006 | 0.5714 | 0.6748 | 0.6188 | 0.9590 | 0.6471 | 0.5645 | 0.6465 | 0.6710 | 0.5426 | 0.6809 | 0.5755 |
| 0.0149 | 19.0 | 2508 | 0.6051 | 0.5400 | 0.7252 | 0.6190 | 0.9554 | 0.6443 | 0.5514 | 0.6547 | 0.6777 | 0.5132 | 0.6947 | 0.6 |
| 0.0149 | 20.0 | 2640 | 0.7220 | 0.5995 | 0.6884 | 0.6409 | 0.9605 | 0.6429 | 0.5570 | 0.6806 | 0.7339 | 0.5865 | 0.7416 | 0.5540 |
| 0.0149 | 21.0 | 2772 | 0.6912 | 0.5977 | 0.7034 | 0.6462 | 0.9599 | 0.6377 | 0.5387 | 0.7343 | 0.7281 | 0.5846 | 0.7273 | 0.5899 |
| 0.0149 | 22.0 | 2904 | 0.6952 | 0.5802 | 0.6939 | 0.6320 | 0.9574 | 0.5867 | 0.5445 | 0.7358 | 0.6951 | 0.5736 | 0.7473 | 0.5830 |
| 0.0097 | 23.0 | 3036 | 0.7600 | 0.6241 | 0.6912 | 0.6559 | 0.9618 | 0.6119 | 0.5895 | 0.7629 | 0.7356 | 0.5512 | 0.6897 | 0.5837 |
| 0.0097 | 24.0 | 3168 | 0.7184 | 0.5924 | 0.6980 | 0.6408 | 0.9598 | 0.6486 | 0.5640 | 0.7179 | 0.7146 | 0.5630 | 0.7174 | 0.5714 |
| 0.0097 | 25.0 | 3300 | 0.7120 | 0.5485 | 0.7007 | 0.6153 | 0.9566 | 0.6579 | 0.5441 | 0.6667 | 0.6993 | 0.4774 | 0.6522 | 0.5766 |
| 0.0097 | 26.0 | 3432 | 0.7914 | 0.6009 | 0.7088 | 0.6504 | 0.9583 | 0.6443 | 0.6070 | 0.7293 | 0.7082 | 0.5645 | 0.6737 | 0.5872 |
| 0.0065 | 27.0 | 3564 | 0.7986 | 0.5800 | 0.6952 | 0.6324 | 0.9589 | 0.6309 | 0.5521 | 0.7150 | 0.7281 | 0.4844 | 0.7097 | 0.5714 |
| 0.0065 | 28.0 | 3696 | 0.7767 | 0.6087 | 0.7007 | 0.6515 | 0.9599 | 0.6364 | 0.5824 | 0.7526 | 0.7169 | 0.5238 | 0.7097 | 0.6038 |
| 0.0065 | 29.0 | 3828 | 0.7435 | 0.6077 | 0.6912 | 0.6467 | 0.9612 | 0.6479 | 0.5674 | 0.7396 | 0.7088 | 0.5255 | 0.7333 | 0.6066 |
| 0.0065 | 30.0 | 3960 | 0.8305 | 0.6230 | 0.6857 | 0.6528 | 0.9613 | 0.6483 | 0.5650 | 0.7817 | 0.7341 | 0.4715 | 0.7174 | 0.5962 |
| 0.0051 | 31.0 | 4092 | 0.7180 | 0.5776 | 0.7088 | 0.6365 | 0.9583 | 0.6194 | 0.5825 | 0.7393 | 0.6874 | 0.4923 | 0.7021 | 0.5962 |
| 0.0051 | 32.0 | 4224 | 0.7526 | 0.5708 | 0.6857 | 0.6230 | 0.9585 | 0.64 | 0.5276 | 0.7246 | 0.7083 | 0.4627 | 0.6813 | 0.5922 |
| 0.0051 | 33.0 | 4356 | 0.7548 | 0.5582 | 0.7048 | 0.6230 | 0.9578 | 0.6225 | 0.5707 | 0.6796 | 0.6812 | 0.5039 | 0.7097 | 0.5776 |
bac460aca0db67306b3c40255f030169
cc-by-4.0
['espnet', 'audio', 'text-to-speech']
false
`kan-bayashi/ljspeech_fastspeech` ♻️ Imported from https://zenodo.org/record/3986231/ This model was trained by kan-bayashi using the ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
9b8fa5d94dd96d5a83a5350e64010335
apache-2.0
['generated_from_trainer']
false
bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3619 - Precision: 0.7737 - Recall: 0.7568 - F1: 0.7651 - Accuracy: 0.8876
2f6cd472a26a1deb953352c7156e92ad
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3965        | 1.0   | 6529  | 0.3917          | 0.7565    | 0.7324 | 0.7442 | 0.8791   |
| 0.361         | 2.0   | 13058 | 0.3706          | 0.7765    | 0.7453 | 0.7606 | 0.8859   |
| 0.3397        | 3.0   | 19587 | 0.3619          | 0.7737    | 0.7568 | 0.7651 | 0.8876   |
abefb9d1a6f210b54ab9e3599061763a
cc-by-4.0
[]
false
Icelandic ConvBERT-Base This model was pretrained on the [Icelandic Gigaword Corpus](http://igc.arnastofnun.is/), which contains approximately 1.69B tokens, using default settings. The model uses a WordPiece tokenizer with a vocabulary size of 32,105.
17297712864ed3ed5957d32fc76d0c02
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15
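The linear `lr_scheduler_type` above decays the learning rate toward zero over the course of training. A plain-Python sketch of that decay (illustrative only; the Trainer's actual schedule also depends on warmup and the exact total step count):

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 2e-5) -> float:
    """Linearly decay the learning rate from base_lr to 0 over total_steps."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

# With no warmup, the rate starts at 2e-5 and reaches 0 at the final step.
print(linear_lr(0, 1000))     # 2e-05
print(linear_lr(500, 1000))   # 1e-05
print(linear_lr(1000, 1000))  # 0.0
```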
b5e57e9bbe4aa08b9a7cb8c55ef92416
apache-2.0
['setfit', 'sentence-transformers', 'text-classification']
false
fathyshalab/massive_takeaway-roberta-large-v1-5-88 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer.
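Step 1's contrastive fine-tuning is driven by sentence pairs derived from the few-shot labels: same-class pairs act as positives, cross-class pairs as negatives. A minimal sketch of that pair construction (illustrative only; SetFit's own sampler differs in detail):

```python
from itertools import combinations

def make_pairs(examples):
    """Build (text_a, text_b, label) pairs: 1 if same class, 0 otherwise."""
    pairs = []
    for (t1, y1), (t2, y2) in combinations(examples, 2):
        pairs.append((t1, t2, 1 if y1 == y2 else 0))
    return pairs

few_shot = [("order a pizza", "takeaway"), ("track my delivery", "takeaway"),
            ("play some jazz", "music")]
pairs = make_pairs(few_shot)
print(len(pairs))                # 3 pairs from 3 examples
print(sum(p[2] for p in pairs))  # 1 positive (same-class) pair
```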
6bd3a7c91fd654dd07a6fbc33fbae666
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Small fy-NL - RuudVelo This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the test set: - Loss: 0.1443 - Wer: 21.03
076a1e711d922ba64282665977102ff4
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 8 - training_steps: 5000 - mixed_precision_training: Native AMP
89cee473f1015cdf087022b73f3e033e
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results | Training Loss | Step | Validation Loss | Wer | |:-------------:|:-----:|:---------------:|:------:| | 0.0053 | 1000 | 0.4201 | 21.64 | | 0.0008 | 2000 | 0.4607 | 21.03 | | 0.0004 | 3000 | 0.4853 | 21.11 | | 0.0003 | 4000 | 0.5015 | 21.14 | | 0.0002 | 5000 | 0.5084 | 21.20 |
906af8f3cb9f8a963c2825f23ae6665f
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-gc-art3e This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0841 - Accuracy: 0.983 - F1: 0.9755
0a921da476d1d918c839dcb7db902d93
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.0576 | 1.0 | 32 | 0.0846 | 0.982 | 0.9731 | | 0.0388 | 2.0 | 64 | 0.0878 | 0.98 | 0.9737 | | 0.0372 | 3.0 | 96 | 0.0841 | 0.983 | 0.9755 |
b03e642dbfdf44942cce432dff48fad6
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 187 | 2.4258 | | No log | 2.0 | 374 | 2.3627 | | 2.5802 | 3.0 | 561 | 2.3284 | | 2.5802 | 4.0 | 748 | 2.3109 | | 2.5802 | 5.0 | 935 | 2.2958 | | 2.3212 | 6.0 | 1122 | 2.2850 | | 2.3212 | 7.0 | 1309 | 2.2779 | | 2.3212 | 8.0 | 1496 | 2.2726 | | 2.1892 | 9.0 | 1683 | 2.2703 | | 2.1892 | 10.0 | 1870 | 2.2689 | | 2.111 | 11.0 | 2057 | 2.2683 | | 2.111 | 12.0 | 2244 | 2.2672 | | 2.111 | 13.0 | 2431 | 2.2655 | | 2.0484 | 14.0 | 2618 | 2.2685 | | 2.0484 | 15.0 | 2805 | 2.2703 | | 2.0484 | 16.0 | 2992 | 2.2698 | | 2.0019 | 17.0 | 3179 | 2.2699 | | 2.0019 | 18.0 | 3366 | 2.2715 | | 1.9803 | 19.0 | 3553 | 2.2719 | | 1.9803 | 20.0 | 3740 | 2.2717 |
cfc52e66863ed802b898aee131433651
apache-2.0
['translation']
false
Model Details - **Model Description:** This model has been pre-trained for English-Chinese translation and fine-tuned on the THUOCL dataset. - **source group**: English - **target group**: Chinese - **Parent Model:** Helsinki-NLP/opus-mt-en-zh, see https://huggingface.co/Helsinki-NLP/opus-mt-en-zh - **Model Type:** Translation
c872c4ddd055517fa0ed255e75797e6c
apache-2.0
['translation']
false
How to Get Started With the Model ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("BubbleSheep/Hgn_trans_en2zh") model = AutoModelForSeq2SeqLM.from_pretrained("BubbleSheep/Hgn_trans_en2zh") ```
763d06ccdd0dff8df61b67f28a7f5b0b
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: tpu - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2
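The `total_train_batch_size: 32` above follows directly from the per-device batch size and `gradient_accumulation_steps` (simple arithmetic, assuming a single device is counted here):

```python
train_batch_size = 8           # per-device batch size from the card
gradient_accumulation_steps = 4

# Gradients are accumulated over 4 forward/backward passes before each
# optimizer step, so the effective train batch size is their product.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 32
```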
a2fc6f8e957969e50d1c88309c5e2454
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-new2-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2107 - Matthews Correlation: 0.9155
6df663b4dad207b21e5156b7125d16c1
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | No log | 1.0 | 164 | 0.4352 | 0.8059 | | No log | 2.0 | 328 | 0.2626 | 0.8950 | | No log | 3.0 | 492 | 0.2422 | 0.9063 | | 0.4552 | 4.0 | 656 | 0.2107 | 0.9155 | | 0.4552 | 5.0 | 820 | 0.2160 | 0.9134 |
e6b16887b7c584a56a7008b17cac76c1
other
['stable-diffusion', 'text-to-image']
false
dnditem --- Examples | Examples :-------------------------:|:-------------------------: <img src="https://i.imgur.com/XCg4JmW.png" width="50%"/> | <img src="https://i.imgur.com/HRoKRlY.png" width="50%"/> <img src="https://i.imgur.com/9KTpaIZ.png" width="50%"/> | This is a model (dnditem) for creating magic items for the game Dungeons and Dragons! It was trained to produce results very similar to the official items available here: https://www.dndbeyond.com/magic-items The model was trained in a pretty specific way though, and requires a specific way of prompting to get good results.
b67613d3d4b208aff944e3246d529140
other
['stable-diffusion', 'text-to-image']
false
Prompting --- The keyword is "dnditem", and the prompts should be done in the following way: "dnditem, [item type], [item style], [background]" So, for example, a prompt could look like: "dnditem, a pair of boots, spellguard style, light red circle inner background with white outer background", or "dnditem, a shield, shooting star style, light blue stripe inner background with white outer background".
d3dd79085da19a7d8169da22bd1a9f43
other
['stable-diffusion', 'text-to-image']
false
item type --- Currently the model supports and was trained on the following types: "a pair of boots", "a cloak", "a pair of gloves", "a helmet", "a necklace", "a ring", "a robe", "a rod", "a shield", "a staff", "a sword", "a wand"
228d0b113bdca97b8d15c9f4f3af74b0
other
['stable-diffusion', 'text-to-image']
false
item_styles --- The item styles, or abilities, can be found in the itemstyles.txt file. There are over 100 of them, of all sorts of different types of dnditems. Some cool ones to check out are "ultimate evil style", "blue and green transparent animated style", and "spell storing style".
48ddc2061530e2ecb627c10fd44f5368
other
['stable-diffusion', 'text-to-image']
false
background --- Backgrounds should be prompted with an inner and an outer background, as well as a "shape" that is either "circle" or "stripe", so something like "light blue circle inner background with white outer background".
f17bec7cc41fbd8a6f5973c6c98acef4
apache-2.0
['generated_from_trainer']
false
test_trainer This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_full dataset. It achieves the following results on the evaluation set: - Loss: 1.1174 - Accuracy: 0.557
c71ea993ab7ece0fc565523047955a5c
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 16 | 1.4821 | 0.384 | | No log | 2.0 | 32 | 1.2059 | 0.535 | | No log | 3.0 | 48 | 1.1174 | 0.557 |
6e0a499918ed30849c5c917ad1163dac
cc-by-4.0
['espnet', 'audio', 'audio-to-audio']
false
Demo: How to use in ESPnet2 ```bash cd espnet pip install -e . cd egs2/chime4/enh1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/Wangyou_Zhang_chime4_enh_train_enh_conv_tasnet_raw ```
2d285a0381e94ac97ad6906fb5fb9098
cc-by-4.0
['espnet', 'audio', 'audio-to-audio']
false
ENH config <details><summary>expand</summary> ``` config: conf/tuning/train_enh_conv_tasnet.yaml print_config: false log_level: INFO dry_run: false iterator_type: chunk output_dir: exp/enh_train_enh_conv_tasnet_raw ngpu: 1 seed: 0 num_workers: 4 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 2 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 57680 dist_launcher: null multiprocessing_distributed: true cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 100 patience: 4 val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - si_snr - max - - valid - loss - min keep_nbest_models: 1 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null unused_parameters: false use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null pretrain_path: null init_param: [] freeze_param: [] num_iters_per_epoch: null batch_size: 8 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/enh_stats_16k/train/speech_mix_shape - exp/enh_stats_16k/train/speech_ref1_shape valid_shape_file: - exp/enh_stats_16k/valid/speech_mix_shape - exp/enh_stats_16k/valid/speech_ref1_shape batch_type: folded valid_batch_type: null fold_length: - 80000 - 80000 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 32000 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/tr05_simu_isolated_1ch_track/wav.scp - speech_mix - sound - - dump/raw/tr05_simu_isolated_1ch_track/spk1.scp - speech_ref1 - sound valid_data_path_and_name_and_type: - - dump/raw/dt05_simu_isolated_1ch_track/wav.scp - speech_mix - sound - - dump/raw/dt05_simu_isolated_1ch_track/spk1.scp - speech_ref1 - sound 
allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.001 eps: 1.0e-08 weight_decay: 1.0e-05 scheduler: reducelronplateau scheduler_conf: mode: min factor: 0.5 patience: 3 init: xavier_uniform model_conf: loss_type: si_snr use_preprocessor: false encoder: conv encoder_conf: channel: 256 kernel_size: 20 stride: 10 separator: tcn separator_conf: num_spk: 1 layer: 8 stack: 4 bottleneck_dim: 256 hidden_dim: 512 kernel: 3 causal: false norm_type: gLN nonlinear: relu decoder: conv decoder_conf: channel: 256 kernel_size: 20 stride: 10 required: - output_dir version: 0.9.7 distributed: true ``` </details>
2314851791fdc917b7e47fbf20a623b9
apache-2.0
['generated_from_trainer']
false
bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0633 - Precision: 0.9306 - Recall: 0.9485 - F1: 0.9395 - Accuracy: 0.9859
4818ad52debaa7803fb779a2cf181f53
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0834 | 1.0 | 1756 | 0.0676 | 0.9162 | 0.9315 | 0.9238 | 0.9824 | | 0.0388 | 2.0 | 3512 | 0.0587 | 0.9286 | 0.9473 | 0.9379 | 0.9852 | | 0.0188 | 3.0 | 5268 | 0.0633 | 0.9306 | 0.9485 | 0.9395 | 0.9859 |
ddf8a1192bc83e3099dbac37a0a79463
apache-2.0
['generated_from_keras_callback']
false
hsohn3/mayo-bert-visit-uncased-wordlevel-block512-batch4-ep100 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.9559 - Epoch: 99
3bd1913aabe6b4be9df5a7648435fb5a
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Epoch | |:----------:|:-----:| | 4.1247 | 0 | | 3.5129 | 1 | | 3.4726 | 2 | | 3.4483 | 3 | | 3.4395 | 4 | | 3.4301 | 5 | | 3.4260 | 6 | | 3.4131 | 7 | | 3.3831 | 8 | | 3.2925 | 9 | | 3.2454 | 10 | | 3.2092 | 11 | | 3.1695 | 12 | | 3.1346 | 13 | | 3.0797 | 14 | | 3.0154 | 15 | | 2.9557 | 16 | | 2.8814 | 17 | | 2.7720 | 18 | | 2.5472 | 19 | | 2.3193 | 20 | | 2.1005 | 21 | | 1.9331 | 22 | | 1.7971 | 23 | | 1.6859 | 24 | | 1.6062 | 25 | | 1.5310 | 26 | | 1.4706 | 27 | | 1.4203 | 28 | | 1.3681 | 29 | | 1.3222 | 30 | | 1.2939 | 31 | | 1.2726 | 32 | | 1.2494 | 33 | | 1.2330 | 34 | | 1.2161 | 35 | | 1.1998 | 36 | | 1.1874 | 37 | | 1.1767 | 38 | | 1.1641 | 39 | | 1.1550 | 40 | | 1.1407 | 41 | | 1.1363 | 42 | | 1.1272 | 43 | | 1.1227 | 44 | | 1.1163 | 45 | | 1.1065 | 46 | | 1.1008 | 47 | | 1.0957 | 48 | | 1.0837 | 49 | | 1.0844 | 50 | | 1.0778 | 51 | | 1.0741 | 52 | | 1.0693 | 53 | | 1.0662 | 54 | | 1.0608 | 55 | | 1.0521 | 56 | | 1.0526 | 57 | | 1.0476 | 58 | | 1.0454 | 59 | | 1.0452 | 60 | | 1.0348 | 61 | | 1.0333 | 62 | | 1.0342 | 63 | | 1.0293 | 64 | | 1.0249 | 65 | | 1.0241 | 66 | | 1.0194 | 67 | | 1.0177 | 68 | | 1.0102 | 69 | | 1.0055 | 70 | | 1.0052 | 71 | | 1.0038 | 72 | | 1.0005 | 73 | | 0.9981 | 74 | | 0.9991 | 75 | | 0.9950 | 76 | | 0.9928 | 77 | | 0.9898 | 78 | | 0.9906 | 79 | | 0.9873 | 80 | | 0.9849 | 81 | | 0.9808 | 82 | | 0.9804 | 83 | | 0.9792 | 84 | | 0.9789 | 85 | | 0.9797 | 86 | | 0.9741 | 87 | | 0.9781 | 88 | | 0.9678 | 89 | | 0.9686 | 90 | | 0.9651 | 91 | | 0.9652 | 92 | | 0.9613 | 93 | | 0.9599 | 94 | | 0.9566 | 95 | | 0.9571 | 96 | | 0.9577 | 97 | | 0.9536 | 98 | | 0.9559 | 99 |
880ea8409ee9e8fd15e863171fe22b2c
apache-2.0
['automatic-speech-recognition', 'fr']
false
exp_w2v2t_fr_r-wav2vec2_s251 Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
0cebe9ced4a369ddc55372ae39b3a3bc
mit
[]
false
million-live-akane-shifuku-3k on Stable Diffusion This is the `<akane>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<akane> 0](https://huggingface.co/sd-concepts-library/million-live-akane-shifuku-3k/resolve/main/concept_images/0.png) ![<akane> 1](https://huggingface.co/sd-concepts-library/million-live-akane-shifuku-3k/resolve/main/concept_images/1.png) ![<akane> 2](https://huggingface.co/sd-concepts-library/million-live-akane-shifuku-3k/resolve/main/concept_images/2.png) ![<akane> 3](https://huggingface.co/sd-concepts-library/million-live-akane-shifuku-3k/resolve/main/concept_images/3.png)
206cfaa597b7df159112110da0f5a79e
apache-2.0
['thai', 'token-classification', 'pos', 'wikipedia', 'dependency-parsing']
false
Model Description This is a RoBERTa model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing, derived from [roberta-base-thai-syllable](https://huggingface.co/KoichiYasuoka/roberta-base-thai-syllable). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
5e6c796c4306a0eb88574aab16d9d916
apache-2.0
['thai', 'token-classification', 'pos', 'wikipedia', 'dependency-parsing']
false
How to Use ```py import torch from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-syllable-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-thai-syllable-upos") s="หลายหัวดีกว่าหัวเดียว" t=tokenizer.tokenize(s) p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]] print(list(zip(t,p))) ``` or ``` import esupar nlp=esupar.load("KoichiYasuoka/roberta-base-thai-syllable-upos") print(nlp("หลายหัวดีกว่าหัวเดียว")) ```
a1479cc86dbbd5b92fea32b2b1acfa46
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-TPU-cv-fine-tune This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 1.6987 - Wer: 0.6019
e153b4b121686a21d10a22b408bf22d6
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.1017 | 8.88 | 400 | 1.4635 | 0.7084 | | 0.436 | 17.77 | 800 | 1.4765 | 0.6231 | | 0.1339 | 26.66 | 1200 | 1.6987 | 0.6019 |
f73ca6b2157cca16703413f924426842
apache-2.0
['translation']
false
opus-mt-fi-sm * source languages: fi * target languages: sm * OPUS readme: [fi-sm](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-sm/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-sm/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-sm/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-sm/opus-2020-01-20.eval.txt)
5fc8b9a52b24d80fd9d42abd70f2a7b5
apache-2.0
['image-classification', 'pytorch', 'onnx']
false
Usage instructions ```python from PIL import Image from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize from torchvision.transforms.functional import InterpolationMode from holocron.models import model_from_hf_hub model = model_from_hf_hub("frgfm/rexnet1_3x").eval() img = Image.open(path_to_an_image).convert("RGB")
a24ca4a23ce70907f8bec951c75a689b
apache-2.0
['multiberts', 'multiberts-seed_1', 'multiberts-seed_1-step_1300k']
false
MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1300k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model
8c3b008a220f2cd0f89d71e53041e783
apache-2.0
['multiberts', 'multiberts-seed_1', 'multiberts-seed_1-step_1300k']
false
How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1300k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_1300k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1300k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_1300k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ```
b9fb7ffb4c568d2d2dc8c519fed28d6c
apache-2.0
['generated_from_trainer']
false
vit-base-patch16-224-in21k_Human_Activity_Recognition This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7403 - Accuracy: 0.8381 - Weighted f1: 0.8388 - Micro f1: 0.8381 - Macro f1: 0.8394 - Weighted recall: 0.8381 - Micro recall: 0.8381 - Macro recall: 0.8390 - Weighted precision: 0.8421 - Micro precision: 0.8381 - Macro precision: 0.8424
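The micro/macro/weighted variants reported above differ only in how per-class scores are averaged: macro averages all classes equally, weighted averages by class support, and micro pools all predictions before scoring. A toy illustration for recall (pure Python, not the evaluation code used for this model):

```python
# Per-class recall and class support for a hypothetical 2-class problem.
recalls = [0.9, 0.5]
supports = [80, 20]

# Macro: unweighted mean over classes; weighted: mean weighted by support.
macro = sum(recalls) / len(recalls)
weighted = sum(r * s for r, s in zip(recalls, supports)) / sum(supports)
print(round(macro, 2))     # 0.7
print(round(weighted, 2))  # 0.82
```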
e90850ad06497a9c07f6e92a2f31a200
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5
11a6603e83debf947a506422a0e1af7b
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted f1 | Micro f1 | Macro f1 | Weighted recall | Micro recall | Macro recall | Weighted precision | Micro precision | Macro precision | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:--------:|:---------------:|:------------:|:------------:|:------------------:|:---------------:|:---------------:| | 1.0814 | 1.0 | 630 | 0.7368 | 0.7794 | 0.7795 | 0.7794 | 0.7798 | 0.7794 | 0.7794 | 0.7797 | 0.7896 | 0.7794 | 0.7896 | | 0.5149 | 2.0 | 1260 | 0.6439 | 0.8060 | 0.8049 | 0.8060 | 0.8036 | 0.8060 | 0.8060 | 0.8051 | 0.8136 | 0.8060 | 0.8130 | | 0.3023 | 3.0 | 1890 | 0.7026 | 0.8254 | 0.8272 | 0.8254 | 0.8278 | 0.8254 | 0.8254 | 0.8256 | 0.8335 | 0.8254 | 0.8345 | | 0.0507 | 4.0 | 2520 | 0.7414 | 0.8317 | 0.8342 | 0.8317 | 0.8348 | 0.8317 | 0.8317 | 0.8321 | 0.8427 | 0.8317 | 0.8438 | | 0.0128 | 5.0 | 3150 | 0.7403 | 0.8381 | 0.8388 | 0.8381 | 0.8394 | 0.8381 | 0.8381 | 0.8390 | 0.8421 | 0.8381 | 0.8424 |
426b7f19b0fbb7be44e273ad3a89592c
afl-3.0
[]
false
Mixing a reward model with sampling We can use a reward model to rank the best answer, as in this example code: ``` import torch from transformers import AutoModelForSequenceClassification from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-1.3b-base-finetuned/checkpoint-1000") model = AutoModelForCausalLM.from_pretrained("facebook/galactica-1.3b-base-finetuned/checkpoint-1000").eval().half().cuda() reward_name = "theblackcat102/electra-large-reward-model" rank_model, rank_tokenizer = AutoModelForSequenceClassification.from_pretrained(reward_name), AutoTokenizer.from_pretrained(reward_name) rank_model = rank_model.eval().half().cuda() questions = ["<question>How do I make a resume?<answer>"] for question in questions: inputs = tokenizer(question, return_tensors="pt", padding=True).to(0) if 'token_type_ids' in inputs: inputs.pop('token_type_ids') outputs = model.generate(**inputs, do_sample=True, top_k=60, max_length=220, num_return_sequences=80, early_stopping=True ) print(question) results = [] for i, beam_output in enumerate(outputs): output = tokenizer.decode(beam_output, truncate_before_pattern=[r"\n\n^
8edf6e528dd9f0b865e7cd7e90df646d
afl-3.0
[]
false
", "^'''", "\n\n\n"]) question, answer = output.split('<answer>', maxsplit=1) answer = answer.split('<question>')[0].replace('<|endoftext|>', '').lstrip().split('<answer>')[0] rank_inputs = rank_tokenizer(question, answer, return_tensors="pt", padding=True, max_length=512, truncation=True).to(1) score = rank_model(**rank_inputs).logits[0].cpu().detach() results.append((answer, score, output)) full_results[question] = results sorted_result = sorted(results, key=lambda x:x[1], reverse=True) total_scores += sorted_result[0][1].item() print('score',sorted_result[0][1].item()) print('-----Best rank-----') print(sorted_result[0][0]) print('-------------------') ``` Check out the Weights & Biases [report](https://api.wandb.ai/report/theblackcat102/8yg0c0r2) for training details. Thanks to [BASIC lab](https://basiclab.lab.nycu.edu.tw/Yummy/index.html)
1385343b9a51db79e1e08af937740e3e
apache-2.0
['generated_from_trainer']
false
bert-base-uncased-Ganapati This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0000
7b8b5f2e02e46fcd11069d353838c5de
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0 | 1.0 | 2273 | 0.0000 | | 0.0 | 2.0 | 4546 | 0.0000 | | 0.0 | 3.0 | 6819 | 0.0000 |
b67e24fbf8a54eb239c51d78d1014935
apache-2.0
['NER']
false
Model description **roberta-base-pcm** is a fine-tuned RoBERTa base model. It has been trained to recognize four types of entities: - Dates & times (DATE) - Locations (LOC) - Organizations (ORG) - Persons (PER)
c96851e51c1a369ff6fb8a8f7535a4ae
apache-2.0
['NER']
false
Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline tokenizer = AutoTokenizer.from_pretrained("arnolfokam/roberta-base-pcm") model = AutoModelForTokenClassification.from_pretrained("arnolfokam/roberta-base-pcm") nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "Mixed Martial Arts joinbodi, Ultimate Fighting Championship, UFC don decide say dem go enta back di octagon on Saturday, 9 May, for Jacksonville, Florida." ner_results = nlp(example) print(ner_results) ```
f4820fde1209e514e1c8d06625c47f30
apache-2.0
['lexical normalization']
false
Fine-tuned ByT5-small for MultiLexNorm (Italian version) ![model image](https://github.com/ufal/multilexnorm2021/raw/master/img/overall.png) This is the official release of the fine-tuned models for **the winning entry** to the [*W-NUT 2021: Multilingual Lexical Normalization (MultiLexNorm)* shared task](https://noisy-text.github.io/2021/multi-lexnorm.html), which evaluates lexical-normalization systems on 12 social media datasets in 11 languages. Our system is based on [ByT5](https://arxiv.org/abs/2105.13626), which we first pre-train on synthetic data and then fine-tune on authentic normalization data. It achieves the best performance by a wide margin in intrinsic evaluation, and also the best performance in extrinsic evaluation through dependency parsing. In addition to these fine-tuned models, we also release the source files on [GitHub](https://github.com/ufal/multilexnorm2021) and an interactive demo on [Google Colab](https://colab.research.google.com/drive/1rxpI8IlKk-D2crFqi2hdzbTBIezqgsCg?usp=sharing).
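ByT5 operates directly on UTF-8 bytes rather than subwords, which is part of why it suits noisy social-media text: any misspelling is still a valid byte sequence. A rough sketch of byte-level tokenization (illustrative; the released tokenizer reserves a few special ids such as pad/eos/unk ahead of the 256 byte ids, so use `transformers`' `ByT5Tokenizer` for the real mapping):

```python
SPECIAL_TOKENS = 3  # assumed: pad=0, eos=1, unk=2 precede the 256 byte ids

def byte_ids(text: str) -> list:
    """Map text to byte-level token ids, ByT5-style."""
    return [b + SPECIAL_TOKENS for b in text.encode("utf-8")]

# Inputs are plain byte sequences, so any script works without a vocabulary.
print(byte_ids("ok"))  # [114, 110]
```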
5edcfe7c3a19e989a685f6c8d3c47ace
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Wav2Vec2-Large-XLSR-53-Assamese Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Assamese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz.
e8b417453d5905d6edde77d1646a95f4
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "as", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-assamese") model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-assamese") resampler = torchaudio.transforms.Resample(48_000, 16_000)
e07465eaf6af66627bc14215a9a54560
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ```
b9df7d2dc729301ffd77a54db6e531ec
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Evaluation The model can be evaluated as follows on the Assamese test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "as", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-assamese") model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-assamese") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\'\।]' resampler = torchaudio.transforms.Resample(48_000, 16_000)
0928846d6efef2c26d79e6d231da7c17
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn)
506fbdf08781dc6c8e4e505db8646b0d
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
We run the model over the test data and compute the WER def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 74.25%
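The WER printed above is word-level edit distance (substitutions, insertions, deletions) divided by the number of reference words. A minimal single-pair reference implementation (a sketch; the card itself uses the `wer` metric from `datasets`):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edits to turn the first i reference words into the first j hypothesis words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution or match
    return dp[-1][-1] / len(ref)

print(round(100 * wer("the cat sat", "the cat sit"), 2))  # 33.33
```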
77415856b54166f6ea2d006cac065a47
cc-by-4.0
['generated_from_trainer']
false
roberta-base-squad2-finetuned-squad This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 5.0220
4bf6c0201a8001e5304a644433afb8d3
cc-by-4.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30
6ab8cf03f83184aabff0bf1162932e99