license
stringlengths
2
30
tags
stringlengths
2
513
is_nc
bool
1 class
readme_section
stringlengths
201
597k
hash
stringlengths
32
32
apache-2.0
[]
false
Uses & Limitations This model is intended to be used for a variety of downstream NLP tasks for Indian languages. This model is also trained on transliterated data, since transliteration is a phenomenon commonly observed in the Indian context. This model is not expected to perform well on languages other than the ones used in pretraining, i.e., the 17 Indian languages.
31bcb74ee776ba6297239ffd7f8d185b
apache-2.0
[]
false
Evaluation We provide the results of fine-tuning this model on a set of downstream tasks.<br/> We choose these tasks from the XTREME benchmark, with evaluation done on Indian language test-sets.<br/> We also transliterate the test-sets and evaluate on the same.<br/> We use the same fine-tuning setting as is used by [9], except for TyDiQA, where we use additional SQuAD v1.1 English training data, similar to [10].<br/> For Tatoeba, we do not fine-tune the model, and use the pooled_output of the last layer as the sentence embedding.<br/> All results are computed in a zero-shot setting, with English being the high resource training set language. * Shown below are results on datasets from the XTREME benchmark (in %) <br/> PANX (F1) | ml | ta | te | en | bn | hi | mr | ur | Average :-------- | ----: | ----: | ----: | ----: | ----: | ----: | ----: | ----: | ------: mBERT | 54.77 | 51.24 | 50.16 | 84.40 | 68.59 | 65.13 | 58.44 | 31.36 | 58.01 MuRIL | 75.74 | 71.86 | 64.99 | 84.43 | 85.97 | 78.09 | 74.63 | 85.07 | 77.60 <br/> UDPOS (F1) | en | hi | mr | ta | te | ur | Average :--------- | ----: | ----: | ----: | ----: | ----: | ----: | ------: mBERT | 95.35 | 66.09 | 71.27 | 59.58 | 76.98 | 57.85 | 71.19 MuRIL | 95.55 | 64.47 | 82.95 | 62.57 | 85.63 | 58.93 | 75.02 <br/> XNLI (Accuracy) | en | hi | ur | Average :-------------- | ----: | ----: | ----: | ------: mBERT | 81.72 | 60.52 | 58.20 | 66.81 MuRIL | 83.85 | 70.66 | 67.70 | 74.07 <br/> Tatoeba (Accuracy) | ml | ta | te | bn | hi | mr | ur | Average :----------------- | ----: | ----: | ----: | ----: | ----: | ----: | ----: | ------: mBERT | 20.23 | 12.38 | 14.96 | 12.80 | 27.80 | 18.00 | 22.70 | 18.41 MuRIL | 26.35 | 36.81 | 17.52 | 20.20 | 31.50 | 26.60 | 17.10 | 25.15 <br/> XQUAD (F1/EM) | en | hi | Average :------------ | ----------: | ----------: | ----------: mBERT | 83.85/72.86 | 58.46/43.53 | 71.15/58.19 MuRIL | 84.31/72.94 | 73.93/58.32 | 79.12/65.63 <br/> MLQA (F1/EM) | en | hi | Average :----------- | ----------: | ----------: | ----------: mBERT | 80.39/67.30 | 50.28/35.18 | 65.34/51.24 MuRIL | 80.28/67.37 | 67.34/50.22 | 73.81/58.80 <br/> TyDiQA (F1/EM) | en | bn | te | Average :---------------- | ----------: | ----------: | ----------: | ----------: mBERT | 75.21/65.00 | 60.62/45.13 | 53.55/44.54 | 63.13/51.66 MuRIL | 74.10/64.55 | 78.03/66.37 | 73.95/46.94 | 75.36/59.28 * Shown below are results on the transliterated versions of the above test-sets. PANX (F1) | ml_tr | ta_tr | te_tr | bn_tr | hi_tr | mr_tr | ur_tr | Average :-------- | ----: | ----: | ----: | ----: | ----: | ----: | ----: | ------: mBERT | 7.53 | 1.04 | 8.24 | 41.77 | 25.46 | 8.34 | 7.30 | 14.24 MuRIL | 63.39 | 7.00 | 53.62 | 72.94 | 69.75 | 68.77 | 68.41 | 57.70 <br/> UDPOS (F1) | hi_tr | mr_tr | ta_tr | te_tr | ur_tr | Average :--------- | ----: | ----: | ----: | ----: | ----: | ------: mBERT | 25.00 | 33.67 | 24.02 | 36.21 | 22.07 | 28.20 MuRIL | 63.09 | 67.19 | 58.40 | 65.30 | 56.49 | 62.09 <br/> XNLI (Accuracy) | hi_tr | ur_tr | Average :-------------- | ----: | ----: | ------: mBERT | 39.6 | 38.86 | 39.23 MuRIL | 68.24 | 61.16 | 64.70 <br/> Tatoeba (Accuracy) | ml_tr | ta_tr | te_tr | bn_tr | hi_tr | mr_tr | ur_tr | Average :----------------- | ----: | ----: | ----: | ----: | ----: | ----: | ----: | ------: mBERT | 2.18 | 1.95 | 5.13 | 1.80 | 3.00 | 2.40 | 2.30 | 2.68 MuRIL | 10.33 | 11.07 | 11.54 | 8.10 | 14.90 | 7.20 | 13.70 | 10.98 <br/>
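For the Tatoeba setup described above, which uses the pooled output of the last layer as the sentence embedding, a minimal sketch with Hugging Face Transformers could look as follows (the `google/muril-base-cased` repo id is an assumption, not something stated in this section):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# assumed repo id for MuRIL; adjust to the checkpoint you are actually using
model_name = "google/muril-base-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

sentences = ["यह एक उदाहरण वाक्य है।", "This is an example sentence."]
inputs = tokenizer(sentences, padding=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# the pooled output of the last layer serves as the sentence embedding (zero-shot, no fine-tuning)
embeddings = outputs.pooler_output
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(similarity.item())
```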
accb10e144b22cf5fdfafbc069208737
apache-2.0
[]
false
References \[1]: Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805). arXiv preprint arXiv:1810.04805, 2018. \[2]: [Wikipedia](https://www.tensorflow.org/datasets/catalog/wikipedia) \[3]: [Common Crawl](http://commoncrawl.org/the-data/) \[4]: [PMINDIA](http://lotus.kuee.kyoto-u.ac.jp/WAT/indic-multilingual/index.html) \[5]: [Dakshina](https://github.com/google-research-datasets/dakshina) \[6]: Assamese (as), Bengali (bn), English (en), Gujarati (gu), Hindi (hi), Kannada (kn), Kashmiri (ks), Malayalam (ml), Marathi (mr), Nepali (ne), Oriya (or), Punjabi (pa), Sanskrit (sa), Sindhi (sd), Tamil (ta), Telugu (te) and Urdu (ur). \[7]: Conneau, Alexis, et al. [Unsupervised cross-lingual representation learning at scale](https://arxiv.org/pdf/1911.02116.pdf). arXiv preprint arXiv:1911.02116 (2019). \[8]: [IndicTrans](https://github.com/libindic/indic-trans) \[9]: Hu, J., Ruder, S., Siddhant, A., Neubig, G., Firat, O., & Johnson, M. (2020). [Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization.](https://arxiv.org/pdf/2003.11080.pdf) arXiv preprint arXiv:2003.11080. \[10]: Fang, Y., Wang, S., Gan, Z., Sun, S., & Liu, J. (2020). [FILTER: An Enhanced Fusion Method for Cross-lingual Language Understanding.](https://arxiv.org/pdf/2009.05166.pdf) arXiv preprint arXiv:2009.05166.
d598d5df88e9224ee2c1861ffb53006b
apache-2.0
[]
false
Citation If you find MuRIL useful in your applications, please cite the following paper: ``` @misc{khanuja2021muril, title={MuRIL: Multilingual Representations for Indian Languages}, author={Simran Khanuja and Diksha Bansal and Sarvesh Mehtani and Savya Khosla and Atreyee Dey and Balaji Gopalan and Dilip Kumar Margam and Pooja Aggarwal and Rajiv Teja Nagipogu and Shachi Dave and Shruti Gupta and Subhash Chandra Bose Gali and Vish Subramanian and Partha Talukdar}, year={2021}, eprint={2103.10730}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
a8f1ec214bd6ca513033bcce5d91f9c3
wtfpl
['gpt-j', 'spanish', 'LLM', 'gpt-j-6b']
false
Go [here](https://huggingface.co/mrm8488/bertin-gpt-j-6B-ES-v1-8bit) to use the latest checkpoint. This model (and model card) is an adaptation of [hivemind/gpt-j-6B-8bit](https://huggingface.co/hivemind/gpt-j-6B-8bit), so all credit goes to them. This is a version of **[bertin-project/bertin-gpt-j-6B](https://huggingface.co/bertin-project/bertin-gpt-j-6B)** that is modified so you can generate **and fine-tune the model in Colab or on an equivalent desktop GPU (e.g. a single 1080Ti)**. Here's how to run it: [![colab](https://camo.githubusercontent.com/84f0493939e0c4de4e6dbe113251b4bfb5353e57134ffd9fcab6b8714514d4d1/68747470733a2f2f636f6c61622e72657365617263682e676f6f676c652e636f6d2f6173736574732f636f6c61622d62616467652e737667)](https://colab.research.google.com/drive/1ft6wQU0BhqG5PRlwgaZJv2VukKKjU4Es) __The [original GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B/tree/main)__ takes 22+ GB of memory for float32 parameters alone, and that's before you account for gradients and optimizer state. Even if you cast everything to 16-bit, it will still not fit onto most single-GPU setups short of an A6000 or A100. You can run inference [on TPU](https://colab.research.google.com/github/kingoflolz/mesh-transformer-jax/blob/master/colab_demo.ipynb) or CPUs, but fine-tuning is way more expensive. Here, we apply several techniques to make GPT-J usable and fine-tunable on a single GPU with ~11 GB memory: - large weight tensors are quantized using dynamic 8-bit quantization and de-quantized just-in-time for multiplication - gradient checkpointing stores only one activation per layer, using dramatically less memory at the cost of ~30% slower training - scalable fine-tuning with [LoRA](https://arxiv.org/abs/2106.09685) and [8-bit Adam](https://arxiv.org/abs/2110.02861) In other words, all of the large weight matrices are frozen in 8-bit, and you only train small adapters and optionally 1d tensors (layernorm scales, biases). ![img](https://i.imgur.com/n4XXo1x.png) __Does 8-bit affect model quality?__ Technically yes, but the effect is negligible in practice. [This notebook measures wikitext test perplexity](https://nbviewer.org/urls/huggingface.co/hivemind/gpt-j-6B-8bit/raw/main/check_perplexity.ipynb) and it is nigh indistinguishable from the original GPT-J. The quantized model is even slightly better, but the difference is not statistically significant. Our code differs from other 8-bit methods in that we use **8-bit only for storage, and all computations are performed in float16 or float32**. As a result, we can take advantage of nonlinear quantization that fits each individual weight distribution. Such nonlinear quantization does not accelerate inference, but it allows for much smaller error. __What about performance?__ Both checkpointing and de-quantization have some overhead, but it's surprisingly manageable. Depending on GPU and batch size, the quantized model is 1-10% slower than the original model, on top of using gradient checkpoints (which adds ~30% overhead). In short, this is because block-wise quantization from bitsandbytes is really fast on GPU.
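As a rough illustration of the storage-only idea above (a simplified row-wise linear scheme, not the actual bitsandbytes code used here, which relies on nonlinear block-wise quantization), weights can be kept in 8-bit and de-quantized just-in-time for the matmul:

```python
import torch

def quantize_rowwise(weight: torch.Tensor):
    # store the weight in int8 with one float scale per output row
    scale = weight.abs().amax(dim=1, keepdim=True) / 127.0
    q = torch.clamp((weight / scale).round(), -127, 127).to(torch.int8)
    return q, scale

def dequantize_and_matmul(q: torch.Tensor, scale: torch.Tensor, x: torch.Tensor):
    # de-quantize just-in-time and perform the actual computation in float16/float32
    w = q.to(x.dtype) * scale.to(x.dtype)
    return x @ w.t()

w = torch.randn(4096, 4096)
q, s = quantize_rowwise(w)
x = torch.randn(2, 4096)
print(dequantize_and_matmul(q, s, x).shape)  # torch.Size([2, 4096])
```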
8ef92b2f7e9aea6fefb36adfd382c22d
wtfpl
['gpt-j', 'spanish', 'LLM', 'gpt-j-6b']
false
How should I fine-tune the model? We recommend starting with the original hyperparameters from [the LoRA paper](https://arxiv.org/pdf/2106.09685.pdf). On top of that, there is one more trick to consider: the overhead from de-quantizing weights does not depend on batch size. As a result, the larger the batch size you can fit, the more efficiently you will train.
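For reference, a minimal LoRA-style linear layer with a frozen base weight might look like the sketch below; this is an illustration of the adapter idea, not the exact code used by this repository:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a small trainable low-rank adapter."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the large weight matrix stays frozen
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.02)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_a.t() @ self.lora_b.t()) * self.scaling

layer = LoRALinear(nn.Linear(1024, 1024))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # only the adapter parameters are trainable
```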
ffe7bbd7fc8423f5567676a5bdc80746
wtfpl
['gpt-j', 'spanish', 'LLM', 'gpt-j-6b']
false
Where can I train for free? You can train fine in Colab, but if you get a K80, it's probably best to switch to other free GPU providers: [kaggle](https://towardsdatascience.com/amazon-sagemaker-studio-lab-a-great-alternative-to-google-colab-7194de6ef69a), [aws sagemaker](https://towardsdatascience.com/amazon-sagemaker-studio-lab-a-great-alternative-to-google-colab-7194de6ef69a) or [paperspace](https://docs.paperspace.com/gradient/more/instance-types/free-instances). For instance, this is the same notebook [running in Kaggle](https://www.kaggle.com/justheuristic/dmazur-converted) using a more powerful P100 instance.
87aaed733f9864710062b9d61ddc9824
wtfpl
['gpt-j', 'spanish', 'LLM', 'gpt-j-6b']
false
Can I use this technique with other models? The model was converted using [this notebook](https://nbviewer.org/urls/huggingface.co/hivemind/gpt-j-6B-8bit/raw/main/convert-gpt-j.ipynb). It can be adapted to work with other model types. However, please bear in mind that some models replace Linear and Embedding with custom alternatives that require their own BNBWhateverWithAdapters.
577d7578a3d7b8f15b9259e6985412f5
wtfpl
['gpt-j', 'spanish', 'LLM', 'gpt-j-6b']
false
How to use ```sh wget https://huggingface.co/mrm8488/bertin-gpt-j-6B-ES-8bit/resolve/main/utils.py -O Utils.py pip install transformers pip install bitsandbytes-cuda111==0.26.0 ``` ```py import transformers import torch from Utils import GPTJBlock, GPTJForCausalLM device = "cuda" if torch.cuda.is_available() else "cpu" transformers.models.gptj.modeling_gptj.GPTJBlock = GPTJBlock
493d578331a01b23bb118ae218935585
wtfpl
['gpt-j', 'spanish', 'LLM', 'gpt-j-6b']
false
monkey-patch GPT-J ckpt = "mrm8488/bertin-gpt-j-6B-ES-8bit" tokenizer = transformers.AutoTokenizer.from_pretrained(ckpt) model = GPTJForCausalLM.from_pretrained(ckpt, pad_token_id=tokenizer.eos_token_id, low_cpu_mem_usage=True).to(device) prompt = tokenizer("El sentido de la vida es", return_tensors='pt') prompt = {key: value.to(device) for key, value in prompt.items()} out = model.generate(**prompt, max_length=64, do_sample=True) print(tokenizer.decode(out[0])) ```
a85963182d3fafdb0df9fcb76138005f
apache-2.0
['image-classification', 'timm']
false
Model card for davit_small.msft_in1k A DaViT image classification model. Trained on ImageNet-1k by paper authors. Thanks to [Fredo Guan](https://github.com/fffffgggg54) for bringing the classification backbone to `timm`.
f554833489c48cd8f93f62edf59dec8f
apache-2.0
['image-classification', 'timm']
false
Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 49.7 - GMACs: 8.8 - Activations (M): 30.5 - Image size: 224 x 224 - **Papers:** - DaViT: Dual Attention Vision Transformers: https://arxiv.org/abs/2204.03645 - **Original:** https://github.com/dingmyu/davit - **Dataset:** ImageNet-1k
2e737a0449613d823170b296bb1fb94a
apache-2.0
['image-classification', 'timm']
false
Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model('davit_small.msft_in1k', pretrained=True) model = model.eval()
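# the block above is truncated in this section; a typical completion (standard timm usage,
# assumed rather than copied from the original card) applies the model's preprocessing and
# takes the top-5 classes
import torch

data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```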
b5b25807693413ed6fd84b3ef83ad7f2
apache-2.0
['image-classification', 'timm']
false
Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model( 'davit_small.msft_in1k', pretrained=True, features_only=True, ) model = model.eval()
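# the block above is truncated; standard timm feature-extraction usage (assumed, not copied
# from the original card) continues roughly as follows
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1
for o in output:
    # print the shape of each feature map returned by the backbone stages
    print(o.shape)
```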
13578606d92e9b59fba3f85e92d0df46
apache-2.0
['image-classification', 'timm']
false
Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model( 'davit_small.msft_in1k', pretrained=True, num_classes=0,
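    # num_classes=0 removes the classifier head so the model returns pooled features;
    # the rest of this block is assumed standard timm usage, not copied from the original card
)
model = model.eval()

data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor
# or, equivalently, grab unpooled features and pool separately:
# output = model.forward_features(transforms(img).unsqueeze(0))
# output = model.forward_head(output, pre_logits=True)
```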
e2ce1ffe341d3a8a759e21c17735ad62
apache-2.0
['image-classification', 'timm']
false
By Top-1 |model |top1 |top1_err|top5 |top5_err|param_count|img_size|crop_pct|interpolation| |---------------------|------|--------|------|--------|-----------|--------|--------|-------------| |davit_base.msft_in1k |84.634|15.366 |97.014|2.986 |87.95 |224 |0.95 |bicubic | |davit_small.msft_in1k|84.25 |15.75 |96.94 |3.06 |49.75 |224 |0.95 |bicubic | |davit_tiny.msft_in1k |82.676|17.324 |96.276|3.724 |28.36 |224 |0.95 |bicubic |
9c84fd635a89184c875f83d34e3873bd
apache-2.0
['image-classification', 'timm']
false
Citation ```bibtex @inproceedings{ding2022davit, title={DaViT: Dual Attention Vision Transformer}, author={Ding, Mingyu and Xiao, Bin and Codella, Noel and Luo, Ping and Wang, Jingdong and Yuan, Lu}, booktitle={ECCV}, year={2022}, } ```
16139e1f3cdc21cd4d550b3069cb1b0b
apache-2.0
['automatic-speech-recognition', 'nl']
false
exp_w2v2t_nl_vp-es_s496 Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (nl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
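A minimal usage sketch with HuggingSound follows; the repo id is assumed from the checkpoint name and the tool's author, so adjust it to the actual repository:

```python
from huggingsound import SpeechRecognitionModel

# assumed repo id; replace with the repository that actually hosts this checkpoint
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_nl_vp-es_s496")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # audio sampled at 16kHz

transcriptions = model.transcribe(audio_paths)
```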
8eac8158118bccbd64a68a84061beff1
bsd-3-clause
[]
false
Copyright 2018-2022, UT-Battelle Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
b01dd5d1775b23b586ef5e6e0028b155
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Small PL This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 and the FLEURS datasets. It achieves the following results on the evaluation set: - eval_loss: 0.3571 - eval_wer: 14.8004 - eval_runtime: 2233.4204 - eval_samples_per_second: 3.714 - eval_steps_per_second: 0.232 - epoch: 4.03 - step: 3000
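For transcription, a hedged sketch using the Transformers ASR pipeline (the model id below is a placeholder for this checkpoint's repository):

```python
from transformers import pipeline

# placeholder model id; point this at the repository of this fine-tuned checkpoint
asr = pipeline(
    "automatic-speech-recognition",
    model="<user>/whisper-small-pl",
    chunk_length_s=30,
)
print(asr("audio.wav")["text"])
```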
3e09211c941de9b71e6533a12b0b6245
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 24 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 48 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 8000 - mixed_precision_training: Native AMP
c3f754f96b3f8bab57b11ca3bcff3c89
mit
['generated_from_trainer']
false
twitter-xlm-roberta-base-sentiment This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6256 - Accuracy: 0.7297
a7fa607bbc891be88a89e2274347cebc
apache-2.0
['Axon', 'Elixir']
false
ResNet This ResNet34 model was translated from the ONNX ResNetv1 model found at https://github.com/onnx/models/tree/main/vision/classification/resnet into Axon using [AxonOnnx](https://github.com/elixir-nx/axon_onnx). The following description is copied from the relevant description at the ONNX repository.
0b43400763ffd2c653e49fdb8f2c2127
apache-2.0
['Axon', 'Elixir']
false
References * **ResNetv1** [Deep residual learning for image recognition](https://arxiv.org/abs/1512.03385) He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778. 2016. * **ONNX source model** [onnx/models vision/classification/resnet resnet34-v1-7.onnx](https://github.com/onnx/models/tree/main/vision/classification/resnet/README)
df00c26c2f9b1f82d932d885a8d35223
apache-2.0
['italian', 'sequence-to-sequence', 'squad_it', 'text2text-question-answering', 'text2text-generation']
false
mT5 Base for Question Answering ⁉️ 🇮🇹 This repository contains the checkpoint for the [mT5 Base](https://huggingface.co/google/mt5-base) model fine-tuned on extractive question answering on the [SQuAD-IT corpus](https://huggingface.co/datasets/squad_it) as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io). A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
a01b0830640234e4a8aceb0f4c3a5f68
apache-2.0
['italian', 'sequence-to-sequence', 'squad_it', 'text2text-question-answering', 'text2text-generation']
false
Using the model Model checkpoints are available for usage in TensorFlow, PyTorch and JAX. They can be used directly with pipelines as: ```python from transformers import pipeline qa = pipeline("text2text-generation", model='it5/mt5-base-question-answering') qa("In seguito all' evento di estinzione del Cretaceo-Paleogene, l' estinzione dei dinosauri e il clima umido possono aver permesso alla foresta pluviale tropicale di diffondersi in tutto il continente. Dal 66-34 Mya, la foresta pluviale si estendeva fino a sud fino a 45°. Le fluttuazioni climatiche degli ultimi 34 milioni di anni hanno permesso alle regioni della savana di espandersi fino ai tropici. Durante l' Oligocene, ad esempio, la foresta pluviale ha attraversato una banda relativamente stretta. Si espandeva di nuovo durante il Miocene medio, poi si ritrasse ad una formazione prevalentemente interna all' ultimo massimo glaciale. Tuttavia, la foresta pluviale è riuscita ancora a prosperare durante questi periodi glaciali, consentendo la sopravvivenza e l' evoluzione di un' ampia varietà di specie. Domanda: La foresta pluviale amazzonica è diventata per lo più una foresta interna intorno a quale evento globale?") >>> [{"generated_text": "ultimo massimo glaciale"}] ``` or loaded using autoclasses: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("it5/mt5-base-question-answering") model = AutoModelForSeq2SeqLM.from_pretrained("it5/mt5-base-question-answering") ``` If you use this model in your research, please cite our work as: ```bibtex @article{sarti-nissim-2022-it5, title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation}, author={Sarti, Gabriele and Nissim, Malvina}, journal={ArXiv preprint 2203.03759}, url={https://arxiv.org/abs/2203.03759}, year={2022}, month={mar} } ```
a1a3abc6978f76560d2220133d4b5fca
apache-2.0
['translation', 'generated_from_trainer']
false
marian-finetuned-kde4-en-to-fr-2 This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.8559 - Bleu: 52.9326
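A hedged usage sketch with the translation pipeline (the model id is a placeholder for this checkpoint's repository, and the input sentence is arbitrary):

```python
from transformers import pipeline

# placeholder model id; point this at the repository of this fine-tuned checkpoint
translator = pipeline("translation", model="<user>/marian-finetuned-kde4-en-to-fr-2")
print(translator("Default to expanded threads")[0]["translation_text"])
```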
056b38bf3532615bb05302cc5b0f7aa6
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xls-r-1b-korean-sample5 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1118 - Cer: 0.0217
2b82e8bf4d7c808b95f92f739a9e4b1a
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 5
d69293638456c3152fa87845456a7e68
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.3411 | 1.0 | 12588 | 0.2680 | 0.0738 | | 0.2237 | 2.0 | 25176 | 0.1812 | 0.0470 | | 0.1529 | 3.0 | 37764 | 0.1482 | 0.0339 | | 0.1011 | 4.0 | 50352 | 0.1168 | 0.0256 | | 0.0715 | 5.0 | 62940 | 0.1118 | 0.0217 |
513702f3efb2f1367cfa6298faf4f76c
apache-2.0
['generated_from_trainer']
false
bert-large-cased-sigir-support-no-label-40-sigir-tune2nd-LR100-labelled-30 This model is a fine-tuned version of [jojoUla/bert-large-cased-sigir-support-no-label-40](https://huggingface.co/jojoUla/bert-large-cased-sigir-support-no-label-40) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6520
d6f260bee5685e0b59be59a918332f12
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 30 - eval_batch_size: 30 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30.0 - mixed_precision_training: Native AMP
148e992a6fa72529f0d1f01c35462563
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.8321 | 1.0 | 2 | 4.3250 | | 3.383 | 2.0 | 4 | 2.4023 | | 1.9548 | 3.0 | 6 | 1.2925 | | 1.4856 | 4.0 | 8 | 1.5152 | | 0.9588 | 5.0 | 10 | 1.7731 | | 1.2668 | 6.0 | 12 | 1.3830 | | 0.8441 | 7.0 | 14 | 1.9760 | | 1.0173 | 8.0 | 16 | 1.2364 | | 0.6814 | 9.0 | 18 | 1.1771 | | 0.9044 | 10.0 | 20 | 1.4721 | | 0.6889 | 11.0 | 22 | 0.8518 | | 0.5845 | 12.0 | 24 | 0.6993 | | 0.4068 | 13.0 | 26 | 1.1771 | | 0.5957 | 14.0 | 28 | 0.5895 | | 0.4277 | 15.0 | 30 | 0.5326 | | 0.3736 | 16.0 | 32 | 1.0893 | | 0.413 | 17.0 | 34 | 1.3267 | | 0.5718 | 18.0 | 36 | 1.0331 | | 0.3892 | 19.0 | 38 | 1.0793 | | 0.3913 | 20.0 | 40 | 0.8742 | | 0.4794 | 21.0 | 42 | 1.1264 | | 0.4626 | 22.0 | 44 | 1.1857 | | 0.2683 | 23.0 | 46 | 1.5181 | | 0.3436 | 24.0 | 48 | 1.4419 | | 0.3793 | 25.0 | 50 | 1.4198 | | 0.356 | 26.0 | 52 | 1.1776 | | 0.2189 | 27.0 | 54 | 0.7166 | | 0.286 | 28.0 | 56 | 0.7601 | | 0.3681 | 29.0 | 58 | 1.2592 | | 0.5858 | 30.0 | 60 | 0.6520 |
fe74cef319e9d0359bdcb2947a5693ed
apache-2.0
['summarization', 'generated_from_trainer']
false
AraBART-finetuned-ar This model is a fine-tuned version of [moussaKam/AraBART](https://huggingface.co/moussaKam/AraBART) on the xlsum dataset. It achieves the following results on the evaluation set: - Loss: 3.7449 - Rouge-1: 31.08 - Rouge-2: 14.68 - Rouge-l: 27.36 - Gen Len: 19.64 - Bertscore: 73.86
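A hedged usage sketch with the summarization pipeline (the model id is a placeholder, and `max_length=20` is an assumption based on the ~20-token generation length reported above):

```python
from transformers import pipeline

# placeholder model id; point this at the repository of this fine-tuned checkpoint
summarizer = pipeline("summarization", model="<user>/AraBART-finetuned-ar")
article = "..."  # an Arabic news article
print(summarizer(article, max_length=20)[0]["summary_text"])
```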
7c41f661378d5f48b67140998b559bb9
apache-2.0
['summarization', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 250 - num_epochs: 10 - label_smoothing_factor: 0.1
ca0f5fdae87bb5b95226092c4ed87f58
apache-2.0
['summarization', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:| | 4.4318 | 1.0 | 2345 | 3.7996 | 28.93 | 13.2 | 25.56 | 19.51 | 73.17 | | 4.0338 | 2.0 | 4690 | 3.7483 | 30.29 | 14.24 | 26.73 | 19.5 | 73.59 | | 3.8586 | 3.0 | 7035 | 3.7281 | 30.44 | 14.44 | 26.92 | 19.75 | 73.58 | | 3.7289 | 4.0 | 9380 | 3.7204 | 30.55 | 14.49 | 26.88 | 19.66 | 73.73 | | 3.6245 | 5.0 | 11725 | 3.7199 | 30.73 | 14.63 | 27.11 | 19.69 | 73.68 | | 3.5392 | 6.0 | 14070 | 3.7221 | 30.85 | 14.65 | 27.21 | 19.7 | 73.77 | | 3.4694 | 7.0 | 16415 | 3.7286 | 31.08 | 14.8 | 27.41 | 19.62 | 73.84 | | 3.4126 | 8.0 | 18760 | 3.7384 | 31.06 | 14.77 | 27.41 | 19.64 | 73.82 | | 3.3718 | 9.0 | 21105 | 3.7398 | 31.18 | 14.89 | 27.49 | 19.67 | 73.87 | | 3.3428 | 10.0 | 23450 | 3.7449 | 31.19 | 14.88 | 27.44 | 19.68 | 73.87 |
4470d086d61ad7fccb28486cc210f9e8
apache-2.0
['catalan', 'named entity recognition', 'ner', 'CaText', 'Catalan Textual Corpus']
false
Model description The **roberta-base-ca-cased-ner** is a Named Entity Recognition (NER) model for the Catalan language fine-tuned from the [BERTa](https://huggingface.co/PlanTL-GOB-ES/roberta-base-ca) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers (check the BERTa model card for more details).
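A minimal sketch of running the model as a token-classification pipeline; the repo id is assumed from the model name, and the example sentence is arbitrary Catalan:

```python
from transformers import pipeline

# assumed repo id for this checkpoint
ner = pipeline("ner", model="projecte-aina/roberta-base-ca-cased-ner", aggregation_strategy="simple")
print(ner("Em dic Lluïsa i visc a Santa Maria del Camí."))
```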
8f5b7d20fdd94d98dc2eb4eb02e7b942
apache-2.0
['catalan', 'named entity recognition', 'ner', 'CaText', 'Catalan Textual Corpus']
false
Evaluation We evaluated the _roberta-base-ca-cased-ner_ on the Ancora-ca-ner test set against standard multilingual and monolingual baselines: | Model | Ancora-ca-ner (F1)| | ------------|:-------------| | roberta-base-ca-cased-ner | **88.13** | | mBERT | 86.38 | | XLM-RoBERTa | 87.66 | | WikiBERT-ca | 77.66 | For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club).
a58a76f59ffd03c12539b82b12898534
apache-2.0
['translation']
false
dra-eng * source group: Dravidian languages * target group: English * OPUS readme: [dra-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/dra-eng/README.md) * model: transformer * source language(s): kan mal tam tel * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.zip) * test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.test.txt) * test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.eval.txt)
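A hedged usage sketch with MarianMT; the `Helsinki-NLP/opus-mt-dra-en` repo id is an assumption based on the language-pair name, and the Tamil input sentence is arbitrary:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-dra-en"  # assumed repo id
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["இது ஒரு சோதனை."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```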
c366c1ed85eb2cf33b67595e2a4f944e
apache-2.0
['translation']
false
Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.kan-eng.kan.eng | 9.1 | 0.312 | | Tatoeba-test.mal-eng.mal.eng | 42.0 | 0.584 | | Tatoeba-test.multi.eng | 30.0 | 0.493 | | Tatoeba-test.tam-eng.tam.eng | 30.2 | 0.467 | | Tatoeba-test.tel-eng.tel.eng | 15.9 | 0.378 |
ab5510205c0c03445a70651057515632
apache-2.0
['translation']
false
System Info: - hf_name: dra-eng - source_languages: dra - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/dra-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ta', 'kn', 'ml', 'te', 'dra', 'en'] - src_constituents: {'tam', 'kan', 'mal', 'tel'} - tgt_constituents: {'eng'} - src_multilingual: True - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.test.txt - src_alpha3: dra - tgt_alpha3: eng - short_pair: dra-en - chrF2_score: 0.493 - bleu: 30.0 - brevity_penalty: 1.0 - ref_len: 10641.0 - src_name: Dravidian languages - tgt_name: English - train_date: 2020-07-31 - src_alpha2: dra - tgt_alpha2: en - prefer_old: False - long_pair: dra-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
fd3ef476b5b202c202972de59aeb181b
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
sggryzza Dreambooth model trained by Xeronate with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
06a25872022b22284fd38ca8519b7337
cc-by-4.0
['question-answering, multi-step-reasoning, multi-hop-reasoning']
false
digit_tokenization.py from https://github.com/stonybrooknlp/teabreac model_name = "StonyBrookNLP/teabreac-poet-large-iirc-retrieved" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
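The snippet above appears truncated. A completed sketch might look like the following; the `enable_digit_tokenization` helper name is a hypothetical placeholder — check `digit_tokenization.py` in the linked repository for the actual API:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# digit_tokenization.py comes from https://github.com/stonybrooknlp/teabreac
from digit_tokenization import enable_digit_tokenization  # hypothetical helper name

model_name = "StonyBrookNLP/teabreac-poet-large-iirc-retrieved"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
enable_digit_tokenization(tokenizer)
```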
16238b30bf6174eee9a63e619348dd78
apache-2.0
['audio', 'automatic-speech-recognition']
false
Model Details - **Model Description:** This model was pre-trained from scratch on the wav2vec2-conformer base architecture. <br /> It is a model fine-tuned on KsponSpeech using Wav2Vec2ConformerForCTC. <br /> - Dataset used: [AIHub KsponSpeech](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=123) <br /> The datasets were created by preprocessing that data. <br /> "del-1s" means that audio clips shorter than 1 second were filtered out. <br /> This model was trained on data transcribed according to **42maru's own custom transcription rules** (numbers and English follow Korean orthography). <br /> - **Developed by:** TADev (@lIlBrother, @ddobokki, @jp42maru) - **Language(s):** Korean - **License:** apache-2.0 - **Parent Model:** See the [wav2vec2-conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer) for more information about the pre-trained base model. (This model was pre-trained from scratch on the wav2vec2-conformer base architecture.)
fad4029d7ad3e8f6a5874e15c575a2f3
apache-2.0
['audio', 'automatic-speech-recognition']
false
How to Get Started With the Model For an example of Wav2Vec2ProcessorWithLM combined with KenLM, see the [42maru-kenlm example](https://huggingface.co/42MARU/ko-ctc-kenlm-42maru-only-wiki). ```python import librosa from pyctcdecode import build_ctcdecoder from transformers import ( AutoConfig, AutoFeatureExtractor, AutoModelForCTC, AutoTokenizer, Wav2Vec2ProcessorWithLM, ) from transformers.pipelines import AutomaticSpeechRecognitionPipeline audio_path = ""
4ead953feaea112f327fbad6ba931085
apache-2.0
['audio', 'automatic-speech-recognition']
false
Load the model, the tokenizer, and the modules needed for prediction. model = AutoModelForCTC.from_pretrained("42MARU/ko-42maru-wav2vec2-conformer-del-1s") feature_extractor = AutoFeatureExtractor.from_pretrained("42MARU/ko-42maru-wav2vec2-conformer-del-1s") tokenizer = AutoTokenizer.from_pretrained("42MARU/ko-42maru-wav2vec2-conformer-del-1s") beamsearch_decoder = build_ctcdecoder( labels=list(tokenizer.encoder.keys()), kenlm_model_path=None, ) processor = Wav2Vec2ProcessorWithLM( feature_extractor=feature_extractor, tokenizer=tokenizer, decoder=beamsearch_decoder )
c871427baf1c54d8958b60089dc11b13
apache-2.0
['audio', 'automatic-speech-recognition']
false
Insert the defined modules into the pipeline for the actual prediction. asr_pipeline = AutomaticSpeechRecognitionPipeline( model=model, tokenizer=processor.tokenizer, feature_extractor=processor.feature_extractor, decoder=processor.decoder, device=-1, )
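The example presumably continues by loading the audio and running the pipeline; a hedged completion (the beam width is an illustrative assumption) might be:

```python
# load the audio at 16kHz and run the pipeline; beam_width is an assumed value
raw_data, _ = librosa.load(audio_path, sr=16000)
kwargs = {"decoder_kwargs": {"beam_width": 100}}
pred = asr_pipeline(inputs=raw_data, **kwargs)
print(pred)
```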
6a84ec6746938c7bad77faaa661df9e4
mit
['indonesian-roberta-base-indonli']
false
Indonesian RoBERTa Base IndoNLI Indonesian RoBERTa Base IndoNLI is a natural language inference (NLI) model based on the [RoBERTa](https://arxiv.org/abs/1907.11692) model. The model was originally the pre-trained [Indonesian RoBERTa Base](https://hf.co/flax-community/indonesian-roberta-base) model, which is then fine-tuned on [`IndoNLI`](https://github.com/ir-nlp-csui/indonli)'s dataset consisting of Indonesian Wikipedia, news, and Web articles [1]. After training, the model achieved an evaluation/dev accuracy of 77.06%. On the benchmark `test_lay` subset, the model achieved an accuracy of 74.24% and on the benchmark `test_expert` subset, the model achieved an accuracy of 61.66%. Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with other frameworks nonetheless.
733e38618b0cbb4e04c26b1f9b7e5a16
mit
['indonesian-roberta-base-indonli']
false
| Model | #params | Arch. | Training/Validation data (text) | | --------------------------------- | ------- | ------------ | ------------------------------- | | `indonesian-roberta-base-indonli` | 124M | RoBERTa Base | `IndoNLI` |
0c35726ab680d6739b152782ae725cac
mit
['indonesian-roberta-base-indonli']
false
Evaluation Results The model was trained for 5 epochs, with a batch size of 16, a learning rate of 2e-5, a weight decay of 0.1, and a warmup ratio of 0.2, with linear annealing to 0. The best model was loaded at the end. | Epoch | Training Loss | Validation Loss | Accuracy | | ----- | ------------- | --------------- | -------- | | 1 | 0.989200 | 0.691663 | 0.731452 | | 2 | 0.673000 | 0.621913 | 0.766045 | | 3 | 0.449900 | 0.662543 | 0.770596 | | 4 | 0.293600 | 0.777059 | 0.768320 | | 5 | 0.194200 | 0.948068 | 0.764224 |
3ece6b3c828763ef6d0e17bd67bfcc06
mit
['indonesian-roberta-base-indonli']
false
As NLI Classifier ```python from transformers import pipeline pretrained_name = "w11wo/indonesian-roberta-base-indonli" nlp = pipeline( "sentiment-analysis", model=pretrained_name, tokenizer=pretrained_name ) nlp("Andi tersenyum karena mendapat hasil baik. </s></s> Andi sedih.") ```
ce4b1126e8d034756fbb690f636a01de
mit
['indonesian-roberta-base-indonli']
false
References [1] Mahendra, R., Aji, A. F., Louvan, S., Rahman, F., & Vania, C. (2021, November). [IndoNLI: A Natural Language Inference Dataset for Indonesian](https://arxiv.org/abs/2110.14566). _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_. Association for Computational Linguistics.
15d6c3478557bd2ad3fb660454581450
mit
['indonesian-roberta-base-indonli']
false
Author Indonesian RoBERTa Base IndoNLI was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
5dc66ae322e730a61705d50d4649559b
apache-2.0
['generated_from_trainer']
false
finetuning-sentiment-model-5000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.4210 - Accuracy: 0.8383 - F1: 0.8348
272459983c401b9eb921bb434caa0ec7
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-0']
false
MultiBERTs Seed 0 Checkpoint 500k (uncased) This is the seed-0 intermediate checkpoint at 500k steps of a MultiBERTs (pretrained BERT) model, trained on English using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint. The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multiberts-seed-0). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani).
9e6aeae7f5067a95518c76481dc0fdb4
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-0']
false
How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-500k') model = BertModel.from_pretrained("multiberts-seed-0-500k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ```
db4cc95b30dc0e1272bbdfc292b1f18b
apache-2.0
['generated_from_trainer']
false
distilroberta-base-finetuned-SarcojiComplEmojisDistilRoberta-baseMLM1 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.8333
361218e2efbac204144fd4aae8abaf1e
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.2176 | 1.0 | 768 | 2.9178 | | 2.9632 | 2.0 | 1536 | 2.8355 | | 2.9201 | 3.0 | 2304 | 2.8462 |
3a55a76a64e8051881e91cc09eca76ee
apache-2.0
['automatic-speech-recognition', 'de']
false
exp_w2v2r_de_vp-100k_gender_male-0_female-10_s601 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
0bf8741e64ff0663cac8101b44f7cf4a
mit
['generated_from_trainer']
false
roberta-base-unlabeled-gab-semeval2023-task10-45000samplesample This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.1441
06d0b5458458cd8c723bb3e4ca1ed0f0
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5
4973ebdfcea2ecefee1965b5714e352f
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.4294 | 1.0 | 1407 | 2.2323 | | 2.3091 | 2.0 | 2814 | 2.1470 | | 2.23 | 3.0 | 4221 | 2.1767 | | 2.1866 | 4.0 | 5628 | 2.1625 | | 2.171 | 5.0 | 7035 | 2.1441 |
88e532f1bb98218b0ffd5b2704745aac
apache-2.0
['translation']
false
opus-mt-de-ee * source languages: de * target languages: ee * OPUS readme: [de-ee](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-ee/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-ee/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ee/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ee/opus-2020-01-20.eval.txt)
43e388a81872c494847aa3e56c9ae316
apache-2.0
['generated_from_keras_callback']
false
silviacamplani/distilbert-base-uncased-finetuned-dapt-ner-ai_data This model is a fine-tuned version of [silviacamplani/distilbert-base-uncased-finetuned-ai_data](https://huggingface.co/silviacamplani/distilbert-base-uncased-finetuned-ai_data) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.3549 - Validation Loss: 2.3081 - Train Precision: 0.0 - Train Recall: 0.0 - Train F1: 0.0 - Train Accuracy: 0.6392 - Epoch: 2
40997f8375f513465307c17d52d88d84
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16
fa103db79d7b7f9546f8db926e4649ef
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch | |:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:| | 3.0905 | 2.8512 | 0.0 | 0.0 | 0.0 | 0.6376 | 0 | | 2.6612 | 2.4783 | 0.0 | 0.0 | 0.0 | 0.6392 | 1 | | 2.3549 | 2.3081 | 0.0 | 0.0 | 0.0 | 0.6392 | 2 |
c7dc9c3517f66cf0295f1d3ac3bfc134
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7737 - Matthews Correlation: 0.5335
c1cea6e82e0b21da9a7f3b7220d43592
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5225 | 1.0 | 535 | 0.5170 | 0.4007 | | 0.3509 | 2.0 | 1070 | 0.5220 | 0.4837 | | 0.2405 | 3.0 | 1605 | 0.6164 | 0.5186 | | 0.1777 | 4.0 | 2140 | 0.7737 | 0.5335 | | 0.1295 | 5.0 | 2675 | 0.8374 | 0.5162 |
99512baffbf227ac76111a0a6383df28
apache-2.0
['generated_from_trainer']
false
distilbert_sa_GLUE_Experiment_rte_384 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE RTE dataset. It achieves the following results on the evaluation set: - Loss: 0.6919 - Accuracy: 0.5271
e2e5a37c7e77595cd1b1e5e782163018
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.698 | 1.0 | 10 | 0.6962 | 0.4729 | | 0.6969 | 2.0 | 20 | 0.6966 | 0.4729 | | 0.6955 | 3.0 | 30 | 0.6919 | 0.5271 | | 0.6932 | 4.0 | 40 | 0.6990 | 0.4729 | | 0.6941 | 5.0 | 50 | 0.6931 | 0.5054 | | 0.6892 | 6.0 | 60 | 0.6929 | 0.5199 | | 0.6843 | 7.0 | 70 | 0.6931 | 0.5560 | | 0.6399 | 8.0 | 80 | 0.7372 | 0.4982 |
832245db6243b663d98f583a63a85737
cc-by-sa-4.0
['japanese', 'token-classification', 'pos', 'dependency-parsing']
false
Model Description This is a DeBERTa(V2) model pre-trained on Aozora Bunko (青空文庫) texts for POS-tagging and dependency-parsing, derived from [deberta-large-japanese-unidic](https://huggingface.co/KoichiYasuoka/deberta-large-japanese-unidic). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/).
5c9867ab084059f68a26190f646b77a3
cc-by-sa-4.0
['japanese', 'token-classification', 'pos', 'dependency-parsing']
false
How to Use ```py import torch from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-large-japanese-unidic-luw-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/deberta-large-japanese-unidic-luw-upos") s="国境の長いトンネルを抜けると雪国であった。" t=tokenizer.tokenize(s) p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]] print(list(zip(t,p))) ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/deberta-large-japanese-unidic-luw-upos") print(nlp("国境の長いトンネルを抜けると雪国であった。")) ``` [fugashi](https://pypi.org/project/fugashi), [unidic-lite](https://pypi.org/project/unidic-lite) and [pytokenizations](https://pypi.org/project/pytokenizations) are required.
eaf938e13ea2c3d4f7a30c3b2abbe269
apache-2.0
['image-classification', 'pytorch', 'onnx']
false
MobileNet V3 - Large model Pretrained on a dataset for wildfire binary classification (soon to be shared). The MobileNet V3 architecture was introduced in [this paper](https://arxiv.org/pdf/1905.02244.pdf).
5edad2cea014e85439f9e5189847c723
apache-2.0
['image-classification', 'pytorch', 'onnx']
false
Latest stable release You can install the last stable release of the package using [pypi](https://pypi.org/project/pyrovision/) as follows: ```shell pip install pyrovision ``` or using [conda](https://anaconda.org/pyronear/pyrovision): ```shell conda install -c pyronear pyrovision ```
504aeebe35eee5ab11fa183f00a12a32
apache-2.0
['image-classification', 'pytorch', 'onnx']
false
Developer mode Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*: ```shell git clone https://github.com/pyronear/pyro-vision.git pip install -e pyro-vision/. ```
4a1ea33f547410a67cec7325e506d160
apache-2.0
['image-classification', 'pytorch', 'onnx']
false
Usage instructions ```python from PIL import Image from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize from torchvision.transforms.functional import InterpolationMode from pyrovision.models import model_from_hf_hub model = model_from_hf_hub("pyronear/mobilenet_v3_large").eval() img = Image.open(path_to_an_image).convert("RGB")
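# the block above is truncated; the preprocessing below is an assumed completion
# (the input size and normalization statistics are illustrative, not taken from the card)
import torch

transform = Compose([
    Resize((224, 224), interpolation=InterpolationMode.BILINEAR),
    PILToTensor(),
    ConvertImageDtype(torch.float32),
    Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),  # ImageNet stats, assumed
])

with torch.no_grad():
    logits = model(transform(img).unsqueeze(0))
# depending on the classification head, apply sigmoid (single logit) or softmax (two classes)
```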
0acafcb69b6bf735d2a10789803cd759
apache-2.0
['image-classification', 'pytorch', 'onnx']
false
Citation Original paper ```bibtex @article{DBLP:journals/corr/abs-1905-02244, author = {Andrew Howard and Mark Sandler and Grace Chu and Liang{-}Chieh Chen and Bo Chen and Mingxing Tan and Weijun Wang and Yukun Zhu and Ruoming Pang and Vijay Vasudevan and Quoc V. Le and Hartwig Adam}, title = {Searching for MobileNetV3}, journal = {CoRR}, volume = {abs/1905.02244}, year = {2019}, url = {http://arxiv.org/abs/1905.02244}, eprinttype = {arXiv}, eprint = {1905.02244}, timestamp = {Thu, 27 May 2021 16:20:51 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1905-02244.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` Source of this implementation ```bibtex @software{chintala_torchvision_2017, author = {Chintala, Soumith}, month = {4}, title = {{Torchvision}}, url = {https://github.com/pytorch/vision}, year = {2017} } ```
6e84c65ad722940906579cf1d9dd8cf0
apache-2.0
['translation']
false
opus-mt-es-ro * source languages: es * target languages: ro * OPUS readme: [es-ro](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ro/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ro/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ro/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ro/opus-2020-01-20.eval.txt)
dbccbd51d3c70d38efd38afd4e82ec9c
creativeml-openrail-m
['stable-diffusion', 'text-to-image', 'image-to-image']
false
DarkSouls Diffusion <p> <img src="https://huggingface.co/Guizmus/DarkSoulsDiffusion/resolve/main/showcase.jpg"/><br/> This is a Dreamboothed Stable Diffusion model trained on the style of the DarkSouls series.<br/> The total dataset is made of 100 pictures, and the training was done on runwayml 1.5 with the new VAE, for 2500 steps (LR 1e-6) and then 24k more steps (LR 1e-7).<br/> The token "DarkSouls Style" will bring in the new concept.<br/> The recommended sampling is k_Euler_a or DPM++ 2M Karras at 20 steps, CFG scale 7. </p> [CKPT download link](https://huggingface.co/Guizmus/DarkSoulsDiffusion/resolve/main/DarkSoulsStyle_v1-3.ckpt)
857727017c619d6b582e7df88f9e2245
creativeml-openrail-m
['stable-diffusion', 'text-to-image', 'image-to-image']
false
🧨 Diffusers This model can be used just like any other Stable Diffusion model. For more information, please have a look at the [Stable Diffusion pipeline documentation](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion). You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or FLAX/JAX. ```python from diffusers import StableDiffusionPipeline import torch model_id = "Guizmus/DarkSoulsDiffusion" pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe = pipe.to("cuda") prompt = "a soldier engulfed in fire, DarkSouls Style" image = pipe(prompt).images[0] image.save("./DarkSouls Style.png") ```
91e160544b4432c0b802d1bccc71f691
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xls-r-300m-ja-colab-3 This model is a fine-tuned version of [pinot/wav2vec2-large-xls-r-300m-ja-colab-2](https://huggingface.co/pinot/wav2vec2-large-xls-r-300m-ja-colab-2) on the common_voice_10_0 dataset. It achieves the following results on the evaluation set: - Loss: 1.2696 - Wer: 0.2299
0a64f022e5cbc28cef56191776b619bd
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 - mixed_precision_training: Native AMP
f62a94ee52f0ebf46680ec091b5a3b74
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 637 | 1.4666 | 0.2862 | | No log | 2.0 | 1274 | 1.4405 | 0.2866 | | No log | 3.0 | 1911 | 1.4162 | 0.2762 | | No log | 4.0 | 2548 | 1.4128 | 0.2709 | | 0.2814 | 5.0 | 3185 | 1.3927 | 0.2613 | | 0.2814 | 6.0 | 3822 | 1.3629 | 0.2536 | | 0.2814 | 7.0 | 4459 | 1.3349 | 0.2429 | | 0.2814 | 8.0 | 5096 | 1.3116 | 0.2356 | | 0.1624 | 9.0 | 5733 | 1.2774 | 0.2307 | | 0.1624 | 10.0 | 6370 | 1.2696 | 0.2299 |
95bb9b0578b7cd51cdb8856b5a6d374e
apache-2.0
['generated_from_keras_callback']
false
kasrahabib/all-MiniLM-L6-v2-finetunned-90percentile-384embd-kmeans-propogated This model is a fine-tuned version of [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0070 - Validation Loss: 0.1409 - Train Precision: 0.9618 - Train Recall: 0.9758 - Train F1: 0.9688 - Epoch: 9
b4d295c8b00f46917c61e81aec91a35d
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 4140, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32
00279773b9edc049b822169c5905571b
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Epoch | |:----------:|:---------------:|:---------------:|:------------:|:--------:|:-----:| | 0.2455 | 0.1360 | 0.9231 | 0.9879 | 0.9544 | 0 | | 0.0735 | 0.1060 | 0.9640 | 0.9734 | 0.9687 | 1 | | 0.0450 | 0.1178 | 0.9485 | 0.9806 | 0.9643 | 2 | | 0.0286 | 0.1038 | 0.9599 | 0.9855 | 0.9725 | 3 | | 0.0194 | 0.1229 | 0.9684 | 0.9661 | 0.9673 | 4 | | 0.0183 | 0.1307 | 0.9617 | 0.9734 | 0.9675 | 5 | | 0.0113 | 0.1295 | 0.9618 | 0.9758 | 0.9688 | 6 | | 0.0101 | 0.1397 | 0.9508 | 0.9831 | 0.9667 | 7 | | 0.0093 | 0.1417 | 0.9618 | 0.9758 | 0.9688 | 8 | | 0.0070 | 0.1409 | 0.9618 | 0.9758 | 0.9688 | 9 |
c596bee32d5154fefcacb81a1e3c8107
apache-2.0
['translation', 'generated_from_trainer']
false
Anjan-finetuned-iitbombay-en-to-hi This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-hi](https://huggingface.co/Helsinki-NLP/opus-mt-en-hi) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.7924 - Bleu: 6.3001
34d75738ee1f808694aee49345d907e9
apache-2.0
['translation']
false
opus-mt-lg-sv * source languages: lg * target languages: sv * OPUS readme: [lg-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lg-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lg-sv/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-sv/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-sv/opus-2020-01-09.eval.txt)
27fb0ce83922bcde09b2f539ec947683
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0608 - Precision: 0.9290 - Recall: 0.9371 - F1: 0.9331 - Accuracy: 0.9840
3a7804348f84d813cafa9195148c5a26
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2276 | 1.0 | 878 | 0.0685 | 0.9204 | 0.9246 | 0.9225 | 0.9814 | | 0.0498 | 2.0 | 1756 | 0.0622 | 0.9238 | 0.9358 | 0.9298 | 0.9833 | | 0.0298 | 3.0 | 2634 | 0.0608 | 0.9290 | 0.9371 | 0.9331 | 0.9840 |
8965ec2565ea746183cab06015834d87
apache-2.0
['part-of-speech', 'token-classification']
false
XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Classical Chinese This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
cc6bf68fdcdeb3aa8395e9e0d57aade1
apache-2.0
['part-of-speech', 'token-classification']
false
Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-lzh") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-lzh") ```
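One way to run the tagger end-to-end is through the token-classification pipeline (a sketch; the example sentence is arbitrary Classical Chinese):

```python
from transformers import pipeline

pos = pipeline(
    "token-classification",
    model="wietsedv/xlm-roberta-base-ft-udpos28-lzh",
    aggregation_strategy="simple",
)
print(pos("子曰學而時習之"))
```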
9ad330094110baf8b246a9b0baa21c12
cc-by-4.0
['generated_from_trainer']
false
hing-roberta-NCM-run-1 This model is a fine-tuned version of [l3cube-pune/hing-roberta](https://huggingface.co/l3cube-pune/hing-roberta) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.2912 - Accuracy: 0.6667 - Precision: 0.6513 - Recall: 0.6494 - F1: 0.6502
f67c65b7bc2f225a0bc514d0448a8ea8
cc-by-4.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.8968 | 1.0 | 927 | 0.8552 | 0.6257 | 0.6508 | 0.5961 | 0.5969 | | 0.7022 | 2.0 | 1854 | 1.1142 | 0.3937 | 0.3270 | 0.3273 | 0.2051 | | 0.5569 | 3.0 | 2781 | 0.9130 | 0.6591 | 0.6566 | 0.6612 | 0.6509 | | 0.363 | 4.0 | 3708 | 1.6630 | 0.6526 | 0.6634 | 0.6414 | 0.6436 | | 0.2801 | 5.0 | 4635 | 2.0458 | 0.6451 | 0.6339 | 0.6345 | 0.6330 | | 0.1925 | 6.0 | 5562 | 2.3378 | 0.6570 | 0.6439 | 0.6254 | 0.6277 | | 0.1297 | 7.0 | 6489 | 2.5205 | 0.6839 | 0.6719 | 0.6651 | 0.6675 | | 0.114 | 8.0 | 7416 | 2.8373 | 0.6505 | 0.6379 | 0.6249 | 0.6280 | | 0.0994 | 9.0 | 8343 | 2.5358 | 0.6634 | 0.6539 | 0.6446 | 0.6474 | | 0.0977 | 10.0 | 9270 | 2.8244 | 0.6537 | 0.6489 | 0.6210 | 0.6238 | | 0.0623 | 11.0 | 10197 | 2.7593 | 0.6764 | 0.6602 | 0.6487 | 0.6510 | | 0.0537 | 12.0 | 11124 | 2.9823 | 0.6677 | 0.6679 | 0.6450 | 0.6488 | | 0.0432 | 13.0 | 12051 | 3.0792 | 0.6537 | 0.6465 | 0.6352 | 0.6378 | | 0.0406 | 14.0 | 12978 | 3.0707 | 0.6688 | 0.6592 | 0.6509 | 0.6534 | | 0.0296 | 15.0 | 13905 | 3.3289 | 0.6667 | 0.6596 | 0.6452 | 0.6486 | | 0.0288 | 16.0 | 14832 | 3.2147 | 0.6645 | 0.6592 | 0.6512 | 0.6528 | | 0.024 | 17.0 | 15759 | 3.3284 | 0.6645 | 0.6470 | 0.6405 | 0.6425 | | 0.0201 | 18.0 | 16686 | 3.2428 | 0.6688 | 0.6515 | 0.6515 | 0.6515 | | 0.0176 | 19.0 | 17613 | 3.2680 | 0.6710 | 0.6574 | 0.6536 | 0.6547 | | 0.0168 | 20.0 | 18540 | 3.2912 | 0.6667 | 0.6513 | 0.6494 | 0.6502 |
4c8bd28abb9ef93cf7d915a2df630285
apache-2.0
['multiberts', 'multiberts-seed_3', 'multiberts-seed_3-step_800k']
false
MultiBERTs, Intermediate Checkpoint - Seed 3, Step 800k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is the checkpoint for seed 3, captured at pre-training step 800k.
b4500346672afea323f791bc8731ecac
apache-2.0
['multiberts', 'multiberts-seed_3', 'multiberts-seed_3-step_800k']
false
How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_3-step_800k') model = TFBertModel.from_pretrained("google/multiberts-seed_3-step_800k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_3-step_800k') model = BertModel.from_pretrained("google/multiberts-seed_3-step_800k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ```
cab2f473e900ff7375bc9c78e8f7e51f
mit
['generated_from_trainer']
false
indobert-base-p2-finetuned-mer-10k This model is a fine-tuned version of [indobenchmark/indobert-base-p2](https://huggingface.co/indobenchmark/indobert-base-p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3370
5b85b0db15445ecdacfd696e782a7842
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP
4d0b6b6c3f8118235da45ff52f57a4ef
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.9568 | 1.0 | 274 | 3.6237 | | 3.4802 | 2.0 | 548 | 3.0803 | | 3.0626 | 3.0 | 822 | 2.8108 | | 2.8591 | 4.0 | 1096 | 2.6345 | | 2.7182 | 5.0 | 1370 | 2.5492 | | 2.6223 | 6.0 | 1644 | 2.4692 | | 2.5426 | 7.0 | 1918 | 2.4122 | | 2.5019 | 8.0 | 2192 | 2.3611 | | 2.4649 | 9.0 | 2466 | 2.3447 | | 2.4631 | 10.0 | 2740 | 2.3392 |
73946336c529a7be6be9700778a760e5
apache-2.0
['generated_from_trainer']
false
wav2vec2-libri-train360_2-colab This model is a fine-tuned version of [GW12/wav2vec2-libri-train360-colab](https://huggingface.co/GW12/wav2vec2-libri-train360-colab) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1024 - Wer: 0.0959
b02f59ea06d31649951a9bc496279199