Dataset schema: `license` (string, length 2–30), `tags` (string, length 2–513), `is_nc` (bool, 1 class), `readme_section` (string, length 201–597k), `hash` (string, length 32).
apache-2.0
['neuspell', 'spelling', 'spell-correction']
false
neuspell-subwordbert-probwordnoise > Towards a reliable workaround for the `neuspell` library being broken. See the [github repository](https://github.com/neuspell/neuspell) for usage and all official information.
335966ae969a3c4ad5d8b27464ab2c68
apache-2.0
['neuspell', 'spelling', 'spell-correction']
false
Usage Clone this model repo with git: ```bash sudo apt-get install git-lfs -q git clone https://huggingface.co/pszemraj/neuspell-subwordbert-probwordnoise ``` Install `neuspell` from PyPI: ```bash pip install -U neuspell -q ``` Use in Python for spell correction: ```python from neuspell import BertChecker checker = BertChecker() checker.from_pretrained("./neuspell-subwordbert-probwordnoise/") checker.correct("I luk foward to receving your reply") ```
f5064fdbb3a59c1dfb9b57c8b3259502
mit
['generated_from_trainer']
false
bart-large-cnn-small-xsum-5epochs This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 2.7051 - Rouge1: 0.2859 - Rouge2: 0.0937 - Rougel: 0.2033 - Rougelsum: 0.2101
11dc78606b5adab23a3cf4cab73a4577
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.045e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 16 - num_epochs: 5 - mixed_precision_training: Native AMP
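The `linear` scheduler with 16 warmup steps ramps the learning rate up to its peak and then decays it linearly to zero. A minimal sketch in plain Python, assuming roughly 250 total optimizer steps (inferred from the step/epoch ratio in this card's training-results table; the real run uses the equivalent scheduler from `transformers`):

```python
def linear_schedule_lr(step, base_lr=2.045e-05, warmup_steps=16, total_steps=250):
    """Linear warmup to base_lr, then linear decay to 0 (HF-style 'linear' scheduler)."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    # linear decay from base_lr at the end of warmup down to 0 at total_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_schedule_lr(16))   # peak lr, reached right at the end of warmup
print(linear_schedule_lr(250))  # 0.0
```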
a24df4ac3ef67fc23879e2a46b788f2d
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | 2.5007 | 0.32 | 16 | 2.0311 | 0.2393 | 0.0609 | 0.1618 | 0.1832 | | 2.0942 | 0.64 | 32 | 1.9169 | 0.2906 | 0.1053 | 0.2072 | 0.2166 | | 1.7543 | 0.96 | 48 | 1.9069 | 0.2904 | 0.0955 | 0.2058 | 0.2187 | | 1.2476 | 1.28 | 64 | 1.9614 | 0.2928 | 0.1043 | 0.2081 | 0.2257 | | 1.2318 | 1.6 | 80 | 1.9622 | 0.2892 | 0.0976 | 0.2099 | 0.2245 | | 1.0768 | 1.92 | 96 | 2.0244 | 0.2935 | 0.1008 | 0.2095 | 0.2209 | | 0.8845 | 2.24 | 112 | 2.0605 | 0.2886 | 0.0992 | 0.2039 | 0.2146 | | 0.5722 | 2.56 | 128 | 2.2340 | 0.2852 | 0.0946 | 0.1983 | 0.2146 | | 0.7132 | 2.88 | 144 | 2.1948 | 0.2838 | 0.0961 | 0.2047 | 0.2163 | | 0.4438 | 3.2 | 160 | 2.3758 | 0.2869 | 0.0906 | 0.1987 | 0.2102 | | 0.4194 | 3.52 | 176 | 2.5609 | 0.2882 | 0.0916 | 0.2022 | 0.2133 | | 0.3404 | 3.84 | 192 | 2.4988 | 0.2884 | 0.0907 | 0.2022 | 0.213 | | 0.2929 | 4.16 | 208 | 2.5802 | 0.2885 | 0.0967 | 0.2046 | 0.2141 | | 0.2466 | 4.48 | 224 | 2.6590 | 0.2823 | 0.094 | 0.1994 | 0.2119 | | 0.1889 | 4.8 | 240 | 2.7051 | 0.2859 | 0.0937 | 0.2033 | 0.2101 |
a0f5ffead400cf8828464181bd064aaa
mit
['gpt2-base-thai']
false
GPT-2 Base Thai GPT-2 Base Thai is a causal language model based on the [OpenAI GPT-2](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) model. It was trained on the [OSCAR](https://huggingface.co/datasets/oscar) dataset, specifically the `unshuffled_deduplicated_th` subset. The model was trained from scratch and achieved an evaluation loss of 1.708 and an evaluation perplexity of 5.516. This model was trained using HuggingFace's Flax framework and is part of the [JAX/Flax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) organized by HuggingFace. All training was done on a TPUv3-8 VM, sponsored by the Google Cloud team. All the necessary scripts used for training can be found in the [Files and versions](https://hf.co/flax-community/gpt2-base-thai/tree/main) tab, as well as the [Training metrics](https://hf.co/flax-community/gpt2-base-thai/tensorboard) logged via TensorBoard.
1b96bd1dd6c2c9e93ff3941b31d19c51
mit
['gpt2-base-thai']
false
params | Model | #params | Arch. | Training/Validation data (text) | | ---------------- | ------- | ----- | ------------------------------------ | | `gpt2-base-thai` | 124M | GPT-2 | `unshuffled_deduplicated_th` Dataset |
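As a sanity check on the 124M figure, the parameter count of a standard GPT-2 base configuration can be reproduced from first principles (12 layers, 768 hidden size, 1024 positions; the 50,257 vocabulary here is the English GPT-2 default, and the Thai tokenizer's vocabulary size may differ slightly):

```python
def gpt2_param_count(n_layer=12, d=768, n_ctx=1024, vocab=50257):
    """Rough parameter count for a GPT-2-style decoder (weights + biases + LayerNorms)."""
    emb = vocab * d + n_ctx * d                       # token + position embeddings
    attn = d * 3 * d + 3 * d + d * d + d              # fused qkv projection + output projection
    mlp = d * 4 * d + 4 * d + 4 * d * d + d           # two feed-forward layers
    ln = 2 * 2 * d                                    # two LayerNorms per block (scale + bias)
    return emb + n_layer * (attn + mlp + ln) + 2 * d  # plus the final LayerNorm

print(round(gpt2_param_count() / 1e6))  # 124
```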
d2d01659fcc530c95ecc5da3bfec02d3
mit
['gpt2-base-thai']
false
Evaluation Results The model was trained for 3 epochs; the final results at the end of training are shown below. | train loss | valid loss | valid PPL | total time | | ---------- | ---------- | --------- | ---------- | | 1.638 | 1.708 | 5.516 | 6:12:34 |
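The valid PPL column is simply the exponential of the validation cross-entropy loss (in nats); the small discrepancy below comes from the table's loss value being rounded:

```python
import math

valid_loss = 1.708                     # validation cross-entropy from the table
print(round(math.exp(valid_loss), 3))  # 5.518, vs. the reported 5.516 from the unrounded loss
```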
8498e294b0fa9dbf2ff8fcfdcc1b63e5
mit
['gpt2-base-thai']
false
As Causal Language Model ```python from transformers import pipeline pretrained_name = "flax-community/gpt2-base-thai" nlp = pipeline( "text-generation", model=pretrained_name, tokenizer=pretrained_name ) nlp("สวัสดีตอนเช้า") ```
3754ffc0c0ad50235efe29a5af1ea373
mit
['gpt2-base-thai']
false
Feature Extraction in PyTorch ```python from transformers import GPT2Model, GPT2TokenizerFast pretrained_name = "flax-community/gpt2-base-thai" model = GPT2Model.from_pretrained(pretrained_name) tokenizer = GPT2TokenizerFast.from_pretrained(pretrained_name) prompt = "สวัสดีตอนเช้า" encoded_input = tokenizer(prompt, return_tensors='pt') output = model(**encoded_input) ```
7b4c1dccf6c6871dceffe242131c43d4
apache-2.0
['token-classification']
false
distilroberta-base-ner-wikiann This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the wikiann dataset. eval F1-score: **83.78** test F1-score: **83.76**
46c2a0849baa13a0a33859bee889a6bf
apache-2.0
['token-classification']
false
Model Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline tokenizer = AutoTokenizer.from_pretrained("philschmid/distilroberta-base-ner-wikiann") model = AutoModelForTokenClassification.from_pretrained("philschmid/distilroberta-base-ner-wikiann") nlp = pipeline("ner", model=model, tokenizer=tokenizer, grouped_entities=True) example = "My name is Philipp and live in Germany" nlp(example) ```
cc255c336e95272dcd95ee080fba71cf
apache-2.0
['token-classification']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.9086903597787154e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 - mixed_precision_training: Native AMP
2bf802c2385933865b71a99d13daff54
apache-2.0
['token-classification']
false
Training results It achieves the following results on the evaluation set: - Loss: 0.3156 - Precision: 0.8332 - Recall: 0.8424 - F1: 0.8378 - Accuracy: 0.9193 It achieves the following results on the test set: - Loss: 0.3023 - Precision: 0.8301 - Recall: 0.8452 - F1: 0.8376 - Accuracy: 0.92
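The reported F1 values are the harmonic mean of the precision and recall listed above; a quick check in plain Python:

```python
def f1(precision, recall):
    """F1 score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.8332, 0.8424), 4))  # eval: 0.8378
print(round(f1(0.8301, 0.8452), 4))  # test: 0.8376
```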
4eb4fd3a942044d978096eb4bc7ff363
apache-2.0
['generated_from_trainer']
false
vtt-indonesia This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.3472 - Wer: 0.3582
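WER, reported above, is the word-level edit distance between reference and hypothesis transcripts divided by the reference length. A minimal reference implementation (the Indonesian example strings are illustrative only, not taken from the evaluation set):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + insertions + deletions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # standard dynamic-programming edit distance over words, one row at a time
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            cur = min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
            prev, d[j] = d[j], cur
    return d[len(hyp)] / len(ref)

print(wer("saya pergi ke pasar", "saya pergi pasar"))  # 0.25 (one deletion over four words)
```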
dbff470ea8d09c396fde30694f67da93
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.7612 | 3.23 | 400 | 0.6405 | 0.6714 | | 0.4143 | 6.45 | 800 | 0.3772 | 0.4974 | | 0.2068 | 9.68 | 1200 | 0.3877 | 0.4442 | | 0.1436 | 12.9 | 1600 | 0.3785 | 0.4212 | | 0.1133 | 16.13 | 2000 | 0.3944 | 0.4144 | | 0.09 | 19.35 | 2400 | 0.3695 | 0.3925 | | 0.0705 | 22.58 | 2800 | 0.3706 | 0.3846 | | 0.057 | 25.81 | 3200 | 0.3720 | 0.3725 | | 0.048 | 29.03 | 3600 | 0.3472 | 0.3582 |
1a14c8a053bd29d8494dc167dee1f604
apache-2.0
['automatic-speech-recognition', 'uk']
false
exp_w2v2t_uk_wavlm_s722 Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (uk)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
0e7785025b4930cf84a1112c1e96058b
apache-2.0
['italian', 'sequence-to-sequence', 'style-transfer', 'formality-style-transfer']
false
IT5 Base for Formal-to-informal Style Transfer 🤗 This repository contains the checkpoint for the [IT5 Base](https://huggingface.co/gsarti/it5-base) model fine-tuned on Formal-to-informal style transfer on the Italian subset of the XFORMAL dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io). A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
cb4bdcc3937782ffc9a695dc1ed850e7
apache-2.0
['italian', 'sequence-to-sequence', 'style-transfer', 'formality-style-transfer']
false
Using the model Model checkpoints are available for usage in TensorFlow, PyTorch and JAX. They can be used directly with pipelines as: ```python from transformers import pipeline f2i = pipeline("text2text-generation", model='it5/it5-base-formal-to-informal') f2i("Vi ringrazio infinitamente per vostra disponibilità") >>> [{"generated_text": "e grazie per la vostra disponibilità!"}] ``` or loaded using autoclasses: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("it5/it5-base-formal-to-informal") model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-base-formal-to-informal") ``` If you use this model in your research, please cite our work as: ```bibtex @article{sarti-nissim-2022-it5, title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation}, author={Sarti, Gabriele and Nissim, Malvina}, journal={ArXiv preprint 2203.03759}, url={https://arxiv.org/abs/2203.03759}, year={2022}, month={mar} } ```
e84aed108a01243dd68a33f4e5f973aa
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2217 - Accuracy: 0.921 - F1: 0.9212
9527243117ed322fafb3f6d3f2bb3219
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8094 | 1.0 | 250 | 0.3157 | 0.9055 | 0.9009 | | 0.2462 | 2.0 | 500 | 0.2217 | 0.921 | 0.9212 |
c866c6c6c3fe315727765a84ad0fdc98
apache-2.0
['catalan', 'masked-lm', 'RoBERTa-large-ca-v2', 'CaText', 'Catalan Textual Corpus']
false
Model description The **roberta-large-ca-v2** is a transformer-based masked language model for the Catalan language. It is based on the [RoBERTa](https://github.com/pytorch/fairseq/tree/master/examples/roberta) large model and has been trained on a medium-sized corpus collected from publicly available corpora and crawlers.
ab37953918f36430c0812738fa773649
apache-2.0
['catalan', 'masked-lm', 'RoBERTa-large-ca-v2', 'CaText', 'Catalan Textual Corpus']
false
Intended uses and limitations **roberta-large-ca-v2** model is ready-to-use only for masked language modeling to perform the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on non-generative downstream tasks such as Question Answering, Text Classification, or Named Entity Recognition.
7674a94c32f09db723c35acac4fbce95
apache-2.0
['catalan', 'masked-lm', 'RoBERTa-large-ca-v2', 'CaText', 'Catalan Textual Corpus']
false
How to use Here is how to use this model: ```python from transformers import AutoModelForMaskedLM from transformers import AutoTokenizer, FillMaskPipeline from pprint import pprint tokenizer_hf = AutoTokenizer.from_pretrained('projecte-aina/roberta-large-ca-v2') model = AutoModelForMaskedLM.from_pretrained('projecte-aina/roberta-large-ca-v2') model.eval() pipeline = FillMaskPipeline(model, tokenizer_hf) text = f"Em dic <mask>." res_hf = pipeline(text) pprint([r['token_str'] for r in res_hf]) ```
b2092e35dbfbf710d1afc507d21df54b
apache-2.0
['catalan', 'masked-lm', 'RoBERTa-large-ca-v2', 'CaText', 'Catalan Textual Corpus']
false
Training procedure The training corpus has been tokenized using a byte version of [Byte-Pair Encoding (BPE)](https://github.com/openai/gpt-2), as used in the original [RoBERTa](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model, with a vocabulary size of 50,262 tokens. The RoBERTa-large pretraining consists of masked language model training that follows the approach employed for the RoBERTa large model, with the same hyperparameters as in the original work. The training lasted a total of 96 hours with 32 NVIDIA V100 GPUs of 16GB VRAM.
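BPE, as referenced above, builds its vocabulary by repeatedly merging the most frequent adjacent symbol pair. A toy sketch of one merge step (illustrative data only, not the actual Catalan training pipeline, which operates on bytes):

```python
from collections import Counter

def most_frequent_pair(corpus):
    """corpus: list of words, each a list of symbols. Returns the most common adjacent pair."""
    pairs = Counter()
    for word in corpus:
        for a, b in zip(word, word[1:]):
            pairs[(a, b)] += 1
    return pairs.most_common(1)[0][0]

def merge_pair(corpus, pair):
    """Replace every occurrence of the adjacent `pair` with a single merged symbol."""
    merged = []
    for word in corpus:
        out, i = [], 0
        while i < len(word):
            if i + 1 < len(word) and (word[i], word[i + 1]) == pair:
                out.append(word[i] + word[i + 1])
                i += 2
            else:
                out.append(word[i])
                i += 1
        merged.append(out)
    return merged

corpus = [list("catalan"), list("catala"), list("cat")]
print(merge_pair(corpus, most_frequent_pair(corpus)))
```

Repeating this step until the vocabulary reaches the target size (here, 50,262) yields the final merge table.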
acde1fa177cb848c534ace36cb208915
apache-2.0
['catalan', 'masked-lm', 'RoBERTa-large-ca-v2', 'CaText', 'Catalan Textual Corpus']
false
CLUB benchmark The **roberta-large-ca-v2** model has been fine-tuned on the downstream tasks of the Catalan Language Understanding Evaluation benchmark (CLUB), which has been created along with the model. It contains the following tasks and their related datasets: 1. Named Entity Recognition (NER): **[NER (AnCora)](https://zenodo.org/record/4762031)**
ede648b5e26b1546c0fd972decfaac64
apache-2.0
['generated_from_trainer']
false
tiny-mlm-glue-rte-target-glue-rte This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-rte](https://huggingface.co/muhtasham/tiny-mlm-glue-rte) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1886 - Accuracy: 0.6209
9c5bfc37ce5dc67af0cb51571e497f60
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6394 | 6.41 | 500 | 0.6611 | 0.6318 | | 0.4349 | 12.82 | 1000 | 0.8110 | 0.6245 | | 0.268 | 19.23 | 1500 | 0.9771 | 0.6209 | | 0.1653 | 25.64 | 2000 | 1.1886 | 0.6209 |
587a1c4abc3ce95a7c09e5495cdce76c
apache-2.0
['bert', 'wnli', 'glue', 'kd', 'torchdistill']
false
`bert-base-uncased` fine-tuned on WNLI dataset, using fine-tuned `bert-large-uncased` as a teacher model, [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_kd_and_submission.ipynb) for knowledge distillation. The training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/wnli/kd/bert_base_uncased_from_bert_large_uncased.yaml). I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **78.9**.
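For context, the soft-target loss at the heart of this kind of knowledge distillation can be sketched in plain Python: the student is trained to match the teacher's temperature-softened output distribution. The temperature T=4.0 below is illustrative only; the actual settings are in the linked torchdistill config.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened distributions, scaled by T^2."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

print(kd_loss([2.0, -1.0], [2.0, -1.0]))  # 0.0: identical logits give zero distillation loss
```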
ccf2156cd0d67bf1c9b30aaea5875cbd
apache-2.0
['generated_from_trainer', 'summarization']
false
t5-seven-epoch-base-german This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the mlsum de dataset. It achieves the following results on the evaluation set: - Loss: 1.5491 - Rouge1: 42.3787 - Rouge2: 32.0253 - Rougel: 38.9529 - Rougelsum: 40.4544 - Gen Len: 47.7873
baf185a13b31826185f9c90daaa649ff
apache-2.0
['generated_from_trainer', 'summarization']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7.0
2c38df04ca710e423af22d91e01dfcc7
mit
['generated_from_trainer']
false
finetuned_gpt2-xl_sst2_negation0.1_pretrainedFalse_epochs10 This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on the sst2 dataset. It achieves the following results on the evaluation set: - Loss: 4.2784
b86ea2e18c8bea19baf9e02bae7d30e4
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.7811 | 1.0 | 1329 | 3.1311 | | 1.0842 | 2.0 | 2658 | 3.4312 | | 0.8781 | 3.0 | 3987 | 3.6260 | | 0.7678 | 4.0 | 5316 | 3.7834 | | 0.706 | 5.0 | 6645 | 3.9070 | | 0.6531 | 6.0 | 7974 | 3.9999 | | 0.6115 | 7.0 | 9303 | 4.0954 | | 0.5744 | 8.0 | 10632 | 4.1809 | | 0.5402 | 9.0 | 11961 | 4.2368 | | 0.5158 | 10.0 | 13290 | 4.2784 |
ca7bbe0efa8a7dfa7f15cc905823bf8f
apache-2.0
['national library of spain', 'spanish', 'bne', 'gpt2-large-bne']
false
Model description **GPT2-large-bne** is a transformer-based model for the Spanish language. It is based on the [GPT-2](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.
be44b550d2208bf8b6f642507a1e5063
apache-2.0
['national library of spain', 'spanish', 'bne', 'gpt2-large-bne']
false
How to use Here is how to use this model: You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline, set_seed >>> tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/gpt2-large-bne") >>> model = AutoModelForCausalLM.from_pretrained("PlanTL-GOB-ES/gpt2-large-bne") >>> generator = pipeline('text-generation', tokenizer=tokenizer, model=model) >>> set_seed(42) >>> generator("La Biblioteca Nacional de España es una entidad pública y sus fines son", num_return_sequences=5) [{'generated_text': 'La Biblioteca Nacional de España es una entidad pública y sus fines son servir como herramienta básica en la difusión de la cultura. '}, {'generated_text': 'La Biblioteca Nacional de España es una entidad pública y sus fines son el desarrollo de la educación, la cultura y el conocimiento, promoviendo actividades a través de Internet con la información que recibe del acceso a los fondos que en ella se almacenan. '}, {'generated_text': 'La Biblioteca Nacional de España es una entidad pública y sus fines son la publicación y difusión cultural. '}, {'generated_text': 'La Biblioteca Nacional de España es una entidad pública y sus fines son preservar y difundir los fondos y colecciones de la Biblioteca Nacional, así como servir de punto de encuentro para toda la comunidad científica, la academia y para la sociedad civil. 
'}, {'generated_text': 'La Biblioteca Nacional de España es una entidad pública y sus fines son la conservación, estudio y difusión del Patrimonio Bibliográfico en cualquiera de sus formas así como la formación y perfeccionamiento de los especialistas e investigadores en el campo de la información y de las bibliotecas.'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python >>> from transformers import AutoTokenizer, GPT2Model >>> tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/gpt2-large-bne") >>> model = GPT2Model.from_pretrained("PlanTL-GOB-ES/gpt2-large-bne") >>> text = "La Biblioteca Nacional de España es una entidad pública y sus fines son" >>> encoded_input = tokenizer(text, return_tensors='pt') >>> output = model(**encoded_input) >>> print(output.last_hidden_state.shape) torch.Size([1, 14, 1280]) ```
2625a8cc5054288093a450e20660020a
apache-2.0
['national library of spain', 'spanish', 'bne', 'gpt2-large-bne']
false
Limitations and bias At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. Nevertheless, here's an example of how the model can have biased predictions: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline, set_seed >>> tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/gpt2-large-bne") >>> model = AutoModelForCausalLM.from_pretrained("PlanTL-GOB-ES/gpt2-large-bne") >>> generator = pipeline('text-generation', tokenizer=tokenizer, model=model) >>> set_seed(42) >>> generator("El hombre se dedica a", num_return_sequences=5) [{'generated_text': 'El hombre se dedica a comprar móviles a sus padres, pero les paga por ellos y luego les devuelve la pasta a ella. '}, {'generated_text': 'El hombre se dedica a la venta ambulante ilegal en la zona de la Alameda, con puestos del rastro callejero o de supermercados a los que luego roba. '}, {'generated_text': 'El hombre se dedica a la venta ambulante en el Paseo de Melilla. '}, {'generated_text': 'El hombre se dedica a los tatuajes y los dibujos en el cuerpo con su apariencia física y no da a basto en las tareas domésticas. '}, {'generated_text': 'El hombre se dedica a la caza indiscriminada de animales. '}] >>> set_seed(42) >>> generator("La mujer se dedica a", num_return_sequences=5) [{'generated_text': 'La mujer se dedica a comprar móviles a sus padres, pero les paga por ellos y luego no paga la factura." '}, {'generated_text': 'La mujer se dedica a la venta ambulante y su pareja vende cupones en el mercadillo navideño. '}, {'generated_text': 'La mujer se dedica a la venta al por mayor de perfumes, cosmética, complementos, y otros bienes de consumo. 
'}, {'generated_text': 'La mujer se dedica a los servicios sexuales y se aprovecha de los servicios religiosos. '}, {'generated_text': 'La mujer se dedica a la prostitución y tiene dos hijas del matrimonio y la propia familia de la víctima. '}] ```
24c438ecdb63a6142fde43efca74af14
apache-2.0
['national library of spain', 'spanish', 'bne', 'gpt2-large-bne']
false
Training data The [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019. To obtain a high-quality training corpus, the corpus has been preprocessed with a pipeline of operations, including, among others, sentence splitting, language detection, filtering of malformed sentences, and deduplication of repetitive contents. Document boundaries are kept during the process. This resulted in 2TB of clean Spanish corpus; further global deduplication across the corpus yielded the final 570GB of text. Some statistics of the corpus: | Corpora | Number of documents | Number of tokens | Size (GB) | |---------|---------------------|------------------|-----------| | BNE | 201,080,084 | 135,733,450,668 | 570GB |
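From the table above, the corpus averages roughly 675 tokens per document:

```python
tokens = 135_733_450_668
documents = 201_080_084
print(round(tokens / documents))  # 675 tokens per document on average
```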
26a4017d27a6ee5d546529121094a227
apache-2.0
['national library of spain', 'spanish', 'bne', 'gpt2-large-bne']
false
Training procedure The pretraining objective used for this architecture is next token prediction. The configuration of the **GPT2-large-bne** model is as follows: - gpt2-large: 36-layer, 1280-hidden, 20-heads, 774M parameters. The training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original [GPT-2](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) model with a vocabulary size of 50,262 tokens. The GPT2-large-bne pre-training consists of an autoregressive language model training that follows the approach of the GPT-2. The training lasted a total of 10 days with 32 computing nodes each one with 4 NVIDIA V100 GPUs of 16GB VRAM.
e8a2bd2186b3d8c1d10f7f6e11cbf878
apache-2.0
['national library of spain', 'spanish', 'bne', 'gpt2-large-bne']
false
Citation information If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405): ``` @article{, abstract = {We want to thank the National Library of Spain for such a large effort on the data gathering and the Future of Computing Center, a Barcelona Supercomputing Center and IBM initiative (2020). This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.}, author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas}, doi = {10.26342/2022-68-3}, issn = {1135-5948}, journal = {Procesamiento del Lenguaje Natural}, keywords = {Artificial intelligence,Benchmarking,Data processing.,MarIA,Natural language processing,Spanish language modelling,Spanish language resources,Tractament del llenguatge natural (Informàtica),Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial::Llenguatge natural}, publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural}, title = {MarIA: Spanish Language Models}, volume = {68}, url = {https://upcommons.upc.edu/handle/2117/367156} } ```
6b6757f6d76a12610823ef955a8f4880
apache-2.0
['generated_from_trainer']
false
tiny-mlm-glue-stsb-from-scratch-custom-tokenizer-expand-vocab This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 6.5710
e56499c300200fb347a709495fabf376
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 9.575 | 0.7 | 500 | 8.4501 | | 7.8603 | 1.39 | 1000 | 7.2557 | | 7.0873 | 2.09 | 1500 | 6.8941 | | 6.8132 | 2.78 | 2000 | 6.7624 | | 6.8004 | 3.48 | 2500 | 6.5626 | | 6.7383 | 4.17 | 3000 | 6.6079 | | 6.6661 | 4.87 | 3500 | 6.5800 | | 6.6778 | 5.56 | 4000 | 6.5710 |
ff5281a2d2e97e9fb3e46300b873f1e0
mit
['generated_from_trainer']
false
predict-perception-xlmr-blame-concept This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9414 - Rmse: 0.7875 - Rmse Blame::a Un concetto astratto o un'emozione: 0.7875 - Mae: 0.6165 - Mae Blame::a Un concetto astratto o un'emozione: 0.6165 - R2: 0.2291 - R2 Blame::a Un concetto astratto o un'emozione: 0.2291 - Cos: 0.1304 - Pair: 0.0 - Rank: 0.5 - Neighbors: 0.3509 - Rsa: nan
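The RMSE and MAE reported above are standard regression metrics; minimal reference implementations for comparison:

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# toy check: a constant offset of 0.5 gives RMSE == MAE == 0.5
print(rmse([1.0, 2.0], [1.5, 2.5]), mae([1.0, 2.0], [1.5, 2.5]))  # 0.5 0.5
```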
d617042578918334a1f50252f602b8f8
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Blame::a Un concetto astratto o un'emozione | Mae | Mae Blame::a Un concetto astratto o un'emozione | R2 | R2 Blame::a Un concetto astratto o un'emozione | Cos | Pair | Rank | Neighbors | Rsa | |:-------------:|:-----:|:----:|:---------------:|:------:|:------------------------------------------------:|:------:|:-----------------------------------------------:|:------:|:----------------------------------------------:|:-------:|:----:|:----:|:---------:|:---:| | 1.0549 | 1.0 | 15 | 1.2093 | 0.8925 | 0.8925 | 0.6659 | 0.6659 | 0.0097 | 0.0097 | -0.3043 | 0.0 | 0.5 | 0.4013 | nan | | 1.0085 | 2.0 | 30 | 1.2199 | 0.8964 | 0.8964 | 0.6494 | 0.6494 | 0.0010 | 0.0010 | -0.1304 | 0.0 | 0.5 | 0.4515 | nan | | 1.0131 | 3.0 | 45 | 1.1798 | 0.8815 | 0.8815 | 0.6412 | 0.6412 | 0.0339 | 0.0339 | -0.2174 | 0.0 | 0.5 | 0.2402 | nan | | 0.9931 | 4.0 | 60 | 1.1726 | 0.8788 | 0.8788 | 0.6370 | 0.6370 | 0.0397 | 0.0397 | -0.1304 | 0.0 | 0.5 | 0.2911 | nan | | 0.9668 | 5.0 | 75 | 1.1194 | 0.8587 | 0.8587 | 0.5925 | 0.5925 | 0.0833 | 0.0833 | 0.2174 | 0.0 | 0.5 | 0.3303 | nan | | 0.8759 | 6.0 | 90 | 1.0776 | 0.8425 | 0.8425 | 0.6265 | 0.6265 | 0.1175 | 0.1175 | 0.3043 | 0.0 | 0.5 | 0.4190 | nan | | 0.8787 | 7.0 | 105 | 1.0513 | 0.8321 | 0.8321 | 0.6087 | 0.6087 | 0.1391 | 0.1391 | 0.2174 | 0.0 | 0.5 | 0.3303 | nan | | 0.7637 | 8.0 | 120 | 1.0537 | 0.8331 | 0.8331 | 0.6265 | 0.6265 | 0.1372 | 0.1372 | 0.2174 | 0.0 | 0.5 | 0.3303 | nan | | 0.6568 | 9.0 | 135 | 0.9104 | 0.7744 | 0.7744 | 0.5887 | 0.5887 | 0.2544 | 0.2544 | 0.3043 | 0.0 | 0.5 | 0.3680 | nan | | 0.6354 | 10.0 | 150 | 0.9055 | 0.7723 | 0.7723 | 0.6222 | 0.6222 | 0.2585 | 0.2585 | 0.1304 | 0.0 | 0.5 | 0.3987 | nan | | 0.5107 | 11.0 | 165 | 1.0173 | 0.8186 | 0.8186 | 0.6168 | 0.6168 | 0.1669 | 0.1669 | 0.2174 | 0.0 | 0.5 | 0.3303 | nan | | 0.4598 | 12.0 | 180 | 0.9155 | 0.7765 | 0.7765 | 0.6284 | 0.6284 | 0.2503 | 0.2503 | 0.1304 | 0.0 | 0.5 | 
0.3987 | nan | | 0.3815 | 13.0 | 195 | 0.9255 | 0.7808 | 0.7808 | 0.6140 | 0.6140 | 0.2421 | 0.2421 | 0.1304 | 0.0 | 0.5 | 0.3987 | nan | | 0.3303 | 14.0 | 210 | 0.8506 | 0.7485 | 0.7485 | 0.6076 | 0.6076 | 0.3035 | 0.3035 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan | | 0.2799 | 15.0 | 225 | 1.0272 | 0.8226 | 0.8226 | 0.6699 | 0.6699 | 0.1588 | 0.1588 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan | | 0.2998 | 16.0 | 240 | 0.9969 | 0.8103 | 0.8103 | 0.6461 | 0.6461 | 0.1836 | 0.1836 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan | | 0.3131 | 17.0 | 255 | 0.9066 | 0.7727 | 0.7727 | 0.5849 | 0.5849 | 0.2576 | 0.2576 | 0.2174 | 0.0 | 0.5 | 0.3303 | nan | | 0.2234 | 18.0 | 270 | 0.8741 | 0.7588 | 0.7588 | 0.5953 | 0.5953 | 0.2842 | 0.2842 | 0.2174 | 0.0 | 0.5 | 0.3303 | nan | | 0.2481 | 19.0 | 285 | 1.0022 | 0.8125 | 0.8125 | 0.6549 | 0.6549 | 0.1793 | 0.1793 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan | | 0.2333 | 20.0 | 300 | 0.9238 | 0.7801 | 0.7801 | 0.6180 | 0.6180 | 0.2435 | 0.2435 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan | | 0.2407 | 21.0 | 315 | 0.9868 | 0.8062 | 0.8062 | 0.6457 | 0.6457 | 0.1919 | 0.1919 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan | | 0.2122 | 22.0 | 330 | 0.9514 | 0.7916 | 0.7916 | 0.6204 | 0.6204 | 0.2209 | 0.2209 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan | | 0.2162 | 23.0 | 345 | 0.9227 | 0.7796 | 0.7796 | 0.6053 | 0.6053 | 0.2444 | 0.2444 | 0.1304 | 0.0 | 0.5 | 0.3509 | nan | | 0.1739 | 24.0 | 360 | 0.9147 | 0.7762 | 0.7762 | 0.5979 | 0.5979 | 0.2510 | 0.2510 | 0.1304 | 0.0 | 0.5 | 0.3509 | nan | | 0.2084 | 25.0 | 375 | 0.9645 | 0.7970 | 0.7970 | 0.6296 | 0.6296 | 0.2102 | 0.2102 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan | | 0.1702 | 26.0 | 390 | 0.9587 | 0.7946 | 0.7946 | 0.6279 | 0.6279 | 0.2149 | 0.2149 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan | | 0.2146 | 27.0 | 405 | 0.9519 | 0.7918 | 0.7918 | 0.6273 | 0.6273 | 0.2205 | 0.2205 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan | | 0.1645 | 28.0 | 420 | 0.9398 | 0.7868 | 0.7868 | 0.6181 | 0.6181 | 0.2304 | 0.2304 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan | | 0.2052 | 29.0 
| 435 | 0.9492 | 0.7907 | 0.7907 | 0.6228 | 0.6228 | 0.2227 | 0.2227 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan | | 0.147 | 30.0 | 450 | 0.9414 | 0.7875 | 0.7875 | 0.6165 | 0.6165 | 0.2291 | 0.2291 | 0.1304 | 0.0 | 0.5 | 0.3509 | nan |
644b059828d6f5e821ede6692cab6daa
mit
[]
false
center-table on Stable Diffusion This is the `<wakefit-center-table>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<wakefit-center-table> 0](https://huggingface.co/sd-concepts-library/center-table/resolve/main/concept_images/0.jpeg) ![<wakefit-center-table> 1](https://huggingface.co/sd-concepts-library/center-table/resolve/main/concept_images/3.jpeg) ![<wakefit-center-table> 2](https://huggingface.co/sd-concepts-library/center-table/resolve/main/concept_images/4.jpeg) ![<wakefit-center-table> 3](https://huggingface.co/sd-concepts-library/center-table/resolve/main/concept_images/2.jpeg) ![<wakefit-center-table> 4](https://huggingface.co/sd-concepts-library/center-table/resolve/main/concept_images/5.jpeg) ![<wakefit-center-table> 5](https://huggingface.co/sd-concepts-library/center-table/resolve/main/concept_images/1.jpeg)
bfe657ce36b75e7a900d65d2b3154743
mit
['generated_from_trainer']
false
deberta-v3-large__sst2__train-8-6 This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4331 - Accuracy: 0.7106
e24ff4a8ca900138304026cd35790870
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6486 | 1.0 | 3 | 0.7901 | 0.25 | | 0.6418 | 2.0 | 6 | 0.9259 | 0.25 | | 0.6169 | 3.0 | 9 | 1.0574 | 0.25 | | 0.5639 | 4.0 | 12 | 1.1372 | 0.25 | | 0.4562 | 5.0 | 15 | 0.6090 | 0.5 | | 0.3105 | 6.0 | 18 | 0.4435 | 1.0 | | 0.2303 | 7.0 | 21 | 0.2804 | 1.0 | | 0.1388 | 8.0 | 24 | 0.2205 | 1.0 | | 0.0918 | 9.0 | 27 | 0.1282 | 1.0 | | 0.0447 | 10.0 | 30 | 0.0643 | 1.0 | | 0.0297 | 11.0 | 33 | 0.0361 | 1.0 | | 0.0159 | 12.0 | 36 | 0.0211 | 1.0 | | 0.0102 | 13.0 | 39 | 0.0155 | 1.0 | | 0.0061 | 14.0 | 42 | 0.0158 | 1.0 | | 0.0049 | 15.0 | 45 | 0.0189 | 1.0 | | 0.0035 | 16.0 | 48 | 0.0254 | 1.0 | | 0.0027 | 17.0 | 51 | 0.0305 | 1.0 | | 0.0021 | 18.0 | 54 | 0.0287 | 1.0 | | 0.0016 | 19.0 | 57 | 0.0215 | 1.0 | | 0.0016 | 20.0 | 60 | 0.0163 | 1.0 | | 0.0014 | 21.0 | 63 | 0.0138 | 1.0 | | 0.0015 | 22.0 | 66 | 0.0131 | 1.0 | | 0.001 | 23.0 | 69 | 0.0132 | 1.0 | | 0.0014 | 24.0 | 72 | 0.0126 | 1.0 | | 0.0011 | 25.0 | 75 | 0.0125 | 1.0 | | 0.001 | 26.0 | 78 | 0.0119 | 1.0 | | 0.0008 | 27.0 | 81 | 0.0110 | 1.0 | | 0.0007 | 28.0 | 84 | 0.0106 | 1.0 | | 0.0008 | 29.0 | 87 | 0.0095 | 1.0 | | 0.0009 | 30.0 | 90 | 0.0089 | 1.0 | | 0.0008 | 31.0 | 93 | 0.0083 | 1.0 | | 0.0007 | 32.0 | 96 | 0.0075 | 1.0 | | 0.0008 | 33.0 | 99 | 0.0066 | 1.0 | | 0.0006 | 34.0 | 102 | 0.0059 | 1.0 | | 0.0007 | 35.0 | 105 | 0.0054 | 1.0 | | 0.0008 | 36.0 | 108 | 0.0051 | 1.0 | | 0.0007 | 37.0 | 111 | 0.0049 | 1.0 | | 0.0007 | 38.0 | 114 | 0.0047 | 1.0 | | 0.0006 | 39.0 | 117 | 0.0045 | 1.0 | | 0.0006 | 40.0 | 120 | 0.0046 | 1.0 | | 0.0005 | 41.0 | 123 | 0.0045 | 1.0 | | 0.0006 | 42.0 | 126 | 0.0044 | 1.0 | | 0.0006 | 43.0 | 129 | 0.0043 | 1.0 | | 0.0006 | 44.0 | 132 | 0.0044 | 1.0 | | 0.0005 | 45.0 | 135 | 0.0045 | 1.0 | | 0.0006 | 46.0 | 138 | 0.0043 | 1.0 | | 0.0006 | 47.0 | 141 | 0.0043 | 1.0 | | 0.0006 | 48.0 | 144 | 0.0041 | 1.0 | | 0.0007 | 49.0 | 147 | 0.0042 | 1.0 | | 0.0005 | 50.0 | 150 | 0.0042 | 1.0 |
61f18a93231202c62ced35f6a60087c7
apache-2.0
['generated_from_trainer']
false
bart-paraphrase-v4-e1-rev This model is a fine-tuned version of [eugenesiow/bart-paraphrase](https://huggingface.co/eugenesiow/bart-paraphrase) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4221 - Rouge1: 62.2412 - Rouge2: 56.1611 - Rougel: 59.4952 - Rougelsum: 61.581 - Gen Len: 19.6036
8b885b6e6551ea966bfdf1ef1991a2e1
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 0.0975 | 1.0 | 14185 | 0.4221 | 62.2412 | 56.1611 | 59.4952 | 61.581 | 19.6036 |
3462664ad13d3d1ab9e2ac4fb621b7dd
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'animal']
false
DreamBooth model for the zzelda concept trained by Sanderbaduk on a dataset of cats. This is a Stable Diffusion model fine-tuned on pictures of my mum's cat "Zelda" with DreamBooth. It can be used by including the phrase 'zzelda cat' in a prompt. This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part! <table> <tr> <td>One of the images used to fine-tune on<br>"a photo of zzelda cat on a chair"</td> <td>One of the images generated by the model<br>"a photo of zzelda cat in space"</td> </tr> <tr> <td> <img src="http://i.imgur.com/zFOzQtf.jpg" style="max-height:400px"> </td> <td> <img src="http://i.imgur.com/12Nilhg.png" style="max-height:400px"> </td> </tr> </table>
690e91956a84b13456744578d292f03c
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'animal']
false
Description This is a Stable Diffusion model fine-tuned on images of my mum's cat Zelda for the animal theme. To experiment a bit, I used a custom prompt for each image based on the file name. This works, but does not seem to have made much of a difference. The model was trained on CPU after encountering issues with CUDA, taking around 2 hours on 32 cores. It works a lot better locally than in the widget, where it tends to take a few more tries to get the right cat.
ff1bf8d59d0cbdb4a35b9132f7db2403
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Large Pashto This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the google/fleurs ps_af dataset. It achieves the following results on the evaluation set: - Loss: 0.8623 - Wer: 54.0685
865f7fefc827b8f9efb26f15ed16c7fe
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-07 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - training_steps: 700 - mixed_precision_training: Native AMP
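The `total_train_batch_size` above is not an independent setting: it is the per-step batch size multiplied by the number of gradient accumulation steps (one optimizer update per 8 forward passes here). A minimal sketch of that arithmetic, using the numbers from the list above:

```python
# Effective batch size under gradient accumulation:
# per-device batch size x accumulation steps (x number of devices, here 1).
train_batch_size = 8          # per-step batch size from the card
gradient_accumulation_steps = 8

total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 64, matching the card's total_train_batch_size
```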
180ef8d00ca4e4ba2972e9af941432be
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:-------:| | 1.2281 | 16.59 | 100 | 1.0951 | 69.3118 | | 0.7529 | 33.3 | 200 | 0.8693 | 57.5635 | | 0.5372 | 49.89 | 300 | 0.8399 | 54.7350 | | 0.4398 | 66.59 | 400 | 0.8623 | 54.0685 | | 0.3244 | 83.3 | 500 | 0.9098 | 54.7505 | | 0.238 | 99.89 | 600 | 0.9607 | 55.3782 | | 0.2014 | 116.59 | 700 | 1.0077 | 55.9206 |
c96b5059577c379754c745af29e4156b
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 - mixed_precision_training: Native AMP
30a867f8002d6fad5484a744a820549f
apache-2.0
['translation']
false
opus-mt-st-es * source languages: st * target languages: es * OPUS readme: [st-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/st-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/st-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/st-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/st-es/opus-2020-01-16.eval.txt)
55225b29fb298308b167a824487678fb
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1368 - F1: 0.8517
81e3be5eaf0752e24f8403ac858108b4
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2468 | 1.0 | 787 | 0.1583 | 0.8312 | | 0.1187 | 2.0 | 1574 | 0.1368 | 0.8517 |
8503a9de24ee09efa8e1457c331bad8e
mit
[]
false
Misaki DialoGPT Model I tried to base it off of "Misaki Ayuzawa" from the anime "Kaichou Wa Maid-Sama! (会長はメイド様!, lit. The Student Council President is a Maid)". This was mostly done just for fun, but is open for any pull requests to make it better. :> There are currently a couple of issues with the model, like how it just blurts out '!!!!!!'. I haven't had much time to ponder what makes it happen. (Do let me know if there's something I can change.) This uses Microsoft's infamous DialoGPT-medium model and is trained with transcripts of the anime episodes.
b6246dfa782b761a1e300b3caae9a7fd
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'wildcard']
false
DreamBooth model for the huihui concept trained by CharyWind. This is a Stable Diffusion model fine-tuned on the huihui concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of huihui cat** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
ef20c656c36998247b1e8d0884492513
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'wildcard']
false
Description This is a Stable Diffusion model fine-tuned on `cat` images for the wildcard theme, for the Hugging Face DreamBooth Hackathon, from the HF CN Community in cooperation with HeyWhale.
3e35042edd318f9ac97e4908c8b8016f
apache-2.0
['deep-narrow']
false
T5-Efficient-SMALL-KV256 (Deep-Narrow version) T5-Efficient-SMALL-KV256 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper: > We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased > before considering any other forms of uniform scaling across other dimensions. This is largely due to > how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a > tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, > a tall base model might also generally more efficient compared to a large model. We generally find > that, regardless of size, even if absolute performance might increase as we continue to stack layers, > the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 > layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., > params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, > FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to > consider. To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. A sequence of word embeddings is therefore processed sequentially by each transformer block.
97cd0ca40c79425b03e49ddbe50c33a4
apache-2.0
['deep-narrow']
false
Details model architecture This model checkpoint - **t5-efficient-small-kv256** - is of model type **Small** with the following variations: - **kv** is **256** It has **117.14** million parameters and thus requires *ca.* **468.58 MB** of memory in full precision (*fp32*) or **234.29 MB** of memory in half precision (*fp16* or *bf16*). A summary of the *original* T5 model architectures can be seen here: | Model | nl (el/dl) | ff | dm | kv | nh |
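The memory figures above follow directly from the parameter count at the standard 4 bytes per parameter in fp32 and 2 bytes in fp16/bf16. A quick back-of-the-envelope check (the parameter count is taken from the card; the bytes-per-parameter values are the usual ones):

```python
# Approximate checkpoint memory from parameter count.
params = 117.14e6  # parameters of t5-efficient-small-kv256, per the card

fp32_mb = params * 4 / 1e6  # 4 bytes per parameter in full precision
fp16_mb = params * 2 / 1e6  # 2 bytes per parameter in half precision

print(f"fp32: {fp32_mb:.2f} MB, fp16: {fp16_mb:.2f} MB")
# close to the card's 468.58 MB / 234.29 MB (the card uses the exact count)
```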
a36282fa61e9751f9a50969e8beafd5a
apache-2.0
['automatic-speech-recognition', 'es']
false
exp_w2v2r_es_vp-100k_age_teens-5_sixties-5_s625 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
a40838b5b3d838806789669e19eaeb7a
mit
['generated_from_trainer']
false
farsi_lastname_classifier_1 This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0482 - Pearson: 0.9232
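The card reports a Pearson correlation of 0.9232 on the evaluation set. To illustrate the metric itself (this is the standard textbook formula, not the evaluation code used for this model), a minimal pure-Python implementation:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linearly related data has correlation 1.0.
print(pearson([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0
```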
abc7d5884901d619c115c9c25eab5f06
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | |:-------------:|:-----:|:----:|:---------------:|:-------:| | No log | 1.0 | 12 | 0.2705 | 0.7018 | | No log | 2.0 | 24 | 0.0993 | 0.7986 | | No log | 3.0 | 36 | 0.0804 | 0.8347 | | No log | 4.0 | 48 | 0.0433 | 0.9246 | | No log | 5.0 | 60 | 0.0559 | 0.9176 | | No log | 6.0 | 72 | 0.0465 | 0.9334 | | No log | 7.0 | 84 | 0.0503 | 0.9154 | | No log | 8.0 | 96 | 0.0438 | 0.9222 | | No log | 9.0 | 108 | 0.0468 | 0.9260 | | No log | 10.0 | 120 | 0.0482 | 0.9232 |
fd3553439d5e3defeb1de8589d647206
apache-2.0
['automatic-speech-recognition', 'de']
false
exp_w2v2r_de_xls-r_gender_male-2_female-8_s755 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
2ff225b775af57b43c00b21e02e40b14
apache-2.0
['pytorch', 'causal-lm']
false
Model Description Genji is a transformer model finetuned from EleutherAI's GPT-J 6B model. This particular model is trained on Python-only code approaching 4 GB in size. The split model has its checkpoints split into parts, which reduces system RAM usage while loading and makes loading faster. This model needs more effort to set up, as you need to install git-lfs and pull the repo. | Hyperparameter | Value | |-------------------|--------| | n_parameters | 6,053,381,344 | | n_layers | 28* | | d_model | 4,096 | | d_ff | 16,384 | | n_heads | 16 | | d_head | 256 | | n_ctx | 2,048 | | n_vocab | 50,400 (same tokenizer as GPT-2/3) | | position encoding | [Rotary position encodings (RoPE)](https://arxiv.org/abs/2104.09864) | | RoPE dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py
b8706f70a1669523c5ca4d20683bf4b8
apache-2.0
['pytorch', 'causal-lm']
false
L223) | `*` each layer consists of one feedforward block and one self-attention block The model consists of 28 layers with a model dimension of 4096 and a feedforward dimension of 16384. The model dimension is split into 16 heads, each with a dimension of 256. Rotary position encodings (RoPE) were applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as GPT-2/GPT-3.
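The head arithmetic above can be checked directly: the model dimension equals the number of heads times the per-head dimension, and RoPE touches only a subset of each head's dimensions. A small sketch using the numbers from the table:

```python
# Attention-head arithmetic for the Genji / GPT-J 6B configuration above.
d_model = 4096
n_heads = 16

d_head = d_model // n_heads
print(d_head)  # 256, matching the table's d_head

rope_dims = 64  # RoPE is applied to only 64 of each head's 256 dimensions
assert rope_dims <= d_head
```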
659534ca07238472f13c3fb4733e1d78
apache-2.0
['pytorch', 'causal-lm']
false
Training data GPT-J 6B was pretrained on the [Pile](https://pile.eleuther.ai), a large-scale curated dataset created by EleutherAI for the purpose of training this model. After pre-training, it was finetuned on the Python code taken from the Pile.
eb2666d0897f5f21328618166a4b2fa3
apache-2.0
['pytorch', 'causal-lm']
false
How to use This model is only usable with our fork because GPT-J is not merged to the main transformers repo yet. When it's merged, we will make this model easily loadable. For now, you need to use this fork: [Fork](https://github.com/finetuneanon/transformers) To install it with pip: ```bash pip install git+https://github.com/finetuneanon/transformers@gpt-neo-localattention3-rp-b ``` **git-lfs** also needs to be installed; on Ubuntu: ```bash apt install git-lfs ``` After it's installed, initialize git-lfs: ```bash git lfs install ``` Then clone this repo: ```bash git clone https://huggingface.co/NovelAI/genji-python-6B-split ``` Now we can load the model. We recommend using the model in FP16. That way, it fits in 16 GB VRAM cards. How to use: ```python from transformers import ( AutoTokenizer, AutoModelForCausalLM, GPTNeoForCausalLM, ) model = AutoModelForCausalLM.from_pretrained("genji-python-6B-split/model").half().eval().cuda() tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B") text = '''def print_customer_name''' tokens = tokenizer(text, return_tensors="pt").input_ids generated_tokens = model.generate(tokens.long().cuda(), use_cache=True, do_sample=True, top_k=50, temperature=0.3, top_p=0.9, repetition_penalty=1.125, min_length=1, max_length=len(tokens[0]) + 400, pad_token_id=tokenizer.eos_token_id) last_tokens = generated_tokens[0][len(tokens[0]):] generated_text = tokenizer.decode(last_tokens) print("Generation:\n" + generated_text) ``` When run, this code generates: ```python Prompt: def print_customer_name Generation: (self, customer): """Print the name of a customer.""" if not self.is_valid(): return print("Customer: {}".format(customer)) ``` For example usage, you can see our colab notebook as well: [Notebook](https://colab.research.google.com/drive/1PnWpx02IEUkY8jhLKd_NewUGEXahAska?usp=sharing)
3fcb1f75d4836b5d76e159155703e05d
apache-2.0
['pytorch', 'causal-lm']
false
Acknowledgements This project was possible because of the compute provided by the [TPU Research Cloud](https://sites.research.google/trc/) and [EleutherAI](https://eleuther.ai/) for pretraining of the GPT-J 6B. Thanks to everyone who contributed to this project: - [Aero](https://github.com/AeroScripts) - [Finetune](https://github.com/finetuneanon) - [Kurumuz](https://github.com/kurumuz)
fc8e7d970302043773d6390b988d5cbb
apache-2.0
['generated_from_trainer']
false
tiny-mlm-snli-target-glue-wnli This model is a fine-tuned version of [muhtasham/tiny-mlm-snli](https://huggingface.co/muhtasham/tiny-mlm-snli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.1223 - Accuracy: 0.0704
ac062ff3927f5d1041ba31daf5afdc59
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.689 | 25.0 | 500 | 0.7743 | 0.2394 | | 0.6581 | 50.0 | 1000 | 1.1395 | 0.1127 | | 0.6078 | 75.0 | 1500 | 1.6260 | 0.0704 | | 0.5462 | 100.0 | 2000 | 2.1223 | 0.0704 |
e27ef6ec5afd9a6605ee78dff0efd5b6
cc-by-sa-4.0
['vietnamese', 'masked-lm', 'wikipedia']
false
Model Description This is a RoBERTa model pre-trained on Vietnamese Wikipedia texts. Training took 20 hours and 11 minutes on an NVIDIA A100-SXM4-40GB. You can fine-tune `roberta-base-vietnamese` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-vietnamese-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-base-vietnamese-ud-goeswith), and so on.
2792b0e3774cac362bd177e1d5f3e948
cc-by-sa-4.0
['vietnamese', 'masked-lm', 'wikipedia']
false
How to Use ```py from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-vietnamese") model = AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-vietnamese") ```
fa3fcfc1eb36b7d648571392edef2581
mit
[]
false
sintez-ico on Stable Diffusion This is the `<sintez-ico>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<sintez-ico> 0](https://huggingface.co/sd-concepts-library/sintez-ico/resolve/main/concept_images/7.jpeg) ![<sintez-ico> 1](https://huggingface.co/sd-concepts-library/sintez-ico/resolve/main/concept_images/2.jpeg) ![<sintez-ico> 2](https://huggingface.co/sd-concepts-library/sintez-ico/resolve/main/concept_images/6.jpeg) ![<sintez-ico> 3](https://huggingface.co/sd-concepts-library/sintez-ico/resolve/main/concept_images/4.jpeg) ![<sintez-ico> 4](https://huggingface.co/sd-concepts-library/sintez-ico/resolve/main/concept_images/5.jpeg) ![<sintez-ico> 5](https://huggingface.co/sd-concepts-library/sintez-ico/resolve/main/concept_images/0.jpeg) ![<sintez-ico> 6](https://huggingface.co/sd-concepts-library/sintez-ico/resolve/main/concept_images/1.jpeg) ![<sintez-ico> 7](https://huggingface.co/sd-concepts-library/sintez-ico/resolve/main/concept_images/3.jpeg)
8533a7bc5b648bc8af324f6dff29d0a6
apache-2.0
[]
false
bert-base-en-fr-ar-cased We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages. Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy. For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
b5dbb72a1c5bcda2d0eb012cb2c29e89
apache-2.0
[]
false
How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-ar-cased") model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-ar-cased") ``` To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
ab7d39ac7d99e5da0d327ff9fd603fac
apache-2.0
['translation']
false
opus-mt-sv-ln * source languages: sv * target languages: ln * OPUS readme: [sv-ln](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-ln/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-ln/opus-2020-01-21.zip) * test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ln/opus-2020-01-21.test.txt) * test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ln/opus-2020-01-21.eval.txt)
c85065c50fd06c5197bbb9c39e7b38d0
apache-2.0
['generated_from_trainer']
false
beit-sketch-classifier-pt-metaset-2 This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6703 - Accuracy: 0.8282
91d5a0ca62783d8136d6d6e69bbf1bd5
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5
7e370cbc0b99e85b26310b972f94bb0d
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:--------:| | 0.8028 | 1.0 | 76608 | 0.7586 | 0.8007 | | 0.7168 | 2.0 | 153216 | 0.6983 | 0.8154 | | 0.6357 | 3.0 | 229824 | 0.6676 | 0.8240 | | 0.5707 | 4.0 | 306432 | 0.6606 | 0.8276 | | 0.4254 | 5.0 | 383040 | 0.6703 | 0.8282 |
27a7c2ecee10ffa272f94e60b15571e2
apache-2.0
['automatic-speech-recognition', 'en']
false
exp_w2v2r_en_xls-r_gender_male-10_female-0_s682 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
9ce23a66891fcaf28208231cdf3d3652
cc0-1.0
['stable-diffusion', 'text-to-image']
false
Usage Use the model by adding the keyword "jannismayr" to the prompt. The model was trained with different classnames, which can also be added to the prompt. These classnames are the second words of the filenames.
9e23bd68ae07f9dfbc745e757827ff74
cc0-1.0
['stable-diffusion', 'text-to-image']
false
Samples For this model I experimented and made several versions. I won't bore you with details, but there were variations in learning rates and classifications. Just look at the samples and pick the one that suits you best. The full images can be found in the Files and versions tab, as they are quite large. <img src="https://huggingface.co/Froddan/jannismayr/resolve/main/xy_grid-0000-1454625692-.jpg"/> <img src="https://huggingface.co/Froddan/jannismayr/resolve/main/xy_grid-0001-3762916514-.jpg"/> <img src="https://huggingface.co/Froddan/jannismayr/resolve/main/xy_grid-0002-590770723-.jpg"/>
63b3fc5cf0fb22e2122319929ce70797
cc-by-4.0
['questions and answers generation']
false
Model Card of `lmqg/bart-base-tweetqa-qag` This model is fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) for question & answer pair generation task on the [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
95491ec33e53e2f8a3c674336d05f002
cc-by-4.0
['questions and answers generation']
false
Overview - **Language model:** [facebook/bart-base](https://huggingface.co/facebook/bart-base) - **Language:** en - **Training data:** [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
453310deba17411823dafccef49a121f
cc-by-4.0
['questions and answers generation']
false
model prediction question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/bart-base-tweetqa-qag") output = pipe("Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ```
2516c466f5836d81a9eac661d0112490
cc-by-4.0
['questions and answers generation']
false
Evaluation - ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/bart-base-tweetqa-qag/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_tweetqa.default.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:---------------------------------------------------------------------| | BERTScore | 91.19 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) | | Bleu_1 | 39.8 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) | | Bleu_2 | 27.7 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) | | Bleu_3 | 19.05 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) | | Bleu_4 | 13.27 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) | | METEOR | 25.66 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) | | MoverScore | 61.59 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) | | QAAlignedF1Score (BERTScore) | 91.5 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) | | QAAlignedF1Score (MoverScore) | 63.78 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) | | QAAlignedPrecision (BERTScore) | 91.9 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) | | QAAlignedPrecision (MoverScore) | 64.77 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) | | QAAlignedRecall (BERTScore) | 91.11 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) | | QAAlignedRecall (MoverScore) | 62.89 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) | | ROUGE_L | 33.39 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
f6ec03f2711f24bd9fc53cfeccd186ce
cc-by-4.0
['questions and answers generation']
false
Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qag_tweetqa - dataset_name: default - input_types: ['paragraph'] - output_types: ['questions_answers'] - prefix_types: None - model: facebook/bart-base - max_length: 256 - max_length_output: 128 - epoch: 15 - batch: 32 - lr: 5e-05 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 4 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/bart-base-tweetqa-qag/raw/main/trainer_config.json).
9b3c478a99b8f4e44597d32e34a65840
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Whisper Small PT with Common Voice 11 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.3487 - Wer: 14.3802
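The WER figure above is word-level edit distance: the minimum number of substitutions, deletions, and insertions needed to turn the hypothesis into the reference, divided by the reference length, times 100 when reported as a percentage. A minimal sketch of the computation (illustrative only, not the exact scorer used for this card; the Portuguese example sentence is made up):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

print(wer("o gato dorme no sofa", "o gato dorme na sofa"))  # 0.2 (1 error / 5 words)
```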
02139b9a3a592a23425c18cf8b715310
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - training_steps: 10000
b4695dd4e4ce5c2091b0a25437473976
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:-------:| | 0.1202 | 0.88 | 1000 | 0.2225 | 15.5847 | | 0.1024 | 1.76 | 2000 | 0.2160 | 15.0651 | | 0.0832 | 2.64 | 3000 | 0.2259 | 15.0923 | | 0.0081 | 3.51 | 4000 | 0.2519 | 14.7345 | | 0.0387 | 4.39 | 5000 | 0.2718 | 14.7311 | | 0.0039 | 5.27 | 6000 | 0.3031 | 14.5914 | | 0.001 | 6.15 | 7000 | 0.3238 | 14.5710 | | 0.0007 | 7.03 | 8000 | 0.3285 | 14.5113 | | 0.0009 | 7.91 | 9000 | 0.3467 | 14.3580 | | 0.0008 | 8.79 | 10000 | 0.3487 | 14.3802 |
4e119403b3f1b3ba2548322fb0fdba8c
apache-2.0
['generated_from_trainer']
false
distilbert_sa_GLUE_Experiment_qqp_256 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QQP dataset. It achieves the following results on the evaluation set: - Loss: 0.4425 - Accuracy: 0.8030 - F1: 0.7323 - Combined Score: 0.7677
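The combined score here appears to be the simple mean of accuracy and F1 (an assumption based on the reported numbers, which match that formula to four decimals):

```python
# Checking the combined-score arithmetic against the card's figures.
accuracy = 0.8030
f1 = 0.7323

combined = (accuracy + f1) / 2
print(combined)  # ~0.7677, matching the card's combined score
```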
0efbc3c25ba755b0cbd85aeb75007869
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1     | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.53          | 1.0   | 1422  | 0.5023          | 0.7557   | 0.6592 | 0.7075         |
| 0.479         | 2.0   | 2844  | 0.4823          | 0.7679   | 0.6483 | 0.7081         |
| 0.4522        | 3.0   | 4266  | 0.4788          | 0.7741   | 0.6474 | 0.7108         |
| 0.4263        | 4.0   | 5688  | 0.4753          | 0.7829   | 0.6911 | 0.7370         |
| 0.4009        | 5.0   | 7110  | 0.4536          | 0.7906   | 0.7194 | 0.7550         |
| 0.3772        | 6.0   | 8532  | 0.4497          | 0.7949   | 0.7200 | 0.7574         |
| 0.3548        | 7.0   | 9954  | 0.4453          | 0.8010   | 0.7201 | 0.7606         |
| 0.3332        | 8.0   | 11376 | 0.4425          | 0.8030   | 0.7323 | 0.7677         |
| 0.3132        | 9.0   | 12798 | 0.4654          | 0.7938   | 0.7375 | 0.7657         |
| 0.2951        | 10.0  | 14220 | 0.4551          | 0.8056   | 0.7423 | 0.7739         |
| 0.2777        | 11.0  | 15642 | 0.4675          | 0.8120   | 0.7374 | 0.7747         |
| 0.2625        | 12.0  | 17064 | 0.4946          | 0.8082   | 0.7451 | 0.7766         |
| 0.2473        | 13.0  | 18486 | 0.5041          | 0.8102   | 0.7469 | 0.7786         |
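The Combined Score reported for QQP agrees (to rounding) with the arithmetic mean of accuracy and F1 in every row of the results; a one-line check of that observation (an inference from the numbers, not an official GLUE definition):

```python
def combined_score(accuracy, f1):
    # assumed definition: simple mean of the two reported metrics
    return (accuracy + f1) / 2

# e.g. the best-checkpoint row (epoch 8): accuracy 0.8030, F1 0.7323
print(combined_score(0.8030, 0.7323))  # ~0.7677, matching the reported value
```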
26c9d6354074f5c945d56fa483fedce0
cc-by-4.0
[]
false
Modern French normalisation model Normalisation model from Modern (17th c.) French to contemporary French. It was introduced in [this paper](https://hal.inria.fr/hal-03540226/) (see citation below). The main research repository can be found [here](https://github.com/rbawden/ModFr-Norm). If you use this model, please cite our research paper (see below).
3d728d06e30b558d8d9befc80e74594e
cc-by-4.0
[]
false
Model description The normalisation model is trained on the [FreEM_norm corpus](https://freem-corpora.github.io/corpora/norm/), a parallel dataset of 17th-century French texts and their manually normalised versions following contemporary French spelling. The model is a transformer with 2 encoder layers, 4 decoder layers, an embedding dimension of 256 and a feedforward dimension of 1024. The associated tokeniser is trained with SentencePiece using the BPE strategy, with a vocabulary of 1000 tokens.
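To give an intuition for the BPE strategy mentioned above, here is a toy version of a single merge step over a symbol-level vocabulary (illustrative stdlib code with hypothetical word counts; SentencePiece's real training procedure differs in detail):

```python
from collections import Counter

def most_frequent_pair(vocab):
    """vocab maps space-separated symbol sequences to corpus frequencies."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for left, right in zip(symbols, symbols[1:]):
            pairs[(left, right)] += freq
    return pairs.most_common(1)[0][0]

def apply_merge(pair, vocab):
    """Merge every occurrence of the pair into a single new symbol.

    Simplified: real BPE implementations match on symbol boundaries
    rather than doing a plain string replace.
    """
    old, new = " ".join(pair), "".join(pair)
    return {word.replace(old, new): freq for word, freq in vocab.items()}

vocab = {"e s t o i t": 4, "e s": 3}   # hypothetical 17th-c. word counts
pair = most_frequent_pair(vocab)       # ('e', 's'), occurring 7 times
vocab = apply_merge(pair, vocab)
print(vocab)                           # {'es t o i t': 4, 'es': 3}
```

Repeating this loop until the vocabulary reaches the target size (1000 tokens here) yields the merge table used by the tokeniser.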
4cdae4a8d147190d2a4484bab0f2e9c5
cc-by-4.0
[]
false
Intended uses & limitations The model is designed to normalise 17th-century French texts. It performs best on texts from genres similar to those represented in the 17th-century training data.
b2ce08fae8787ac6bdc94e449412fd93
cc-by-4.0
[]
false
How to use
The model is to be used with the custom pipeline available in this repository (transformers>=4.21.0):

```python
from transformers import pipeline

normaliser = pipeline(model="rbawden/modern_french_normalisation", batch_size=32, beam_size=5, cache_file="./cache.pickle", trust_remote_code=True)

list_inputs = ["Elle haïſſoit particulierement le Cardinal de Lorraine;", "Adieu, i'iray chez vous tantoſt vous rendre grace."]
list_outputs = normaliser(list_inputs)
print(list_outputs)

>> [{'text': 'Elle haïssait particulièrement le Cardinal de Lorraine; ', 'alignment': [([0, 3], [0, 3]), ([5, 12], [5, 12]), ([14, 29], [14, 29]), ([31, 32], [31, 32]), ([34, 41], [34, 41]), ([43, 44], [43, 44]), ([46, 53], [46, 53]), ([54, 54], [54, 54])]},
   {'text': "Adieu, j'irai chez vous tantôt vous rendre grâce. ", 'alignment': [([0, 4], [0, 4]), ([5, 5], [5, 5]), ([7, 8], [7, 8]), ([9, 12], [9, 12]), ([14, 17], [14, 17]), ([19, 22], [19, 22]), ([24, 30], [24, 29]), ([32, 35], [31, 34]), ([37, 42], [36, 41]), ([44, 48], [43, 47]), ([49, 49], [48, 48])]}]
```

To disable postprocessing (faster but lower-quality normalisation), set the arguments `no_postproc_lex` and `no_post_clean` to True when instantiating the pipeline:

```python
normaliser = pipeline(model="rbawden/modern_french_normalisation", no_postproc_lex=True, no_post_clean=True, batch_size=32, beam_size=5, cache_file="./cache.pickle", trust_remote_code=True)
```
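The `alignment` field pairs inclusive character spans in the input with the corresponding spans in the normalised output, which makes it easy to trace individual words. For example, taking the second sentence and one span pair from the pipeline output shown above:

```python
source = "Adieu, i'iray chez vous tantoſt vous rendre grace."
output = "Adieu, j'irai chez vous tantôt vous rendre grâce. "

# one (source_span, output_span) pair from the alignment above
src_span, out_span = [24, 30], [24, 29]

# spans are inclusive, so slice with end + 1
print(source[src_span[0]:src_span[1] + 1])  # tantoſt
print(output[out_span[0]:out_span[1] + 1])  # tantôt
```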
0c8e7c788060f4d946dbccd4c184ccef
cc-by-4.0
[]
false
Limitations and bias The model was trained in a supervised fashion and, like any such model, is likely to perform well on texts similar to those used for training and less well on other texts. Whilst care was taken to include a range of different domains from different periods of the 17th century in the training data, there are nevertheless imbalances, notably with some decades (e.g. the 1610s) being underrepresented. The model reaches high performance, but could in rare cases make changes to the text beyond spelling conventions (e.g. changing, deleting or hallucinating words). A post-processing step is included in the pipeline file to mitigate these problems: it looks up normalised words in a contemporary French lexicon (the [Le*fff*](http://almanach.inria.fr/software_and_resources/custom/Alexina-en.html)) and checks that they do not stray too far from the original source words.
563196886f3b31ea9260467c509e977d