license
stringlengths
2
30
tags
stringlengths
2
513
is_nc
bool
1 class
readme_section
stringlengths
201
597k
hash
stringlengths
32
32
apache-2.0
['generated_from_trainer']
false
Sentiment140_ELECTRA_5E This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the sentiment140 dataset. It achieves the following results on the evaluation set: - Loss: 0.5410 - Accuracy: 0.84
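A minimal inference sketch (not part of the original card) showing how such a fine-tuned ELECTRA classifier is typically loaded with the `transformers` text-classification pipeline. The repo id below is a placeholder, not the model's actual Hub path; the import is deferred so the sketch reads standalone.

```python
MODEL_ID = "your-username/Sentiment140_ELECTRA_5E"  # hypothetical repo id -- replace with the real Hub path

def classify(texts):
    # Deferred import: requires `pip install transformers torch` to actually run.
    from transformers import pipeline
    classifier = pipeline("text-classification", model=MODEL_ID)
    # Returns a list of {"label": ..., "score": ...} dicts, one per input text.
    return classifier(texts)
```

Calling `classify(["I love this new phone!"])` would download the checkpoint and return predicted sentiment labels with confidence scores.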
da37d9c2e37642386197d6ed6431efff
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6896 | 0.08 | 50 | 0.6605 | 0.7133 | | 0.6664 | 0.16 | 100 | 0.6054 | 0.7133 | | 0.5915 | 0.24 | 150 | 0.4777 | 0.8333 | | 0.5053 | 0.32 | 200 | 0.4735 | 0.7733 | | 0.4946 | 0.4 | 250 | 0.3847 | 0.8267 | | 0.4578 | 0.48 | 300 | 0.4025 | 0.8067 | | 0.4724 | 0.56 | 350 | 0.3642 | 0.8333 | | 0.4309 | 0.64 | 400 | 0.3762 | 0.86 | | 0.4818 | 0.72 | 450 | 0.3829 | 0.84 | | 0.416 | 0.8 | 500 | 0.3599 | 0.8467 | | 0.4201 | 0.88 | 550 | 0.3469 | 0.8533 | | 0.3664 | 0.96 | 600 | 0.3462 | 0.8467 | | 0.4289 | 1.04 | 650 | 0.3470 | 0.86 | | 0.3859 | 1.12 | 700 | 0.3440 | 0.8533 | | 0.3599 | 1.2 | 750 | 0.3475 | 0.8533 | | 0.3287 | 1.28 | 800 | 0.3524 | 0.8467 | | 0.3331 | 1.36 | 850 | 0.3475 | 0.8733 | | 0.3236 | 1.44 | 900 | 0.3657 | 0.8467 | | 0.3502 | 1.52 | 950 | 0.3525 | 0.84 | | 0.3702 | 1.6 | 1000 | 0.3655 | 0.8333 | | 0.3323 | 1.68 | 1050 | 0.3405 | 0.84 | | 0.3452 | 1.76 | 1100 | 0.3376 | 0.8533 | | 0.3742 | 1.84 | 1150 | 0.3481 | 0.8533 | | 0.3145 | 1.92 | 1200 | 0.3472 | 0.86 | | 0.3657 | 2.0 | 1250 | 0.3302 | 0.8733 | | 0.2601 | 2.08 | 1300 | 0.3612 | 0.86 | | 0.2954 | 2.16 | 1350 | 0.3640 | 0.8533 | | 0.2888 | 2.24 | 1400 | 0.3670 | 0.8467 | | 0.2572 | 2.32 | 1450 | 0.4118 | 0.84 | | 0.2955 | 2.4 | 1500 | 0.3811 | 0.86 | | 0.2431 | 2.48 | 1550 | 0.4221 | 0.84 | | 0.318 | 2.56 | 1600 | 0.3844 | 0.8467 | | 0.2615 | 2.64 | 1650 | 0.4109 | 0.8333 | | 0.2389 | 2.72 | 1700 | 0.4420 | 0.8467 | | 0.2983 | 2.8 | 1750 | 0.4203 | 0.8467 | | 0.2828 | 2.88 | 1800 | 0.3629 | 0.8733 | | 0.2897 | 2.96 | 1850 | 0.3916 | 0.8733 | | 0.2239 | 3.04 | 1900 | 0.4143 | 0.86 | | 0.2093 | 3.12 | 1950 | 0.4521 | 0.84 | | 0.2438 | 3.2 | 2000 | 0.4271 | 0.8467 | | 0.2282 | 3.28 | 2050 | 0.4548 | 0.8333 | | 0.1918 | 3.36 | 2100 | 0.4533 | 0.86 | | 0.1698 | 3.44 | 2150 | 0.5177 | 0.84 | | 0.2765 | 3.52 | 2200 | 0.4884 | 0.84 | | 0.2282 | 3.6 | 2250 | 0.4697 | 0.8533 | | 0.239 | 3.68 | 2300 | 0.4766 | 0.8533 | | 0.2219 | 3.76 | 2350 | 0.4628 | 0.8533 | | 0.2375 | 3.84 | 2400 | 0.4704 | 0.8533 | | 0.1883 | 3.92 | 2450 | 0.4744 | 0.84 | | 0.2049 | 4.0 | 2500 | 0.4977 | 0.84 | | 0.1958 | 4.08 | 2550 | 0.4906 | 0.84 | | 0.1656 | 4.16 | 2600 | 0.5219 | 0.8333 | | 0.1543 | 4.24 | 2650 | 0.5379 | 0.8333 | | 0.2082 | 4.32 | 2700 | 0.5107 | 0.84 | | 0.1724 | 4.4 | 2750 | 0.5208 | 0.84 | | 0.1778 | 4.48 | 2800 | 0.5238 | 0.84 | | 0.1914 | 4.56 | 2850 | 0.5325 | 0.84 | | 0.2436 | 4.64 | 2900 | 0.5279 | 0.84 | | 0.1662 | 4.72 | 2950 | 0.5295 | 0.84 | | 0.1288 | 4.8 | 3000 | 0.5392 | 0.84 | | 0.2087 | 4.88 | 3050 | 0.5409 | 0.84 | | 0.1612 | 4.96 | 3100 | 0.5410 | 0.84 |
e8497edbccf895ba19f383b69a5d870b
mit
[]
false
retropixelart pinguin on Stable Diffusion This is the `<retropixelart-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<retropixelart-style> 0](https://huggingface.co/sd-concepts-library/retropixelart-pinguin/resolve/main/concept_images/5.jpeg) ![<retropixelart-style> 1](https://huggingface.co/sd-concepts-library/retropixelart-pinguin/resolve/main/concept_images/6.jpeg) ![<retropixelart-style> 2](https://huggingface.co/sd-concepts-library/retropixelart-pinguin/resolve/main/concept_images/3.jpeg) ![<retropixelart-style> 3](https://huggingface.co/sd-concepts-library/retropixelart-pinguin/resolve/main/concept_images/0.jpeg) ![<retropixelart-style> 4](https://huggingface.co/sd-concepts-library/retropixelart-pinguin/resolve/main/concept_images/2.jpeg) ![<retropixelart-style> 5](https://huggingface.co/sd-concepts-library/retropixelart-pinguin/resolve/main/concept_images/7.jpeg) ![<retropixelart-style> 6](https://huggingface.co/sd-concepts-library/retropixelart-pinguin/resolve/main/concept_images/1.jpeg) ![<retropixelart-style> 7](https://huggingface.co/sd-concepts-library/retropixelart-pinguin/resolve/main/concept_images/4.jpeg)
60cd2bc9bcca2e6d5bb58075934dd212
apache-2.0
['automatic-speech-recognition', 'en']
false
exp_w2v2t_en_unispeech_s870 Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
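The card notes the 16 kHz requirement but includes no code in this chunk. A hedged sketch of the typical HuggingSound loading pattern, assuming the repo id mirrors the model name; the import is deferred so the sketch reads standalone.

```python
MODEL_ID = "jonatasgrosman/exp_w2v2t_en_unispeech_s870"  # assumed Hub repo id
SAMPLE_RATE = 16_000  # speech input must be sampled at 16 kHz

def transcribe(audio_paths):
    # Deferred import: requires `pip install huggingsound` to actually run.
    from huggingsound import SpeechRecognitionModel
    model = SpeechRecognitionModel(MODEL_ID)
    # Returns one dict per file, including the transcription string.
    return model.transcribe(audio_paths)
```

Resample any audio that is not already 16 kHz (e.g. with torchaudio or librosa) before calling `transcribe`.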
d3e370d8bf483145b6e09963fa538398
cc-by-sa-4.0
['coptic', 'token-classification', 'pos', 'dependency-parsing']
false
Model Description This is a DeBERTa(V2) model pre-trained with [UD_Coptic](https://universaldependencies.org/cop/) for POS-tagging and dependency-parsing, derived from [deberta-base-coptic](https://huggingface.co/KoichiYasuoka/deberta-base-coptic). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
ca83a729975f148b9c5370cda92a1624
cc-by-sa-4.0
['coptic', 'token-classification', 'pos', 'dependency-parsing']
false
How to Use ```py from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-base-coptic-upos") model = AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/deberta-base-coptic-upos") ``` or ```py import esupar nlp = esupar.load("KoichiYasuoka/deberta-base-coptic-upos") ```
f11023d4db0b4e9d0242e3a9a599a3ba
apache-2.0
['automatic-speech-recognition', 'zh-CN']
false
exp_w2v2t_zh-cn_xlsr-53_s533 Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (zh-CN)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
643eba29a16d7accd9387069a4621101
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'animal']
false
DreamBooth model for the shiba concept trained by ashiqabdulkhader on the ashiqabdulkhader/animals dataset. This is a Stable Diffusion model fine-tuned on the shiba concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of shiba dog** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
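A hedged generation sketch with `diffusers`, building a prompt around the card's `instance_prompt`. The repo id is hypothetical (the card chunk does not state the Hub path), and the import is deferred so the sketch reads standalone.

```python
INSTANCE_PROMPT = "a photo of shiba dog"          # the instance_prompt from the card
MODEL_ID = "ashiqabdulkhader/shiba-dog"           # hypothetical repo id -- replace as needed

def generate(suffix=" sitting in the snow", steps=30):
    # Deferred import: requires `pip install diffusers torch` and a CUDA GPU to run.
    import torch
    from diffusers import StableDiffusionPipeline
    pipe = StableDiffusionPipeline.from_pretrained(MODEL_ID, torch_dtype=torch.float16)
    pipe = pipe.to("cuda")
    # Extend the instance prompt with extra context, as DreamBooth cards suggest.
    return pipe(INSTANCE_PROMPT + suffix, num_inference_steps=steps).images[0]
```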
2c6d2f68dae9e9ebbcc808642226b491
apache-2.0
['translation']
false
opus-mt-he-sv * source languages: he * target languages: sv * OPUS readme: [he-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/he-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/he-sv/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/he-sv/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/he-sv/opus-2020-01-09.eval.txt)
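OPUS-MT checkpoints are Marian transformer models; a usage sketch assuming this pair is published under the usual Helsinki-NLP namespace on the Hub. The import is deferred so the sketch reads standalone.

```python
MODEL_ID = "Helsinki-NLP/opus-mt-he-sv"  # assumed Hub id for this he->sv pair

def translate(texts):
    # Deferred import: requires `pip install transformers sentencepiece torch` to run.
    from transformers import MarianMTModel, MarianTokenizer
    tokenizer = MarianTokenizer.from_pretrained(MODEL_ID)
    model = MarianMTModel.from_pretrained(MODEL_ID)
    batch = tokenizer(texts, return_tensors="pt", padding=True)
    generated = model.generate(**batch)
    return tokenizer.batch_decode(generated, skip_special_tokens=True)
```

The SentencePiece pre-processing mentioned above is handled by the Marian tokenizer.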
a30a63ff9b3d420c9d1135e462ff2f20
afl-3.0
[]
false
Model Description We release all models introduced in our [paper](https://arxiv.org/pdf/2206.11147.pdf), covering 13 different application scenarios. Each model contains 11 billion parameters. | Model | Description | Recommended Application | | ----------- | ----------- | ----------- | | **rst-all-11b** | **Trained with all the signals below except signals that are used to train Gaokao models** | **All applications below (specialized models are recommended first if high performance is preferred)** | | rst-fact-retrieval-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym, wikiHow category hierarchy, Wikidata relation, Wikidata entity typing, Paperswithcode entity typing | Knowledge intensive tasks, information extraction tasks, factual checker | | rst-summarization-11b | Trained with the following signals: DailyMail summary, Paperswithcode summary, arXiv summary, wikiHow summary | Summarization or other general generation tasks, meta-evaluation (e.g., BARTScore) | | rst-temporal-reasoning-11b | Trained with the following signals: DailyMail temporal information, wikiHow procedure | Temporal reasoning, relation extraction, event-based extraction | | rst-information-extraction-11b | Trained with the following signals: Paperswithcode entity, Paperswithcode entity typing, Wikidata entity typing, Wikidata relation, Wikipedia entity | Named entity recognition, relation extraction and other general IE tasks in the news, scientific or other domains | | rst-intent-detection-11b | Trained with the following signals: wikiHow goal-step relation | Intent prediction, event prediction | | rst-topic-classification-11b | Trained with the following signals: DailyMail category, arXiv category, wikiHow text category, Wikipedia section title | General text classification | | rst-word-sense-disambiguation-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym | Word sense disambiguation, part-of-speech tagging, general IE tasks, common sense reasoning | | rst-natural-language-inference-11b | Trained with the following signals: ConTRoL dataset, DREAM dataset, LogiQA dataset, RACE & RACE-C dataset, ReClor dataset, DailyMail temporal information | Natural language inference, multiple-choice question answering, reasoning | | rst-sentiment-classification-11b | Trained with the following signals: Rotten Tomatoes sentiment, Wikipedia sentiment | Sentiment classification, emotion classification | | rst-gaokao-rc-11b | Trained with multiple-choice QA datasets that are used to train the [T0pp](https://huggingface.co/bigscience/T0pp) model | General multiple-choice question answering | | rst-gaokao-cloze-11b | Trained with manually crafted cloze datasets | General cloze filling | | rst-gaokao-writing-11b | Trained with example essays from past Gaokao-English exams and grammar error correction signals | Essay writing, story generation, grammar error correction and other text generation tasks |
a32c73a6f04d337ed1ead80250f5ba3e
apache-2.0
['Early Modern French', 'Historical']
false
D'AlemBERT base model This model is a [RoBERTa base model](https://huggingface.co/roberta-base) pre-trained on the [FreEMmax corpus](https://doi.org/10.5281/zenodo.6481135) for Early Modern French. It was introduced in [this paper](https://aclanthology.org/2022.lrec-1.359/). This model is cased and was trained with a mix of normalized and unnormalized data.
dd17fd1e3c69d65339eb1b84c5184989
apache-2.0
['Early Modern French', 'Historical']
false
Model description D'AlemBERT is a transformers model based on RoBERTa, pretrained on raw texts only, with no humans labelling them in any way, using an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with one objective: - Masked language modeling (MLM): this is part of the original training loss of the BERT base model. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
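The 15% masking step can be illustrated with a simplified, word-level sketch. Real MLM pretraining operates on subword tokens and mixes masked, random, and unchanged replacements (the 80/10/10 rule); this only shows the random-selection idea.

```python
import random

def mask_tokens(tokens, mask_token="<mask>", ratio=0.15, seed=0):
    """Randomly hide ~15% of tokens, mirroring the MLM objective described above."""
    rng = random.Random(seed)
    n_mask = max(1, round(len(tokens) * ratio))
    positions = sorted(rng.sample(range(len(tokens)), n_mask))
    masked = list(tokens)
    for i in positions:
        masked[i] = mask_token
    # The model's job is to predict the original token at each masked position.
    return masked, positions

sentence = "les anciens textes de la langue francoise sont fort divers".split()
masked, positions = mask_tokens(sentence)
```

The model then receives `masked` as input and is trained to recover the original words at `positions`.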
44a26f39051dcc46ec418dd4a8c6b212
apache-2.0
['Early Modern French', 'Historical']
false
Intended uses & limitations You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT. The model is primarily intended for use in Digital Humanities and Historical NLP.
bcad88e2f148e7369aa93ed71f4ebb0b
apache-2.0
['Early Modern French', 'Historical']
false
Limitations and bias This model is trained on historical French data starting from the 16th c., so it might produce results that seem extremely biased by today's standards. It might not work well on contemporary data, and it is not intended to be used on it. This bias will also affect all fine-tuned versions of this model.
c63b19231016d670325555149333893e
apache-2.0
['Early Modern French', 'Historical']
false
Training data D'AlemBERT was pretrained on the non-freely available version of the [FreEMmax corpus](https://doi.org/10.5281/zenodo.6481135), a dataset consisting of more than 180k tokens coming from 22 different sources and comprising French textual data ranging from the 16th c. to the early 20th c.
4dbaa8e630a186a58374bd4fc9ca3fab
apache-2.0
['Early Modern French', 'Historical']
false
BibTeX entry and citation info ```bibtex @inproceedings{gabay-etal-2022-freem, title = "From {F}re{EM} to D{'}{A}lem{BERT}: a Large Corpus and a Language Model for Early {M}odern {F}rench", author = "Gabay, Simon and Ortiz Suarez, Pedro and Bartz, Alexandre and Chagu{\'e}, Alix and Bawden, Rachel and Gambette, Philippe and Sagot, Beno{\^\i}t", booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference", month = jun, year = "2022", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://aclanthology.org/2022.lrec-1.359", pages = "3367--3374", abstract = "Language models for historical states of language are becoming increasingly important to allow the optimal digitisation and analysis of old textual sources. Because these historical states are at the same time more complex to process and more scarce in the corpora available, this paper presents recent efforts to overcome this difficult situation. These efforts include producing a corpus, creating the model, and evaluating it with an NLP task currently used by scholars in other ongoing projects.", } ```
7887192756d13a5dddc88ce50bafca99
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-TIMIT-IPA2 This model is a fine-tuned version of [facebook/wav2vec2-large](https://huggingface.co/facebook/wav2vec2-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1531 - Per: 0.0638
ab8ffedfb6319422f3832e27b5b3bba2
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Per | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.0846 | 6.85 | 500 | 0.1810 | 0.0991 | | 0.1857 | 13.7 | 1000 | 0.1411 | 0.0691 | | 0.0948 | 20.55 | 1500 | 0.1345 | 0.0666 | | 0.0646 | 27.4 | 2000 | 0.1444 | 0.0673 | | 0.0502 | 34.25 | 2500 | 0.1436 | 0.0628 | | 0.0381 | 41.1 | 3000 | 0.1383 | 0.0637 | | 0.0309 | 47.95 | 3500 | 0.1533 | 0.0638 | | 0.0261 | 54.79 | 4000 | 0.1531 | 0.0638 |
9c07290c9593927b298d42263ab64f91
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
The embeddings in this repository were trained for the 768px [Stable Diffusion v2.1](https://huggingface.co/stabilityai/stable-diffusion-2-1) model. The embeddings should work on any model that uses SD v2.1 as a base. **Examples** <div align="center"> <img src="https://huggingface.co/ProGamerGov/winter-cat-embeddings-sd-v2-1/resolve/main/v1_size_768_t3x3_1.png"> </div> * [Full Image](https://huggingface.co/ProGamerGov/winter-cat-embeddings-sd-v2-1/resolve/main/v1_size_768_t3x3_1.png) <div align="center"> <img src="https://huggingface.co/ProGamerGov/winter-cat-embeddings-sd-v2-1/resolve/main/v1_size_768_t3x3_2.png"> </div> * [Full Image](https://huggingface.co/ProGamerGov/winter-cat-embeddings-sd-v2-1/resolve/main/v1_size_768_t3x3_2.png) <div align="center"> <img src="https://huggingface.co/ProGamerGov/winter-cat-embeddings-sd-v2-1/resolve/main/v1_size_768_t3x3_3.png"> </div> * [Full Image](https://huggingface.co/ProGamerGov/winter-cat-embeddings-sd-v2-1/resolve/main/v1_size_768_t3x3_3.png) <div align="center"> <img src="https://huggingface.co/ProGamerGov/winter-cat-embeddings-sd-v2-1/resolve/main/v1_size_768_t3x3_4.png"> </div> * [Full Image](https://huggingface.co/ProGamerGov/winter-cat-embeddings-sd-v2-1/resolve/main/v1_size_768_t3x3_4.png) **Usage** To use the embeddings, download and then rename the files to whatever trigger word you want to use or just keep the original name. Example Prompt: ``` a cute skinny cat wearing a winter toque and scarf in the snow, realistic, at artstation and behance, art by wcat8-v2-2000, cinematic lighting, night, heavy snowfall, snowstorm ``` Negative Prompt: ``` blurry, photo, smooth, cartoon animal, vector art, 2d, illustration, two deer wearing suits, comic book, 1970 film photography, very sexy woman with black hair, red on black, lace dress, deformed, tokyo japan, pen drawing, fat, chubby, concept art ```
b17462fc1dd588761b297d6adf0bf596
cc-by-4.0
['questions and answers generation']
false
Model Card of `lmqg/t5-small-squad-qag` This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) for the question & answer pair generation task on the [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
776fa0ac83ab96569b6b2524ef514099
cc-by-4.0
['questions and answers generation']
false
Overview - **Language model:** [t5-small](https://huggingface.co/t5-small) - **Language:** en - **Training data:** [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
0354c0ab203b77ba26452de5d76b961b
cc-by-4.0
['questions and answers generation']
false
Model prediction - With `lmqg` ```python from lmqg import TransformersQG model = TransformersQG(language="en", model="lmqg/t5-small-squad-qag") question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/t5-small-squad-qag") output = pipe("generate question and answer: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ```
b559bfe7cc284d6e51f86826019511a4
cc-by-4.0
['questions and answers generation']
false
Evaluation - ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-small-squad-qag/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_squad.default.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 92.76 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedF1Score (MoverScore) | 64.59 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedPrecision (BERTScore) | 92.87 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedPrecision (MoverScore) | 65.3 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedRecall (BERTScore) | 92.68 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedRecall (MoverScore) | 63.99 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
85b455aeecabe8180689a071d2dd6dd9
cc-by-4.0
['questions and answers generation']
false
Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qag_squad - dataset_name: default - input_types: ['paragraph'] - output_types: ['questions_answers'] - prefix_types: ['qag'] - model: t5-small - max_length: 512 - max_length_output: 256 - epoch: 18 - batch: 32 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 2 - label_smoothing: 0.0 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/t5-small-squad-qag/raw/main/trainer_config.json).
fd0406c57d11f3842c2308895e765627
apache-2.0
['generated_from_keras_callback']
false
my-finetuned-distilbert This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.6482 - Validation Loss: 1.3103 - Epoch: 0
b6c50e9b64af6fafb182b80f58c2a932
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32
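The schedule above (linear warmup to 2e-05 over 1000 steps, then polynomial decay to 0 over 1500 steps, with AdamW-style weight decay 0.01) is what `transformers.create_optimizer` produces; a sketch, assuming (as in transformers' implementation) that the decay phase spans `num_train_steps - num_warmup_steps` steps. The import is deferred since TensorFlow may not be installed.

```python
INIT_LR = 2e-5
WARMUP_STEPS = 1000
DECAY_STEPS = 1500                              # decay_steps in the config above
NUM_TRAIN_STEPS = WARMUP_STEPS + DECAY_STEPS    # 2500 total optimizer steps
WEIGHT_DECAY_RATE = 0.01

def build_optimizer():
    # Deferred import: requires transformers with TensorFlow support to run.
    from transformers import create_optimizer
    optimizer, lr_schedule = create_optimizer(
        init_lr=INIT_LR,
        num_train_steps=NUM_TRAIN_STEPS,
        num_warmup_steps=WARMUP_STEPS,
        weight_decay_rate=WEIGHT_DECAY_RATE,
    )
    return optimizer, lr_schedule
```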
39ca7ca3cf88121a24fa42c3746294e0
mit
['generated_from_trainer']
false
camembert-ner This model is a fine-tuned version of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1179 - Overall Precision: 0.7367 - Overall Recall: 0.7522 - Overall F1: 0.7444 - Overall Accuracy: 0.9728 - Humanprod F1: 0.1639 - Loc F1: 0.7657 - Org F1: 0.5352 - Per F1: 0.7961
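A hedged inference sketch for this NER checkpoint. The repo id is a placeholder (the chunk does not state the Hub path), and the entity types are read off the per-type F1 scores above; the import is deferred so the sketch reads standalone.

```python
MODEL_ID = "your-username/camembert-ner"  # hypothetical repo id -- replace with the real Hub path
ENTITY_TYPES = ["HumanProd", "LOC", "ORG", "PER"]  # from the per-type F1 scores in the card

def extract_entities(text):
    # Deferred import: requires `pip install transformers torch sentencepiece` to run.
    from transformers import pipeline
    ner = pipeline(
        "token-classification",
        model=MODEL_ID,
        aggregation_strategy="simple",  # merge subword pieces into whole entities
    )
    return ner(text)
```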
1da3ca293c14219b3819f3e884856abc
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | Humanprod F1 | Loc F1 | Org F1 | Per F1 | |:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:|:------------:|:------:|:------:|:------:| | No log | 1.0 | 307 | 0.1254 | 0.7185 | 0.7420 | 0.7300 | 0.9715 | 0.0357 | 0.7579 | 0.5052 | 0.7778 | | 0.1195 | 2.0 | 614 | 0.1179 | 0.7367 | 0.7522 | 0.7444 | 0.9728 | 0.1639 | 0.7657 | 0.5352 | 0.7961 |
2f3a577215c859a52a8893c9c05b122e
apache-2.0
['automatic-speech-recognition', 'en']
false
exp_w2v2r_en_xls-r_gender_male-5_female-5_s73 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
63055a1677c85e10c5ba48dc8862346e
apache-2.0
['generated_from_trainer']
false
T5-model-1-feedback-0510 This model is a fine-tuned version of [theojolliffe/T5-model-1-feedback-1109](https://huggingface.co/theojolliffe/T5-model-1-feedback-1109) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2334 - Rouge1: 91.6115 - Rouge2: 86.7084 - Rougel: 91.0616 - Rougelsum: 91.1197 - Gen Len: 14.7895
e31f5845ead9c98a7ce7aa852554dd0c
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 0.4381 | 1.0 | 542 | 0.2731 | 90.324 | 84.2616 | 89.0178 | 89.1459 | 14.5614 | | 0.2999 | 2.0 | 1084 | 0.2374 | 91.6458 | 86.2909 | 90.8275 | 90.8257 | 14.7719 | | 0.273 | 3.0 | 1626 | 0.2382 | 91.4445 | 86.8218 | 91.1231 | 91.0886 | 14.8947 | | 0.2248 | 4.0 | 2168 | 0.2334 | 91.6115 | 86.7084 | 91.0616 | 91.1197 | 14.7895 |
047fafd6b8f398cf624c5f5ff748a050
apache-2.0
['timm', 'vision']
false
Model Details The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general model deployment - to deploy models like CLIP, researchers will first need to carefully study their capabilities in relation to the specific context they’re being deployed within. This instance of the CLIP model is intended for loading in * `timm` (https://github.com/rwightman/pytorch-image-models) and * `OpenCLIP` (https://github.com/mlfoundations/open_clip) libraries. Please see https://huggingface.co/openai/clip-vit-base-patch32 for use in Hugging Face Transformers.
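A loading sketch with OpenCLIP (not from the original card). Note that `pretrained="openai"` pulls the original OpenAI ViT-B/32 weights from OpenCLIP's own registry; this particular repo may instead expose its own checkpoint file. The import is deferred so the sketch reads standalone.

```python
ARCH = "ViT-B-32"  # architecture name as registered in OpenCLIP

def load_clip(pretrained="openai"):
    # Deferred import: requires `pip install open_clip_torch` to actually run.
    import open_clip
    model, _, preprocess = open_clip.create_model_and_transforms(ARCH, pretrained=pretrained)
    tokenizer = open_clip.get_tokenizer(ARCH)
    # `preprocess` resizes/normalizes PIL images; `tokenizer` encodes text prompts.
    return model, preprocess, tokenizer
```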
dea7ffb2cfea543f469565c5e175f404
apache-2.0
['timm', 'vision']
false
Model Type The model uses a ViT-B/32 Transformer architecture as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss. The original implementation had two variants: one using a ResNet image encoder and the other using a Vision Transformer. This repository has the variant with the Vision Transformer.
29b1ddda620f47a6b6713d5b311b3759
apache-2.0
['timm', 'vision']
false
Intended Use The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models - the CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis.
c2181220578ed4f63d0c8d7f9a6c4c23
apache-2.0
['timm', 'vision']
false
Primary intended uses The primary intended users of these models are AI researchers. We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models.
52812f7de1c8ca1b7f3ce849b35b68fd
apache-2.0
['timm', 'vision']
false
Out-of-Scope Use Cases **Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases, such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task-specific testing, especially given the variability of CLIP's performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful. Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of the performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently, given the lack of testing norms and checks to ensure its fair use. Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
91de8001632f1ee6ce944737616e0cd2
apache-2.0
['timm', 'vision']
false
Data The model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/). A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet which tend to skew towards more developed nations, and younger, male users.
507234f10751679d596ec75392ffa6b6
apache-2.0
['timm', 'vision']
false
Data Mission Statement Our goal with building this dataset was to test out robustness and generalizability in computer vision tasks. As a result, the focus was on gathering large quantities of data from different publicly-available internet data sources. The data was gathered in a mostly non-interventionist manner. However, we only crawled websites that had policies against excessively violent and adult images and allowed us to filter out such content. We do not intend for this dataset to be used as the basis for any commercial or deployed model and will not be releasing the dataset.
7724f314b1dab207e83d0207db56f1cb
apache-2.0
['timm', 'vision']
false
Limitations CLIP and our analysis of it have a number of limitations. CLIP currently struggles with respect to certain tasks such as fine-grained classification and counting objects. CLIP also poses issues with regards to fairness and bias, which we discuss in the paper and briefly in the next section. Additionally, our approach to testing CLIP also has an important limitation: in many cases we have used linear probes to evaluate the performance of CLIP, and there is evidence suggesting that linear probes can underestimate model performance.
e17865921fcd0f8f0734716c60dc31ef
apache-2.0
['timm', 'vision']
false
Bias and Fairness We find that the performance of CLIP - and the specific biases it exhibits - can depend significantly on class design and the choices one makes for categories to include and exclude. We tested the risk of certain kinds of denigration with CLIP by classifying images of people from [Fairface](https://arxiv.org/abs/1908.04913) into crime-related and non-human animal categories. We found significant disparities with respect to race and gender. Additionally, we found that these disparities could shift based on how the classes were constructed. (Details captured in the Broader Impacts Section in the paper). We also tested the performance of CLIP on gender, race and age classification using the Fairface dataset (We default to using race categories as they are constructed in the Fairface dataset.) in order to assess quality of performance across different demographics. We found accuracy >96% across all races for gender classification with ‘Middle Eastern’ having the highest accuracy (98.4%) and ‘White’ having the lowest (96.5%). Additionally, CLIP averaged ~93% for racial classification and ~63% for age classification. Our use of evaluations to test for gender, race and age classification as well as denigration harms is simply to evaluate performance of the model across people and surface potential risks and not to demonstrate an endorsement/enthusiasm for such tasks.
f663de7dbfc583d4f575a0ba4587de7b
mit
['generated_from_trainer']
false
deberta-classifier-feedback-1024-pseudo-final This model is a fine-tuned version of [TTian/deberta-classifier-feedback-1024-pseudo](https://huggingface.co/TTian/deberta-classifier-feedback-1024-pseudo) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5263
88c569917cb592837b885ed2767e5f81
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP
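With `gradient_accumulation_steps: 2`, gradients from two batches of 8 are accumulated before each optimizer step, giving the total train batch size of 16 listed above; the `linear` scheduler then decays the learning rate to zero over training. A minimal sketch of both (assuming no warmup, which matches the settings above):

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 2e-05) -> float:
    """Learning rate under a linear decay schedule with no warmup:
    starts at base_lr and reaches 0 at total_steps."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

# Effective batch size with gradient accumulation:
effective_batch = 8 * 2  # train_batch_size * gradient_accumulation_steps -> 16
```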
2564c0e46e4e16855c09bb0bffc2e2ac
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.5814 | 0.04 | 10 | 0.5888 | | 0.5521 | 0.08 | 20 | 0.5736 | | 0.5685 | 0.13 | 30 | 0.5809 | | 0.6052 | 0.17 | 40 | 0.5702 | | 0.5532 | 0.21 | 50 | 0.5571 | | 0.6177 | 0.25 | 60 | 0.5848 | | 0.6196 | 0.3 | 70 | 0.5464 | | 0.5772 | 0.34 | 80 | 0.5307 | | 0.5805 | 0.38 | 90 | 0.5550 | | 0.6453 | 0.42 | 100 | 0.5467 | | 0.5756 | 0.47 | 110 | 0.5587 | | 0.5901 | 0.51 | 120 | 0.5482 | | 0.568 | 0.55 | 130 | 0.5263 | | 0.5452 | 0.59 | 140 | 0.5698 | | 0.5949 | 0.64 | 150 | 0.5484 | | 0.5537 | 0.68 | 160 | 0.5783 | | 0.5327 | 0.72 | 170 | 0.5202 | | 0.5449 | 0.76 | 180 | 0.5272 | | 0.5345 | 0.81 | 190 | 0.5621 | | 0.5837 | 0.85 | 200 | 0.5501 | | 0.5969 | 0.89 | 210 | 0.5470 | | 0.5905 | 0.93 | 220 | 0.5924 | | 0.5481 | 0.97 | 230 | 0.5415 | | 0.5035 | 1.02 | 240 | 0.5321 | | 0.4508 | 1.06 | 250 | 0.5371 | | 0.4227 | 1.1 | 260 | 0.5276 | | 0.4423 | 1.14 | 270 | 0.5324 | | 0.432 | 1.19 | 280 | 0.5378 | | 0.4317 | 1.23 | 290 | 0.5302 | | 0.46 | 1.27 | 300 | 0.5302 | | 0.435 | 1.31 | 310 | 0.5326 | | 0.3813 | 1.36 | 320 | 0.5431 | | 0.4422 | 1.4 | 330 | 0.5323 | | 0.4298 | 1.44 | 340 | 0.5575 | | 0.5068 | 1.48 | 350 | 0.5529 | | 0.4619 | 1.53 | 360 | 0.5589 | | 0.4852 | 1.57 | 370 | 0.5256 | | 0.3888 | 1.61 | 380 | 0.5731 | | 0.4319 | 1.65 | 390 | 0.5335 | | 0.4422 | 1.69 | 400 | 0.5419 | | 0.4522 | 1.74 | 410 | 0.5547 | | 0.4276 | 1.78 | 420 | 0.5263 | | 0.3988 | 1.82 | 430 | 0.5481 | | 0.4063 | 1.86 | 440 | 0.5404 | | 0.4141 | 1.91 | 450 | 0.5292 | | 0.4149 | 1.95 | 460 | 0.5241 | | 0.4104 | 1.99 | 470 | 0.5263 |
92c908a97dab96d12006b0f21e960579
apache-2.0
['generated_from_keras_callback']
false
fintuned-bert-disfluency This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0814 - Train Sparse Categorical Accuracy: 0.9795 - Validation Loss: 0.0816 - Validation Sparse Categorical Accuracy: 0.9795 - Epoch: 2
4059b2968a353d3ecc5f2fa4824b2c01
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch | |:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:| | 0.1105 | 0.9694 | 0.0821 | 0.9800 | 0 | | 0.0942 | 0.9759 | 0.0987 | 0.9765 | 1 | | 0.0814 | 0.9795 | 0.0816 | 0.9795 | 2 |
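The metric reported above is sparse categorical accuracy: the fraction of examples whose arg-max prediction matches the integer class label. As a reference, a minimal stand-alone sketch (not the Keras implementation itself):

```python
def sparse_categorical_accuracy(logits, labels):
    """Fraction of rows where the arg-max class index equals the integer label."""
    correct = sum(
        1 for row, label in zip(logits, labels)
        if max(range(len(row)), key=row.__getitem__) == label
    )
    return correct / len(labels)

# Toy example: 2 of 3 predictions match their labels
preds = [[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]]
labels = [1, 0, 0]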
7341b36b471a4c70be93de6e19d2e4f8
apache-2.0
['generated_from_trainer']
false
convnext-tiny-224-eurosat This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3153 - Accuracy: 0.9537
b6be9ae7385343ae0820bb5a9e418648
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.863 | 0.98 | 33 | 1.5775 | 0.7619 | | 1.039 | 1.98 | 66 | 0.8142 | 0.9008 | | 0.5825 | 2.98 | 99 | 0.4442 | 0.9339 | | 0.3228 | 3.98 | 132 | 0.3153 | 0.9537 | | 0.2641 | 4.98 | 165 | 0.2868 | 0.9524 |
cf7504f6fb9b5386634aaeb7f69201b8
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
eduardosflopes2 Dreambooth model trained by eduardosflopes with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
d4121df397ed8d1280bbcfba7740a520
mit
['generated_from_trainer']
false
vit-swin-base-224-gpt2-image-captioning This model is a fine-tuned [VisionEncoderDecoder](https://huggingface.co/docs/transformers/model_doc/vision-encoder-decoder) model on 60% of the [COCO2014](https://huggingface.co/datasets/HuggingFaceM4/COCO) dataset. It achieves the following results on the testing set: - Loss: 0.7989 - Rouge1: 53.1153 - Rouge2: 24.2307 - Rougel: 51.5002 - Rougelsum: 51.4983 - Bleu: 17.7765
094590cef1c0068a0c90c915fe883598
mit
['generated_from_trainer']
false
Model description The model was initialized with [microsoft/swin-base-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-base-patch4-window7-224-in22k) as the vision encoder and [gpt2](https://huggingface.co/gpt2) as the decoder.
7bbe07c5dc9f6a40dce06986e3399fd5
mit
['generated_from_trainer']
false
How to use You can either use the simple pipeline API: ```python from transformers import pipeline image_captioner = pipeline("image-to-text", model="Abdou/vit-swin-base-224-gpt2-image-captioning")
1e700ba771780a6072ae9d247145c833
mit
['generated_from_trainer']
false
infer the caption caption = image_captioner("http://images.cocodataset.org/test-stuff2017/000000000019.jpg")[0]['generated_text'] print(f"caption: {caption}") ``` Or initialize everything for more flexibility: ```python from transformers import VisionEncoderDecoderModel, GPT2TokenizerFast, ViTImageProcessor import torch
ec7a30691bf00b98cba278fce5ca9e36
mit
['generated_from_trainer']
false
load the fine-tuned image captioning model and corresponding tokenizer and image processor device = "cuda" if torch.cuda.is_available() else "cpu" model = VisionEncoderDecoderModel.from_pretrained("Abdou/vit-swin-base-224-gpt2-image-captioning").to(device) tokenizer = GPT2TokenizerFast.from_pretrained("Abdou/vit-swin-base-224-gpt2-image-captioning") image_processor = ViTImageProcessor.from_pretrained("Abdou/vit-swin-base-224-gpt2-image-captioning")
116abe12f283e1f36c53898538c6dd2c
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|:-------:| | 1.0018 | 0.38 | 2000 | 0.8860 | 38.6537 | 13.8145 | 35.3932 | 35.3935 | 8.2448 | 11.2946 | | 0.8827 | 0.75 | 4000 | 0.8395 | 40.0458 | 14.8829 | 36.5321 | 36.5366 | 9.1169 | 11.2946 | | 0.8378 | 1.13 | 6000 | 0.8140 | 41.2736 | 15.9576 | 37.5504 | 37.5512 | 9.871 | 11.2946 | | 0.7913 | 1.51 | 8000 | 0.8012 | 41.6642 | 16.1987 | 37.8786 | 37.8891 | 10.0786 | 11.2946 | | 0.7794 | 1.89 | 10000 | 0.7933 | 41.9119 | 16.3738 | 38.1062 | 38.1292 | 10.288 | 11.2946 | Total training time: ~5 hours on NVIDIA A100 GPU.
09f26cab4e046ac4001d52975459e068
mit
['generated_from_trainer']
false
roberta-base-finetuned-imdb This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.1783 - Accuracy: 0.9552
ecb1b7c6a993c5f7f6307100615a61c1
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1904 | 1.0 | 1563 | 0.1423 | 0.9517 | | 0.1187 | 2.0 | 3126 | 0.1783 | 0.9552 |
aff50e9505ed74116859a0bab01d1fc3
apache-2.0
['Image Captioning']
false
Model Description These are model weights originally provided by the authors of the paper [Text-Only Training for Image Captioning using Noise-Injected CLIP](https://arxiv.org/pdf/2211.00575.pdf). Their method aims to train an image-captioning model with only text samples, so they inject zero-mean Gaussian noise into the text embeddings before decoding. In their words: *Specifically, we assume that the visual embedding corresponding to a text embedding lies somewhere within a ball of small radius around the text embedding (see Fig. 1). We would like all text embeddings in this ball to decode to the same caption, which should also correspond to the visual content mapped to this ball. We implement this intuition by adding zero-mean Gaussian noise of STD to the text embedding before decoding it.* The noise level referred to here is the noise variance, i.e. the square of the STD. The reported metrics are the results of a model trained with a noise variance of 0.016, a checkpoint which the authors unfortunately do not provide in their repository.
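As an illustration of the idea (not the authors' code), a minimal sketch of the noise injection, assuming a plain list stands in for a CLIP text embedding:

```python
import math
import random

def inject_noise(text_embedding, noise_variance=0.016, rng=None):
    """Add zero-mean Gaussian noise to a text embedding before decoding.
    The method parameterizes the noise by its variance, so the sampler's
    standard deviation is sqrt(noise_variance)."""
    rng = rng or random.Random(0)
    std = math.sqrt(noise_variance)
    return [x + rng.gauss(0.0, std) for x in text_embedding]

embedding = [0.0] * 512  # stand-in for a CLIP text embedding
noisy = inject_noise(embedding)
```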
4e0236c0a357360536a6fd88b0ca866d
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 367 | 0.4436 | 0.8106 | 0.8597 |
60b6ff1144a51599ae56933f55b629f1
mit
[]
false
German GPT-2 model In this repository we release (yet another) GPT-2 model that was trained on various German texts. The model is meant to be an entry point for fine-tuning on other texts, and it is definitely not as good or "dangerous" as the English GPT-3 model. We do not plan extensive PR or staged releases for this model 😉 **Note**: The model was initially released under an anonymous alias (`anonymous-german-nlp/german-gpt2`), so we now "de-anonymize" it. More details about GPT-2 can be found in the great [Hugging Face](https://huggingface.co/transformers/model_doc/gpt2.html) documentation.
7ab16dbc3f8f1e0fd5b6672413acd740
mit
[]
false
Changelog 16.08.2021: Public release of re-trained version of our German GPT-2 model with better results. 15.11.2020: Initial release. Please use the tag `v1.0` for [this older version](https://huggingface.co/dbmdz/german-gpt2/tree/v1.0).
c0d5845a63ad1d642e5cd1608a4578ef
mit
[]
false
Training corpora We use pretty much the same corpora as used for training the DBMDZ BERT model, which can be found in [this repository](https://github.com/dbmdz/berts). Thanks to the awesome Hugging Face team, it is possible to create byte-level BPE vocabularies with their [Tokenizers](https://github.com/huggingface/tokenizers) library. Using that library we created a 50K byte-level BPE vocab based on the training corpora. After creating the vocab, we trained the GPT-2 for German on a v3-8 TPU over the complete training corpus for 20 epochs. All hyperparameters can be found in the official JAX/FLAX documentation [here](https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/README.md) from Transformers.
7727af875f28eed4e7775d21e1997387
mit
[]
false
Using the model The model itself can be used in this way (note that `AutoModelWithLMHead` is deprecated in recent Transformers releases, so we use `AutoModelForCausalLM`): ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("dbmdz/german-gpt2") model = AutoModelForCausalLM.from_pretrained("dbmdz/german-gpt2") ``` However, text generation is a bit more interesting, so here's an example that shows how to use the great Transformers *Pipelines* for generating text: ```python from transformers import pipeline pipe = pipeline('text-generation', model="dbmdz/german-gpt2", tokenizer="dbmdz/german-gpt2") text = pipe("Der Sinn des Lebens ist es", max_length=100)[0]["generated_text"] print(text) ``` This could output this beautiful text: ``` Der Sinn des Lebens ist es, im Geist zu verweilen, aber nicht in der Welt zu sein, sondern ganz im Geist zu leben. Die Menschen beginnen, sich nicht nach der Natur und nach der Welt zu richten, sondern nach der Seele,' ```
57c792c082bd230cac991445be12ea38
apache-2.0
['generated_from_trainer']
false
recipe-lr2e05-wd0.01-bs16 This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2792 - Rmse: 0.5284 - Mse: 0.2792 - Mae: 0.4332
044120fb9f3e0a26807b14825f7a6638
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:| | 0.2768 | 1.0 | 1245 | 0.2747 | 0.5241 | 0.2747 | 0.4081 | | 0.2737 | 2.0 | 2490 | 0.2793 | 0.5285 | 0.2793 | 0.4288 | | 0.2722 | 3.0 | 3735 | 0.2792 | 0.5284 | 0.2792 | 0.4332 |
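For reference, the RMSE, MSE and MAE reported above are the standard regression metrics over the predicted scores; a minimal sketch:

```python
def regression_metrics(y_true, y_pred):
    """Return (rmse, mse, mae) for paired lists of targets and predictions."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errors) / n
    mae = sum(abs(e) for e in errors) / n
    return mse ** 0.5, mse, mae

rmse, mse, mae = regression_metrics([1.0, 2.0, 3.0], [1.0, 2.5, 2.5])
```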
b248b98b07ef87942acb6b026f0a6587
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'CTC', 'Citrinet', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
deployment-with-nvidia-riva) | This model transcribes speech in lower case English alphabet along with spaces and apostrophes. It is an "extra-small" version of the Citrinet-CTC model (around 10M parameters). See the [model architecture](
3fce83434442217daa6a1f4841997ebc
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'CTC', 'Citrinet', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
Transcribing many audio files ```shell python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py pretrained_name="nvidia/stt_en_citrinet_256_ls" audio_dir="<DIRECTORY CONTAINING AUDIO FILES>" ```
0e1aef9fc9183dd1eade5972d29163fc
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'CTC', 'Citrinet', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
Model Architecture Citrinet-CTC is a non-autoregressive variant of the Citrinet model [1] for Automatic Speech Recognition, which uses CTC loss/decoding instead of Transducer loss. You may find more details on this model here: [Citrinet Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html).
324289f3ed355b6b25f9b094f7a08f01
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'CTC', 'Citrinet', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
Training The NeMo toolkit [3] was used for training the models for several hundred epochs. These models are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_ctc/speech_to_text_ctc_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/citrinet/citrinet_1024.yaml) (Note: Change the `model.model_defaults.filters` to match the model size). The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).
013387bca0127684b1e5a38b8d5a502b
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'CTC', 'Citrinet', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
Performance The list of the available models in this collection is shown in the following table. Performances of the ASR models are reported in terms of Word Error Rate (WER%) with greedy decoding. | Version | Tokenizer | Vocabulary Size | LS test-other | LS test-clean | |---------|---------------------------|-----------------|---------------|---------------| | 1.0.0 | SentencePiece Unigram [2] | 256 | 9.8 | 3.8 |
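WER is the word-level edit distance (substitutions, insertions, deletions) divided by the number of reference words. A minimal stand-alone sketch of the metric (independent of the greedy decoding itself):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level Levenshtein distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[j] holds the edit distance between the first i reference words
    # and the first j hypothesis words (single-row Levenshtein).
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, start=1):
            cur = min(dp[j] + 1,        # deletion
                      dp[j - 1] + 1,    # insertion
                      prev + (r != h))  # substitution / match
            prev, dp[j] = dp[j], cur
    return dp[len(hyp)] / len(ref)
```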
129c564b72efbf392aa1806383e0e5a9
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'CTC', 'Citrinet', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
References [1] [ Citrinet: Closing the Gap between Non-Autoregressive and Autoregressive End-to-End Models for Automatic Speech Recognition](https://arxiv.org/abs/2104.01721) [2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece) [3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
13cf833782b0bf4305f26874c051fd88
apache-2.0
['generated_from_trainer']
false
small-mlm-glue-stsb-target-glue-mrpc This model is a fine-tuned version of [muhtasham/small-mlm-glue-stsb](https://huggingface.co/muhtasham/small-mlm-glue-stsb) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9122 - Accuracy: 0.7598 - F1: 0.8322
6113ef813fac065e8bf23bdcc8eda964
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.3924 | 4.35 | 500 | 0.8097 | 0.7647 | 0.8416 | | 0.0751 | 8.7 | 1000 | 1.4556 | 0.7574 | 0.8374 | | 0.0294 | 13.04 | 1500 | 1.7098 | 0.7647 | 0.8356 | | 0.0186 | 17.39 | 2000 | 1.9122 | 0.7598 | 0.8322 |
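The F1 reported above is the harmonic mean of precision and recall on the positive (paraphrase) class; a minimal sketch:

```python
def binary_f1(y_true, y_pred, positive=1):
    """F1 score for the positive class of a binary classification task."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```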
486c7a645c14b0b9cc9fc5580f01ffd2
apache-2.0
['generated_from_trainer']
false
finetuned-mt5-base This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the wmt16 ro-en dataset. It achieves the following results on the evaluation set: - Loss: 1.3594 - Bleu: 27.1659 - Gen Len: 43.9575
9018152f49f5476c10a7009fe79fa186
apache-2.0
['generated_from_trainer']
false
42 This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the SST2 dataset. It achieves the following results on the evaluation set: - Loss: 0.3109 - Accuracy: 0.9255
60bf798f7e2824812aa3690c0c56ae21
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - distributed_type: not_parallel - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5
2068d1c7359371e50d974d6f56a43af2
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | No log | 1.0 | 2105 | 0.2167 | 0.9232 | | 0.2049 | 2.0 | 4210 | 0.2375 | 0.9278 | | 0.123 | 3.0 | 6315 | 0.2636 | 0.9243 | | 0.0839 | 4.0 | 8420 | 0.2865 | 0.9243 | | 0.058 | 5.0 | 10525 | 0.3109 | 0.9255 |
009afa575c12c77f6bdb3aece261dcdc
cc-by-4.0
['question generation', 'answer extraction']
false
Model Card of `lmqg/mt5-base-esquad-qg-ae` This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) for question generation and answer extraction jointly on the [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
e4325f344298b661e7ebec5df9505c7e
cc-by-4.0
['question generation', 'answer extraction']
false
model prediction question_answer_pairs = model.generate_qa("a noviembre , que es también la estación lluviosa.") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/mt5-base-esquad-qg-ae")
cd44c3dcd3d883323d61c2548f19d8c4
cc-by-4.0
['question generation', 'answer extraction']
false
question generation question = pipe("extract answers: <hl> En la diáspora somalí, múltiples eventos islámicos de recaudación de fondos se llevan a cabo cada año en ciudades como Birmingham, Londres, Toronto y Minneapolis, donde los académicos y profesionales somalíes dan conferencias y responden preguntas de la audiencia. <hl> El propósito de estos eventos es recaudar dinero para nuevas escuelas o universidades en Somalia, para ayudar a los somalíes que han sufrido como consecuencia de inundaciones y / o sequías, o para reunir fondos para la creación de nuevas mezquitas como.") ```
4395793711bfcf7a5c6145f46c19c868
cc-by-4.0
['question generation', 'answer extraction']
false
Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-esquad-qg-ae/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_esquad.default.json) | | Score | Type | Dataset | |:-----------|--------:|:--------|:-----------------------------------------------------------------| | BERTScore | 83.97 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | Bleu_1 | 25.88 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | Bleu_2 | 17.67 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | Bleu_3 | 12.84 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | Bleu_4 | 9.62 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | METEOR | 23.11 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | MoverScore | 59.15 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | ROUGE_L | 24.82 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | - ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-esquad-qg-ae/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_esquad.default.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 79.67 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | QAAlignedF1Score (MoverScore) | 54.82 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | QAAlignedPrecision (BERTScore) | 77.14 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | QAAlignedPrecision (MoverScore) | 53.27 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | QAAlignedRecall (BERTScore) | 82.44 | default | 
[lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | QAAlignedRecall (MoverScore) | 56.56 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | - ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-esquad-qg-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_esquad.default.json) | | Score | Type | Dataset | |:-----------------|--------:|:--------|:-----------------------------------------------------------------| | AnswerExactMatch | 57.98 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | AnswerF1Score | 75.33 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | BERTScore | 90.04 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | Bleu_1 | 37.35 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | Bleu_2 | 32.53 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | Bleu_3 | 28.86 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | Bleu_4 | 25.75 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | METEOR | 43.74 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | MoverScore | 80.94 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | ROUGE_L | 49.61 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
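The AnswerF1Score above measures token overlap between predicted and gold answers. A minimal sketch of the common SQuAD-style formulation (lmqg's exact normalization may differ):

```python
from collections import Counter

def answer_f1(prediction: str, gold: str) -> float:
    """SQuAD-style token-overlap F1 between a predicted and a gold answer span."""
    pred_tokens, gold_tokens = prediction.split(), gold.split()
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```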
8dc2c18ff85178ee543d39e2dfd83b4d
cc-by-4.0
['question generation', 'answer extraction']
false
Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_esquad - dataset_name: default - input_types: ['paragraph_answer', 'paragraph_sentence'] - output_types: ['question', 'answer'] - prefix_types: ['qg', 'ae'] - model: google/mt5-base - max_length: 512 - max_length_output: 32 - epoch: 7 - batch: 32 - lr: 0.001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 2 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-base-esquad-qg-ae/raw/main/trainer_config.json).
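`label_smoothing: 0.15` spreads a fraction ε of the target probability mass uniformly over the output classes. A minimal sketch of smoothed cross-entropy under one common formulation (not necessarily the exact implementation used by `lmqg`):

```python
import math

def label_smoothed_nll(log_probs, target, epsilon=0.15):
    """Cross-entropy against a smoothed target distribution:
    (1 - eps) on the gold class, eps spread uniformly over all K classes."""
    k = len(log_probs)
    smooth = epsilon / k
    loss = 0.0
    for i, lp in enumerate(log_probs):
        weight = (1.0 - epsilon) + smooth if i == target else smooth
        loss -= weight * lp
    return loss

# With a uniform prediction over 4 classes the loss is log(4) for any target,
# since the smoothed target weights always sum to 1.
uniform = [math.log(0.25)] * 4
```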
37772f440c07ca6085addc2ff288b902
apache-2.0
['generated_from_trainer']
false
distilbert_sa_GLUE_Experiment_data_aug_sst2_384 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE SST2 dataset. It achieves the following results on the evaluation set: - Loss: 0.5433 - Accuracy: 0.7878
3bdd38f98eb0cdda2dfe842f36b68073
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.3436 | 1.0 | 4374 | 0.5433 | 0.7878 | | 0.2417 | 2.0 | 8748 | 0.6281 | 0.7890 | | 0.1823 | 3.0 | 13122 | 0.7529 | 0.7775 | | 0.1432 | 4.0 | 17496 | 0.8767 | 0.7741 | | 0.117 | 5.0 | 21870 | 0.9864 | 0.7638 | | 0.0986 | 6.0 | 26244 | 1.1162 | 0.7649 |
41febc490dddf59d3a44b659c56d555e
mit
['generated_from_keras_callback']
false
syp1229/xlm-roberta-base-finetuned-koidiom-epoch5 This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.0826 - Validation Loss: 1.9873 - Epoch: 4
377c9ad8dcd249ef18544970c65250dd
mit
['generated_from_keras_callback']
false
Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.7703 | 2.0462 | 0 | | 2.2504 | 2.0178 | 1 | | 2.1653 | 1.9992 | 2 | | 2.1310 | 1.9829 | 3 | | 2.0826 | 1.9873 | 4 |
bbab20e1a78b38355b2083de731bc1f7
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
Demo: How to use in ESPnet2 ```bash cd espnet git checkout 91325a1e58ca0b13494b94bf79b186b095fe0b58 pip install -e . cd egs2/mr_openslr64/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/marathi_openslr64 ``` <!-- Generated by scripts/utils/show_asr_result.sh -->
5d1d8608c5a2747fd75a46fe8ad641da
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
Environments - date: `Mon Mar 21 16:06:03 UTC 2022` - python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]` - espnet version: `espnet 0.10.7a1` - pytorch version: `pytorch 1.11.0+cu102` - Git hash: `91325a1e58ca0b13494b94bf79b186b095fe0b58` - Commit date: `Mon Mar 21 00:40:52 2022 +0000`
ea6a49ece7a8dd70d3dbff57cbc16c00
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_conformer_xlsr.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_conformer_xlsr_raw_bpe150_sp ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 60 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 5 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 3 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: - frontend.upstream num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 10000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_bpe150_sp/train/speech_shape - exp/asr_stats_raw_bpe150_sp/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_bpe150_sp/valid/speech_shape - exp/asr_stats_raw_bpe150_sp/valid/text_shape.bpe batch_type: numel valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/marathi_train_sp/wav.scp - speech - sound - - dump/raw/marathi_train_sp/text - text - text 
valid_data_path_and_name_and_type: - - dump/raw/marathi_dev/wav.scp - speech - sound - - dump/raw/marathi_dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.0005 scheduler: warmuplr scheduler_conf: warmup_steps: 20000 token_list: - <blank> - <unk> - ▁ - ा - ी - े - त - र - ं - न - क - ् - व - ि - ल - ▁म - स - ो - श - द - च - म - ▁अ - ▁आ - ण - ु - ला - ह - ▁आहे - य - ▁स - ग - ▁ह - ्या - चा - ▁प - ड - ▁क - प - ट - ▁ब - ज - र् - ्र - ▁? - ▁ज - ब - ून - वा - ▁एक - ▁या - ळ - ात - ख - ध - ▁ति - ठ - ल्या - ले - ू - ▁तुम्हाला - ां - ार - घ - ची - ▁अस - थ - ▁का - ने - णि - ॅ - ▁त - ▁परवा - ▁ते - ली - ▁गेल - ळा - ष - ▁कर - . - च्या - ▁न - वर - ▁त्या - ▁प्र - ▁करू - ▁ग - ्ट - ई - झ - ▁फ - ाय - क्ष - ▁काय - पूर - ▁होती - मध - ▁तिथ - ▁काही - ए - ▁वि - ▁दोन - ▁महिन्या - व्हा - तील - जार - ▁नाही - ँ - ▁पुत - ॉ - ▁झाला - ▁दिसल - ▁साल - ▁रस्त्यावर - स्त - जवळ - न्म - मध्य - ऊ - ▁इथे - ▁तुमच - ▁शकते - मान - ▁उद् - फ - ै - ढ - ',' - इ - ौ - ‍ - ृ - ओ - ः - ॲ - आ - '-' - ञ - औ - '!' - ऑ - ऱ - ऐ - छ - उ - '?' 
- भ - अ - ऋ - <sos/eos> init: xavier_uniform input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true joint_net_conf: null use_preprocessor: true token_type: bpe bpemodel: data/token_list/bpe_unigram150/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: s3prl frontend_conf: frontend_conf: upstream: wav2vec2_xlsr download_dir: ./hub multilayer_feature: true fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 30 num_freq_mask: 2 apply_time_mask: true time_mask_width_range: - 0 - 40 num_time_mask: 2 normalize: utterance_mvn normalize_conf: {} model: espnet model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false extract_feats_in_collect_stats: false preencoder: linear preencoder_conf: input_size: 1024 output_size: 80 encoder: conformer encoder_conf: output_size: 512 attention_heads: 4 linear_units: 1024 num_blocks: 3 dropout_rate: 0.3 positional_dropout_rate: 0.3 attention_dropout_rate: 0.3 input_layer: conv2d normalize_before: true macaron_style: false pos_enc_layer_type: rel_pos selfattention_layer_type: rel_selfattn activation_type: swish use_cnn_module: true cnn_module_kernel: 17 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: attention_heads: 4 linear_units: 1024 num_blocks: 3 dropout_rate: 0.3 positional_dropout_rate: 0.3 self_attention_dropout_rate: 0.3 src_attention_dropout_rate: 0.3 required: - output_dir - token_list version: 0.10.7a1 distributed: false ``` </details>
047b9f92fb1d0fa1655b863a5423218a
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-2']
false
MultiBERTs Seed 2 Checkpoint 20k (uncased) This is the seed-2 intermediate checkpoint (at 20k steps) of the MultiBERTs (pretrained BERT) models, trained on English using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multberts-seed-2). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani).
69de28d8981a610d4a3c3ab5e79a1243
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-2']
false
How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("multiberts-seed-2-20k")
model = BertModel.from_pretrained("multiberts-seed-2-20k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)
```
14cdc93506754089a7c5853d75df476b
mit
['generated_from_trainer']
false
upbeat_ramanujan This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the 
tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
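The 39 chunk datasets listed above follow a regular naming pattern: consecutive 50,000-document slices from 0 to 1,950,000. A small sketch can enumerate them programmatically (the helper below is illustrative and not part of the released training code):

```python
# Enumerate the detoxify-pile-chunk3 dataset names used for training:
# consecutive 50k-document slices covering 0 .. 1,950,000.
STEP = 50_000

def chunk_names(start=0, stop=1_950_000, step=STEP):
    return [
        f"tomekkorbak/detoxify-pile-chunk3-{lo}-{lo + step}"
        for lo in range(start, stop, step)
    ]

names = chunk_names()
# 39 chunk names, from ...-0-50000 through ...-1900000-1950000
```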
4595b95ccccb6453e5e00ed967c58cee
mit
['generated_from_trainer']
false
Full config {'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'every_n_steps': 16, 'force_call_on': [25354], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}, {'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prompts_path': 'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'every_n_steps': 16, 'force_call_on': [25354], 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'model_kwargs': {'value_head_config': {'is_detached': False}}, 'path_or_name': 'gpt2'}, 'objective': {'alpha': 0.5, 'beta': 10, 'name': 'AWR'}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 1024, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'upbeat_ramanujan', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.001, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 1673, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}}
f54e02aea9ec8c71ba475bb22028efad
apache-2.0
['translation']
false
eng-phi

* source group: English
* target group: Philippine languages
* OPUS readme: [eng-phi](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-phi/README.md)
* model: transformer
* source language(s): eng
* target language(s): akl_Latn ceb hil ilo pag war
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-phi/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-phi/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-phi/opus2m-2020-08-01.eval.txt)
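The `>>id<<` target-language token mentioned above must be prepended to every source sentence before tokenization. A minimal sketch of that convention (the helper name is illustrative; the ID set is taken from the card's target language list):

```python
# Prepend the target-language token (">>id<<") required by this
# multilingual model to each English source sentence.
# Valid IDs are the target languages listed on the card.
VALID_TARGET_IDS = {"akl_Latn", "ceb", "hil", "ilo", "pag", "war"}

def with_language_token(sentences, target_id):
    """Format source sentences for translation into `target_id`."""
    if target_id not in VALID_TARGET_IDS:
        raise ValueError(f"unknown target language ID: {target_id}")
    return [f">>{target_id}<< {s}" for s in sentences]

formatted = with_language_token(["How are you?"], "ilo")
# Each formatted sentence is then passed to the tokenizer as usual.
```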
87055bcd1c0101261a3bc9f7f41de984
apache-2.0
['translation']
false
Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-akl.eng.akl | 7.1 | 0.245 |
| Tatoeba-test.eng-ceb.eng.ceb | 10.5 | 0.435 |
| Tatoeba-test.eng-hil.eng.hil | 18.0 | 0.506 |
| Tatoeba-test.eng-ilo.eng.ilo | 33.4 | 0.590 |
| Tatoeba-test.eng.multi | 13.1 | 0.392 |
| Tatoeba-test.eng-pag.eng.pag | 19.4 | 0.481 |
| Tatoeba-test.eng-war.eng.war | 12.8 | 0.441 |
93140539840f60ecda857204ecccd63f
apache-2.0
['translation']
false
System Info:

- hf_name: eng-phi
- source_languages: eng
- target_languages: phi
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-phi/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'phi']
- src_constituents: {'eng'}
- tgt_constituents: {'ilo', 'akl_Latn', 'war', 'hil', 'pag', 'ceb'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-phi/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-phi/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: phi
- short_pair: en-phi
- chrF2_score: 0.392
- bleu: 13.1
- brevity_penalty: 1.0
- ref_len: 30022.0
- src_name: English
- tgt_name: Philippine languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: phi
- prefer_old: False
- long_pair: eng-phi
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
15df51a482aa9d381c9a6404212c97d8
apache-2.0
['national library of spain', 'spanish', 'bne', 'capitel', 'ner']
false
Spanish RoBERTa-base trained on BNE, fine-tuned on the CAPITEL Named Entity Recognition (NER) dataset. RoBERTa-base-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) base model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019. The original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-base-bne
e54ca0228e2f50efa5ab5bf270f0a988
apache-2.0
['national library of spain', 'spanish', 'bne', 'capitel', 'ner']
false
Dataset

The dataset used is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 1).

**IMPORTANT ABOUT THIS MODEL:** We modified the dataset to make this model more robust to general Spanish input. In Spanish, all named entities are capitalized, and since this dataset was elaborated by experts, it is entirely correct in that respect. We therefore randomly selected some entities and lower-cased them, so that the model learns not only that named entities are capitalized, but also the structure of a sentence that should contain a named entity. For instance, in "My name is [placeholder]", the [placeholder] should be recognized as a named entity even though it is not capitalized. The model trained on the original CAPITEL dataset can be found here: https://huggingface.co/BSC-TeMU/roberta-base-bne-capitel-ner

Examples:

This model:

- "Me llamo asier y vivo en barcelona todo el año." → "Me llamo {as:S-PER}{ier:S-PER} y vivo en {barcelona:S-LOC} todo el año."
- "Hoy voy a visitar el parc güell tras salir del barcelona supercomputing center." → "Hoy voy a visitar el {par:B-LOC}{k:I-LOC} {gü:E-LOC}{ell:E-LOC} tras salir del {barcelona:B-ORG} {super:I-ORG}{com:I-ORG}{pu:I-ORG}{ting:I-ORG} {cen:E-ORG}{ter:E-ORG}."

Model trained on the original data:

- "Me llamo asier y vivo en barcelona todo el año." → "Me llamo asier y vivo en barcelona todo el año." (nothing detected)
- "Hoy voy a visitar el parc güell tras salir del barcelona supercomputing center." → "Hoy voy a visitar el parc güell tras salir del barcelona supercomputing center." (nothing detected)
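The lower-casing augmentation described above can be sketched as follows. This is a minimal illustration with an assumed token/tag representation (IOBES tags as in the card's examples), not the actual preprocessing script:

```python
import random

def lowercase_some_entities(tokens, tags, p=0.5, seed=42):
    """Randomly lower-case a fraction `p` of named-entity tokens so the
    model cannot rely on capitalization alone to detect entities."""
    rng = random.Random(seed)
    out = []
    for token, tag in zip(tokens, tags):
        # Tags follow the IOBES scheme shown in the card's examples
        # (e.g. S-PER, B-LOC); "O" marks non-entity tokens.
        if tag != "O" and rng.random() < p:
            token = token.lower()
        out.append(token)
    return out

tokens = ["Me", "llamo", "Asier", "y", "vivo", "en", "Barcelona"]
tags = ["O", "O", "S-PER", "O", "O", "O", "S-LOC"]
augmented = lowercase_some_entities(tokens, tags)
```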
9dc3ce3683d6fef4a6337b516d9e824d
apache-2.0
['translation']
false
opus-mt-de-loz

* source languages: de
* target languages: loz
* OPUS readme: [de-loz](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-loz/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-loz/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-loz/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-loz/opus-2020-01-20.eval.txt)
8c472a9df25ac8a51a6044ed4781a413
apache-2.0
['translation']
false
ara-tur

* source group: Arabic
* target group: Turkish
* OPUS readme: [ara-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-tur/README.md)
* model: transformer
* source language(s): apc_Latn ara ara_Latn arq_Latn
* target language(s): tur
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.eval.txt)
7df2d33e9fe154a3b783d2bb3df8e652
apache-2.0
['translation']
false
System Info:

- hf_name: ara-tur
- source_languages: ara
- target_languages: tur
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-tur/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'tr']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'tur'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: tur
- short_pair: ar-tr
- chrF2_score: 0.619
- bleu: 33.1
- brevity_penalty: 0.957
- ref_len: 6949.0
- src_name: Arabic
- tgt_name: Turkish
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: tr
- prefer_old: False
- long_pair: ara-tur
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
1da36a6b828f98629d5e55a2e044a838