license
stringlengths
2
30
tags
stringlengths
2
513
is_nc
bool
1 class
readme_section
stringlengths
201
597k
hash
stringlengths
32
32
apache-2.0
[]
false
Abstractive Summarization

```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("bloomberg/KeyBART")
model = AutoModelForSeq2SeqLM.from_pretrained("bloomberg/KeyBART")

from datasets import load_dataset

# cnn_dailymail requires a config name
dataset = load_dataset("cnn_dailymail", "3.0.0")
```

Reported Results:

| Model | R1 | R2 | RL |
|--------------|-------|-------|-------|
| BART (Lewis et al., 2019) | 44.16 | 21.28 | 40.9 |
| BART* | 42.93 | 20.12 | 39.72 |
| KeyBART-DOC* | 42.92 | 20.07 | 39.69 |
| KeyBART* | 43.10 | 20.26 | 39.90 |
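R1/R2/RL in the table above are ROUGE-1/ROUGE-2/ROUGE-L F1 scores. For intuition only, here is a minimal pure-Python sketch of ROUGE-1 F1 as clipped unigram overlap (the official ROUGE toolkit additionally applies stemming and bootstrap resampling, so numbers will not match it exactly):

```python
from collections import Counter

def rouge1_f1(reference: str, hypothesis: str) -> float:
    """ROUGE-1 F1: harmonic mean of unigram precision and recall."""
    ref_counts = Counter(reference.split())
    hyp_counts = Counter(hypothesis.split())
    # clipped unigram overlap
    overlap = sum(min(ref_counts[w], c) for w, c in hyp_counts.items())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(hyp_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)
```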
2c6eccd0256e4c488174549eebf2f6fd
apache-2.0
[]
false
Zero-shot settings

```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("bloomberg/KeyBART")
model = AutoModelForSeq2SeqLM.from_pretrained("bloomberg/KeyBART")
```

Alternatively use the Hosted Inference API console provided in https://huggingface.co/bloomberg/KeyBART

Sample Zero Shot result:

```
Input: In this work, we explore how to learn task specific language models aimed towards learning rich representation of keyphrases from text documents. We experiment with different masking strategies for pre-training transformer language models (LMs) in discriminative as well as generative settings. In the discriminative setting, we introduce a new pre-training objective - Keyphrase Boundary Infilling with Replacement (KBIR), showing large gains in performance (upto 9.26 points in F1) over SOTA, when LM pre-trained using KBIR is fine-tuned for the task of keyphrase extraction. In the generative setting, we introduce a new pre-training setup for BART - KeyBART, that reproduces the keyphrases related to the input text in the CatSeq format, instead of the denoised original input. This also led to gains in performance (upto 4.33 points in F1@M) over SOTA for keyphrase generation. Additionally, we also fine-tune the pre-trained language models on named entity recognition (NER), question answering (QA), relation extraction (RE), abstractive summarization and achieve comparable performance with that of the SOTA, showing that learning rich representation of keyphrases is indeed beneficial for many other fundamental NLP tasks.

Output: language model;keyphrase generation;new pre-training objective;pre-training setup;
```
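The sample output above is a single CatSeq-style string, with keyphrases separated by semicolons. A small helper (hypothetical name, shown only to illustrate the format) splits it into a list:

```python
def parse_catseq(output: str) -> list:
    """Split a semicolon-delimited CatSeq output string into individual keyphrases."""
    return [kp.strip() for kp in output.split(";") if kp.strip()]
```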
d9d5ef877bbd6dc77ee932600f26a489
apache-2.0
[]
false
Citation

Please cite this work using the following BibTeX entry:

```
@inproceedings{kulkarni-etal-2022-learning,
    title = "Learning Rich Representation of Keyphrases from Text",
    author = "Kulkarni, Mayank and Mahata, Debanjan and Arora, Ravneet and Bhowmik, Rajarshi",
    booktitle = "Findings of the Association for Computational Linguistics: NAACL 2022",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.findings-naacl.67",
    doi = "10.18653/v1/2022.findings-naacl.67",
    pages = "891--906",
    abstract = "In this work, we explore how to train task-specific language models aimed towards learning rich representation of keyphrases from text documents. We experiment with different masking strategies for pre-training transformer language models (LMs) in discriminative as well as generative settings. In the discriminative setting, we introduce a new pre-training objective - Keyphrase Boundary Infilling with Replacement (KBIR), showing large gains in performance (upto 8.16 points in F1) over SOTA, when the LM pre-trained using KBIR is fine-tuned for the task of keyphrase extraction. In the generative setting, we introduce a new pre-training setup for BART - KeyBART, that reproduces the keyphrases related to the input text in the CatSeq format, instead of the denoised original input. This also led to gains in performance (upto 4.33 points in F1@M) over SOTA for keyphrase generation. Additionally, we also fine-tune the pre-trained language models on named entity recognition (NER), question answering (QA), relation extraction (RE), abstractive summarization and achieve comparable performance with that of the SOTA, showing that learning rich representation of keyphrases is indeed beneficial for many other fundamental NLP tasks.",
}
```

Please direct all questions to dpreotiucpie@bloomberg.net
98d908b18eb430b062e69fc50b9d94ed
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned_emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.3916
- Accuracy: 0.886
- F1: 0.8818
1748b3764781ee7f0ce811238b85c300
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 125 | 0.6487 | 0.7875 | 0.7547 |
| 0.8271 | 2.0 | 250 | 0.3916 | 0.886 | 0.8818 |
facad25e2d9404325294a3145f029f38
apache-2.0
['automatic-speech-recognition', 'pl']
false
exp_w2v2t_pl_vp-sv_s507 Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
40c3f13725c31aea245d79371c80fe7f
cc-by-4.0
['espnet', 'audio', 'text-to-speech']
false
`kan-bayashi/jsut_tts_train_conformer_fastspeech2_tacotron2_teacher_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.loss.ave` ♻️ Imported from https://zenodo.org/record/4436448/ This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
2bb79a04369401f5e2ed978187238463
mit
[]
false
TomCat on Stable Diffusion

This is the `<tom-cat>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as an `object`:

![<tom-cat> 0](https://huggingface.co/sd-concepts-library/tomcat/resolve/main/concept_images/3.jpeg)
![<tom-cat> 1](https://huggingface.co/sd-concepts-library/tomcat/resolve/main/concept_images/0.jpeg)
![<tom-cat> 2](https://huggingface.co/sd-concepts-library/tomcat/resolve/main/concept_images/2.jpeg)
![<tom-cat> 3](https://huggingface.co/sd-concepts-library/tomcat/resolve/main/concept_images/1.jpeg)
![<tom-cat> 4](https://huggingface.co/sd-concepts-library/tomcat/resolve/main/concept_images/4.jpeg)
bd062e5810d5592e159e4f84a80b51ba
apache-2.0
['generated_from_trainer']
false
Full config

```
{'dataset': {'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'], 'is_split_by_sentences': True, 'skip_tokens': 1649999872}, 'generation': {'batch_size': 128, 'every_n_steps': 512, 'force_call_on': [12588], 'metrics_configs': [{}, {'n': 1}, {}], 'scenario_configs': [{'display_as_html': True, 'generate_kwargs': {'do_sample': True, 'eos_token_id': 0, 'max_length': 640, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_hits_threshold': 0, 'num_samples': 2048}, {'display_as_html': True, 'generate_kwargs': {'do_sample': True, 'eos_token_id': 0, 'max_length': 272, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'functions', 'num_hits_threshold': 0, 'num_samples': 2048, 'prompts_path': 'resources/functions_csnet.jsonl', 'use_prompt_for_scoring': True}], 'scorer_config': {}}, 'kl_gpt3_callback': {'every_n_steps': 512, 'force_call_on': [12588], 'gpt3_kwargs': {'model_name': 'code-cushman-001'}, 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': False, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'model_kwargs': {'revision': 'c38e2b6acf17781918d39a310ee1adc4674a8225', 'value_head_config': {'is_detached': False}}, 'path_or_name': 'kejian/mighty-rwr'}, 'objective': {'alpha': 1, 'beta': 10, 'name': 'AWR'}, 'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 128, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'curious-rwr', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000.0, 'output_dir': 'training_output', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 12588, 'save_strategy': 'steps', 'seed': 42, 'tokens_already_seen': 1649999872, 'warmup_ratio': 0.01, 'weight_decay': 0.1}}
```
e333af3254555477d2178e9a6d56b9b2
cc-by-sa-4.0
['generated_from_trainer']
false
t5-base-TEDxJP-9front-1body-9rear

This model is a fine-tuned version of [sonoisa/t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese) on the te_dx_jp dataset. It achieves the following results on the evaluation set:
- Loss: 0.4361
- Wer: 0.1687
- Mer: 0.1630
- Wil: 0.2486
- Wip: 0.7514
- Hits: 55941
- Substitutions: 6292
- Deletions: 2354
- Insertions: 2252
- Cer: 0.1338
e8945cecc42ecb2ee46b588ef39038eb
cc-by-sa-4.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer | Mer | Wil | Wip | Hits | Substitutions | Deletions | Insertions | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:------:|:-----:|:-------------:|:---------:|:----------:|:------:|
| 0.6124 | 1.0 | 1457 | 0.4613 | 0.2407 | 0.2209 | 0.3091 | 0.6909 | 54843 | 6758 | 2986 | 5804 | 0.2153 |
| 0.4968 | 2.0 | 2914 | 0.4171 | 0.1777 | 0.1716 | 0.2580 | 0.7420 | 55404 | 6354 | 2829 | 2293 | 0.1402 |
| 0.4817 | 3.0 | 4371 | 0.4129 | 0.1731 | 0.1673 | 0.2534 | 0.7466 | 55636 | 6332 | 2619 | 2227 | 0.1349 |
| 0.4257 | 4.0 | 5828 | 0.4089 | 0.1722 | 0.1659 | 0.2520 | 0.7480 | 55904 | 6346 | 2337 | 2437 | 0.1361 |
| 0.3831 | 5.0 | 7285 | 0.4144 | 0.1705 | 0.1646 | 0.2508 | 0.7492 | 55868 | 6343 | 2376 | 2290 | 0.1358 |
| 0.3057 | 6.0 | 8742 | 0.4198 | 0.1690 | 0.1632 | 0.2492 | 0.7508 | 55972 | 6333 | 2282 | 2298 | 0.1350 |
| 0.2919 | 7.0 | 10199 | 0.4220 | 0.1693 | 0.1635 | 0.2492 | 0.7508 | 55936 | 6310 | 2341 | 2281 | 0.1337 |
| 0.2712 | 8.0 | 11656 | 0.4252 | 0.1688 | 0.1632 | 0.2487 | 0.7513 | 55905 | 6286 | 2396 | 2218 | 0.1348 |
| 0.2504 | 9.0 | 13113 | 0.4332 | 0.1685 | 0.1629 | 0.2482 | 0.7518 | 55931 | 6270 | 2386 | 2226 | 0.1331 |
| 0.2446 | 10.0 | 14570 | 0.4361 | 0.1687 | 0.1630 | 0.2486 | 0.7514 | 55941 | 6292 | 2354 | 2252 | 0.1338 |
1e81a7b3b96cc33afec7dc1224dc8fb7
cc-by-4.0
['espnet', 'audio', 'text-to-speech']
false
`kan-bayashi/libritts_tts_train_xvector_conformer_fastspeech2_transformer_teacher_raw_phn_tacotron_g2p_en_no_space_train.loss` ♻️ Imported from https://zenodo.org/record/4418754/ This model was trained by kan-bayashi using libritts/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
aa72eaa912069fb94663514f737dde4b
apache-2.0
['translation']
false
opus-mt-en-uk

* source languages: en
* target languages: uk
* OPUS readme: [en-uk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-uk/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-uk/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-uk/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-uk/opus-2020-01-08.eval.txt)
58df04f0df3967eff1390f90dc644046
apache-2.0
['generated_from_keras_callback']
false
vinitharaj/distilbert-base-uncased-finetuned-squad

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 4.5718
- Validation Loss: 4.2502
- Epoch: 1
2b436880f03affdc4802853ac9947132
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 46, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
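With `power: 1.0`, the `PolynomialDecay` schedule above is simply a linear ramp from 2e-05 down to 0.0 over 46 decay steps. A pure-Python sketch of the formula (mirroring the Keras config rather than calling Keras itself):

```python
def polynomial_decay(step, initial_lr=2e-05, decay_steps=46, end_lr=0.0, power=1.0):
    """Learning rate at `step` under a Keras-style PolynomialDecay schedule (cycle=False)."""
    step = min(step, decay_steps)  # with cycle=False the schedule clamps after decay_steps
    remaining = 1 - step / decay_steps
    return (initial_lr - end_lr) * remaining ** power + end_lr
```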
0ab7f1750d3114d53594f0946ee6b959
cc-by-4.0
['question generation']
false
Model Card of `research-backup/bart-large-squad-qg-no-paragraph`

This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/bart-large) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). This model is fine-tuned without paragraph information, using only the sentence that contains the answer.
80ff387a8cac2c5bc64f0e81d0c9a799
cc-by-4.0
['question generation']
false
model prediction

```python
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```

- With `transformers`

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "research-backup/bart-large-squad-qg-no-paragraph")
output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
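The `transformers` pipeline call above marks the answer span with `<hl>` tokens. A tiny helper (hypothetical, not part of `lmqg`) shows how such an input could be built from a context string and an answer span:

```python
def highlight_answer(context: str, answer: str) -> str:
    """Wrap the first occurrence of `answer` in <hl> ... <hl> markers, the question-generation input format."""
    return context.replace(answer, "<hl> " + answer + " <hl>", 1)
```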
eb4c46fdf1fee432880478c4383b4585
cc-by-4.0
['question generation']
false
Evaluation

- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/bart-large-squad-qg-no-paragraph/raw/main/eval/metric.first.sentence.sentence_answer.question.lmqg_qg_squad.default.json)

| | Score | Type | Dataset |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore | 90.7 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 55.85 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 39.85 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 30.44 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 23.86 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 25.18 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 63.85 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 51.43 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
b5fc329a2b8d2d14f316404bc0159a50
cc-by-4.0
['question generation']
false
Training hyperparameters

The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['sentence_answer']
- output_types: ['question']
- prefix_types: None
- model: facebook/bart-large
- max_length: 128
- max_length_output: 32
- epoch: 8
- batch: 32
- lr: 5e-05
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 16
- label_smoothing: 0.15

The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/bart-large-squad-qg-no-paragraph/raw/main/trainer_config.json).
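Two values above interact: with `batch: 32` and `gradient_accumulation_steps: 16`, the effective batch size is 32 × 16 = 512. And `label_smoothing: 0.15` mixes the gold target with a uniform distribution over the output vocabulary; a pure-Python sketch of the smoothed loss for a single token (an illustration of the idea, not the exact lmqg implementation):

```python
import math

def label_smoothed_nll(log_probs, target, eps=0.15):
    """Negative log-likelihood of one token against a label-smoothed target distribution."""
    k = len(log_probs)
    loss = 0.0
    for i, lp in enumerate(log_probs):
        # (1 - eps) mass on the gold index, eps spread uniformly over all k classes
        weight = (1 - eps) + eps / k if i == target else eps / k
        loss -= weight * lp
    return loss
```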
a0ab6b27bf54ff530a7b131d1b5f9135
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'wildcard']
false
DreamBooth model for the Wall-E-01 concept trained by DiamondYin. This is a Stable Diffusion model fine-tuned on the Wall-E-01 concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of Wall-E-01 robot** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
4a1005c0d0a4dfed85745d802984c88c
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'wildcard']
false
Description

This is a Stable Diffusion model fine-tuned on `robot` images for the wildcard theme, for the Hugging Face DreamBooth Hackathon, from the HF CN Community, in collaboration with HeyWhale.

WALL-E cost $180 million to produce. It tells the story of a lonely robot designed to clean up the polluted Earth. The film's unique feature is that the first 40 minutes or so contain almost no dialogue; instead, the audience enters a world of robots: how one thinks, how it works, how it speaks (or doesn't speak). Pixar's classic was a success: it grossed more than $520 million worldwide, earned several Oscar nominations, and topped Time magazine's list of the best films of the decade. With the Stable Diffusion model we can now easily create pictures of WALL-E and illustrate a script. We can write a whole series of stories for WALL-E without bearing such expensive production costs; that is the advantage of the Stable Diffusion model.

Below are some examples you can try. When calling, please note that the name of the subject is: Wall-E-01 robot

Prompt: Wall-E-01 robot on the moon 8K resolution, 16:9, Cyberpunk

![02.png](https://s3.amazonaws.com/moonup/production/uploads/1673799514124-636c3909181c81c337f0be90.png)
![11.png](https://s3.amazonaws.com/moonup/production/uploads/1673801747581-63bec1efda08ed0544f5a813.png)

Prompt: Wall-E-01 robot, the background is an old bridge and a pond, mist and swirly clouds in the background, fantastic landscape, hyperrealism, no blur, 4k resolution, ultra detailed, style of Anton Fadeev, Ivan Shishkin, John Berkey
![04.png](https://s3.amazonaws.com/moonup/production/uploads/1673799593235-636c3909181c81c337f0be90.png)

Prompt: illustration of a Wall-E robot sitting on top of the deck of a battle ship traveling through the open sea

![07.png](https://s3.amazonaws.com/moonup/production/uploads/1673799674000-636c3909181c81c337f0be90.png)

Prompt: Wall-E-01 robot cartoon image with rainbow background

![01.png](https://s3.amazonaws.com/moonup/production/uploads/1673799451032-636c3909181c81c337f0be90.png)
![08.png](https://s3.amazonaws.com/moonup/production/uploads/1673799761904-636c3909181c81c337f0be90.png)
![14.png](https://s3.amazonaws.com/moonup/production/uploads/1673801746877-63bec1efda08ed0544f5a813.png)

Prompt: "Wall-E, a small robot with a binocular-shaped head, sitting in the cockpit of a large spaceship, surrounded by high-tech controls and screens displaying various information about the ship's status and location, with a focus on Wall-E's expression and the intricate details of the ship's controls. The image should be in high resolution and have a realistic, futuristic aesthetic."

![15.png](https://s3.amazonaws.com/moonup/production/uploads/1673801745824-63bec1efda08ed0544f5a813.png)
![13.png](https://s3.amazonaws.com/moonup/production/uploads/1673801747231-63bec1efda08ed0544f5a813.png)
![12.png](https://s3.amazonaws.com/moonup/production/uploads/1673801747574-63bec1efda08ed0544f5a813.png)
4e53a1f8f1719e05be20863052cbc322
apache-2.0
['visual-question-answering']
false
Vision-and-Language Transformer (ViLT), fine-tuned on VQAv2 Vision-and-Language Transformer (ViLT) model fine-tuned on [VQAv2](https://visualqa.org/). It was introduced in the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Kim et al. and first released in [this repository](https://github.com/dandelin/ViLT). Disclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team.
98b2712482e5be52f53da76ad87a7191
apache-2.0
['visual-question-answering']
false
prepare image + question

```python
import requests
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "How many cats are there?"

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
```
24581b87b6fd0780850700fde7e684d7
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
elyyoorrcchh Dreambooth model trained by Jorgitosch with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
c4d83c0f0522684f8e77ea248b0c481d
apache-2.0
['generated_from_trainer']
false
tiny-bert-sst2-distilled

This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the glue dataset. It achieves the following results on the evaluation set:
- Loss: 1.7305
- Accuracy: 0.8326
c201e2c6bcf063e59b14beca31f2c2eb
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0007199555649276667
- train_batch_size: 1024
- eval_batch_size: 1024
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
- mixed_precision_training: Native AMP
f241b02c269a5e2e4ac3304dac4844a7
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.77 | 1.0 | 66 | 1.6939 | 0.8165 |
| 0.729 | 2.0 | 132 | 1.5090 | 0.8326 |
| 0.5242 | 3.0 | 198 | 1.5369 | 0.8257 |
| 0.4017 | 4.0 | 264 | 1.7025 | 0.8326 |
| 0.327 | 5.0 | 330 | 1.6743 | 0.8245 |
| 0.2749 | 6.0 | 396 | 1.7305 | 0.8337 |
| 0.2521 | 7.0 | 462 | 1.7305 | 0.8326 |
8915c5808b513b940bf866b3b4180031
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set:
- Loss: 0.2256
- Accuracy: 0.923
- F1: 0.9226
a015fee0b5f9bfc7b883c76ba035549c
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.807 | 1.0 | 250 | 0.3202 | 0.8995 | 0.8968 |
| 0.2491 | 2.0 | 500 | 0.2256 | 0.923 | 0.9226 |
e055cdeeb9f409506af3c6918b18318c
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
wav2vec2-large-xlsr-53-hebrew

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on several downloaded YouTube samples. When using this model, make sure that your speech input is sampled at 16kHz.
648607ce519fb796e524fbe9f400bbbd
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "he", split="test[:2%]")
71c6e4824a7e110faaac5e042e710785
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
There is no Common Voice dataset for Hebrew, so please paste your own data:

processor = Wav2Vec2Processor.from_pretrained("imvladikon/wav2vec2-large-xlsr-53-hebrew")
model = Wav2Vec2ForCTC.from_pretrained("imvladikon/wav2vec2-large-xlsr-53-hebrew")

resampler = torchaudio.transforms.Resample(48_000, 16_000)
9835dc581f23f33991658c00fbae4a2e
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
We need to read the audio files as arrays:

def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
2431ac0838c846fb4417b857026a4638
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Evaluation

The model can be evaluated as follows on some Hebrew test data:

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "he", split="test")
7c9c3b22824ab766433c4e525751d9a4
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
There is no Common Voice dataset for Hebrew, so please paste your own data:

wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("imvladikon/wav2vec2-large-xlsr-53-hebrew")
model = Wav2Vec2ForCTC.from_pretrained("imvladikon/wav2vec2-large-xlsr-53-hebrew").to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
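The `chars_to_ignore_regex` above strips punctuation from reference sentences before scoring. Applied with `re.sub`, it behaves as in this self-contained sketch of the usual XLSR evaluation preprocessing (the regex is written with doubled backslashes here, which yields the same pattern):

```python
import re

chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�]'

def clean_sentence(sentence: str) -> str:
    # drop the listed punctuation characters, then lowercase
    return re.sub(chars_to_ignore_regex, "", sentence).lower()
```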
09baa3bd6f3b0e517d7653dfd7212370
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
We need to read the audio files as arrays:

def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**:
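`load_metric("wer")` above computes the word error rate: word-level edit distance divided by the number of reference words. For intuition only, a minimal pure-Python version (use the real `wer` metric or `jiwer` in practice):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: Levenshtein distance over words, normalized by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[-1][-1] / len(ref)
```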
aca635f7a6f2068815c2d1d343d1fcbf
apache-2.0
['generated_from_trainer']
false
distilbert_sa_GLUE_Experiment_data_aug_wnli

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE WNLI dataset. It achieves the following results on the evaluation set:
- Loss: 2.4231
- Accuracy: 0.0845
cc1aa466f007d2fccc9280a79c2456b4
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6166 | 1.0 | 218 | 2.4231 | 0.0845 |
| 0.4183 | 2.0 | 436 | 4.2000 | 0.0986 |
| 0.3033 | 3.0 | 654 | 5.7862 | 0.0704 |
| 0.2294 | 4.0 | 872 | 7.2969 | 0.0704 |
| 0.1768 | 5.0 | 1090 | 7.5620 | 0.0986 |
| 0.1365 | 6.0 | 1308 | 7.3554 | 0.0845 |
0fb4a15bff8d96bebce09ffaa1d785e9
apache-2.0
['generated_from_trainer']
false
recipe-distilroberta-Is

This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 4.7427
b45905973709f100bc9cfd6c2e008edc
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
- mixed_precision_training: Native AMP
86da6716478087d9d24266a08c562bd9
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 19.6191 | 1.0 | 2135 | 10.5217 |
| 8.6838 | 2.0 | 4270 | 7.3017 |
| 6.884 | 3.0 | 6405 | 6.4445 |
| 6.2953 | 4.0 | 8540 | 6.0610 |
| 6.0205 | 5.0 | 10675 | 5.9047 |
| 5.851 | 6.0 | 12810 | 5.7790 |
| 5.7464 | 7.0 | 14945 | 5.7164 |
| 5.6684 | 8.0 | 17080 | 5.6415 |
| 5.6138 | 9.0 | 19215 | 5.5671 |
| 5.5638 | 10.0 | 21350 | 5.5360 |
| 5.5288 | 11.0 | 23485 | 5.5069 |
| 5.4968 | 12.0 | 25620 | 5.4968 |
| 5.4696 | 13.0 | 27755 | 5.4539 |
| 5.4468 | 14.0 | 29890 | 5.4416 |
| 5.4177 | 15.0 | 32025 | 5.3722 |
| 5.3717 | 16.0 | 34160 | 5.3226 |
| 5.317 | 17.0 | 36295 | 5.2197 |
| 5.2367 | 18.0 | 38430 | 5.0888 |
| 5.1543 | 19.0 | 40565 | 4.9954 |
| 5.0919 | 20.0 | 42700 | 4.9306 |
| 5.038 | 21.0 | 44835 | 4.8657 |
| 4.9983 | 22.0 | 46970 | 4.8137 |
| 4.9639 | 23.0 | 49105 | 4.7704 |
| 4.9426 | 24.0 | 51240 | 4.7486 |
| 4.9312 | 25.0 | 53375 | 4.7427 |
ddaa7a7249345a61a4ef3206c68fefae
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set:
- Loss: 0.2232
- Accuracy: 0.9215
- F1: 0.9218
01cf06fcdf35d5f8352b24f0a3877412
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8098 | 1.0 | 250 | 0.3138 | 0.9025 | 0.9001 |
| 0.2429 | 2.0 | 500 | 0.2232 | 0.9215 | 0.9218 |
6f252d64376ce59d8b826a58d80167f3
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 0.5730 | 0.7840 |
86a78ba74aeeb83b12633f6b2ea24f51
creativeml-openrail-m
['text-to-image']
false
SksUminaoshiSimabu Dreambooth model trained by Hirokusa with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook

Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)

Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)

Sample pictures of this concept:

![ダウンロード 0](https://huggingface.co/Hirokusa/sksuminaoshisimabu/resolve/main/sample_images/ダウンロード_(7).png)
ce2da44e00f43313b5917931c87342e9
mit
['pytorch', 'diffusers', 'unconditional-image-generation', 'diffusion-models-class']
false
Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. In this run I just ran each cell of the NB to understand what is going on. Experimentation to follow 🙏
0d0270bf23ae06d7c6ad38e95cb8c5d3
apache-2.0
['generated_from_trainer']
false
mdeberta-targin-final This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5637 - Accuracy: 0.7091 - Precision: 0.6841 - Recall: 0.6557 - F1: 0.6617
336359cb9473ce866eac3df78b1c8697
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | No log | 1.0 | 296 | 0.6001 | 0.6435 | 0.6344 | 0.5087 | 0.4156 | | 0.6011 | 2.0 | 592 | 0.5633 | 0.7091 | 0.6879 | 0.6464 | 0.6521 | | 0.6011 | 3.0 | 888 | 0.5501 | 0.7234 | 0.6991 | 0.6841 | 0.6892 | | 0.5401 | 4.0 | 1184 | 0.5558 | 0.7082 | 0.6818 | 0.6595 | 0.6652 | | 0.5401 | 5.0 | 1480 | 0.5637 | 0.7091 | 0.6841 | 0.6557 | 0.6617 |
06c4d63f61a350c2d5c52266f114081e
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Wav2Vec2-Large-XLSR-53-Telugu Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Telugu using the [OpenSLR SLR66](http://openslr.org/66/) dataset. When using this model, make sure that your speech input is sampled at 16kHz.
72a22b81b24341099195b2ac7c8b441b
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import pandas as pd
5808b28679e656984b7f78bd3ddae7c0
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Evaluation notebook contains the procedure to download the data df = pd.read_csv("/content/te/test.tsv", sep="\t") df["path"] = "/content/te/clips/" + df["path"] test_dataset = Dataset.from_pandas(df) processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-telugu") model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-telugu") resampler = torchaudio.transforms.Resample(48_000, 16_000)
db710b7fc05409a6d1250ab4336bfef1
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Evaluation ```python import torch import torchaudio from datasets import Dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re from sklearn.model_selection import train_test_split import pandas as pd
187c3acea47e5c6b76b7159a8b6f72a4
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Evaluation notebook contains the procedure to download the data df = pd.read_csv("/content/te/test.tsv", sep="\t") df["path"] = "/content/te/clips/" + df["path"] test_dataset = Dataset.from_pandas(df) wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-telugu") model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-telugu") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\_\;\:\"\“\%\‘\”\।\’\'\&]' resampler = torchaudio.transforms.Resample(48_000, 16_000) def normalizer(text):
a73f51f624cf21d6bf6242bff208968a
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Use your custom normalizer text = text.replace("\\n","\n") text = ' '.join(text.split()) text = re.sub(r'''([a-z]+)''','',text,flags=re.IGNORECASE) text = re.sub(r'''%'''," శాతం ", text) text = re.sub(r'''(/|-|_)'''," ", text) text = re.sub("ై","ై", text) text = text.strip() return text def speech_file_to_array_fn(batch): batch["sentence"] = normalizer(batch["sentence"]) batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()+ " " speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn)
a961eee82f284eceb106ac26f6ac981d
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 44.98%
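For reference, the WER reported above can be understood as the word-level edit distance divided by the reference length; a minimal self-contained sketch (not the `datasets` metric implementation):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat", "the cat sat"))  # 0.0
print(wer("the cat sat", "the bat"))      # one substitution + one deletion over 3 words
```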
fe421ae70d2a3455b87a0b88104f4649
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Training 70% of the OpenSLR Telugu dataset was used for training. Train Split of annotations is [here](https://www.dropbox.com/s/xqc0wtour7f9h4c/train.tsv) Test Split of annotations is [here](https://www.dropbox.com/s/qw1uy63oj4qdiu4/test.tsv) Training Data Preparation notebook can be found [here](https://colab.research.google.com/drive/1_VR1QtY9qoiabyXBdJcOI29-xIKGdIzU?usp=sharing) Training notebook can be found [here](https://colab.research.google.com/drive/14N-j4m0Ng_oktPEBN5wiUhDDbyrKYt8I?usp=sharing) Evaluation notebook is [here](https://colab.research.google.com/drive/1SLEvbTWBwecIRTNqpQ0fFTqmr1-7MnSI?usp=sharing)
47e124b603e8a46f27f7b3eee38ae55c
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'landscape']
false
DreamBooth model for the stupa concept trained by Someman on the Someman/boudhastupa dataset. This is a Stable Diffusion model fine-tuned on the concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of boudhanath stupa** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
2b360235c820f80b93b06bd4cb6b381a
cc-by-4.0
['generated_from_trainer']
false
bert-base-cased-squad2-coffee20230113 This model is a fine-tuned version of [deepset/bert-base-cased-squad2](https://huggingface.co/deepset/bert-base-cased-squad2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.3859
45905d8a96318d3e062ddbc7306b9002
cc-by-4.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 88 | 2.0785 | | 2.2589 | 2.0 | 176 | 1.9542 | | 1.4038 | 3.0 | 264 | 1.7714 | | 0.9533 | 4.0 | 352 | 2.2673 | | 0.5394 | 5.0 | 440 | 2.5496 | | 0.4353 | 6.0 | 528 | 3.2302 | | 0.4201 | 7.0 | 616 | 3.7247 | | 0.2477 | 8.0 | 704 | 3.4248 | | 0.2477 | 9.0 | 792 | 3.8344 | | 0.1633 | 10.0 | 880 | 4.1582 | | 0.0979 | 11.0 | 968 | 3.8764 | | 0.0621 | 12.0 | 1056 | 4.1686 | | 0.0242 | 13.0 | 1144 | 4.2762 | | 0.0091 | 14.0 | 1232 | 4.4176 | | 0.0061 | 15.0 | 1320 | 4.3859 |
4df6cade815d2752b4f2d6fd52f88cf9
apache-2.0
['multiberts', 'multiberts-seed_3', 'multiberts-seed_3-step_2000k']
false
MultiBERTs, Intermediate Checkpoint - Seed 3, Step 2000k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model
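Since MultiBERTs exists to separate seed-specific findings from procedure-level ones, a typical first step is summarizing a metric across seeds. A toy sketch with made-up scores (NOT real MultiBERTs results):

```python
import statistics

# Hypothetical per-seed scores for one downstream task (illustrative only).
seed_scores = {0: 0.881, 1: 0.874, 2: 0.879, 3: 0.883, 4: 0.876}

# Mean and spread across seeds distinguish procedure-level behavior
# from the quirks of any single pre-training run.
mean = statistics.mean(seed_scores.values())
stdev = statistics.stdev(seed_scores.values())
print(f"mean={mean:.4f} stdev={stdev:.4f}")
```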
ae3c14cd85fc991487b12ccf3d13db28
apache-2.0
['multiberts', 'multiberts-seed_3', 'multiberts-seed_3-step_2000k']
false
How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on TensorFlow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_3-step_2000k') model = TFBertModel.from_pretrained("google/multiberts-seed_3-step_2000k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_3-step_2000k') model = BertModel.from_pretrained("google/multiberts-seed_3-step_2000k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ```
a1803a30e7a8e154acd1e72066943a7b
gpl-3.0
['audio', 'music']
false
This model encodes audio files into vectors of 100 dimensions. It was trained on 240,000 Spotify playlists and on 30 second samples of over 4 million songs. The details can be found [here](https://github.com/teticio/Deej-AI). To encode an audio first install the package with ``` pip install audiodiffusion ``` and then run ```python from audiodiffusion.audio_encoder import AudioEncoder audio_encoder = AudioEncoder.from_pretrained("teticio/audio-encoder") audio_encoder.encode(<list of audio files>) ```
5a4c3f4092735b860afcb277e1e71e2b
apache-2.0
['generated_from_trainer']
false
distilled-mt5-small-0.0-0.5 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset. It achieves the following results on the evaluation set: - Loss: 2.8994 - Bleu: 7.5838 - Gen Len: 45.058
5ff66ac0612d1b8be0e8ef212c2dd4b8
mit
['torch']
false
How to use Here is how to use this model in PyTorch: ```python >>> from transformers import EncoderDecoderModel, XLMRobertaTokenizer >>> >>> model_id = "rmihaylov/roberta2roberta-shared-nmt-bg" >>> model = EncoderDecoderModel.from_pretrained(model_id) >>> model.encoder.pooler = None >>> tokenizer = XLMRobertaTokenizer.from_pretrained(model_id) >>> >>> text = """ Others were photographed ransacking the building, smiling while posing with congressional items such as House Speaker Nancy Pelosi's lectern or at her staffer's desk, or publicly bragged about the crowd's violent and destructive joyride. """ >>> >>> inputs = tokenizer.encode_plus(text, max_length=100, return_tensors='pt', truncation=True) >>> >>> translation = model.generate(**inputs, >>> max_length=100, >>> num_beams=4, >>> do_sample=True, >>> num_return_sequences=1, >>> top_p=0.95, >>> decoder_start_token_id=tokenizer.bos_token_id) >>> >>> print([tokenizer.decode(g.tolist(), skip_special_tokens=True) for g in translation]) ['Други бяха заснети да бягат из сградата, усмихвайки се, докато се представят с конгресни предмети, като например лекцията на председателя на парламента Нанси Пелози или на бюрото на нейния служител, или публично се хвалят за насилието и разрушителната радост на тълпата.'] ```
3accb120bf654ddaabc8055a9ce6a585
mit
['torch']
false
How to use Here is how to use this model in PyTorch: ```python >>> from transformers import PegasusForConditionalGeneration, AutoTokenizer >>> >>> model_id = "rmihaylov/pegasus-base-cnn-dailymail-bg" >>> model = PegasusForConditionalGeneration.from_pretrained(model_id) >>> tokenizer = AutoTokenizer.from_pretrained(model_id) >>> >>> text = """Лукашенко поиска още полицията "да защити работническите колективи и организации и медии от заплахите на улицата", а който от държавните медии протестира, изобщо да не се връща на работа. На граничните служби бе наредено да засилят охраната на цялата граница, "за да не се допускат в Беларус от други държави бойци, оръжие, боеприпаси, пари за финансиране на безредиците, защото виждаме, че такива пари пристигат". Министерството на отбраната трябва да следи "движението на войски на НАТО на територията на Полша и Литва, тяхното направление и замисли, които в момента виждаме - и някои от тях ни карат да се замислим - и да не се притеснява да изкарва нашите въоръжени сили и техника в направлението на тяхното придвижване". Лукашенко изрично посочи събитията в град Гродно, "защото там има по-голямо желание за дестабилизация на обстановката, отколкото в Минск". Гродно стана вчера първият по-голям град, в който властите се разбраха с протестиращите да протестират на определени места в центъра на града. Той нарече опозицията "черносотници", тласкащи страната към пропаст и унищожение, както и към сблъсък с "исторически братския руски народ". 
Медиите трябва специално да се активизират срещу това, заръча Лукашенко.""" >>> >>> batch = tokenizer( >>> text, >>> truncation=True, >>> padding="longest", >>> return_tensors="pt", >>> return_token_type_ids=False) >>> >>> inputs = { >>> 'max_length': 150, >>> 'min_length': 10, >>> 'do_sample': False, >>> 'temperature': 1.0, >>> 'top_k': 50, >>> 'top_p': 1.0, >>> 'repetition_penalty': 1.0, >>> 'no_repeat_ngram_size': 0, >>> 'use_cache': True, >>> 'num_beams': 2, >>> 'length_penalty': 1.0, >>> 'num_return_sequences': 1, >>> 'early_stopping': False} >>> >>> batch.update(inputs) >>> >>> summary = model.generate(**batch) >>> >>> tgt_text = tokenizer.batch_decode(summary, skip_special_tokens=True) >>> print(tgt_text) ['Лукашенко изрично посочи събитията в Гродно, "защото там има по-голямо желание за дестабилизация на обстановката, отколкото в Минск" Той нарече опозицията "черносотници", тласкащи страната към пропаст и унищожение, както и сблъсък с "исторически братския руски народ"'] ```
107a2b605d610ffaf1473bbbc4659a3a
mit
['generated_from_trainer']
false
kobart_4_5.6e-5_datav2_min30_lp5.0_temperature1.0 This model is a fine-tuned version of [gogamza/kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.9891 - Rouge1: 35.4597 - Rouge2: 12.0824 - Rougel: 23.0161 - Bleu1: 29.793 - Bleu2: 16.882 - Bleu3: 9.6468 - Bleu4: 5.3654 - Gen Len: 50.6014
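The Bleu1–Bleu4 columns above are n-gram overlap scores; as a simplified sketch, assuming BLEU-1 here denotes clipped unigram precision with a brevity penalty (the exact evaluation script may differ):

```python
import math
from collections import Counter

def bleu1(reference: str, candidate: str) -> float:
    """Clipped unigram precision times the BLEU brevity penalty."""
    ref, cand = reference.split(), candidate.split()
    ref_counts, cand_counts = Counter(ref), Counter(cand)
    # Each candidate word counts at most as often as it appears in the reference.
    clipped = sum(min(n, ref_counts[w]) for w, n in cand_counts.items())
    precision = clipped / len(cand)
    # Penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

print(bleu1("the cat sat on the mat", "the cat sat on the mat"))  # 1.0
```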
2477f6399900a357a2175bbbff8a3e9d
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 4 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0
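The `linear` scheduler with `lr_scheduler_warmup_ratio: 0.1` ramps the learning rate up over the first 10% of steps and then decays it linearly to zero. A minimal sketch of that shape (mirroring, not reproducing, `transformers`' scheduler):

```python
def linear_schedule_lr(step, total_steps, peak_lr=5.6e-5, warmup_ratio=0.1):
    """Linear warmup to peak_lr, then linear decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    return peak_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

total = 1000
print(linear_schedule_lr(0, total))     # 0.0 at the start
print(linear_schedule_lr(100, total))   # peak_lr at the end of warmup
print(linear_schedule_lr(1000, total))  # 0.0 at the end of training
```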
6e22a32ba6c049a45550a885560f288e
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Bleu1 | Bleu2 | Bleu3 | Bleu4 | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:------:|:-------:| | 2.3968 | 0.47 | 5000 | 2.9096 | 32.7469 | 10.9679 | 21.4954 | 27.0594 | 15.1133 | 8.4503 | 4.564 | 48.5501 | | 2.2338 | 0.94 | 10000 | 2.8002 | 33.2148 | 11.5121 | 22.7066 | 26.4886 | 15.0125 | 8.5792 | 4.8523 | 41.1049 | | 1.9652 | 1.42 | 15000 | 2.7699 | 34.4269 | 11.8551 | 22.8478 | 28.2628 | 16.0909 | 9.0427 | 4.9254 | 46.9744 | | 2.001 | 1.89 | 20000 | 2.7201 | 34.157 | 11.8683 | 22.6775 | 28.3593 | 16.1361 | 9.221 | 4.8616 | 46.979 | | 1.6433 | 2.36 | 25000 | 2.7901 | 33.6354 | 11.5761 | 22.6878 | 27.6475 | 15.6571 | 8.8372 | 4.8672 | 43.9953 | | 1.6204 | 2.83 | 30000 | 2.7724 | 34.9611 | 12.1606 | 23.0246 | 29.1014 | 16.6689 | 9.3661 | 5.1916 | 48.8811 | | 1.2955 | 3.3 | 35000 | 2.8970 | 35.896 | 12.7037 | 23.3781 | 29.9701 | 17.3963 | 10.2978 | 5.9339 | 49.5921 | | 1.3501 | 3.78 | 40000 | 2.8854 | 35.2981 | 12.1133 | 23.1845 | 29.483 | 16.7795 | 9.4124 | 5.2042 | 48.5897 | | 1.0865 | 4.25 | 45000 | 2.9912 | 35.581 | 12.5145 | 23.2262 | 29.9364 | 17.2064 | 10.0427 | 5.62 | 48.31 | | 1.052 | 4.72 | 50000 | 2.9891 | 35.4597 | 12.0824 | 23.0161 | 29.793 | 16.882 | 9.6468 | 5.3654 | 50.6014 |
59fdcef48c83b7ee85d8c52f2cc4590a
creativeml-openrail-m
['coreml', 'stable-diffusion', 'text-to-image']
false
Epic Diffusion: Source(s): [Hugging Face](https://huggingface.co/johnslegers/epic-diffusion) - [CivitAI](https://civitai.com/models/3855/epic-diffusion) Why Epic Diffusion Epîc Diffusion is a general-purpose model based on Stable Diffusion 1.x intended to replace the official SD releases as your default model. It is focused on providing high-quality output in a wide range of different styles, with support for NSFW content. Epîc Diffusion 1.0 is a heavily calibrated merge of SD 1.4, SD 1.5, Analog Diffusion, Wavy Diffusion, Openjourney Diffusion, Samdoesarts Ultramerge, postapocalypse, Elldreth's Dream, Protogen V2.2, Inkpunk Diffusion, Arcane Diffusion & Van Gogh Diffusion, blended and reblended multiple times until I got the quality & consistency I was looking for...
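A "heavily calibrated merge" of this kind is, at its core, a weighted average of corresponding checkpoint tensors. A toy sketch of that idea with plain dicts; the actual recipe, ratios, and tooling used for Epîc Diffusion are not published here:

```python
def weighted_merge(state_a, state_b, alpha=0.5):
    """Blend two 'state dicts' elementwise: alpha * A + (1 - alpha) * B."""
    assert state_a.keys() == state_b.keys(), "checkpoints must share parameter names"
    return {name: [alpha * a + (1 - alpha) * b
                   for a, b in zip(state_a[name], state_b[name])]
            for name in state_a}

# Two tiny fake "checkpoints" standing in for full model state dicts.
a = {"layer.weight": [1.0, 2.0]}
b = {"layer.weight": [3.0, 6.0]}
print(weighted_merge(a, b, alpha=0.25))  # 25% of A, 75% of B
```

Re-merging the result against further checkpoints, at varying `alpha`, is what "blended and reblended multiple times" amounts to.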
d825adc245724214697cc54b48bfcd20
afl-3.0
[]
false
This model is highly accurate at reranking products for a given query, an approach intuitively inspired by information retrieval techniques. In 2019, Nils Reimers and Iryna Gurevych introduced a new transformer model called Sentence-BERT, described in the paper *Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks* (https://doi.org/10.48550/arxiv.1908.10084). Sentence-BERT modifies BERT by adding a pooling operation to the output of the BERT model, so that it produces a fixed-size sentence embedding suitable for computing cosine similarity and the like. To obtain meaningful sentence embeddings in a vector space where similar or paired sentences lie close together, the authors trained a triplet network on top of BERT, with the architecture shown in the figure below. ![1.png](1.png)
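To make the pooling operation concrete, here is a minimal pure-Python sketch of mean pooling followed by cosine similarity (illustrative only, not the Sentence-BERT implementation):

```python
import math

def mean_pool(token_vectors):
    """Average a list of per-token embeddings into one fixed-size sentence embedding."""
    dim = len(token_vectors[0])
    return [sum(v[i] for v in token_vectors) / len(token_vectors) for i in range(dim)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

u = mean_pool([[1.0, 0.0], [3.0, 0.0]])  # sentence A embedding
v = mean_pool([[0.0, 2.0], [0.0, 4.0]])  # sentence B embedding
print(cosine(u, u))  # identical sentences: 1.0
print(cosine(u, v))  # orthogonal embeddings: 0.0
```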
0c7cd2e1dff8e0b5100f173af153853a
afl-3.0
[]
false
Download and Use ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("LiYuan/Amazon-Cup-Cross-Encoder-Regression") model = AutoModelForSequenceClassification.from_pretrained("LiYuan/Amazon-Cup-Cross-Encoder-Regression") ``` As we can observe from the figure above, a pooling layer is added on top of each BERT model to obtain the sentence embeddings $u$ and $v$. Finally, the cosine similarity between $u$ and $v$ can be computed and compared with the true score, and the mean squared error loss (the objective function) is backpropagated through the BERT network to update the weights. In our Amazon case, the query is sentence A and the concatenated product attributes are sentence B. We stratified-split the merged set into **571,223** rows for training, **500** rows for validation, and **3,000** rows for test, and limited the output score to between 0 and 1. The following scores represent the degree of relevance between the query and the product attributes, following the Amazon KDD Cup website; they can be adjusted to improve model performance. - 1: exact - 0.1: substitute - 0.01: complement - 0: irrelevant For this regression model, we used the Pearson correlation coefficient and Spearman's rank correlation coefficient to measure performance: the higher the correlation, the better the model. The validation Pearson is **0.5670** and the validation Spearman is **0.5662**, which is a reasonable result. On the test set we obtained **0.5321** for Pearson and **0.5276** for Spearman. These test results are similar to those on the validation set, suggesting that the model generalizes well.
Finally, once we have this fine-tuned Cross-Encoder regression model, given a new query and its matched product list, we can feed them into the model and rerank by the output score, improving the customer's online shopping experience.
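Pearson and Spearman measure how well the predicted scores track the graded relevance labels (1 / 0.1 / 0.01 / 0). A small self-contained sketch of Spearman's rank correlation, ignoring ties for simplicity (the reported numbers were presumably computed with standard tooling such as `scipy`):

```python
def spearman(xs, ys):
    """Spearman's rho via the rank-difference formula (assumes no ties)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

labels = [1.0, 0.1, 0.01, 0.0]          # exact / substitute / complement / irrelevant
preds = [0.9, 0.3, 0.2, 0.05]           # hypothetical model scores
print(spearman(labels, preds))          # same ordering -> 1.0
```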
caf331bce3240732225d2faef8a09f01
apache-2.0
['automatic-speech-recognition', 'it']
false
exp_w2v2t_it_hubert_s722 Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
921850125eb25f760976ea393633fa31
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'robust-speech-event']
false
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the KRESNIK/ZEROTH_KOREAN - CLEAN dataset. It achieves the following results on the evaluation set: - Loss: 0.0639 - Wer: 0.0449
53c714c7985258635f2c9a1758ca1256
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'robust-speech-event']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 4.603 | 0.72 | 500 | 4.6572 | 0.9985 | | 2.6314 | 1.44 | 1000 | 2.0424 | 0.9256 | | 2.2708 | 2.16 | 1500 | 0.9889 | 0.6989 | | 2.1769 | 2.88 | 2000 | 0.8366 | 0.6312 | | 2.1142 | 3.6 | 2500 | 0.7555 | 0.5998 | | 2.0084 | 4.32 | 3000 | 0.7144 | 0.6003 | | 1.9272 | 5.04 | 3500 | 0.6311 | 0.5461 | | 1.8687 | 5.75 | 4000 | 0.6252 | 0.5430 | | 1.8186 | 6.47 | 4500 | 0.5491 | 0.4988 | | 1.7364 | 7.19 | 5000 | 0.5463 | 0.4959 | | 1.6809 | 7.91 | 5500 | 0.4724 | 0.4484 | | 1.641 | 8.63 | 6000 | 0.4679 | 0.4461 | | 1.572 | 9.35 | 6500 | 0.4387 | 0.4236 | | 1.5256 | 10.07 | 7000 | 0.3970 | 0.4003 | | 1.5044 | 10.79 | 7500 | 0.3690 | 0.3893 | | 1.4563 | 11.51 | 8000 | 0.3752 | 0.3875 | | 1.394 | 12.23 | 8500 | 0.3386 | 0.3567 | | 1.3641 | 12.95 | 9000 | 0.3290 | 0.3467 | | 1.2878 | 13.67 | 9500 | 0.2893 | 0.3135 | | 1.2602 | 14.39 | 10000 | 0.2723 | 0.3029 | | 1.2302 | 15.11 | 10500 | 0.2603 | 0.2989 | | 1.1865 | 15.83 | 11000 | 0.2440 | 0.2794 | | 1.1491 | 16.55 | 11500 | 0.2500 | 0.2788 | | 1.093 | 17.27 | 12000 | 0.2279 | 0.2629 | | 1.0367 | 17.98 | 12500 | 0.2076 | 0.2443 | | 0.9954 | 18.7 | 13000 | 0.1844 | 0.2259 | | 0.99 | 19.42 | 13500 | 0.1794 | 0.2179 | | 0.9385 | 20.14 | 14000 | 0.1765 | 0.2122 | | 0.8952 | 20.86 | 14500 | 0.1706 | 0.1974 | | 0.8841 | 21.58 | 15000 | 0.1791 | 0.1969 | | 0.847 | 22.3 | 15500 | 0.1780 | 0.2060 | | 0.8669 | 23.02 | 16000 | 0.1608 | 0.1862 | | 0.8066 | 23.74 | 16500 | 0.1447 | 0.1626 | | 0.7908 | 24.46 | 17000 | 0.1457 | 0.1655 | | 0.7459 | 25.18 | 17500 | 0.1350 | 0.1445 | | 0.7218 | 25.9 | 18000 | 0.1276 | 0.1421 | | 0.703 | 26.62 | 18500 | 0.1177 | 0.1302 | | 0.685 | 27.34 | 19000 | 0.1147 | 0.1305 | | 0.6811 | 28.06 | 19500 | 0.1128 | 0.1244 | | 0.6444 | 28.78 | 20000 | 0.1120 | 0.1213 | | 0.6323 | 29.5 | 20500 | 0.1137 | 0.1166 | | 0.5998 | 30.22 | 21000 | 0.1051 | 0.1107 | | 0.5706 | 30.93 | 
21500 | 0.1035 | 0.1037 | | 0.5555 | 31.65 | 22000 | 0.1031 | 0.0927 | | 0.5389 | 32.37 | 22500 | 0.0997 | 0.0900 | | 0.5201 | 33.09 | 23000 | 0.0920 | 0.0912 | | 0.5146 | 33.81 | 23500 | 0.0929 | 0.0947 | | 0.515 | 34.53 | 24000 | 0.1000 | 0.0953 | | 0.4743 | 35.25 | 24500 | 0.0922 | 0.0892 | | 0.4707 | 35.97 | 25000 | 0.0852 | 0.0808 | | 0.4456 | 36.69 | 25500 | 0.0855 | 0.0779 | | 0.443 | 37.41 | 26000 | 0.0843 | 0.0738 | | 0.4388 | 38.13 | 26500 | 0.0816 | 0.0699 | | 0.4162 | 38.85 | 27000 | 0.0752 | 0.0645 | | 0.3979 | 39.57 | 27500 | 0.0761 | 0.0621 | | 0.3889 | 40.29 | 28000 | 0.0771 | 0.0625 | | 0.3923 | 41.01 | 28500 | 0.0755 | 0.0598 | | 0.3693 | 41.73 | 29000 | 0.0730 | 0.0578 | | 0.3642 | 42.45 | 29500 | 0.0739 | 0.0598 | | 0.3532 | 43.17 | 30000 | 0.0712 | 0.0553 | | 0.3513 | 43.88 | 30500 | 0.0762 | 0.0516 | | 0.3349 | 44.6 | 31000 | 0.0731 | 0.0504 | | 0.3305 | 45.32 | 31500 | 0.0725 | 0.0507 | | 0.3285 | 46.04 | 32000 | 0.0709 | 0.0489 | | 0.3179 | 46.76 | 32500 | 0.0667 | 0.0467 | | 0.3158 | 47.48 | 33000 | 0.0653 | 0.0494 | | 0.3033 | 48.2 | 33500 | 0.0638 | 0.0456 | | 0.3023 | 48.92 | 34000 | 0.0644 | 0.0464 | | 0.2975 | 49.64 | 34500 | 0.0643 | 0.0455 |
43804daf83b54a1d00a04807e9cd37bc
apache-2.0
['translation']
false
opus-mt-es-cs * source languages: es * target languages: cs * OPUS readme: [es-cs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-cs/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-cs/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-cs/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-cs/opus-2020-01-16.eval.txt)
b83911c98602ebc3a04d51973a998484
apache-2.0
['generated_from_trainer']
false
t5-small-finetuned-wikihow_3epoch_v2 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wikihow dataset. It achieves the following results on the evaluation set: - Loss: 2.2758 - Rouge1: 27.48 - Rouge2: 10.7621 - Rougel: 23.4136 - Rougelsum: 26.7923 - Gen Len: 18.5424
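The Rouge1 score above measures unigram overlap between generated and reference summaries; a hedged minimal sketch of the ROUGE-1 F-measure (not the stemmed `rouge_score` implementation):

```python
from collections import Counter

def rouge1_f(reference: str, candidate: str) -> float:
    """ROUGE-1 F-measure: harmonic mean of unigram recall and precision."""
    ref, cand = Counter(reference.split()), Counter(candidate.split())
    overlap = sum((ref & cand).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    recall = overlap / sum(ref.values())
    precision = overlap / sum(cand.values())
    return 2 * recall * precision / (recall + precision)

print(rouge1_f("fold the paper in half", "fold the paper"))  # high overlap
```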
02328779394a4d29c58639e3dda93d98
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 2.8423 | 0.13 | 5000 | 2.5715 | 25.2685 | 8.6964 | 21.229 | 24.5773 | 18.4479 | | 2.7345 | 0.25 | 10000 | 2.5236 | 24.982 | 8.7823 | 21.1609 | 24.3066 | 18.3631 | | 2.6811 | 0.38 | 15000 | 2.4911 | 25.7585 | 9.3372 | 21.8388 | 25.1052 | 18.3997 | | 2.6611 | 0.51 | 20000 | 2.4510 | 26.022 | 9.4708 | 22.0899 | 25.3236 | 18.5472 | | 2.6133 | 0.64 | 25000 | 2.4272 | 26.3481 | 9.6769 | 22.4484 | 25.7046 | 18.3863 | | 2.6083 | 0.76 | 30000 | 2.4108 | 26.4131 | 9.6643 | 22.4021 | 25.6958 | 18.5585 | | 2.5842 | 0.89 | 35000 | 2.3866 | 26.2852 | 9.7505 | 22.4525 | 25.5908 | 18.5485 | | 2.5554 | 1.02 | 40000 | 2.3816 | 26.3018 | 9.7218 | 22.3673 | 25.6515 | 18.4912 | | 2.4895 | 1.14 | 45000 | 2.3730 | 26.6439 | 9.9665 | 22.6593 | 25.9521 | 18.5635 | | 2.4781 | 1.27 | 50000 | 2.3541 | 26.8488 | 10.0364 | 22.8202 | 26.1598 | 18.4254 | | 2.4821 | 1.4 | 55000 | 2.3440 | 26.9511 | 10.2079 | 23.0133 | 26.2821 | 18.5712 | | 2.4593 | 1.53 | 60000 | 2.3370 | 26.945 | 10.3123 | 22.9245 | 26.2493 | 18.5978 | | 2.4521 | 1.65 | 65000 | 2.3309 | 26.9652 | 10.314 | 22.9657 | 26.298 | 18.4837 | | 2.4523 | 1.78 | 70000 | 2.3249 | 27.0548 | 10.4204 | 23.1286 | 26.379 | 18.4717 | | 2.4563 | 1.91 | 75000 | 2.3079 | 27.4563 | 10.6452 | 23.3985 | 26.7812 | 18.5642 | | 2.4229 | 2.03 | 80000 | 2.3115 | 27.0538 | 10.44 | 22.9957 | 26.349 | 18.5914 | | 2.3694 | 2.16 | 85000 | 2.3017 | 27.332 | 10.6556 | 23.3135 | 26.629 | 18.459 | | 2.3749 | 2.29 | 90000 | 2.2941 | 27.3294 | 10.5967 | 23.2039 | 26.6411 | 18.5179 | | 2.3779 | 2.42 | 95000 | 2.2891 | 27.3725 | 10.6539 | 23.3455 | 26.707 | 18.5367 | | 2.3638 | 2.54 | 100000 | 2.2895 | 27.3487 | 10.6738 | 23.2894 | 26.681 | 18.6128 | | 2.3549 | 2.67 | 105000 | 2.2833 | 27.408 | 10.6903 | 23.3575 | 26.7137 | 18.6035 | | 2.3652 | 2.8 | 
110000 | 2.2788 | 27.561 | 10.8202 | 23.4672 | 26.8584 | 18.5565 | | 2.3553 | 2.93 | 115000 | 2.2758 | 27.48 | 10.7621 | 23.4136 | 26.7923 | 18.5424 |
391ca9fcf950b57c7b4b9165555b54c1
apache-2.0
['generated_from_trainer']
false
olm-bert-tiny-december-2022-target-glue-qqp This model is a fine-tuned version of [muhtasham/olm-bert-tiny-december-2022](https://huggingface.co/muhtasham/olm-bert-tiny-december-2022) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5217 - Accuracy: 0.7433 - F1: 0.6048
495a0515c52e8d4f8ef615cd46d488ca
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.6283 | 0.04 | 500 | 0.5955 | 0.6795 | 0.5186 | | 0.5875 | 0.09 | 1000 | 0.5763 | 0.6972 | 0.5596 | | 0.5791 | 0.13 | 1500 | 0.5690 | 0.6975 | 0.6011 | | 0.5666 | 0.18 | 2000 | 0.5536 | 0.7156 | 0.5520 | | 0.5568 | 0.22 | 2500 | 0.5447 | 0.7230 | 0.5709 | | 0.5489 | 0.26 | 3000 | 0.5386 | 0.7281 | 0.5665 | | 0.5465 | 0.31 | 3500 | 0.5305 | 0.7329 | 0.5917 | | 0.5384 | 0.35 | 4000 | 0.5262 | 0.7357 | 0.6231 | | 0.5422 | 0.4 | 4500 | 0.5207 | 0.7409 | 0.6200 | | 0.5299 | 0.44 | 5000 | 0.5217 | 0.7433 | 0.6048 |
9d36768531ad96e2ef9815a7601e8b69
mit
['bart']
false
Model Description [**BART**](https://arxiv.org/pdf/1910.13461.pdf) (**B**idirectional and **A**uto-**R**egressive **T**ransformers) is trained as an `autoencoder`: noise is added to part of the input text, and the model learns to reconstruct the original. Korean BART (**KoBART**) is a Korean `encoder-decoder` language model trained on more than **40GB** of Korean text using the `Text Infilling` noise function from the paper. We release the resulting `KoBART-base`. - **Developed by:** More information needed - **Shared by [Optional]:** Heewon(Haven) Jeon - **Model type:** Feature Extraction - **Language(s) (NLP):** Korean - **License:** MIT - **Parent Model:** BART - **Resources for more information:** - [GitHub Repo](https://github.com/haven-jeon/KoBART) - [Model Demo Space](https://huggingface.co/spaces/gogamza/kobart-summarization)
43898082e8d5eb682223da09fcf85929
mit
['bart']
false
Bias, Risks, and Limitations Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
d5e927b10c2aa10428a6e82980bb1f4c
mit
['bart']
false
| Data | # of Sentences | |-------|---------------:| | Korean Wiki | 5M | | Other corpus | 0.27B | Beyond Korean Wikipedia, a variety of data was used for training, including news, books, [모두의 말뭉치 v1.0 (dialogue, news, ...)](https://corpus.korean.go.kr/), and the [청와대 국민청원](https://github.com/akngs/petitions) (Blue House national petitions). The `vocab` size is 30,000, and emoticons and emoji frequently used in conversation, such as the ones below, were added to improve recognition of those tokens. > 😀, 😁, 😆, 😅, 🤣, .. , `:-)`, `:)`, `-)`, `(-:`...
9b271b1630d1622034e48640f12f94bf
mit
['bart']
false
| Model | # of params | Type | # of layers | # of heads | ffn_dim | hidden_dims | |--------------|:----:|:-------:|--------:|--------:|--------:|--------------:| | `KoBART-base` | 124M | Encoder | 6 | 16 | 3072 | 768 | | | | Decoder | 6 | 16 | 3072 | 768 |
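The 124M parameter count can be roughly reproduced from the table (a back-of-the-envelope estimate assuming a 30,000-token vocab and ignoring LayerNorm and position embeddings, so it lands slightly under the reported figure):

```python
def attention_params(d):  # q, k, v, out projections with biases
    return 4 * (d * d + d)

def ffn_params(d, ffn):   # two linear layers with biases
    return d * ffn + ffn + ffn * d + d

def layer_params(d, ffn, cross_attention=False):
    p = attention_params(d) + ffn_params(d, ffn)
    if cross_attention:   # decoder layers also attend over the encoder output
        p += attention_params(d)
    return p

d, ffn, layers, vocab = 768, 3072, 6, 30000
embeddings = vocab * d
encoder = layers * layer_params(d, ffn)
decoder = layers * layer_params(d, ffn, cross_attention=True)
total = embeddings + encoder + decoder
print(f"~{total / 1e6:.0f}M parameters (excluding LayerNorm and position embeddings)")
```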
d529e9711364a6f6f75261afed92bf69
mit
['bart']
false
Results NSMC - acc. : 0.901 The model authors also note in the [GitHub Repo](https://github.com/haven-jeon/KoBART): | | [NSMC](https://github.com/e9t/nsmc)(acc) | [KorSTS](https://github.com/kakaobrain/KorNLUDatasets)(spearman) | [Question Pair](https://github.com/aisolab/nlp_classification/tree/master/BERT_pairwise_text_classification/qpair)(acc) | |---|---|---|---| | **KoBART-base** | 90.24 | 81.66 | 94.34 |
13588b6d9367bca2ad086eac9a206f8a
mit
['bart']
false
How to Get Started with the Model Use the code below to get started with the model. <details> <summary> Click to expand </summary> ```python from transformers import PreTrainedTokenizerFast, BartModel tokenizer = PreTrainedTokenizerFast.from_pretrained('gogamza/kobart-base-v2') model = BartModel.from_pretrained('gogamza/kobart-base-v2') ``` </details>
69b66baaf98a0e0de2d3d428944b8138
apache-2.0
['generated_from_trainer']
false
distilbert_sa_GLUE_Experiment_data_aug_wnli_384 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE WNLI dataset. It achieves the following results on the evaluation set: - Loss: 1.7589 - Accuracy: 0.1268
61b0cf7be740c23579d2f9bee2edbcd2
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6429 | 1.0 | 218 | 1.7589 | 0.1268 |
| 0.4693 | 2.0 | 436 | 3.1597 | 0.1127 |
| 0.3905 | 3.0 | 654 | 4.0613 | 0.0704 |
| 0.3365 | 4.0 | 872 | 4.4630 | 0.0986 |
| 0.295 | 5.0 | 1090 | 5.3692 | 0.0845 |
| 0.2593 | 6.0 | 1308 | 5.3990 | 0.0845 |
3b842d26b5d90fa127fcdcd778ac411e
apache-2.0
['automatic-speech-recognition', 'collectivat/tv3_parla', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_8_0', 'projecte-aina/parlament_parla', 'robust-speech-event']
false
wav2vec2-xls-r-300m-ca-lm

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, [tv3_parla](https://huggingface.co/datasets/collectivat/tv3_parla) and [parlament_parla](https://huggingface.co/datasets/projecte-aina/parlament_parla) datasets. It achieves the following results on the evaluation set (across the three datasets, without the LM):
- Loss: 0.2472
- WER: 0.1499
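The WER figure above is the word error rate. As a refresher on how that metric is defined (a minimal hand-rolled sketch, not the evaluation script used for this model; the example sentence is hypothetical), WER is the word-level edit distance between hypothesis and reference, divided by the number of reference words:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

# one substituted word out of six -> WER of 1/6
print(wer("el gat és a la taula", "el gat es a la taula"))
```

A WER of 0.1499 therefore means roughly one word in seven is substituted, inserted, or deleted relative to the reference transcript.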
4804998615b0b0b04305fc63b374198f
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
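The `linear` scheduler with 1000 warmup steps ramps the learning rate from 0 to the peak value, then decays it linearly to 0 at the final step. A minimal sketch of that schedule (the `total_steps` value here is hypothetical; the real one depends on dataset size, batch size, and epochs):

```python
def linear_schedule_lr(step, peak_lr=0.0005, warmup_steps=1000, total_steps=10000):
    """Linear warmup to peak_lr over warmup_steps, then linear decay to 0."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_schedule_lr(500))    # halfway through warmup: half the peak rate
print(linear_schedule_lr(1000))   # end of warmup: the peak rate, 0.0005
```

This mirrors what `transformers.get_linear_schedule_with_warmup` produces for the same settings.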
86407266f2ba34af43a95a3a00e37a62
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5695 | 0.39 | 500 | 0.0591 |
| 0.0606 | 0.77 | 1000 | 0.0588 |
| 0.0575 | 1.16 | 1500 | 0.0588 |
| 0.0551 | 1.55 | 2000 | 0.0586 |
| 0.0549 | 1.93 | 2500 | 0.0581 |
| 0.0487 | 2.32 | 3000 | 0.0597 |
| 0.0478 | 2.71 | 3500 | 0.0594 |
| 0.0463 | 3.1 | 4000 | 0.0624 |
| 0.0404 | 3.48 | 4500 | 0.0625 |
| 0.041 | 3.87 | 5000 | 0.0617 |
| 0.0366 | 4.26 | 5500 | 0.0656 |
| 0.0347 | 4.64 | 6000 | 0.0658 |
618f3406f4954b946abc5c143b754eb0
apache-2.0
['generated_from_trainer']
false
test

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set:
- Loss: 0.2778
- Accuracy: 0.9335
- F1: 0.9337
11f8e6ee1404ff79a58b8921bbab9e21
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3285 | 0.9285 | 0.9291 |
| No log | 2.0 | 500 | 0.2778 | 0.9335 | 0.9337 |
46178b6329e9b3440db8b4793902540f
mit
[]
false
PolicyBERTa-7d

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on data from the [Manifesto Project](https://manifesto-project.wzb.eu/). It was inspired by the model from [Laurer (2020)](https://huggingface.co/MoritzLaurer/policy-distilbert-7d). It achieves the following results on the evaluation set:
- Loss: 0.8549
- Accuracy: 0.7059
- F1-micro: 0.7059
- F1-macro: 0.6683
- F1-weighted: 0.7033
- Precision: 0.7059
- Recall: 0.7059
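The card reports three F1 averages, which differ in how per-class scores are combined. As a reminder of the distinction (a hand-rolled sketch, not the project's evaluation code; `sklearn.metrics.f1_score` with `average="micro"/"macro"/"weighted"` computes the same quantities):

```python
from collections import Counter

def f1_scores(y_true, y_pred):
    """Return (micro, macro, weighted) F1 for single-label classification."""
    labels = sorted(set(y_true) | set(y_pred))
    support = Counter(y_true)
    per_class = {}
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        per_class[c] = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 0.0
    macro = sum(per_class.values()) / len(labels)          # unweighted class mean
    weighted = sum(per_class[c] * support[c] for c in labels) / len(y_true)
    # for single-label classification, micro-F1 equals plain accuracy
    micro = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return micro, macro, weighted

micro, macro, weighted = f1_scores([0, 0, 0, 1, 2], [0, 0, 1, 1, 1])
```

That micro-F1 equals accuracy in this setting is why the card's Accuracy and F1-micro are both 0.7059; the lower F1-macro (0.6683) reflects weaker performance on the rarer classes.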
36816339ad3ee2512c64355292c6e000
mit
[]
false
Model description This model was trained on 115,943 manually annotated sentences to classify text into one of seven political categories: "external relations", "freedom and democracy", "political system", "economy", "welfare and quality of life", "fabric of society" and "social groups".
9dc22d6c03f4ffcfb3870e62b6425508
mit
[]
false
Intended uses & limitations

The model output reproduces the limitations of the dataset in terms of country coverage, time span, domain definitions and potential biases of the annotators - as any supervised machine learning model would. Applying the model to other types of data (other types of texts, countries etc.) will reduce performance.

```python
from transformers import pipeline
import pandas as pd

classifier = pipeline(
    task="text-classification",
    model="niksmer/PolicyBERTa-7d")
```
753aa62920ea3b8d5ce1025949a172e9
mit
[]
false
Training and evaluation data

PolicyBERTa-7d was trained on the English-speaking subset of the [Manifesto Project Dataset (MPDS2021a)](https://manifesto-project.wzb.eu/datasets). The model was trained on 115,943 sentences from 163 political manifestos in 7 English-speaking countries (Australia, Canada, Ireland, New Zealand, South Africa, United Kingdom, United States). The manifestos were published between 1992 and 2020.

| Country | Count manifestos | Count sentences | Time span |
|----------------|------------------|-----------------|--------------------|
| Australia | 18 | 14,887 | 2010-2016 |
| Ireland | 23 | 24,966 | 2007-2016 |
| Canada | 14 | 12,344 | 2004-2008 & 2015 |
| New Zealand | 46 | 35,079 | 1993-2017 |
| South Africa | 29 | 13,334 | 1994-2019 |
| USA | 9 | 13,188 | 1992 & 2004-2020 |
| United Kingdom | 34 | 30,936 | 1997-2019 |

Canadian manifestos between 2004 and 2008 are used as test data.

The Manifesto Project manually annotates individual sentences from political party manifestos in 7 main political domains: 'Economy', 'External Relations', 'Fabric of Society', 'Freedom and Democracy', 'Political System', 'Welfare and Quality of Life' or 'Social Groups' - see the [codebook](https://manifesto-project.wzb.eu/down/papers/handbook_2021_version_5.pdf) for the exact definitions of each domain.
636e5ef480aca2e15a32d083895e5628
mit
[]
false
Train data

The training data was highly imbalanced.

| Label | Description | Count |
|------------|--------------|--------|
| 0 | external relations | 7,640 |
| 1 | freedom and democracy | 5,880 |
| 2 | political system | 11,234 |
| 3 | economy | 29,218 |
| 4 | welfare and quality of life | 37,200 |
| 5 | fabric of society | 13,594 |
| 6 | social groups | 11,177 |

Overall count: 115,943
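Given this imbalance (economy and welfare dominate, freedom and democracy is rarest), one common mitigation is inverse-frequency class weighting in the loss; the card does not state whether it was used here, so the following is only an illustration with the counts from the table, using the "balanced" formula `total / (n_classes * count)` as in scikit-learn's `compute_class_weight`:

```python
counts = {
    "external relations": 7640,
    "freedom and democracy": 5880,
    "political system": 11234,
    "economy": 29218,
    "welfare and quality of life": 37200,
    "fabric of society": 13594,
    "social groups": 11177,
}
total = sum(counts.values())  # 115,943 sentences, matching the table's overall count
# "balanced" weighting: rare classes get weights above 1, frequent ones below 1
weights = {label: total / (len(counts) * n) for label, n in counts.items()}
```

Such weights could then be passed, for example, as the `weight` argument of a cross-entropy loss so that mistakes on rare classes cost more.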
db4ae762cfae63277a1b7986799a4927
mit
[]
false
Validation data

The validation set was created by random sampling.

| Label | Description | Count |
|------------|--------------|--------|
| 0 | external relations | 1,345 |
| 1 | freedom and democracy | 1,043 |
| 2 | political system | 2,038 |
| 3 | economy | 5,140 |
| 4 | welfare and quality of life | 6,554 |
| 5 | fabric of society | 2,384 |
| 6 | social groups | 1,957 |

Overall count: 20,461
be5c21b003eec250543e66a1f0eb2903
mit
[]
false
Test data

The test dataset contains ten Canadian manifestos between 2004 and 2008.

| Label | Description | Count |
|------------|--------------|--------|
| 0 | external relations | 824 |
| 1 | freedom and democracy | 296 |
| 2 | political system | 1,041 |
| 3 | economy | 2,188 |
| 4 | welfare and quality of life | 2,654 |
| 5 | fabric of society | 940 |
| 6 | social groups | 387 |

Overall count: 8,330
7e4efdcca67c39d331f9a26c72cd060e
mit
[]
false
Training hyperparameters

The following hyperparameters were used during training:

```
training_args = TrainingArguments(
    warmup_steps=0,
    weight_decay=0.1,
    learning_rate=1e-05,
    fp16=True,
    evaluation_strategy="epoch",
    num_train_epochs=5,
    per_device_train_batch_size=16,
    overwrite_output_dir=True,
    per_device_eval_batch_size=16,
    save_strategy="no",
    logging_dir='logs',
    logging_strategy='steps',
    logging_steps=10,
    push_to_hub=True,
    hub_strategy="end")
```
de94d971a296d826fd4f0587a1b37fa7