license — stringlengths 2–30
tags — stringlengths 2–513
is_nc — bool (1 class)
readme_section — stringlengths 201–597k
hash — stringlengths 32–32
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2217 - Accuracy: 0.924 - F1: 0.9241
47158314224dc5724e1c6d7b1b915c58
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8136 | 1.0 | 250 | 0.3140 | 0.902 | 0.8998 | | 0.2501 | 2.0 | 500 | 0.2217 | 0.924 | 0.9241 |
53aa2d0dadbc54c52e85a66a9bfc7220
apache-2.0
['translation', 'generated_from_trainer']
false
model_zu-en_updated This model is a fine-tuned version of [Helsinki-NLP/opus-mt-mul-en](https://huggingface.co/Helsinki-NLP/opus-mt-mul-en) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8306 - Bleu: 27.1218
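For context, BLEU scores such as the 27.12 reported above combine clipped n-gram precision with a brevity penalty. The toy function below sketches only the unigram (BLEU-1) component at sentence level; it is an illustration, not the corpus-level sacreBLEU implementation used for evaluation:

```python
import math
from collections import Counter

def unigram_bleu(candidate, reference):
    """Clipped unigram precision times brevity penalty (sentence-level BLEU-1 sketch)."""
    cand, ref = candidate.split(), reference.split()
    overlap = sum((Counter(cand) & Counter(ref)).values())  # clipped word matches
    precision = overlap / len(cand)
    # brevity penalty punishes candidates shorter than the reference
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return 100 * bp * precision

print(unigram_bleu("the cat sat on the mat", "the cat is on the mat"))  # 5/6 words match -> ~83.33
```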
78dd2a34b9744a3e1d77185b2862cbb7
mit
['pytorch', 'deberta', 'deberta-v2', 'question-answering', 'question answering', 'squad']
false
How to use — Install transformers, pytorch, sentencepiece, and Juman++. Running the following code lets the model solve a question-answering task.

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained('ku-nlp/deberta-v2-base-japanese')
model = AutoModelForQuestionAnswering.from_pretrained('Mizuiro-sakura/deberta-v2-base-japanese-finetuned-QAe')

# The snippet is truncated in the source; a typical continuation decodes the best answer span:
inputs = tokenizer(question, context, return_tensors='pt')  # question, context: your input strings
outputs = model(**inputs)
start, end = outputs.start_logits.argmax(), outputs.end_logits.argmax()
print(tokenizer.decode(inputs['input_ids'][0][start:end + 1]))
```
42f1030bec2fc8d7be08ac527804ead8
creativeml-openrail-m
['text-to-image', 'v2.0', 'Embedding']
false
Textual Inversion embedding trained on 768x768 images from 80s box arts of Transformers and GIJoe toys and identical sources. *Install by downloading the embedding, and put it in the **\embeddings** folder.* ![01254-273003803-futuristic big game hunter sitting for a photo next to his large alien creature, proud, feat, wild world, large futuristic rifle.png](https://s3.amazonaws.com/moonup/production/uploads/1670513558616-6364e6c712188d67e653853e.png) ![01253-273003803-tfboxart futuristic big game hunter sitting for a photo next to his large alien creature, proud, feat, wild world, large futuris.png](https://s3.amazonaws.com/moonup/production/uploads/1670513558659-6364e6c712188d67e653853e.png) <table> <tr> <th><img src="https://s3.amazonaws.com/moonup/production/uploads/1670510933114-6364e6c712188d67e653853e.png"></th> <th><img src="https://s3.amazonaws.com/moonup/production/uploads/1670510933101-6364e6c712188d67e653853e.png"></th> <th><img src="https://s3.amazonaws.com/moonup/production/uploads/1670510933119-6364e6c712188d67e653853e.png"></th> </tr> </table> <table> <tr> <th><img src="https://s3.amazonaws.com/moonup/production/uploads/1670512344753-6364e6c712188d67e653853e.png"></th> <th><img src="https://s3.amazonaws.com/moonup/production/uploads/1670510933127-6364e6c712188d67e653853e.png"></th> <th><img src="https://s3.amazonaws.com/moonup/production/uploads/1670510933072-6364e6c712188d67e653853e.png"></th> </tr> </table> <table> <tr> <th><img src="https://s3.amazonaws.com/moonup/production/uploads/1670512585719-6364e6c712188d67e653853e.png"></th> <th><img src="https://s3.amazonaws.com/moonup/production/uploads/1670512585692-6364e6c712188d67e653853e.png"></th> </tr> </table> <table> <tr> <th><img src="https://s3.amazonaws.com/moonup/production/uploads/1670512585843-6364e6c712188d67e653853e.png"></th> <th><img src="https://s3.amazonaws.com/moonup/production/uploads/1670512585826-6364e6c712188d67e653853e.png"></th> <th><img 
src="https://s3.amazonaws.com/moonup/production/uploads/1670513448009-6364e6c712188d67e653853e.png"></th> </tr> </table> <table> <tr> <th><img src="https://s3.amazonaws.com/moonup/production/uploads/1670513239492-6364e6c712188d67e653853e.png"></th> <th><img src="https://s3.amazonaws.com/moonup/production/uploads/1670513239526-6364e6c712188d67e653853e.png"></th> </tr> </table> <table> <tr> <th><img src="https://s3.amazonaws.com/moonup/production/uploads/1670513384195-6364e6c712188d67e653853e.png"></th> <th><img src="https://s3.amazonaws.com/moonup/production/uploads/1670513330528-6364e6c712188d67e653853e.png"></th> <th><img src="https://s3.amazonaws.com/moonup/production/uploads/1670513330573-6364e6c712188d67e653853e.png"></th> </tr> </table> All images rendered in SD v2.1
e1a7374cab27b20c2c76cc075b7c026a
apache-2.0
['deep-narrow']
false
T5-Efficient-BASE-DM256 (Deep-Narrow version) T5-Efficient-BASE-DM256 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper: > We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased > before considering any other forms of uniform scaling across other dimensions. This is largely due to > how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a > tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, > a tall base model might also generally more efficient compared to a large model. We generally find > that, regardless of size, even if absolute performance might increase as we continue to stack layers, > the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 > layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., > params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, > FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to > consider. To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. 
A sequence of word embeddings is therefore processed sequentially by each transformer block.
cb6754c086ad84fc31d7af9b70986966
apache-2.0
['deep-narrow']
false
Details model architecture This model checkpoint - **t5-efficient-base-dm256** - is of model type **Base** with the following variations: - **dm** is **256** It has **74.33** million parameters and thus requires *ca.* **297.32 MB** of memory in full precision (*fp32*) or **148.66 MB** of memory in half precision (*fp16* or *bf16*). A summary of the *original* T5 model architectures can be seen here: | Model | nl (el/dl) | ff | dm | kv | nh |
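The memory figures above follow directly from the parameter count: 4 bytes per parameter in full precision (fp32) and 2 bytes in half precision (fp16/bf16). A quick sanity check:

```python
params = 74.33e6  # 74.33 million parameters, as stated above

fp32_mb = params * 4 / 1e6  # 4 bytes per parameter in fp32
fp16_mb = params * 2 / 1e6  # 2 bytes per parameter in fp16/bf16

print(round(fp32_mb, 2))  # 297.32
print(round(fp16_mb, 2))  # 148.66
```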
ab308973305254161196e427cc45f1f0
mit
[]
false
Base model: [gpt2-large](https://huggingface.co/gpt2-large) Fine-tuned to generate responses on a dataset of [Vaccine public health tweets](https://github.com/TheRensselaerIDEA/generative-response-modeling). For more information about the dataset, task and training, see [our paper](https://arxiv.org/abs/2204.04353). This checkpoint corresponds to the lowest validation perplexity (2.82 at 2 epochs) seen during training. See Training metrics for Tensorboard logs. For input format and usage examples, see our [COVID-19 public health tweet response model](https://huggingface.co/TheRensselaerIDEA/gpt2-large-covid-tweet-response).
afa5428faa9895e15dcb2838b4a0a6ae
apache-2.0
['generated_from_trainer']
false
model_output_en_de This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-de](https://huggingface.co/Helsinki-NLP/opus-mt-en-de) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1298 - Bleu: 33.9121 - Gen Len: 76.8132
0af65a56a06fb68b1d317c769c1c162f
apache-2.0
['generated_from_trainer']
false
BERT-tiny-sst2 This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4422 - Accuracy: 0.8372
65b75d970296b996d4f676dd3fad848b
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.3914 | 1.0 | 4210 | 0.4383 | 0.8211 | | 0.2577 | 2.0 | 8420 | 0.4422 | 0.8372 | | 0.212 | 3.0 | 12630 | 0.5460 | 0.8085 | | 0.1862 | 4.0 | 16840 | 0.5885 | 0.8245 | | 0.1671 | 5.0 | 21050 | 0.7159 | 0.8096 |
a19c2d1102694a61bc029118857b701d
mit
['generated_from_trainer']
false
Klassifizierung-Gewerke This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0398 - F1: 0.9931
c5da901eb8daf6f26fddd17952e0ba59
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.1473 | 1.0 | 726 | 0.0952 | 0.9822 | | 0.0252 | 2.0 | 1452 | 0.0488 | 0.9918 | | 0.028 | 3.0 | 2178 | 0.0398 | 0.9931 |
274821a0328101b96168f85afdc874da
apache-2.0
['translation']
false
opus-mt-sv-fj * source languages: sv * target languages: fj * OPUS readme: [sv-fj](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-fj/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-fj/opus-2020-01-21.zip) * test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-fj/opus-2020-01-21.test.txt) * test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-fj/opus-2020-01-21.eval.txt)
4e680be3f78f35206897e7ae3c43a1c8
apache-2.0
['automatic-speech-recognition', 'es']
false
exp_w2v2t_es_vp-es_s859 Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
fbd63ebdf4a12c7f76e0c29e25429d25
apache-2.0
['generated_from_trainer']
false
t5-small-mse-summarization This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1108 - Rouge1: 43.1145 - Rouge2: 23.2262 - Rougel: 37.218 - Rougelsum: 41.0897 - Bleurt: -0.8051 - Gen Len: 18.549
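The ROUGE scores above measure n-gram overlap between generated and reference summaries. As an illustration only (the real evaluation uses the stemmed, tokenized `rouge_score` package), a toy ROUGE-1 F-score looks like this:

```python
from collections import Counter

def rouge1_f(candidate, reference):
    """Toy ROUGE-1 F-score: unigram overlap between candidate and reference summaries."""
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f("the model summarizes text", "the model writes a summary of text"))
```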
fbea9429d19330bad5d1fe0f6c842cef
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 256 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20
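With `lr_scheduler_type: linear` and no warmup steps, the learning rate decays from its initial value to zero over the total number of training steps. A minimal sketch of that schedule (the 5,340-step total is illustrative, taken as 20 epochs × 267 steps/epoch):

```python
def linear_lr(step, total_steps, base_lr=5e-05):
    """Linear decay from base_lr to 0 over total_steps (no warmup)."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

total = 5340  # illustrative: 20 epochs x 267 steps/epoch
print(linear_lr(0, total))      # 5e-05 at the start of training
print(linear_lr(total, total))  # 0.0 at the final step
```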
aa2375286368ba00dcec6d02ac12f34e
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleurt | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|:-------:| | 1.5207 | 1.0 | 267 | 1.2922 | 38.8738 | 19.1958 | 32.8458 | 36.9993 | -0.9061 | 18.668 | | 1.363 | 2.0 | 534 | 1.2340 | 39.8466 | 20.0452 | 33.9101 | 37.7708 | -0.8925 | 18.657 | | 1.3062 | 3.0 | 801 | 1.2057 | 40.5536 | 20.8249 | 34.5221 | 38.4648 | -0.8625 | 18.602 | | 1.272 | 4.0 | 1068 | 1.1782 | 41.0078 | 21.2186 | 35.0101 | 38.9186 | -0.8595 | 18.602 | | 1.2312 | 5.0 | 1335 | 1.1688 | 41.521 | 21.7934 | 35.704 | 39.4718 | -0.842 | 18.486 | | 1.2052 | 6.0 | 1602 | 1.1557 | 42.1037 | 22.4291 | 36.3554 | 40.1124 | -0.8432 | 18.533 | | 1.1842 | 7.0 | 1869 | 1.1440 | 42.4438 | 22.6456 | 36.5729 | 40.3134 | -0.8288 | 18.553 | | 1.1643 | 8.0 | 2136 | 1.1408 | 42.245 | 22.4859 | 36.3637 | 40.2193 | -0.8284 | 18.622 | | 1.1495 | 9.0 | 2403 | 1.1320 | 42.5362 | 22.5034 | 36.5092 | 40.4552 | -0.8211 | 18.57 | | 1.1368 | 10.0 | 2670 | 1.1301 | 42.5159 | 22.462 | 36.4646 | 40.3968 | -0.819 | 18.538 | | 1.1203 | 11.0 | 2937 | 1.1243 | 42.2803 | 22.5963 | 36.3454 | 40.2987 | -0.8242 | 18.522 | | 1.1116 | 12.0 | 3204 | 1.1197 | 42.8078 | 22.8409 | 36.7344 | 40.8186 | -0.821 | 18.565 | | 1.099 | 13.0 | 3471 | 1.1193 | 42.7423 | 22.9397 | 36.7894 | 40.7298 | -0.8125 | 18.552 | | 1.0976 | 14.0 | 3738 | 1.1176 | 42.9002 | 23.2394 | 37.0215 | 40.9211 | -0.8156 | 18.568 | | 1.0816 | 15.0 | 4005 | 1.1133 | 43.0007 | 23.3093 | 37.2037 | 40.9719 | -0.8059 | 18.519 | | 1.084 | 16.0 | 4272 | 1.1146 | 42.9053 | 23.2391 | 37.0542 | 40.8826 | -0.8104 | 18.533 | | 1.0755 | 17.0 | 4539 | 1.1124 | 43.0429 | 23.2773 | 37.1389 | 41.0755 | -0.8086 | 18.544 | | 1.0748 | 18.0 | 4806 | 1.1121 | 43.2243 | 23.4179 | 37.2039 | 41.143 | -0.8048 | 18.548 | | 1.072 | 19.0 | 5073 | 1.1106 | 43.1776 | 23.3061 | 37.3105 | 41.1392 | -0.8039 | 18.549 | | 1.0671 | 20.0 | 5340 | 1.1108 | 43.1145 | 23.2262 | 37.218 | 41.0897 | -0.8051 | 18.549 |
e2f108e7c0da53b4462b4f5fae8e792f
cc-by-4.0
['question generation']
false
Model Card of `lmqg/mt5-small-dequad-qg` This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) for the question generation task on the [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
97f34663faaec3d03a18acc8f4829a4c
cc-by-4.0
['question generation']
false
model prediction

```python
# The opening of this `lmqg` example is truncated in the source; a typical setup:
from lmqg import TransformersQG
model = TransformersQG(language="de", model="lmqg/mt5-small-dequad-qg")
questions = model.generate_q(list_context="das erste weltweit errichtete Hermann Brehmer 1855 im niederschlesischen ''Görbersdorf'' (heute Sokołowsko, Polen).", list_answer="1855")
```

- With `transformers`

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/mt5-small-dequad-qg")
output = pipe("Empfangs- und Sendeantenne sollen in ihrer Polarisation übereinstimmen, andernfalls <hl> wird die Signalübertragung stark gedämpft. <hl>")
```
7311d1cdfa8c107ff9cbb6b0c0715bf2
cc-by-4.0
['question generation']
false
Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-dequad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_dequad.default.json) | | Score | Type | Dataset | |:-----------|--------:|:--------|:-----------------------------------------------------------------| | BERTScore | 79.9 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) | | Bleu_1 | 10.18 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) | | Bleu_2 | 4.02 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) | | Bleu_3 | 1.6 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) | | Bleu_4 | 0.43 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) | | METEOR | 11.47 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) | | MoverScore | 54.64 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) | | ROUGE_L | 10.08 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) | - ***Metric (Question & Answer Generation, Reference Answer)***: Each question is generated from *the gold answer*. 
[raw metric file](https://huggingface.co/lmqg/mt5-small-dequad-qg/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_dequad.default.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 90.55 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) | | QAAlignedF1Score (MoverScore) | 64.33 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) | | QAAlignedPrecision (BERTScore) | 90.59 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) | | QAAlignedPrecision (MoverScore) | 64.37 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) | | QAAlignedRecall (BERTScore) | 90.51 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) | | QAAlignedRecall (MoverScore) | 64.29 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) | - ***Metric (Question & Answer Generation, Pipeline Approach)***: Each question is generated on the answer generated by [`lmqg/mt5-small-dequad-ae`](https://huggingface.co/lmqg/mt5-small-dequad-ae). 
[raw metric file](https://huggingface.co/lmqg/mt5-small-dequad-qg/raw/main/eval_pipeline/metric.first.answer.paragraph.questions_answers.lmqg_qg_dequad.default.lmqg_mt5-small-dequad-ae.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 81.19 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) | | QAAlignedF1Score (MoverScore) | 54.3 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) | | QAAlignedPrecision (BERTScore) | 80 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) | | QAAlignedPrecision (MoverScore) | 54.04 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) | | QAAlignedRecall (BERTScore) | 82.46 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) | | QAAlignedRecall (MoverScore) | 54.59 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
5025dc1f89a11c7149f98e24c2a9996f
cc-by-4.0
['question generation']
false
Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_dequad - dataset_name: default - input_types: ['paragraph_answer'] - output_types: ['question'] - prefix_types: None - model: google/mt5-small - max_length: 512 - max_length_output: 32 - epoch: 11 - batch: 16 - lr: 0.001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 4 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-small-dequad-qg/raw/main/trainer_config.json).
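The `label_smoothing: 0.15` option above replaces one-hot targets with a softened distribution: the true class receives 1 − ε of the probability mass and the remaining ε is spread evenly over the other classes. A minimal sketch:

```python
def smooth_labels(true_index, num_classes, epsilon=0.15):
    """Return a label-smoothed target distribution for a single example."""
    smoothed = [epsilon / (num_classes - 1)] * num_classes  # spread epsilon over other classes
    smoothed[true_index] = 1.0 - epsilon                    # true class keeps 1 - epsilon
    return smoothed

# With 4 classes and epsilon=0.15: true class gets 0.85, the others 0.05 each
print(smooth_labels(true_index=2, num_classes=4))
```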
623b7862507b9a23029a2bfa91b9620e
apache-2.0
['generated_from_trainer']
false
bert-finetuned-am This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4128 - Precision: 0.0054 - Recall: 0.0166 - F1: 0.0082 - Accuracy: 0.8423
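The F1 reported above is the harmonic mean of precision and recall. Recomputing it from the *rounded* precision and recall shown here gives ≈0.0081 rather than the reported 0.0082, because the card's value was computed from the unrounded metrics; a quick check:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Rounded values from the evaluation results above; yields ~0.0081 (card reports 0.0082
# from unrounded precision/recall)
print(round(f1_score(0.0054, 0.0166), 4))
```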
2863eb155ef3e0cab9ae1fcc5bf71c3a
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 167 | 0.4448 | 0.0 | 0.0 | 0.0 | 0.8573 | | No log | 2.0 | 334 | 0.4078 | 0.0009 | 0.0013 | 0.0011 | 0.8572 | | 0.4231 | 3.0 | 501 | 0.4128 | 0.0054 | 0.0166 | 0.0082 | 0.8423 |
9269fb7e6c8f4221c455c8b326989697
mit
['pytorch', 'diffusers', 'unconditional-audio-generation', 'diffusion-models-class']
false
Usage

```python
from IPython.display import Audio
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("juancopi81/test-audio-diffusion-electronic")
output = pipe()
display(output.images[0])
display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate()))
```
594df111954ee1bdb84e9dd8b7823961
cc-by-4.0
['question generation']
false
Model Card of `research-backup/bart-large-subjqa-vanilla-movies-qg` This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) for the question generation task on the [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) dataset (dataset_name: movies) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
1a12b4ac1538836964be7fc59121c5bc
cc-by-4.0
['question generation']
false
Overview - **Language model:** [facebook/bart-large](https://huggingface.co/facebook/bart-large) - **Language:** en - **Training data:** [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (movies) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
a30cdb289de77ab3814f750b564d7435
cc-by-4.0
['question generation']
false
model prediction

```python
# The opening of this `lmqg` example is truncated in the source; a typical setup:
from lmqg import TransformersQG
model = TransformersQG(language="en", model="research-backup/bart-large-subjqa-vanilla-movies-qg")
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```

- With `transformers`

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "research-backup/bart-large-subjqa-vanilla-movies-qg")
output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
4d83d031af67cc14518c48a9d4228935
cc-by-4.0
['question generation']
false
Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/bart-large-subjqa-vanilla-movies-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.movies.json) | | Score | Type | Dataset | |:-----------|--------:|:-------|:-----------------------------------------------------------------| | BERTScore | 93.42 | movies | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_1 | 24.43 | movies | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_2 | 16.31 | movies | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_3 | 7.65 | movies | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_4 | 4.81 | movies | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | METEOR | 20.01 | movies | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | MoverScore | 61.02 | movies | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | ROUGE_L | 25.77 | movies | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
4256baf9fc42df316ceceb9d32f2fdce
cc-by-4.0
['question generation']
false
Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_subjqa - dataset_name: movies - input_types: ['paragraph_answer'] - output_types: ['question'] - prefix_types: ['qg'] - model: facebook/bart-large - max_length: 512 - max_length_output: 32 - epoch: 3 - batch: 8 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 8 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/bart-large-subjqa-vanilla-movies-qg/raw/main/trainer_config.json).
a1547047ce80af7d45a871f57a7d9086
creativeml-openrail-m
['coreml', 'stable-diffusion', 'text-to-image']
false
8528-diffusion final 8528-diffusion is a latent text-to-image diffusion model, conditioned by fine-tuning on colorful character images. 8528 Diffusion is a fine-tuning of Stable Diffusion v1.4 with AI output images (t2i, and t2i with i2i). I recommend entering "low quality,worst quality," as the Negative prompt and setting Clip skip: 2. <img src=https://i.imgur.com/vCn02tM.jpg > ((ultra-detailed)), ((illustration)), Silver hair, red eyes, beautiful eyes, dress, Queen,Anime style, pretty face, pretty eyes, pretty, girl,High resolution, beautiful girl,octane render, realistic, hyper detailed ray tracing, 8k,classic style,Rococo Negative prompt: (low quality, worst quality:1.4) concept art Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 241379229, Size: 512x768, Model hash: 31cd036c, Clip skip: 2
6409fe6f1c7b20bfe97c2031e98644c6
creativeml-openrail-m
['coreml', 'stable-diffusion', 'text-to-image']
false
8528-diffusion v0.2 8528-diffusion is a latent text-to-image diffusion model, conditioned by fine-tuning on colorful character images. 8528 Diffusion v0.2 & v0.1 are fine-tunings of Waifu Diffusion with AI output images (t2i, and t2i with i2i). <img src=https://i.imgur.com/z4sFctp.png >
c12676419e52b353cce78a4b9e28b8c7
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Medium Vietnamese This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the mozilla-foundation/common_voice_11_0 vi dataset. It achieves the following results on the evaluation set: - Loss: 0.7136 - Wer: 15.4925
8ca08263b83f4048cfea637b269f5fc5
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0001 | 124.0 | 1000 | 0.7136 | 15.4925 | | 0.0001 | 249.0 | 2000 | 0.8532 | 17.0045 | | 0.0 | 374.0 | 3000 | 0.9251 | 19.0972 | | 0.0 | 499.0 | 4000 | 0.9787 | 21.5953 | | 0.0 | 624.0 | 5000 | 0.9921 | 21.4638 |
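The WER values above are word error rates: the word-level edit distance between the model's transcript and the reference, divided by the reference length (×100). A minimal sketch:

```python
def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance / reference length, in percent."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return 100 * dp[-1][-1] / len(ref)

print(wer("xin chao cac ban", "xin chao ban"))  # one deletion out of four words -> 25.0
```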
0465026972255f8225fa59ea27cf8c10
apache-2.0
[]
false
doc2query/msmarco-t5-small-v1 This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)). It can be used for: - **Document expansion**: You generate 20–40 queries for your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. The [BEIR repository](https://github.com/UKPLab/beir) has an example of how to use docT5query with Pyserini. - **Domain-specific training data generation**: The model can be used to generate training data for learning an embedding model. [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) has an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
410b5ce4abd92e9d9566b612ab7e2df5
apache-2.0
[]
false
Usage

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = 'doc2query/msmarco-t5-small-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."

input_ids = tokenizer.encode(text, max_length=320, truncation=True, return_tensors='pt')
outputs = model.generate(
    input_ids=input_ids,
    max_length=64,
    do_sample=True,
    top_p=0.95,
    num_return_sequences=5)

print("Text:")
print(text)
print("\nGenerated Queries:")
for i in range(len(outputs)):
    query = tokenizer.decode(outputs[i], skip_special_tokens=True)
    print(f'{i + 1}: {query}')
```

**Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it.
108215d1c02af2aa39714a64765c5bbe
apache-2.0
[]
false
Training This model fine-tuned [google/t5-v1_1-small](https://huggingface.co/google/t5-v1_1-small) for 31k training steps (about 4 epochs on the 500k training pairs from MS MARCO). For the training script, see `train_script.py` in this repository. The input text was truncated to 320 word pieces; output text was generated up to 64 word pieces. This model was trained on (query, passage) pairs from the [MS MARCO Passage-Ranking dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking).
30eceefcce2ed485c73f30fc25b9a508
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
Low Poly Game Building on Stable Diffusion via Dreambooth This is the Stable Diffusion model fine-tuned on the Low Poly Game Building concept taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **a photo of lowpoly_game_building**
b3a63514180e44b3543558d6e164bb8d
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
Run on [Mirage](https://app.mirageml.com) Run this model and explore text-to-3D on [Mirage](https://app.mirageml.com)! Here is a sample output for this model: ![image 0](https://huggingface.co/MirageML/lowpoly-game-building/resolve/main/output.png)
cc3461c5bae6580c066bd41befb65bca
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
Share your Results and Reach us on [Discord](https://discord.gg/9B2Pu2bEvj)! [![Discord Server](https://discord.com/api/guilds/1022387303022338058/widget.png?style=banner2)](https://discord.gg/9B2Pu2bEvj) [Image Source](https://www.behance.net/guutv)
a751f82bda4f79651b38ae7c94b9215e
creativeml-openrail-m
['text-to-image']
false
Sample pictures of: sdcid (use that on your prompt) ![sdcid 0](https://huggingface.co/AppInApp/0644006a-45f5-4734-9477-7392b2199cec/resolve/main/instance_data/sdcid_%286%29.jpg)![sdcid 1](https://huggingface.co/AppInApp/0644006a-45f5-4734-9477-7392b2199cec/resolve/main/instance_data/sdcid_%287%29.jpg)![sdcid 2](https://huggingface.co/AppInApp/0644006a-45f5-4734-9477-7392b2199cec/resolve/main/instance_data/sdcid_%281%29.jpg)![sdcid 3](https://huggingface.co/AppInApp/0644006a-45f5-4734-9477-7392b2199cec/resolve/main/instance_data/sdcid_%283%29.jpg)![sdcid 4](https://huggingface.co/AppInApp/0644006a-45f5-4734-9477-7392b2199cec/resolve/main/instance_data/sdcid_%284%29.jpg)![sdcid 5](https://huggingface.co/AppInApp/0644006a-45f5-4734-9477-7392b2199cec/resolve/main/instance_data/sdcid_%285%29.jpg)![sdcid 6](https://huggingface.co/AppInApp/0644006a-45f5-4734-9477-7392b2199cec/resolve/main/instance_data/sdcid_%282%29.jpg)
6c34475aa3e6193938a7da5bb7327dd6
apache-2.0
['multiberts', 'multiberts-seed_5']
false
MultiBERTs - Seed 5 MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model
2772b4b476e9ca95b76982b610d00fd1
apache-2.0
['multiberts', 'multiberts-seed_5']
false
How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on TensorFlow:

```python
from transformers import BertTokenizer, TFBertModel

tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_5')
model = TFBertModel.from_pretrained("google/multiberts-seed_5")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

PyTorch version:

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_5')
model = BertModel.from_pretrained("google/multiberts-seed_5")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
30261360b70f2e773275f9ed2108ebc2
apache-2.0
['generated_from_trainer', 'nlu', 'text-classification']
false
bert-base-uncased-amazon-massive-intent

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the [Amazon Massive](https://huggingface.co/datasets/AmazonScience/massive) dataset (en-US subset only). It achieves the following results on the evaluation set:
- Loss: 0.4897
- Accuracy: 0.8903
- F1: 0.8903
c70740bba27ab944f7c485fd24ca0f48
apache-2.0
['generated_from_trainer', 'nlu', 'text-classification']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 2.5862 | 1.0 | 720 | 1.0160 | 0.8096 | 0.8096 |
| 1.0591 | 2.0 | 1440 | 0.6003 | 0.8716 | 0.8716 |
| 0.4151 | 3.0 | 2160 | 0.5113 | 0.8859 | 0.8859 |
| 0.3028 | 4.0 | 2880 | 0.5030 | 0.8883 | 0.8883 |
| 0.1852 | 5.0 | 3600 | 0.4897 | 0.8903 | 0.8903 |
67e79bd428d046c66567e8205c142a7f
cc-by-4.0
['text generation', 'pytorch', 'causal-lm']
false
Model Description Megatron-GPT 5B is a transformer-based language model. GPT refers to a class of transformer decoder-only models similar to GPT-2 and 3 while 5B refers to the total trainable parameter count (5 Billion) [1, 2]. This model was trained with [NeMo Megatron](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/nlp/nemo_megatron/intro.html).
23d4ee8d797da32bc8a199de0485a9bb
cc-by-4.0
['text generation', 'pytorch', 'causal-lm']
false
Step 1: Install NeMo and dependencies

You will need to install NVIDIA Apex and NeMo.

```
git clone https://github.com/ericharper/apex.git
cd apex
git checkout nm_v1.11.0
pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_layer_norm" --global-option="--distributed_adam" --global-option="--deprecated_fused_adam" ./
```

```
pip install nemo_toolkit['nlp']==1.11.0
```

Alternatively, you can use the NeMo Megatron training docker container with all dependencies pre-installed.
8b9e8e4e688f74bc6008e51d58e550a4
cc-by-4.0
['text generation', 'pytorch', 'causal-lm']
false
Step 2: Launch eval server

**Note.** The example below launches a model variant with Tensor Parallelism (TP) of 2 and Pipeline Parallelism (PP) of 1 on two GPUs.

```
git clone https://github.com/NVIDIA/NeMo.git
cd NeMo/examples/nlp/language_modeling
git checkout v1.11.0
python megatron_gpt_eval.py gpt_model_file=nemo_gpt5B_fp16_tp2.nemo server=True tensor_model_parallel_size=2 trainer.devices=2
```
81b5899625e438410cb3a624aeca22a3
cc-by-4.0
['text generation', 'pytorch', 'causal-lm']
false
Step 3: Send prompts to your model!

```python
import json
import requests

port_num = 5555
headers = {"Content-Type": "application/json"}

def request_data(data):
    resp = requests.put('http://localhost:{}/generate'.format(port_num),
                        data=json.dumps(data),
                        headers=headers)
    sentences = resp.json()['sentences']
    return sentences

data = {
    "sentences": ["Tell me an interesting fact about space travel."],
    "tokens_to_generate": 50,
    "temperature": 1.0,
    "add_BOS": True,
    "top_k": 0,
    "top_p": 0.9,
    "greedy": False,
    "all_probs": False,
    "repetition_penalty": 1.2,
    "min_tokens_to_generate": 2,
}

sentences = request_data(data)
print(sentences)
```
7fd32a4e18a79a36f08c668d8d410693
cc-by-4.0
['text generation', 'pytorch', 'causal-lm']
false
Evaluation results

*Zero-shot performance.* Evaluated using [LM Evaluation Test Suite from AI21](https://github.com/AI21Labs/lm-evaluation)

| ARC-Challenge | ARC-Easy | RACE-middle | RACE-high | Winogrande | RTE | BoolQA | HellaSwag | PiQA |
| ------------- | -------- | ----------- | --------- | ---------- | --- | ------ | --------- | ---- |
| 0.3976 | 0.5566 | 0.5007 | 0.4171 | 0.6133 | 0.5812 | 0.6356 | 0.6298 | 0.7492 |
6b46fc6341f89602521278e57712703d
cc-by-4.0
['text generation', 'pytorch', 'causal-lm']
false
References

[1] [Improving Language Understanding by Generative Pre-Training](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf)

[2] [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/pdf/1909.08053.pdf)

[3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)

[4] [The Pile: An 800GB Dataset of Diverse Text for Language Modeling](https://arxiv.org/abs/2101.00027)
30a79298da106c4a84c69a297f463344
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set:
- Loss: 0.2265
- Accuracy: 0.9235
- F1: 0.9237
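Once a checkpoint like this is pushed to the Hub, it can be queried with the `text-classification` pipeline. A minimal sketch follows; since this card does not state its own hub id, the id below is a publicly available stand-in checkpoint fine-tuned on the same emotion dataset:

```python
from transformers import pipeline

# Stand-in hub id: a public DistilBERT fine-tune on the same emotion dataset
# (this card's own repository id is not stated above).
classifier = pipeline(
    "text-classification",
    model="bhadresh-savani/distilbert-base-uncased-emotion",
)

preds = classifier("I'm thrilled the fine-tuning finally converged!")
print(preds)  # a list like [{'label': ..., 'score': ...}]
```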
e8589d2c3091b7ea78f1153bc7804025
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8243 | 1.0 | 250 | 0.3199 | 0.906 | 0.9025 |
| 0.2484 | 2.0 | 500 | 0.2265 | 0.9235 | 0.9237 |
930f18f192922b557751dedbd345765d
apache-2.0
['generated_from_trainer']
false
roberta-base-bne-ROBERTaBECAS

This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the becasv2 dataset. It achieves the following results on the evaluation set:
- Loss: 2.5760
ca2d481f3aac728cecaf94a6261cf845
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 11
- eval_batch_size: 11
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
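With a linear scheduler and no warmup steps, the learning rate decays linearly from its initial value to zero over the run. A minimal sketch of that schedule (the total step count of 30 comes from this model's training-results table; the function below is an illustration, not the library's implementation):

```python
# Linear decay from the listed initial learning rate down to 0 (no warmup).
learning_rate = 5e-05
total_steps = 30  # 6 steps/epoch x 5 epochs, per the training-results table

def lr_at(step: int) -> float:
    """Learning rate at a given optimizer step under linear decay."""
    return learning_rate * max(0.0, (total_steps - step) / total_steps)

print(lr_at(0), lr_at(15), lr_at(30))
```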
9b3f02df78059e14c0874fd705a312f0
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 6 | 4.3366 |
| No log | 2.0 | 12 | 3.1395 |
| No log | 3.0 | 18 | 2.6092 |
| No log | 4.0 | 24 | 2.5084 |
| No log | 5.0 | 30 | 2.5760 |
d9943dc4e76506b5a61a747eae538cb9
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'sv', 'robust-speech-event', 'hf-asr-leaderboard']
false
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SV-SE dataset. It achieves the following results on the evaluation set:
- Loss: 0.2779
- Wer: 0.2525
429b33b2b53e73d3a124234afe7594b3
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'sv', 'robust-speech-event', 'hf-asr-leaderboard']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.3224 | 1.37 | 500 | 3.3354 | 1.0 |
| 2.9318 | 2.74 | 1000 | 2.9361 | 1.0000 |
| 2.1371 | 4.11 | 1500 | 1.1157 | 0.8359 |
| 1.6883 | 5.48 | 2000 | 0.6003 | 0.6314 |
| 1.5812 | 6.85 | 2500 | 0.4746 | 0.4725 |
| 1.5145 | 8.22 | 3000 | 0.4376 | 0.4736 |
| 1.4763 | 9.59 | 3500 | 0.4006 | 0.3863 |
| 1.4215 | 10.96 | 4000 | 0.3783 | 0.3629 |
| 1.3638 | 12.33 | 4500 | 0.3555 | 0.3425 |
| 1.3561 | 13.7 | 5000 | 0.3340 | 0.3228 |
| 1.3406 | 15.07 | 5500 | 0.3373 | 0.3295 |
| 1.3055 | 16.44 | 6000 | 0.3432 | 0.3210 |
| 1.3048 | 17.81 | 6500 | 0.3282 | 0.3118 |
| 1.2863 | 19.18 | 7000 | 0.3226 | 0.3018 |
| 1.2389 | 20.55 | 7500 | 0.3050 | 0.2986 |
| 1.2361 | 21.92 | 8000 | 0.3048 | 0.2980 |
| 1.2263 | 23.29 | 8500 | 0.3011 | 0.2977 |
| 1.2225 | 24.66 | 9000 | 0.3017 | 0.2959 |
| 1.2044 | 26.03 | 9500 | 0.2977 | 0.2782 |
| 1.2017 | 27.4 | 10000 | 0.2966 | 0.2781 |
| 1.1912 | 28.77 | 10500 | 0.2999 | 0.2786 |
| 1.1658 | 30.14 | 11000 | 0.2991 | 0.2757 |
| 1.148 | 31.51 | 11500 | 0.2915 | 0.2684 |
| 1.1423 | 32.88 | 12000 | 0.2913 | 0.2643 |
| 1.123 | 34.25 | 12500 | 0.2777 | 0.2630 |
| 1.1297 | 35.62 | 13000 | 0.2873 | 0.2646 |
| 1.0987 | 36.98 | 13500 | 0.2829 | 0.2619 |
| 1.0873 | 38.36 | 14000 | 0.2864 | 0.2608 |
| 1.0848 | 39.73 | 14500 | 0.2827 | 0.2577 |
| 1.0628 | 41.1 | 15000 | 0.2896 | 0.2581 |
| 1.0815 | 42.47 | 15500 | 0.2814 | 0.2561 |
| 1.0587 | 43.83 | 16000 | 0.2738 | 0.2542 |
| 1.0709 | 45.21 | 16500 | 0.2785 | 0.2578 |
| 1.0512 | 46.57 | 17000 | 0.2793 | 0.2539 |
| 1.0396 | 47.94 | 17500 | 0.2788 | 0.2525 |
| 1.0481 | 49.31 | 18000 | 0.2777 | 0.2534 |
21121459e3c8737181dce7e58e412fc0
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'sv', 'robust-speech-event', 'hf-asr-leaderboard']
false
Evaluation Commands

1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`

```bash
python eval.py --model_id patrickvonplaten/xls-r-300m-sv-cv8 --dataset mozilla-foundation/common_voice_8_0 --config sv-SE --split test
```

2. To evaluate on `speech-recognition-community-v2/dev_data`

```bash
python eval.py --model_id patrickvonplaten/xls-r-300m-sv-cv8 --dataset speech-recognition-community-v2/dev_data --config sv --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
1c7b81433c8f15273fa4c4d41e683468
cc-by-sa-4.0
['spacy', 'token-classification']
false
ko_core_news_sm

Korean pipeline optimized for CPU. Components: tok2vec, tagger, morphologizer, parser, lemmatizer (trainable_lemmatizer), senter, ner.

| Feature | Description |
| --- | --- |
| **Name** | `ko_core_news_sm` |
| **Version** | `3.5.0` |
| **spaCy** | `>=3.5.0,<3.6.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `attribute_ruler`, `ner` |
| **Components** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `senter`, `attribute_ruler`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [UD Korean Kaist v2.8](https://github.com/UniversalDependencies/UD_Korean-Kaist) (Choi, Jinho; Han, Na-Rae; Hwang, Jena; Chun, Jayeol)<br />[KLUE v1.1.0](https://github.com/KLUE-benchmark/KLUE) (Sungjoon Park, Jihyung Moon, Sungdong Kim, Won Ik Cho, Jiyoon Han, Jangwon Park, Chisung Song, Junseong Kim, Youngsook Song, Taehwan Oh, Joohong Lee, Juhyun Oh, Sungwon Ryu, Younghoon Jeong, Inkwon Lee, Sangwoo Seo, Dongjun Lee, Hyunwoo Kim, Myeonghwa Lee, Seongbo Jang, Seungwon Do, Sunkyoung Kim, Kyungtae Lim, Jongwon Lee, Kyumin Park, Jamin Shin, Seonghyun Kim, Lucy Park, Alice Oh, Jung-Woo Ha, Kyunghyun Cho) |
| **License** | `CC BY-SA 4.0` |
| **Author** | [Explosion](https://explosion.ai) |
6c840c7b8b5b3de712b5fde1d260fbaf
cc-by-sa-4.0
['spacy', 'token-classification']
false
Label Scheme <details> <summary>View label scheme (2028 labels for 4 components)</summary> | Component | Labels | | --- | --- | | **`tagger`** | `_SP`, `ecs`, `etm`, `f`, `f+f+jcj`, `f+f+jcs`, `f+f+jct`, `f+f+jxt`, `f+jca`, `f+jca+jp+ecc`, `f+jca+jp+ep+ef`, `f+jca+jxc`, `f+jca+jxc+jcm`, `f+jca+jxt`, `f+jcj`, `f+jcm`, `f+jco`, `f+jcs`, `f+jct`, `f+jct+jcm`, `f+jp+ef`, `f+jp+ep+ef`, `f+jp+etm`, `f+jxc`, `f+jxt`, `f+ncn`, `f+ncn+jcm`, `f+ncn+jcs`, `f+ncn+jp+ecc`, `f+ncn+jxt`, `f+ncpa+jcm`, `f+npp+jcs`, `f+nq`, `f+xsn`, `f+xsn+jco`, `f+xsn+jxt`, `ii`, `jca`, `jca+jcm`, `jca+jxc`, `jca+jxt`, `jcc`, `jcj`, `jcm`, `jco`, `jcr`, `jcr+jxc`, `jcs`, `jct`, `jct+jcm`, `jct+jxt`, `jp+ecc`, `jp+ecs`, `jp+ef`, `jp+ef+jcr`, `jp+ef+jcr+jxc`, `jp+ep+ecs`, `jp+ep+ef`, `jp+ep+etm`, `jp+ep+etn`, `jp+etm`, `jp+etn`, `jp+etn+jco`, `jp+etn+jxc`, `jxc`, `jxc+jca`, `jxc+jco`, `jxc+jcs`, `jxt`, `mad`, `mad+jxc`, `mad+jxt`, `mag`, `mag+jca`, `mag+jcm`, `mag+jcs`, `mag+jp+ef+jcr`, `mag+jxc`, `mag+jxc+jxc`, `mag+jxt`, `mag+xsn`, `maj`, `maj+jxc`, `maj+jxt`, `mma`, `mmd`, `nbn`, `nbn+jca`, `nbn+jca+jcj`, `nbn+jca+jcm`, `nbn+jca+jp+ef`, `nbn+jca+jxc`, `nbn+jca+jxt`, `nbn+jcc`, `nbn+jcj`, `nbn+jcm`, `nbn+jco`, `nbn+jcr`, `nbn+jcs`, `nbn+jct`, `nbn+jct+jcm`, `nbn+jct+jxt`, `nbn+jp+ecc`, `nbn+jp+ecs`, `nbn+jp+ecs+jca`, `nbn+jp+ecs+jcm`, `nbn+jp+ecs+jco`, `nbn+jp+ecs+jxc`, `nbn+jp+ecs+jxt`, `nbn+jp+ecx`, `nbn+jp+ef`, `nbn+jp+ef+jca`, `nbn+jp+ef+jco`, `nbn+jp+ef+jcr`, `nbn+jp+ef+jcr+jxc`, `nbn+jp+ef+jcr+jxt`, `nbn+jp+ef+jcs`, `nbn+jp+ef+jxc`, `nbn+jp+ef+jxc+jco`, `nbn+jp+ef+jxf`, `nbn+jp+ef+jxt`, `nbn+jp+ep+ecc`, `nbn+jp+ep+ecs`, `nbn+jp+ep+ecs+jxc`, `nbn+jp+ep+ef`, `nbn+jp+ep+ef+jcr`, `nbn+jp+ep+etm`, `nbn+jp+ep+etn`, `nbn+jp+ep+etn+jco`, `nbn+jp+ep+etn+jcs`, `nbn+jp+etm`, `nbn+jp+etn`, `nbn+jp+etn+jca`, `nbn+jp+etn+jca+jxt`, `nbn+jp+etn+jco`, `nbn+jp+etn+jcs`, `nbn+jp+etn+jxc`, `nbn+jp+etn+jxt`, `nbn+jxc`, `nbn+jxc+jca`, `nbn+jxc+jca+jxc`, `nbn+jxc+jca+jxt`, `nbn+jxc+jcc`, `nbn+jxc+jcm`, 
`nbn+jxc+jco`, `nbn+jxc+jcs`, `nbn+jxc+jp+ef`, `nbn+jxc+jxc`, `nbn+jxc+jxt`, `nbn+jxt`, `nbn+nbn`, `nbn+nbn+jp+ef`, `nbn+xsm+ecs`, `nbn+xsm+ef`, `nbn+xsm+ep+ef`, `nbn+xsm+ep+ef+jcr`, `nbn+xsm+etm`, `nbn+xsn`, `nbn+xsn+jca`, `nbn+xsn+jca+jp+ef+jcr`, `nbn+xsn+jca+jxc`, `nbn+xsn+jca+jxt`, `nbn+xsn+jcm`, `nbn+xsn+jco`, `nbn+xsn+jcs`, `nbn+xsn+jct`, `nbn+xsn+jp+ecc`, `nbn+xsn+jp+ecs`, `nbn+xsn+jp+ef`, `nbn+xsn+jp+ef+jcr`, `nbn+xsn+jp+ep+ef`, `nbn+xsn+jxc`, `nbn+xsn+jxt`, `nbn+xsv+etm`, `nbu`, `nbu+jca`, `nbu+jca+jxc`, `nbu+jca+jxt`, `nbu+jcc`, `nbu+jcc+jxc`, `nbu+jcj`, `nbu+jcm`, `nbu+jco`, `nbu+jcs`, `nbu+jct`, `nbu+jct+jxc`, `nbu+jp+ecc`, `nbu+jp+ecs`, `nbu+jp+ef`, `nbu+jp+ef+jcr`, `nbu+jp+ef+jxc`, `nbu+jp+ep+ecc`, `nbu+jp+ep+ecs`, `nbu+jp+ep+ef`, `nbu+jp+ep+ef+jcr`, `nbu+jp+ep+etm`, `nbu+jp+ep+etn+jco`, `nbu+jp+etm`, `nbu+jxc`, `nbu+jxc+jca`, `nbu+jxc+jcs`, `nbu+jxc+jp+ef`, `nbu+jxc+jp+ep+ef`, `nbu+jxc+jxt`, `nbu+jxt`, `nbu+ncn`, `nbu+ncn+jca`, `nbu+ncn+jcm`, `nbu+xsn`, `nbu+xsn+jca`, `nbu+xsn+jca+jxc`, `nbu+xsn+jca+jxt`, `nbu+xsn+jcm`, `nbu+xsn+jco`, `nbu+xsn+jcs`, `nbu+xsn+jp+ecs`, `nbu+xsn+jp+ep+ef`, `nbu+xsn+jxc`, `nbu+xsn+jxc+jxt`, `nbu+xsn+jxt`, `nbu+xsv+ecc`, `nbu+xsv+etm`, `ncn`, `ncn+f+ncpa+jco`, `ncn+jca`, `ncn+jca+jca`, `ncn+jca+jcc`, `ncn+jca+jcj`, `ncn+jca+jcm`, `ncn+jca+jcs`, `ncn+jca+jct`, `ncn+jca+jp+ecc`, `ncn+jca+jp+ecs`, `ncn+jca+jp+ef`, `ncn+jca+jp+ep+ef`, `ncn+jca+jp+etm`, `ncn+jca+jp+etn+jxt`, `ncn+jca+jxc`, `ncn+jca+jxc+jcc`, `ncn+jca+jxc+jcm`, `ncn+jca+jxc+jxc`, `ncn+jca+jxc+jxt`, `ncn+jca+jxt`, `ncn+jcc`, `ncn+jcc+jxc`, `ncn+jcj`, `ncn+jcj+jxt`, `ncn+jcm`, `ncn+jco`, `ncn+jcr`, `ncn+jcr+jxc`, `ncn+jcs`, `ncn+jcs+jxt`, `ncn+jct`, `ncn+jct+jcm`, `ncn+jct+jxc`, `ncn+jct+jxt`, `ncn+jcv`, `ncn+jp+ecc`, `ncn+jp+ecc+jct`, `ncn+jp+ecc+jxc`, `ncn+jp+ecs`, `ncn+jp+ecs+jcm`, `ncn+jp+ecs+jco`, `ncn+jp+ecs+jxc`, `ncn+jp+ecs+jxt`, `ncn+jp+ecx`, `ncn+jp+ef`, `ncn+jp+ef+jca`, `ncn+jp+ef+jcm`, `ncn+jp+ef+jco`, `ncn+jp+ef+jcr`, `ncn+jp+ef+jcr+jxc`, 
`ncn+jp+ef+jcr+jxt`, `ncn+jp+ef+jp+etm`, `ncn+jp+ef+jxc`, `ncn+jp+ef+jxf`, `ncn+jp+ef+jxt`, `ncn+jp+ep+ecc`, `ncn+jp+ep+ecs`, `ncn+jp+ep+ecs+jxc`, `ncn+jp+ep+ecx`, `ncn+jp+ep+ef`, `ncn+jp+ep+ef+jcr`, `ncn+jp+ep+ef+jcr+jxc`, `ncn+jp+ep+ef+jxc`, `ncn+jp+ep+ef+jxf`, `ncn+jp+ep+ef+jxt`, `ncn+jp+ep+ep+etm`, `ncn+jp+ep+etm`, `ncn+jp+ep+etn`, `ncn+jp+ep+etn+jca`, `ncn+jp+ep+etn+jca+jxc`, `ncn+jp+ep+etn+jco`, `ncn+jp+ep+etn+jcs`, `ncn+jp+ep+etn+jxt`, `ncn+jp+etm`, `ncn+jp+etn`, `ncn+jp+etn+jca`, `ncn+jp+etn+jca+jxc`, `ncn+jp+etn+jca+jxt`, `ncn+jp+etn+jco`, `ncn+jp+etn+jcs`, `ncn+jp+etn+jct`, `ncn+jp+etn+jxc`, `ncn+jp+etn+jxt`, `ncn+jxc`, `ncn+jxc+jca`, `ncn+jxc+jca+jxc`, `ncn+jxc+jca+jxt`, `ncn+jxc+jcc`, `ncn+jxc+jcm`, `ncn+jxc+jco`, `ncn+jxc+jcs`, `ncn+jxc+jct+jxt`, `ncn+jxc+jp+ef`, `ncn+jxc+jp+ef+jcr`, `ncn+jxc+jp+ep+ecs`, `ncn+jxc+jp+ep+ef`, `ncn+jxc+jp+etm`, `ncn+jxc+jxc`, `ncn+jxc+jxt`, `ncn+jxt`, `ncn+jxt+jcm`, `ncn+jxt+jxc`, `ncn+nbn`, `ncn+nbn+jca`, `ncn+nbn+jcm`, `ncn+nbn+jcs`, `ncn+nbn+jp+ecc`, `ncn+nbn+jp+ep+ef`, `ncn+nbn+jxc`, `ncn+nbn+jxt`, `ncn+nbu`, `ncn+nbu+jca`, `ncn+nbu+jcm`, `ncn+nbu+jco`, `ncn+nbu+jp+ef`, `ncn+nbu+jxc`, `ncn+nbu+ncn`, `ncn+ncn`, `ncn+ncn+jca`, `ncn+ncn+jca+jcc`, `ncn+ncn+jca+jcm`, `ncn+ncn+jca+jxc`, `ncn+ncn+jca+jxc+jcm`, `ncn+ncn+jca+jxc+jxc`, `ncn+ncn+jca+jxt`, `ncn+ncn+jcc`, `ncn+ncn+jcj`, `ncn+ncn+jcm`, `ncn+ncn+jco`, `ncn+ncn+jcr`, `ncn+ncn+jcs`, `ncn+ncn+jct`, `ncn+ncn+jct+jcm`, `ncn+ncn+jct+jxc`, `ncn+ncn+jct+jxt`, `ncn+ncn+jp+ecc`, `ncn+ncn+jp+ecs`, `ncn+ncn+jp+ef`, `ncn+ncn+jp+ef+jcm`, `ncn+ncn+jp+ef+jcr`, `ncn+ncn+jp+ef+jcs`, `ncn+ncn+jp+ep+ecc`, `ncn+ncn+jp+ep+ecs`, `ncn+ncn+jp+ep+ef`, `ncn+ncn+jp+ep+ef+jcr`, `ncn+ncn+jp+ep+ep+etm`, `ncn+ncn+jp+ep+etm`, `ncn+ncn+jp+ep+etn`, `ncn+ncn+jp+etm`, `ncn+ncn+jp+etn`, `ncn+ncn+jp+etn+jca`, `ncn+ncn+jp+etn+jco`, `ncn+ncn+jp+etn+jxc`, `ncn+ncn+jxc`, `ncn+ncn+jxc+jca`, `ncn+ncn+jxc+jcc`, `ncn+ncn+jxc+jcm`, `ncn+ncn+jxc+jco`, `ncn+ncn+jxc+jcs`, `ncn+ncn+jxc+jxc`, `ncn+ncn+jxt`, 
`ncn+ncn+nbn`, `ncn+ncn+ncn`, `ncn+ncn+ncn+jca`, `ncn+ncn+ncn+jca+jcm`, `ncn+ncn+ncn+jca+jxt`, `ncn+ncn+ncn+jcj`, `ncn+ncn+ncn+jcm`, `ncn+ncn+ncn+jco`, `ncn+ncn+ncn+jcs`, `ncn+ncn+ncn+jct+jxt`, `ncn+ncn+ncn+jp+etn+jxc`, `ncn+ncn+ncn+jxt`, `ncn+ncn+ncn+ncn+jca`, `ncn+ncn+ncn+ncn+jca+jxt`, `ncn+ncn+ncn+ncn+jco`, `ncn+ncn+ncn+xsn+jp+etm`, `ncn+ncn+ncpa`, `ncn+ncn+ncpa+jca`, `ncn+ncn+ncpa+jcm`, `ncn+ncn+ncpa+jco`, `ncn+ncn+ncpa+jcs`, `ncn+ncn+ncpa+jxc`, `ncn+ncn+ncpa+jxt`, `ncn+ncn+ncpa+ncn`, `ncn+ncn+ncpa+ncn+jca`, `ncn+ncn+ncpa+ncn+jcj`, `ncn+ncn+ncpa+ncn+jcm`, `ncn+ncn+ncpa+ncn+jxt`, `ncn+ncn+xsn`, `ncn+ncn+xsn+jca`, `ncn+ncn+xsn+jca+jxt`, `ncn+ncn+xsn+jcj`, `ncn+ncn+xsn+jcm`, `ncn+ncn+xsn+jco`, `ncn+ncn+xsn+jcs`, `ncn+ncn+xsn+jct`, `ncn+ncn+xsn+jp+ecs`, `ncn+ncn+xsn+jp+ep+ef`, `ncn+ncn+xsn+jp+etm`, `ncn+ncn+xsn+jxc`, `ncn+ncn+xsn+jxc+jcs`, `ncn+ncn+xsn+jxt`, `ncn+ncn+xsv+ecc`, `ncn+ncn+xsv+etm`, `ncn+ncpa`, `ncn+ncpa+jca`, `ncn+ncpa+jca+jcm`, `ncn+ncpa+jca+jxc`, `ncn+ncpa+jca+jxt`, `ncn+ncpa+jcc`, `ncn+ncpa+jcj`, `ncn+ncpa+jcm`, `ncn+ncpa+jco`, `ncn+ncpa+jcr`, `ncn+ncpa+jcs`, `ncn+ncpa+jct`, `ncn+ncpa+jct+jcm`, `ncn+ncpa+jct+jxt`, `ncn+ncpa+jp+ecc`, `ncn+ncpa+jp+ecc+jxc`, `ncn+ncpa+jp+ecs`, `ncn+ncpa+jp+ecs+jxc`, `ncn+ncpa+jp+ef`, `ncn+ncpa+jp+ef+jcr`, `ncn+ncpa+jp+ef+jcr+jxc`, `ncn+ncpa+jp+ep+ef`, `ncn+ncpa+jp+ep+etm`, `ncn+ncpa+jp+ep+etn`, `ncn+ncpa+jp+etm`, `ncn+ncpa+jxc`, `ncn+ncpa+jxc+jca+jxc`, `ncn+ncpa+jxc+jco`, `ncn+ncpa+jxc+jcs`, `ncn+ncpa+jxt`, `ncn+ncpa+nbn+jcs`, `ncn+ncpa+ncn`, `ncn+ncpa+ncn+jca`, `ncn+ncpa+ncn+jca+jcm`, `ncn+ncpa+ncn+jca+jxc`, `ncn+ncpa+ncn+jca+jxt`, `ncn+ncpa+ncn+jcj`, `ncn+ncpa+ncn+jcm`, `ncn+ncpa+ncn+jco`, `ncn+ncpa+ncn+jcs`, `ncn+ncpa+ncn+jct`, `ncn+ncpa+ncn+jct+jcm`, `ncn+ncpa+ncn+jp+ef+jcr`, `ncn+ncpa+ncn+jp+ep+etm`, `ncn+ncpa+ncn+jxc`, `ncn+ncpa+ncn+jxt`, `ncn+ncpa+ncn+xsn+jcm`, `ncn+ncpa+ncn+xsn+jxt`, `ncn+ncpa+ncpa`, `ncn+ncpa+ncpa+jca`, `ncn+ncpa+ncpa+jcj`, `ncn+ncpa+ncpa+jcm`, `ncn+ncpa+ncpa+jco`, `ncn+ncpa+ncpa+jcs`, 
`ncn+ncpa+ncpa+jp+ep+ef`, `ncn+ncpa+ncpa+jxt`, `ncn+ncpa+ncpa+ncn`, `ncn+ncpa+xsn`, `ncn+ncpa+xsn+jcm`, `ncn+ncpa+xsn+jco`, `ncn+ncpa+xsn+jcs`, `ncn+ncpa+xsn+jp+ecc`, `ncn+ncpa+xsn+jp+etm`, `ncn+ncpa+xsn+jxt`, `ncn+ncpa+xsv+ecc`, `ncn+ncpa+xsv+ecs`, `ncn+ncpa+xsv+ecx`, `ncn+ncpa+xsv+ecx+px+etm`, `ncn+ncpa+xsv+ef`, `ncn+ncpa+xsv+ef+jcm`, `ncn+ncpa+xsv+ef+jcr`, `ncn+ncpa+xsv+etm`, _(truncated: full list in pipeline meta)_ | | **`morphologizer`** | `POS=CCONJ`, `POS=ADV`, `POS=SCONJ`, `POS=DET`, `POS=NOUN`, `POS=VERB`, `POS=ADJ`, `POS=PUNCT`, `POS=SPACE`, `POS=AUX`, `POS=PRON`, `POS=PROPN`, `POS=NUM`, `POS=INTJ`, `POS=PART`, `POS=X`, `POS=ADP`, `POS=SYM` | | **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `dep`, `det`, `dislocated`, `fixed`, `flat`, `iobj`, `mark`, `nmod`, `nsubj`, `nummod`, `obj`, `obl`, `punct`, `xcomp` | | **`ner`** | `DT`, `LC`, `OG`, `PS`, `QT`, `TI` | </details>
b0526e4e0d7483c7c1c90282db0fce46
cc-by-sa-4.0
['spacy', 'token-classification']
false
Accuracy

| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 100.00 |
| `TOKEN_P` | 100.00 |
| `TOKEN_R` | 100.00 |
| `TOKEN_F` | 100.00 |
| `TAG_ACC` | 73.06 |
| `POS_ACC` | 85.82 |
| `SENTS_P` | 99.90 |
| `SENTS_R` | 99.95 |
| `SENTS_F` | 99.93 |
| `DEP_UAS` | 73.61 |
| `DEP_LAS` | 65.59 |
| `LEMMA_ACC` | 83.57 |
| `ENTS_P` | 77.04 |
| `ENTS_R` | 66.03 |
| `ENTS_F` | 71.11 |
6af992164bb6c9280f22f33b58cde8ed
apache-2.0
['generated_from_trainer']
false
tiny-mlm-squad-plain_text-custom-tokenizer

This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 7.3247
f85a29d410c1e814854732b54d767e78
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.5181 | 0.4 | 500 | 7.5716 |
| 6.4657 | 0.8 | 1000 | 7.5778 |
| 6.2336 | 1.2 | 1500 | 7.4653 |
| 6.0699 | 1.6 | 2000 | 7.4193 |
| 5.946 | 2.0 | 2500 | 7.2908 |
| 5.7981 | 2.4 | 3000 | 7.2710 |
| 5.8332 | 2.8 | 3500 | 7.3876 |
| 5.772 | 3.2 | 4000 | 7.3050 |
| 5.6513 | 3.6 | 4500 | 7.3247 |
86f006d4673f556cbfcd16d32c950ec5
apache-2.0
['generated_from_trainer']
false
whisper-dpv-finetuned-WITH-AUGMENTATION-LOWER-LR

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.5717
- Wer: 34.5241
6407580f937d1347dcf3e5e68d72435c
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
- mixed_precision_training: Native AMP
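As a quick arithmetic check of the batch-size entries above, the total train batch size is the per-device batch size times the gradient accumulation steps:

```python
# Effective (total) batch size under gradient accumulation.
train_batch_size = 1
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 4
```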
b022967b47cf3a5a39edf4b6693000f5
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.6221 | 0.62 | 1000 | 0.5345 | 35.9711 |
| 0.4318 | 1.25 | 2000 | 0.5271 | 34.9537 |
| 0.3859 | 1.87 | 3000 | 0.5338 | 34.3658 |
| 0.3005 | 2.49 | 4000 | 0.5532 | 34.8858 |
| 0.2444 | 3.12 | 5000 | 0.5628 | 33.7102 |
| 0.315 | 3.74 | 6000 | 0.5717 | 34.5241 |
edfced2838ef8fe976e794754601d664
apache-2.0
['italian', 'sequence-to-sequence', 'question-generation', 'squad_it', 'text2text-generation']
false
IT5 Small for Question Generation 💭 🇮🇹 This repository contains the checkpoint for the [IT5 Small](https://huggingface.co/gsarti/it5-small) model fine-tuned on question generation on the [SQuAD-IT corpus](https://huggingface.co/datasets/squad_it) as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io). A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
19c4fcc6480356d57049f6f417329ccd
apache-2.0
['italian', 'sequence-to-sequence', 'question-generation', 'squad_it', 'text2text-generation']
false
Using the model

Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:

```python
from transformers import pipeline

qg = pipeline("text2text-generation", model='it5/it5-small-question-generation')
qg("Le conoscenze mediche erano stagnanti durante il Medioevo. Il resoconto più autorevole di allora è venuto dalla facoltà di medicina di Parigi in un rapporto al re di Francia che ha incolpato i cieli, sotto forma di una congiunzione di tre pianeti nel 1345 che causò una \"grande pestilenza nell' aria\". Questa relazione è diventata la prima e più diffusa di una serie di casi di peste che cercava di dare consigli ai malati. Che la peste fosse causata dalla cattiva aria divenne la teoria più accettata. Oggi, questo è conosciuto come la teoria di Miasma. La parola \"peste\" non aveva un significato particolare in questo momento, e solo la ricorrenza dei focolai durante il Medioevo gli diede il nome che è diventato il termine medico. Risposta: re di Francia")
>>> [{"generated_text": "Per chi è stato redatto il referto medico?"}]
```

or loaded using autoclasses:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("it5/it5-small-question-generation")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-small-question-generation")
```

If you use this model in your research, please cite our work as:

```bibtex
@article{sarti-nissim-2022-it5,
    title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
    author={Sarti, Gabriele and Nissim, Malvina},
    journal={ArXiv preprint 2203.03759},
    url={https://arxiv.org/abs/2203.03759},
    year={2022},
    month={mar}
}
```
fa9b0fca5aaee2b82a7505570524bcbf
apache-2.0
[]
false
Perceiver IO for language Perceiver IO model pre-trained on the Masked Language Modeling (MLM) task proposed in [BERT](https://arxiv.org/abs/1810.04805) using a large text corpus obtained by combining [English Wikipedia](https://huggingface.co/datasets/wikipedia) and [C4](https://huggingface.co/datasets/c4). It was introduced in the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Jaegle et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/perceiver). Disclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team.
3ada85f3730b8e8807dcecc56334db73
apache-2.0
[]
false
Model description

Perceiver IO is a transformer encoder model that can be applied to any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This means the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs.

To decode, the authors employ so-called decoder queries, which allow the final hidden states of the latents to be flexibly decoded into outputs of arbitrary size and semantics. For masked language modeling, the output is a tensor containing the prediction scores of the language modeling head, of shape (batch_size, seq_length, vocab_size).

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/perceiver_architecture.jpg" alt="drawing" width="600"/>

<small> Perceiver IO architecture.</small>

As the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors train the model directly on raw UTF-8 bytes, rather than on subwords as is done in models like BERT, RoBERTa and GPT-2. This has many benefits: one doesn't need to train a tokenizer before training the model, one doesn't need to maintain a (fixed) vocabulary file, and this also doesn't hurt model performance, as shown by [Bostrom et al., 2020](https://arxiv.org/abs/2004.03720).

By pre-training the model, it learns an inner representation of language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the Perceiver model as inputs.
7aa5c3bcc898ec241f2553253f4adfad
apache-2.0
[]
false
Intended uses & limitations You can use the raw model for masked language modeling, but the model is intended to be fine-tuned on a labeled dataset. See the [model hub](https://huggingface.co/models?search=deepmind/perceiver) to look for fine-tuned versions on a task that interests you.
2b017f03585a935676070a0d0ff9fb24
apache-2.0
[]
false
How to use

Here is how to use this model in PyTorch:

```python
from transformers import PerceiverTokenizer, PerceiverForMaskedLM

tokenizer = PerceiverTokenizer.from_pretrained("deepmind/language-perceiver")
model = PerceiverForMaskedLM.from_pretrained("deepmind/language-perceiver")

text = "This is an incomplete sentence where some words are missing."
```
46362c4854509853672a399e14581234
apache-2.0
[]
false
```python
# mask " missing.". Note that the model performs much better if the masked
# span starts with a space.
encoding.input_ids[0, 52:61] = tokenizer.mask_token_id
inputs, input_mask = encoding.input_ids.to(device), encoding.attention_mask.to(device)
```
c35be4cf331f8cf14dac4d9cbebe5a06
apache-2.0
[]
false
```python
# forward pass
outputs = model(inputs=inputs, attention_mask=input_mask)
logits = outputs.logits
masked_tokens_predictions = logits[0, 51:61].argmax(dim=-1)
print(tokenizer.decode(masked_tokens_predictions))
# >>> should print " missing."
```
51b8edd23806d768aa1a21fc79d0cd83
apache-2.0
[]
false
Training data This model was pretrained on a combination of [English Wikipedia](https://huggingface.co/datasets/wikipedia) and [C4](https://huggingface.co/datasets/c4). 70% of the training tokens were sampled from the C4 dataset and the remaining 30% from Wikipedia. The authors concatenate 10 documents before splitting into crops to reduce wasteful computation on padding tokens.
c318950a456b5c14f1bd880b59326fd2
apache-2.0
['exbert', 'security', 'cybersecurity', 'cyber security', 'threat hunting', 'threat intelligence']
false
SecRoBERTa

This is the pretrained model presented in [SecBERT: A Pretrained Language Model for Cyber Security Text](https://github.com/jackaduma/SecBERT/), which is a SecRoBERTa model trained on cyber security text. The training corpus was papers taken from
* [APTnotes](https://github.com/kbandla/APTnotes)
* [Stucco-Data: Cyber security data sources](https://stucco.github.io/data/)
* [CASIE: Extracting Cybersecurity Event Information from Text](https://ebiquity.umbc.edu/_file_directory_/papers/943.pdf)
* [SemEval-2018 Task 8: Semantic Extraction from CybersecUrity REports using Natural Language Processing (SecureNLP)](https://competitions.codalab.org/competitions/17262)

SecRoBERTa has its own wordpiece vocabulary (secvocab) that's built to best match the training corpus. We trained [SecBERT](https://huggingface.co/jackaduma/SecBERT) and [SecRoBERTa](https://huggingface.co/jackaduma/SecRoBERTa) versions. Available models include:
* [`SecBERT`](https://huggingface.co/jackaduma/SecBERT)
* [`SecRoBERTa`](https://huggingface.co/jackaduma/SecRoBERTa)

---
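A minimal fill-mask sketch with the released checkpoint (the sentence is an illustrative example; the mask token is read from the tokenizer so it matches the RoBERTa-style `<mask>`):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="jackaduma/SecRoBERTa")

# Use the tokenizer's own mask token rather than hard-coding it.
masked = f"The attacker stole sensitive {fill_mask.tokenizer.mask_token} from the network."
preds = fill_mask(masked)
for pred in preds:
    print(pred["token_str"], round(pred["score"], 3))
```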
d618d068391ba0129767813a51bdbe53
apache-2.0
['exbert', 'security', 'cybersecurity', 'cyber security', 'threat hunting', 'threat intelligence']
false
**Fill Mask**

We propose a language model that works on cyber security text; as a result, it can improve downstream tasks (NER, text classification, semantic understanding, Q&A) in the cyber security domain.

The image below compares the fill-mask pipeline in [Google BERT](https://github.com/google-research/bert), [AllenAI SciBert](https://github.com/allenai/scibert) and our [SecBERT](https://github.com/jackaduma/SecBERT).

<!-- <img src="./fill-mask-result.png" width="150%" height="150%"> -->

![fill-mask-result](https://github.com/jackaduma/SecBERT/blob/main/fill-mask-result.png?raw=true)

---

The original repo can be found [here](https://github.com/jackaduma/SecBERT).
593b7bb2e53ba8432a6331abeed33499
apache-2.0
['translation']
false
opus-mt-en-lg

* source languages: en
* target languages: lg
* OPUS readme: [en-lg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-lg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-lg/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lg/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lg/opus-2020-01-08.eval.txt)
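A minimal usage sketch with the `transformers` translation pipeline (the hub id `Helsinki-NLP/opus-mt-en-lg` is an assumption, following the standard Helsinki-NLP naming for this model):

```python
from transformers import pipeline

# English -> Luganda; hub id assumed from the standard Helsinki-NLP naming.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-lg")

out = translator("How are you today?")
print(out[0]["translation_text"])
```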
6dd7bcc2d299cebc8c7b2edf8b10f37a
apache-2.0
['generated_from_keras_callback']
false
Imene/vit-base-patch16-384-wi3 This model is a fine-tuned version of [google/vit-base-patch16-384](https://huggingface.co/google/vit-base-patch16-384) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2020 - Train Accuracy: 0.9984 - Train Top-3-accuracy: 0.9997 - Validation Loss: 1.4297 - Validation Accuracy: 0.6195 - Validation Top-3-accuracy: 0.8298 - Epoch: 11
76d697f6f701dca5c1cefef20a3e4995
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 1200, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16
806a286eb1631eecaec77dbf017e64a0
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch | |:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:| | 3.6575 | 0.0902 | 0.1945 | 3.1772 | 0.2028 | 0.3980 | 0 | | 2.5870 | 0.3473 | 0.6048 | 2.3845 | 0.3717 | 0.6208 | 1 | | 1.8813 | 0.5553 | 0.7895 | 2.0262 | 0.4431 | 0.7196 | 2 | | 1.4326 | 0.6815 | 0.8754 | 1.8856 | 0.4793 | 0.7384 | 3 | | 1.0572 | 0.7989 | 0.9439 | 1.6570 | 0.5369 | 0.7960 | 4 | | 0.7740 | 0.8838 | 0.9749 | 1.6103 | 0.5557 | 0.7960 | 5 | | 0.5593 | 0.9417 | 0.9900 | 1.5303 | 0.5695 | 0.8173 | 6 | | 0.4151 | 0.9709 | 0.9975 | 1.4939 | 0.5795 | 0.8185 | 7 | | 0.3176 | 0.9884 | 0.9978 | 1.4553 | 0.5832 | 0.8248 | 8 | | 0.2582 | 0.9950 | 0.9991 | 1.4500 | 0.6020 | 0.8248 | 9 | | 0.2222 | 0.9978 | 0.9994 | 1.4315 | 0.6108 | 0.8310 | 10 | | 0.2020 | 0.9984 | 0.9997 | 1.4297 | 0.6195 | 0.8298 | 11 |
dffd60b98004d90e89cab0812ba6bb39
apache-2.0
['generated_from_trainer']
false
ner_ANAT_DISO This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0746 - Anat Precision: 0.6512 - Anat Recall: 0.6573 - Anat F1: 0.6542 - Anat Number: 534 - Diso Precision: 0.8727 - Diso Recall: 0.8844 - Diso F1: 0.8785 - Diso Number: 2915 - Overall Precision: 0.8385 - Overall Recall: 0.8492 - Overall F1: 0.8438 - Overall Accuracy: 0.9838
a08f461707017774d79d37dc6550c3ef
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Anat Precision | Anat Recall | Anat F1 | Anat Number | Diso Precision | Diso Recall | Diso F1 | Diso Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:-----------:|:-------:|:-----------:|:--------------:|:-----------:|:-------:|:-----------:|:-----------------:|:--------------:|:----------:|:----------------:| | 0.0625 | 1.0 | 1682 | 0.0591 | 0.5407 | 0.6723 | 0.5993 | 534 | 0.8516 | 0.8624 | 0.8570 | 2915 | 0.7945 | 0.8330 | 0.8133 | 0.9808 | | 0.0397 | 2.0 | 3364 | 0.0633 | 0.6237 | 0.6798 | 0.6505 | 534 | 0.8576 | 0.8820 | 0.8696 | 2915 | 0.8196 | 0.8507 | 0.8348 | 0.9826 | | 0.0181 | 3.0 | 5046 | 0.0698 | 0.6452 | 0.6948 | 0.6691 | 534 | 0.8670 | 0.8878 | 0.8773 | 2915 | 0.8312 | 0.8579 | 0.8443 | 0.9833 | | 0.0121 | 4.0 | 6728 | 0.0746 | 0.6512 | 0.6573 | 0.6542 | 534 | 0.8727 | 0.8844 | 0.8785 | 2915 | 0.8385 | 0.8492 | 0.8438 | 0.9838 |
94f56c71d774a5cca8ee95e407e0a06b
apache-2.0
['generated_from_trainer']
false
distilbert_ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0578 - Precision: 0.9189 - Recall: 0.9357 - F1: 0.9272 - Accuracy: 0.9831
f7cf12bd54512f832e568c6bb9d45953
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0754 | 1.0 | 1756 | 0.0578 | 0.9189 | 0.9357 | 0.9272 | 0.9831 |
e9a91e40de2ab24fcf1d4c3853e21398
cc-by-4.0
['bert']
false
bert-fc-medium A medium-sized BERT language model with a **first character** prediction pre-training objective. For more details about the pre-training objective and the pre-training hyperparameters, please refer to [How does the pre-training objective affect what large language models learn about linguistic properties?](https://aclanthology.org/2022.acl-short.16/)
5b2178638a9b871d45d637476af509cf
apache-2.0
['translation']
false
opus-mt-uk-sv * source languages: uk * target languages: sv * OPUS readme: [uk-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/uk-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/uk-sv/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/uk-sv/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/uk-sv/opus-2020-01-16.eval.txt)
7b0d1455eddc27ad44b58a1e5d5dbc0b
other
['vision', 'image-segmentation']
false
SegFormer (b4-sized) model fine-tuned on CityScapes SegFormer model fine-tuned on CityScapes at resolution 512x1024. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer). Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
5ed56bd1bc8ed09c6f0c28c77fc76766
other
['vision', 'image-segmentation']
false
How to use
Here is how to use this model to segment an image from the COCO 2017 dataset into the Cityscapes classes:

```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests

feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b0-finetuned-cityscapes-512-1024")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-cityscapes-512-1024")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits  # shape (batch_size, num_labels, height/4, width/4)
```
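The returned logits are at 1/4 of the input resolution. A common follow-up step (a sketch, not from the original card; the dummy tensor stands in for `outputs.logits`) is to upsample them to the input size and take the per-pixel argmax:

```python
import torch

# Dummy logits standing in for `outputs.logits`:
# shape (batch, num_labels, H/4, W/4) -- Cityscapes has 19 classes.
logits = torch.randn(1, 19, 128, 256)
target_size = (512, 1024)  # (height, width) of the original image

# Upsample to the input resolution, then take the per-pixel class index.
upsampled = torch.nn.functional.interpolate(
    logits, size=target_size, mode="bilinear", align_corners=False
)
seg_map = upsampled.argmax(dim=1)[0]  # shape (512, 1024), values in [0, 18]
```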
dd7f865e13fefeb2bb63e70579c4e6f3
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-misogyny-sexism-4tweets-3e-05-0.01 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1537 - Accuracy: 0.6647 - F1: 0.6788 - Precision: 0.6076 - Recall: 0.7691 - Mae: 0.3353 - Tn: 309 - Fp: 228 - Fn: 106 - Tp: 353
692f16b02bf43ca79923fde332bf3884
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae | Tn | Fp | Fn | Tp | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|:---:|:---:|:---:|:---:| | 0.4449 | 1.0 | 1655 | 0.6853 | 0.5944 | 0.6799 | 0.5342 | 0.9346 | 0.4056 | 163 | 374 | 30 | 429 | | 0.3732 | 2.0 | 3310 | 0.7372 | 0.6416 | 0.6238 | 0.6041 | 0.6449 | 0.3584 | 343 | 194 | 163 | 296 | | 0.2962 | 3.0 | 4965 | 0.7860 | 0.6717 | 0.6714 | 0.6231 | 0.7277 | 0.3283 | 335 | 202 | 125 | 334 | | 0.2235 | 4.0 | 6620 | 1.1537 | 0.6647 | 0.6788 | 0.6076 | 0.7691 | 0.3353 | 309 | 228 | 106 | 353 |
981580a1b69c07d026971a566979a63b
cc-by-sa-4.0
['japanese', 'masked-lm']
false
Model Description This is a RoBERTa model pre-trained on 青空文庫 (Aozora Bunko) texts with [Japanese-LUW-Tokenizer](https://github.com/KoichiYasuoka/Japanese-LUW-Tokenizer). You can fine-tune `roberta-small-japanese-aozora` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-small-japanese-luw-upos), dependency parsing, and so on.
d99a4ea934430624864515ebbc4d0046
cc-by-sa-4.0
['japanese', 'masked-lm']
false
How to Use ```py
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-small-japanese-aozora")
model = AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-small-japanese-aozora")
```
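A fill-mask sketch on top of the loading code above (the example sentence is an assumption; reading the mask token from the tokenizer avoids hard-coding it):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="KoichiYasuoka/roberta-small-japanese-aozora")
mask = fill_mask.tokenizer.mask_token  # avoid hard-coding the mask token

# Predict the masked long-unit word in a Japanese sentence.
preds = fill_mask(f"日本に着いたら{mask}を訪問してください。")
for p in preds:
    print(p["token_str"], round(p["score"], 4))
```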
cd480c71b6973e6188b5ca29a6e53e1a
mit
['question generation']
false
german-qg-t5-e2e-quad (Work in progress) This model is an end-to-end question generation model for German. Given a text, it generates several questions about it. This model is a fine-tuned version of [valhalla/t5-base-e2e-qg](https://huggingface.co/valhalla/t5-base-e2e-qg) on the [GermanQuAD dataset from deepset](https://huggingface.co/datasets/deepset/germanquad).
041801ca2c97c5867ced5bddb9bd3574
mit
['question generation']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0
7fecca0e15dfb7fbf0d8d6c48d7a1a45
mit
[]
false
Rishusei style on Stable Diffusion This is the `<crishusei-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<crishusei-style> 0](https://huggingface.co/sd-concepts-library/rishusei-style/resolve/main/concept_images/0.jpeg) ![<crishusei-style> 1](https://huggingface.co/sd-concepts-library/rishusei-style/resolve/main/concept_images/3.jpeg) ![<crishusei-style> 2](https://huggingface.co/sd-concepts-library/rishusei-style/resolve/main/concept_images/2.jpeg) ![<crishusei-style> 3](https://huggingface.co/sd-concepts-library/rishusei-style/resolve/main/concept_images/1.jpeg)
88d4917332765a286d3e0e1a16d2a03d
apache-2.0
['generated_from_trainer']
false
bert-base-uncased.CEBaB_confounding.food_service_positive.sa.5-class.seed_43 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the OpenTable OPENTABLE dataset. It achieves the following results on the evaluation set: - Loss: 0.7961 - Accuracy: 0.6569 - Macro-f1: 0.6291 - Weighted-macro-f1: 0.6459
780b30462e77bed8fa17bc6886c1caa7
apache-2.0
[]
false
PaddlePaddle/uie-m-large Information extraction suffers from its varying targets, heterogeneous structures, and demand-specific schemas. The unified text-to-structure generation framework, namely UIE, can universally model different IE tasks, adaptively generate targeted structures, and collaboratively learn general IE abilities from different knowledge sources. Specifically, UIE uniformly encodes different extraction structures via a structured extraction language, adaptively generates target extractions via a schema-based prompt mechanism - the structural schema instructor - and captures common IE abilities via a large-scale pre-trained text-to-structure model. Experiments show that UIE achieved state-of-the-art performance on 4 IE tasks and 13 datasets, across all supervised, low-resource, and few-shot settings, for a wide range of entity, relation, event, and sentiment extraction tasks and their unification. These results verified the effectiveness, universality, and transferability of UIE. UIE Paper: https://arxiv.org/abs/2203.12277 PaddleNLP released a series of UIE models for information extraction from texts and multi-modal documents, which use ERNIE 3.0 models as the pre-trained language models and were fine-tuned on a large amount of information extraction data. ![UIE-diagram](https://user-images.githubusercontent.com/40840292/167236006-66ed845d-21b8-4647-908b-e1c6e7613eb1.png)
bd9248804d8351f46240e48320681d2d
apache-2.0
['generated_from_trainer']
false
model_syllable_onSet3 This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1590 - 0 Precision: 0.9688 - 0 Recall: 1.0 - 0 F1-score: 0.9841 - 0 Support: 31 - 1 Precision: 1.0 - 1 Recall: 1.0 - 1 F1-score: 1.0 - 1 Support: 25 - 2 Precision: 1.0 - 2 Recall: 0.9474 - 2 F1-score: 0.9730 - 2 Support: 19 - 3 Precision: 0.9545 - 3 Recall: 0.9545 - 3 F1-score: 0.9545 - 3 Support: 22 - Accuracy: 0.9794 - Macro avg Precision: 0.9808 - Macro avg Recall: 0.9755 - Macro avg F1-score: 0.9779 - Macro avg Support: 97 - Weighted avg Precision: 0.9797 - Weighted avg Recall: 0.9794 - Weighted avg F1-score: 0.9793 - Weighted avg Support: 97 - Wer: 0.2202 - Mtrix: [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 25, 0, 0], [2, 0, 0, 18, 1], [3, 1, 0, 0, 21]]
d0307700d0faa9fbb0fd6923468f2c74