Columns: license (string, 2–30 chars) · tags (string, 2–513 chars) · is_nc (bool, 1 class) · readme_section (string, 201–597k chars) · hash (string, 32 chars)
mit
[]
false
Usage example
```python
from transformers import pipeline

model_name = "heegyu/kogpt-j-350m"
pipe = pipeline('text-generation', model=model_name)

print(pipe("안녕하세요", repetition_penalty=1.2, do_sample=True, eos_token_id=1, early_stopping=True, max_new_tokens=128))
print(pipe("오늘 정부 발표에 따르면, ", repetition_penalty=1.2, do_sample=True, eos_token_id=1, early_stopping=True, max_new_tokens=128))
print(pipe("싸늘하다. 가슴에 비수가 날아와 꽂힌다. ", repetition_penalty=1.2, do_sample=True, eos_token_id=1, early_stopping=True, max_new_tokens=128, min_length=64))
```
Output
```bash
[{'generated_text': '안녕하세요?\n네.\n자~ 오늘 그~ 뭐~ 남북정상회담에서 인제 남북 관계와 관련된 발언이죠?\n예. 그렇습니다.\n어~ 그~ 이산가족 문제 관련해서 이산가족 상봉을\n예.\n하는 방안이 좀 가능성이 있지 않아요?\n상당히 가능성이 있죠.\n예. 이~ 구체적으로 어떤 거였나요?\n어~ 먼저 이산가족 상봉을 이제 말씀드리겠습니다.\n예.\n아까 설명드린 것처럼 그~ 이산가족 상\n네.\n그~ 상봉에 대한 그~ 구체적인 방안이 어떻게 결정되는 게 가장 좋을까요?\n우선 상봉 방법부터 얘기를 드리죠.\n'}]
[{'generated_text': '오늘 정부 발표에 따르면, gtx-d d 노선을 창릉과 수서에서 출발하는 등 당초 예정된 노선들을 모두 정차하기로 했다. 지난 2월 국토교통부가 이 노선을 일산·금정·파주 운정역과 직접 연결키로 하면서 일산~동탄, 일산~분당, 일산~양재 구간에 추가 정차할 것이라는 예상이 나왔지만 실제 일산~수서 구간이 정차하기로 확정됐다. gtx-d 노선이 일산~수서역까지 개통되는 것은 이번이 처음이다.. gtx-d 노선과 gtx-a 노선이 모두 개통되면 지하철 5호선의 서울 도심 통과 구간이 추가된다. 현재 gtx-b'}]
[{'generated_text': '싸늘하다. 가슴에 비수가 날아와 꽂힌다. \U000f0854삼국사절요\U000f0855 ‘화살촉이 울버린’의 경우에서 보면, 총소리의 원음은 鐘(종자용 : 송악), 鐘을 비(鐘)라 하고 종자의 발음은 ‘이( )’이다. 이때에서 ‘이(은)로 시작하는 발음’은 ‘이/이’의 음운적 표현이다. ‘이/은→종자용[鐘] → 송악/종자[鐘]→이→종자(鐘) …’이다. 이는 한자어로서 그 발음'}]
```
68e929a91cb07692d3cd7a44da12e507
mit
['AMRBART']
false
AMRBART (base-sized model)

The AMRBART model is continually pre-trained on English text and AMR graphs based on the BART model. It was introduced in the paper [Graph Pre-training for AMR Parsing and Generation](https://arxiv.org/pdf/2203.07836.pdf) by Bai et al. in ACL 2022 and first released in [this repository](https://github.com/muyeby/AMRBART).
1aa37b4f7e6dcdfc7a1f632452ecf595
mit
['AMRBART']
false
Model description

AMRBART follows the BART model, which uses a transformer encoder-decoder architecture. AMRBART is pre-trained with 6 tasks:
+ learning to reconstruct the text based on the corrupted text.
+ learning to reconstruct AMR graphs based on the corrupted AMR graph.
+ learning to reconstruct the text based on the corrupted text and its corresponding AMR graph.
+ learning to reconstruct an AMR graph based on the corrupted AMR graph and its corresponding text.
+ learning to reconstruct the text based on the corrupted text and its corresponding corrupted AMR graph.
+ learning to reconstruct an AMR graph based on the corrupted AMR graph and its corresponding corrupted text.

AMRBART is particularly effective when fine-tuned for AMR parsing and AMR-to-text generation tasks.
b8a6dbc271e7c132534f0eac4fc4c417
mit
['AMRBART']
false
Training data

The AMRBART model is pre-trained on [AMR3.0](https://catalog.ldc.upenn.edu/LDC2020T02), a dataset consisting of 55,635 training instances, and on [English Gigaword](https://catalog.ldc.upenn.edu/LDC2003T05) (we randomly sampled 200,000 sentences).
180acc8f72c492c10971834cd58cb677
mit
['AMRBART']
false
How to use

Here is how to initialize this model in PyTorch:
```python
from transformers import BartForConditionalGeneration

model = BartForConditionalGeneration.from_pretrained("xfbai/AMRBART-base")
```
Please refer to [this repository](https://github.com/muyeby/AMRBART) for tokenizer initialization and data preprocessing.
f52c7a133b78e5a002f2b7f22404b4f6
mit
['generated_from_trainer']
false
pedantic_wright This model was trained from scratch on the tomekkorbak/pii-pile-chunk3-0-50000, the tomekkorbak/pii-pile-chunk3-50000-100000, the tomekkorbak/pii-pile-chunk3-100000-150000, the tomekkorbak/pii-pile-chunk3-150000-200000, the tomekkorbak/pii-pile-chunk3-200000-250000, the tomekkorbak/pii-pile-chunk3-250000-300000, the tomekkorbak/pii-pile-chunk3-300000-350000, the tomekkorbak/pii-pile-chunk3-350000-400000, the tomekkorbak/pii-pile-chunk3-400000-450000, the tomekkorbak/pii-pile-chunk3-450000-500000, the tomekkorbak/pii-pile-chunk3-500000-550000, the tomekkorbak/pii-pile-chunk3-550000-600000, the tomekkorbak/pii-pile-chunk3-600000-650000, the tomekkorbak/pii-pile-chunk3-650000-700000, the tomekkorbak/pii-pile-chunk3-700000-750000, the tomekkorbak/pii-pile-chunk3-750000-800000, the tomekkorbak/pii-pile-chunk3-800000-850000, the tomekkorbak/pii-pile-chunk3-850000-900000, the tomekkorbak/pii-pile-chunk3-900000-950000, the tomekkorbak/pii-pile-chunk3-950000-1000000, the tomekkorbak/pii-pile-chunk3-1000000-1050000, the tomekkorbak/pii-pile-chunk3-1050000-1100000, the tomekkorbak/pii-pile-chunk3-1100000-1150000, the tomekkorbak/pii-pile-chunk3-1150000-1200000, the tomekkorbak/pii-pile-chunk3-1200000-1250000, the tomekkorbak/pii-pile-chunk3-1250000-1300000, the tomekkorbak/pii-pile-chunk3-1300000-1350000, the tomekkorbak/pii-pile-chunk3-1350000-1400000, the tomekkorbak/pii-pile-chunk3-1400000-1450000, the tomekkorbak/pii-pile-chunk3-1450000-1500000, the tomekkorbak/pii-pile-chunk3-1500000-1550000, the tomekkorbak/pii-pile-chunk3-1550000-1600000, the tomekkorbak/pii-pile-chunk3-1600000-1650000, the tomekkorbak/pii-pile-chunk3-1650000-1700000, the tomekkorbak/pii-pile-chunk3-1700000-1750000, the tomekkorbak/pii-pile-chunk3-1750000-1800000, the tomekkorbak/pii-pile-chunk3-1800000-1850000, the tomekkorbak/pii-pile-chunk3-1850000-1900000 and the tomekkorbak/pii-pile-chunk3-1900000-1950000 datasets.
6ee70b8ec0110837a001d9ff8b68533e
mit
['generated_from_trainer']
false
Full config {'dataset': {'datasets': ['tomekkorbak/pii-pile-chunk3-0-50000', 'tomekkorbak/pii-pile-chunk3-50000-100000', 'tomekkorbak/pii-pile-chunk3-100000-150000', 'tomekkorbak/pii-pile-chunk3-150000-200000', 'tomekkorbak/pii-pile-chunk3-200000-250000', 'tomekkorbak/pii-pile-chunk3-250000-300000', 'tomekkorbak/pii-pile-chunk3-300000-350000', 'tomekkorbak/pii-pile-chunk3-350000-400000', 'tomekkorbak/pii-pile-chunk3-400000-450000', 'tomekkorbak/pii-pile-chunk3-450000-500000', 'tomekkorbak/pii-pile-chunk3-500000-550000', 'tomekkorbak/pii-pile-chunk3-550000-600000', 'tomekkorbak/pii-pile-chunk3-600000-650000', 'tomekkorbak/pii-pile-chunk3-650000-700000', 'tomekkorbak/pii-pile-chunk3-700000-750000', 'tomekkorbak/pii-pile-chunk3-750000-800000', 'tomekkorbak/pii-pile-chunk3-800000-850000', 'tomekkorbak/pii-pile-chunk3-850000-900000', 'tomekkorbak/pii-pile-chunk3-900000-950000', 'tomekkorbak/pii-pile-chunk3-950000-1000000', 'tomekkorbak/pii-pile-chunk3-1000000-1050000', 'tomekkorbak/pii-pile-chunk3-1050000-1100000', 'tomekkorbak/pii-pile-chunk3-1100000-1150000', 'tomekkorbak/pii-pile-chunk3-1150000-1200000', 'tomekkorbak/pii-pile-chunk3-1200000-1250000', 'tomekkorbak/pii-pile-chunk3-1250000-1300000', 'tomekkorbak/pii-pile-chunk3-1300000-1350000', 'tomekkorbak/pii-pile-chunk3-1350000-1400000', 'tomekkorbak/pii-pile-chunk3-1400000-1450000', 'tomekkorbak/pii-pile-chunk3-1450000-1500000', 'tomekkorbak/pii-pile-chunk3-1500000-1550000', 'tomekkorbak/pii-pile-chunk3-1550000-1600000', 'tomekkorbak/pii-pile-chunk3-1600000-1650000', 'tomekkorbak/pii-pile-chunk3-1650000-1700000', 'tomekkorbak/pii-pile-chunk3-1700000-1750000', 'tomekkorbak/pii-pile-chunk3-1750000-1800000', 'tomekkorbak/pii-pile-chunk3-1800000-1850000', 'tomekkorbak/pii-pile-chunk3-1850000-1900000', 'tomekkorbak/pii-pile-chunk3-1900000-1950000'], 'filter_threshold': 0.000286, 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25354], 'metrics_configs': [{}, {'n': 1}, {'n': 2}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}], 'scorer_config': {}}, 'kl_gpt3_callback': {'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'path_or_name': 'gpt2'}, 'objective': {'name': 'MLE'}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'pedantic_wright', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output2', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25354, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}}
17622e13a1b090400aae6f9f7f06875e
cc-by-4.0
['danish', 'bert', 'sentiment', 'text-classification', 'Maltehb/danish-bert-botxo', 'Helsinki-NLP/opus-mt-en-da', 'go-emotion', 'Certainly']
false
Danish-Bert-GoÆmotion

Danish Go-Emotions classifier. [Maltehb/danish-bert-botxo](https://huggingface.co/Maltehb/danish-bert-botxo) (uncased) fine-tuned on a translation of the [go_emotions](https://huggingface.co/datasets/go_emotions) dataset using [Helsinki-NLP/opus-mt-en-da](https://huggingface.co/Helsinki-NLP/opus-mt-en-da). Performance is therefore obviously dependent on the translation model.
dde48a96d779f28725aad3dfc6e3f97b
cc-by-4.0
['danish', 'bert', 'sentiment', 'text-classification', 'Maltehb/danish-bert-botxo', 'Helsinki-NLP/opus-mt-en-da', 'go-emotion', 'Certainly']
false
Training

- Translating the training data with MT: [Notebook](https://colab.research.google.com/github/RJuro/Da-HyggeBERT-finetuning/blob/main/HyggeBERT_translation_en_da.ipynb)
- Fine-tuning danish-bert-botxo: coming soon...
13d249ddacd6c1103713807eedce69f8
cc-by-4.0
['danish', 'bert', 'sentiment', 'text-classification', 'Maltehb/danish-bert-botxo', 'Helsinki-NLP/opus-mt-en-da', 'go-emotion', 'Certainly']
false
Using the model with `transformers`

Easiest use with `transformers` and `pipeline`:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model = AutoModelForSequenceClassification.from_pretrained('RJuro/Da-HyggeBERT')
tokenizer = AutoTokenizer.from_pretrained('RJuro/Da-HyggeBERT')

classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
classifier('jeg elsker dig')
```
`[{'label': 'kærlighed', 'score': 0.9634820818901062}]`
a39b756f5ed164b8a463404504807215
cc-by-4.0
['danish', 'bert', 'sentiment', 'text-classification', 'Maltehb/danish-bert-botxo', 'Helsinki-NLP/opus-mt-en-da', 'go-emotion', 'Certainly']
false
Using the model with `simpletransformers`
```python
from simpletransformers.classification import MultiLabelClassificationModel

model = MultiLabelClassificationModel('bert', 'RJuro/Da-HyggeBERT')
# df is assumed to be a pandas DataFrame with a 'text' column
predictions, raw_outputs = model.predict(df['text'])
```
b796ee8622d46c85194bf4a9ce76f607
apache-2.0
['generated_from_trainer']
false
BertMultiHateSpeech

This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.7496
- Accuracy: 0.74
- F1: 0.4841
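A minimal inference sketch for a checkpoint like this one; the hub id below is a placeholder, since the card does not state where the model is published:
```python
from transformers import pipeline

# Placeholder repo id: substitute the actual hub path of this checkpoint.
classifier = pipeline("text-classification", model="your-username/BertMultiHateSpeech")
print(classifier(["I can't stand people like you"]))
```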
141da59b58e841de585218c375bedb9b
cc-by-4.0
['answer extraction']
false
Model Card of `lmqg/mt5-base-dequad-ae`

This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) for answer extraction on the [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
2693b656d8aed2156c99103687874ca6
cc-by-4.0
['answer extraction']
false
Overview

- **Language model:** [google/mt5-base](https://huggingface.co/google/mt5-base)
- **Language:** de
- **Training data:** [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
6e90e5bd13f95a5af7edeacf7444285c
cc-by-4.0
['answer extraction']
false
Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation)
```python
from lmqg import TransformersQG

# initialize model (assumed standard lmqg setup; the original snippet begins at the prediction step)
model = TransformersQG(language="de", model="lmqg/mt5-base-dequad-ae")
# model prediction
answers = model.generate_a("das erste weltweit errichtete Hermann Brehmer 1855 im niederschlesischen ''Görbersdorf'' (heute Sokołowsko, Polen).")
```
- With `transformers`
```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/mt5-base-dequad-ae")
output = pipe("Sommerzeit <hl> Frühling <hl>: Umstellung von Normalzeit auf Sommerzeit – die Uhr wird um eine Stunde ''vor''gestellt. Herbst: Umstellung von Sommerzeit auf Normalzeit – die Uhr wird um eine Stunde ''zurück''gestellt. Als Sommerzeit wird die gegenüber der Zonenzeit meist um eine Stunde vorgestellte Uhrzeit bezeichnet, die während eines bestimmten Zeitraums im Sommerhalbjahr (und oft auch etwas darüber hinaus) als gesetzliche Zeit dient. Eine solche Regelung wird fast nur in Ländern der gemäßigten Zonen angewandt. Die mitteleuropäische Sommerzeit beginnt am letzten Sonntag im März um 2:00 Uhr MEZ, indem die Stundenzählung um eine Stunde von 2:00 Uhr auf 3:00 Uhr vorgestellt wird. Sie endet jeweils am letzten Sonntag im Oktober um 3:00 Uhr MESZ, indem die Stundenzählung um eine Stunde von 3:00 Uhr auf 2:00 Uhr zurückgestellt wird.")
```
7f4a9e80db0b470c699d3197f8196af8
cc-by-4.0
['answer extraction']
false
Evaluation

- ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-dequad-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_dequad.default.json)

| | Score | Type | Dataset |
|:-----------------|--------:|:--------|:-----------------------------------------------------------------|
| AnswerExactMatch | 5.54 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| AnswerF1Score | 30.15 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| BERTScore | 69.15 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| Bleu_1 | 13.01 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| Bleu_2 | 8.54 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| Bleu_3 | 5.66 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| Bleu_4 | 3.71 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| METEOR | 21.42 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| MoverScore | 53.96 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| ROUGE_L | 15.18 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
cca1bd37b07584105593c5e108319db1
cc-by-4.0
['answer extraction']
false
Training hyperparameters

The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_dequad
- dataset_name: default
- input_types: ['paragraph_sentence']
- output_types: ['answer']
- prefix_types: None
- model: google/mt5-base
- max_length: 512
- max_length_output: 32
- epoch: 15
- batch: 8
- lr: 0.001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 8
- label_smoothing: 0.15

The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-base-dequad-ae/raw/main/trainer_config.json).
c85e67c92f2ae82a7c9fa25fb05864b3
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 64
- total_train_batch_size: 128
- total_eval_batch_size: 5
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- training precision: Mixed Precision
d48dd88f01a42781fea94170d3030b52
other
['generated_from_trainer']
false
125m-dalio-book-handwritten-io-constant-1e-6-v2

This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the AlekseyKorshuk/dalio-book-handwritten-io-sorted-v2 dataset. It achieves the following results on the evaluation set:
- Loss: 3.0859
- Accuracy: 0.2336
- Perplexity: 21.8880
c67134929cbd61ac928f921b0c898108
other
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Perplexity |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|
| 3.3352 | 0.01 | 1 | 3.1738 | 0.2305 | 23.8988 |
| 3.3091 | 0.03 | 2 | 3.1738 | 0.2305 | 23.8988 |
| 3.3347 | 0.04 | 3 | 3.1738 | 0.2305 | 23.8988 |
| 3.1445 | 0.05 | 4 | 3.1738 | 0.2305 | 23.8988 |
| 2.8918 | 0.07 | 5 | 3.1738 | 0.2305 | 23.8988 |
| 3.2068 | 0.08 | 6 | 3.1738 | 0.2305 | 23.8988 |
| 3.6245 | 0.09 | 7 | 3.1719 | 0.2305 | 23.8522 |
| 3.2256 | 0.11 | 8 | 3.1719 | 0.2305 | 23.8522 |
| 2.9991 | 0.12 | 9 | 3.1699 | 0.2305 | 23.8056 |
| 3.3257 | 0.13 | 10 | 3.1680 | 0.2306 | 23.7592 |
| 3.1199 | 0.15 | 11 | 3.1660 | 0.2306 | 23.7128 |
| 3.3735 | 0.16 | 12 | 3.1660 | 0.2306 | 23.7128 |
| 3.0051 | 0.17 | 13 | 3.1641 | 0.2307 | 23.6665 |
| 3.2695 | 0.19 | 14 | 3.1621 | 0.2308 | 23.6204 |
| 3.2004 | 0.2 | 15 | 3.1602 | 0.2309 | 23.5743 |
| 3.2075 | 0.21 | 16 | 3.1582 | 0.2308 | 23.5283 |
| 3.321 | 0.23 | 17 | 3.1562 | 0.2308 | 23.4824 |
| 3.4026 | 0.24 | 18 | 3.1543 | 0.2309 | 23.4366 |
| 3.0383 | 0.25 | 19 | 3.1523 | 0.2309 | 23.3908 |
| 3.166 | 0.27 | 20 | 3.1504 | 0.2309 | 23.3452 |
| 3.144 | 0.28 | 21 | 3.1484 | 0.2310 | 23.2996 |
| 3.1624 | 0.29 | 22 | 3.1484 | 0.2310 | 23.2996 |
| 3.0332 | 0.31 | 23 | 3.1465 | 0.2310 | 23.2542 |
| 3.3745 | 0.32 | 24 | 3.1445 | 0.2311 | 23.2088 |
| 3.0823 | 0.33 | 25 | 3.1426 | 0.2312 | 23.1635 |
| 3.6021 | 0.35 | 26 | 3.1406 | 0.2312 | 23.1183 |
| 3.1125 | 0.36 | 27 | 3.1387 | 0.2313 | 23.0732 |
| 3.1406 | 0.37 | 28 | 3.1387 | 0.2314 | 23.0732 |
| 3.1736 | 0.39 | 29 | 3.1367 | 0.2314 | 23.0282 |
| 3.1104 | 0.4 | 30 | 3.1348 | 0.2315 | 22.9832 |
| 3.1301 | 0.41 | 31 | 3.1328 | 0.2316 | 22.9384 |
| 3.3376 | 0.43 | 32 | 3.1309 | 0.2315 | 22.8936 |
| 3.218 | 0.44 | 33 | 3.1309 | 0.2316 | 22.8936 |
| 3.0786 | 0.45 | 34 | 3.1289 | 0.2316 | 22.8490 |
| 3.0125 | 0.47 | 35 | 3.1270 | 0.2317 | 22.8044 |
| 3.2634 | 0.48 | 36 | 3.1270 | 0.2317 | 22.8044 |
| 2.9888 | 0.49 | 37 | 3.125 | 0.2318 | 22.7599 |
| 3.1624 | 0.51 | 38 | 3.1230 | 0.2318 | 22.7155 |
| 2.9807 | 0.52 | 39 | 3.1211 | 0.2319 | 22.6712 |
| 3.446 | 0.53 | 40 | 3.1211 | 0.2319 | 22.6712 |
| 3.1338 | 0.55 | 41 | 3.1191 | 0.2320 | 22.6269 |
| 3.1841 | 0.56 | 42 | 3.1191 | 0.2320 | 22.6269 |
| 3.1079 | 0.57 | 43 | 3.1172 | 0.2320 | 22.5828 |
| 3.0918 | 0.59 | 44 | 3.1152 | 0.2321 | 22.5387 |
| 3.0302 | 0.6 | 45 | 3.1152 | 0.2322 | 22.5387 |
| 3.1123 | 0.61 | 46 | 3.1133 | 0.2323 | 22.4947 |
| 2.9985 | 0.63 | 47 | 3.1113 | 0.2324 | 22.4508 |
| 3.3816 | 0.64 | 48 | 3.1113 | 0.2324 | 22.4508 |
| 3.0813 | 0.65 | 49 | 3.1094 | 0.2324 | 22.4070 |
| 3.2024 | 0.67 | 50 | 3.1094 | 0.2325 | 22.4070 |
| 3.0178 | 0.68 | 51 | 3.1074 | 0.2325 | 22.3633 |
| 3.1646 | 0.69 | 52 | 3.1074 | 0.2326 | 22.3633 |
| 3.0046 | 0.71 | 53 | 3.1055 | 0.2327 | 22.3197 |
| 3.0266 | 0.72 | 54 | 3.1055 | 0.2327 | 22.3197 |
| 3.3857 | 0.73 | 55 | 3.1035 | 0.2327 | 22.2761 |
| 3.064 | 0.75 | 56 | 3.1035 | 0.2328 | 22.2761 |
| 3.176 | 0.76 | 57 | 3.1016 | 0.2328 | 22.2327 |
| 3.1851 | 0.77 | 58 | 3.1016 | 0.2329 | 22.2327 |
| 3.0811 | 0.79 | 59 | 3.0996 | 0.2329 | 22.1893 |
| 3.0205 | 0.8 | 60 | 3.0996 | 0.2330 | 22.1893 |
| 3.26 | 0.81 | 61 | 3.0977 | 0.2330 | 22.1460 |
| 3.2922 | 0.83 | 62 | 3.0977 | 0.2331 | 22.1460 |
| 3.5349 | 0.84 | 63 | 3.0957 | 0.2331 | 22.1028 |
| 3.3525 | 0.85 | 64 | 3.0957 | 0.2331 | 22.1028 |
| 3.135 | 0.87 | 65 | 3.0938 | 0.2331 | 22.0596 |
| 3.1707 | 0.88 | 66 | 3.0938 | 0.2332 | 22.0596 |
| 3.0127 | 0.89 | 67 | 3.0918 | 0.2332 | 22.0166 |
| 3.0952 | 0.91 | 68 | 3.0918 | 0.2332 | 22.0166 |
| 3.1023 | 0.92 | 69 | 3.0898 | 0.2334 | 21.9736 |
| 3.3821 | 0.93 | 70 | 3.0898 | 0.2334 | 21.9736 |
| 3.1118 | 0.95 | 71 | 3.0879 | 0.2334 | 21.9308 |
| 3.1143 | 0.96 | 72 | 3.0879 | 0.2335 | 21.9308 |
| 3.1118 | 0.97 | 73 | 3.0879 | 0.2335 | 21.9308 |
| 3.0596 | 0.99 | 74 | 3.0859 | 0.2336 | 21.8880 |
| 3.1033 | 1.0 | 75 | 3.0859 | 0.2336 | 21.8880 |
3c7044cc67adb80781d0472b93140e48
mit
[]
false
andynsane model by NobuLuis

This is the Stable Diffusion model fine-tuned on the andynsane concept, taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **a photo of sks andynsane**

You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)

Here are the images used for training this concept:
![image 0](https://huggingface.co/sd-dreambooth-library/andynsane/resolve/main/concept_images/1.jpeg)
![image 1](https://huggingface.co/sd-dreambooth-library/andynsane/resolve/main/concept_images/4.jpeg)
![image 2](https://huggingface.co/sd-dreambooth-library/andynsane/resolve/main/concept_images/2.jpeg)
![image 3](https://huggingface.co/sd-dreambooth-library/andynsane/resolve/main/concept_images/0.jpeg)
![image 4](https://huggingface.co/sd-dreambooth-library/andynsane/resolve/main/concept_images/3.jpeg)
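A short `diffusers` inference sketch, assuming the checkpoint lives at `sd-dreambooth-library/andynsane` (the repo the training images above are served from):
```python
import torch
from diffusers import StableDiffusionPipeline

# Repo id inferred from the training-image URLs above.
pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/andynsane", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of sks andynsane").images[0]
image.save("andynsane.png")
```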
ea0fc0edccb8b92c252db1da1cbd35b6
apache-2.0
['generated_from_trainer']
false
distilled-indobert-classification

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the indonlu dataset. It achieves the following results on the evaluation set:
- Loss: 0.6015
- Accuracy: 0.9016
- F1: 0.9015
a86c37328173d418c1551b5be9c44036
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
af52699ecdfa774c266b342eeb249e58
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0427 | 1.0 | 688 | 0.6306 | 0.8683 | 0.8684 |
| 0.5332 | 2.0 | 1376 | 0.5621 | 0.8794 | 0.8779 |
| 0.3021 | 3.0 | 2064 | 0.6785 | 0.8905 | 0.8896 |
| 0.1851 | 4.0 | 2752 | 0.6085 | 0.8968 | 0.8959 |
| 0.1152 | 5.0 | 3440 | 0.6015 | 0.9016 | 0.9015 |
2927cec7686f29daf0dbefa47c60e0f2
apache-2.0
['generated_from_trainer']
false
02_model

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.5219
- Accuracy: 0.7412
- F1: 0.7625
a1151f8d934823ef8ccfa1abb91ce731
apache-2.0
['bert', 'mrpc', 'glue', 'kd', 'torchdistill']
false
`bert-base-uncased` fine-tuned on the MRPC dataset, using a fine-tuned `bert-large-uncased` as the teacher model, with [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_kd_and_submission.ipynb) for knowledge distillation. The training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/mrpc/kd/bert_base_uncased_from_bert_large_uncased.yaml). I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **78.9**.
b081bf023b231b38819b0cf0208e8aaa
mit
['medical']
false
BioGPT Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98%, 38.42% and 40.76% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.
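A minimal generation sketch, assuming the checkpoint is the one published as `microsoft/biogpt` (the hub id is not stated in this excerpt):
```python
from transformers import pipeline

# Assumed hub id for the released BioGPT checkpoint.
generator = pipeline("text-generation", model="microsoft/biogpt")
print(generator("COVID-19 is", max_new_tokens=30)[0]["generated_text"])
```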
d0822b0999a6a3c0ea9cb5a8c3aed61c
mit
['medical']
false
Citation

If you find BioGPT useful in your research, please cite the following paper:
```latex
@article{10.1093/bib/bbac409,
    author = {Luo, Renqian and Sun, Liai and Xia, Yingce and Qin, Tao and Zhang, Sheng and Poon, Hoifung and Liu, Tie-Yan},
    title = "{BioGPT: generative pre-trained transformer for biomedical text generation and mining}",
    journal = {Briefings in Bioinformatics},
    volume = {23},
    number = {6},
    year = {2022},
    month = {09},
    abstract = "{Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98\%, 38.42\% and 40.76\% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2\% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.}",
    issn = {1477-4054},
    doi = {10.1093/bib/bbac409},
    url = {https://doi.org/10.1093/bib/bbac409},
    note = {bbac409},
    eprint = {https://academic.oup.com/bib/article-pdf/23/6/bbac409/47144271/bbac409.pdf},
}
```
1652973c3d97b8e93b679074a958b3dd
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Wav2Vec2-Large-XLSR-Sundanese Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the [OpenSLR High quality TTS data for Sundanese](https://openslr.org/44/). When using this model, make sure that your speech input is sampled at 16kHz.
d55446d92d442f3dcd6f171236ee2997
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Usage

The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric, Dataset
from datasets.utils.download_manager import DownloadManager
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from pathlib import Path
import pandas as pd

def load_dataset_sundanese():
    urls = [
        "https://www.openslr.org/resources/44/su_id_female.zip",
        "https://www.openslr.org/resources/44/su_id_male.zip"
    ]
    dm = DownloadManager()
    download_dirs = dm.download_and_extract(urls)
    data_dirs = [
        Path(download_dirs[0])/"su_id_female/wavs",
        Path(download_dirs[1])/"su_id_male/wavs",
    ]
    filenames = [
        Path(download_dirs[0])/"su_id_female/line_index.tsv",
        Path(download_dirs[1])/"su_id_male/line_index.tsv",
    ]
    dfs = []
    dfs.append(pd.read_csv(filenames[0], sep='\t4?\t', names=["path", "sentence"]))
    dfs.append(pd.read_csv(filenames[1], sep='\t\t', names=["path", "sentence"]))
    for i, dir in enumerate(data_dirs):
        # build absolute wav paths from the file stems listed in the index
        dfs[i]["path"] = dfs[i]["path"].apply(lambda p: str(data_dirs[i]) + "/" + p + ".wav")
    df = pd.concat(dfs)
```
51b5802ea94677b25b8430e3992ed77e
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
```python
    df = df.sample(frac=1, random_state=1).reset_index(drop=True)
    dataset = Dataset.from_pandas(df)
    dataset = dataset.remove_columns('__index_level_0__')
    return dataset.train_test_split(test_size=0.1, seed=1)

dataset = load_dataset_sundanese()
test_dataset = dataset['test']

processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-sundanese")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-sundanese")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
271365121b88d01c26bf05ffad3e9da4
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
```python
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
0000eefd89a5b5ab0163985b8e55a499
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Evaluation

The model can be evaluated as follows or using the [notebook](https://github.com/cahya-wirawan/indonesian-speech-recognition/blob/main/XLSR_Wav2Vec2_for_Indonesian_Evaluation-Sundanese.ipynb).
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric, Dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets.utils.download_manager import DownloadManager
import re
from pathlib import Path
import pandas as pd

def load_dataset_sundanese():
    urls = [
        "https://www.openslr.org/resources/44/su_id_female.zip",
        "https://www.openslr.org/resources/44/su_id_male.zip"
    ]
    dm = DownloadManager()
    download_dirs = dm.download_and_extract(urls)
    data_dirs = [
        Path(download_dirs[0])/"su_id_female/wavs",
        Path(download_dirs[1])/"su_id_male/wavs",
    ]
    filenames = [
        Path(download_dirs[0])/"su_id_female/line_index.tsv",
        Path(download_dirs[1])/"su_id_male/line_index.tsv",
    ]
    dfs = []
    dfs.append(pd.read_csv(filenames[0], sep='\t4?\t', names=["path", "sentence"]))
    dfs.append(pd.read_csv(filenames[1], sep='\t\t', names=["path", "sentence"]))
    for i, dir in enumerate(data_dirs):
        # build absolute wav paths from the file stems listed in the index
        dfs[i]["path"] = dfs[i]["path"].apply(lambda p: str(data_dirs[i]) + "/" + p + ".wav")
    df = pd.concat(dfs)
```
1547cc4929faaa9a476228f178e50020
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
```python
    df = df.sample(frac=1, random_state=1).reset_index(drop=True)
    dataset = Dataset.from_pandas(df)
    dataset = dataset.remove_columns('__index_level_0__')
    return dataset.train_test_split(test_size=0.1, seed=1)

dataset = load_dataset_sundanese()
test_dataset = dataset['test']
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-sundanese")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-sundanese")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\'\”_\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
51f20afb478b2f59db8432caf1c1de6f
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
```python
# We need to read the audio files as arrays
# (the preprocessing map below is assumed from the standard XLSR evaluation
#  template; the original snippet jumps straight to `evaluate`)
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 6.19 %
67a3e0c8d206f31c2c177ec81fc135e3
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Training

[OpenSLR High quality TTS data for Sundanese](https://openslr.org/44/) was used for training. The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition/blob/main/XLSR_Wav2Vec2_for_Indonesian_Evaluation-Sundanese.ipynb), as can the script used to [evaluate it](https://github.com/cahya-wirawan/indonesian-speech-recognition/blob/main/XLSR_Wav2Vec2_for_Indonesian_Evaluation-Sundanese.ipynb).
d4fa5552a2f9f14d9a325ba6e4ba3503
mit
['generated_from_trainer']
false
bert-base-german-cased-finetuned-subj_v6_7Epoch_v3

This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.2732
- Precision: 0.7654
- Recall: 0.7829
- F1: 0.7740
- Accuracy: 0.9119
04429c8c9c6b9910607c778980e45681
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 33 | 0.3281 | 0.6656 | 0.5914 | 0.6263 | 0.8623 |
| No log | 2.0 | 66 | 0.2623 | 0.7440 | 0.7057 | 0.7243 | 0.8940 |
| No log | 3.0 | 99 | 0.2460 | 0.7536 | 0.7514 | 0.7525 | 0.9067 |
| No log | 4.0 | 132 | 0.2440 | 0.7778 | 0.76 | 0.7688 | 0.9124 |
| No log | 5.0 | 165 | 0.2582 | 0.7723 | 0.7657 | 0.7690 | 0.9107 |
| No log | 6.0 | 198 | 0.2681 | 0.7690 | 0.78 | 0.7745 | 0.9119 |
| No log | 7.0 | 231 | 0.2732 | 0.7654 | 0.7829 | 0.7740 | 0.9119 |
14eb8ca37c68741ef0013c96fc186fb5
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Whisper Small Basque - Xabi Ezpeleta

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set:
- Loss: 0.2666
- Wer: 23.9965
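A minimal transcription sketch; the hub id below is a placeholder for wherever this checkpoint is published:
```python
from transformers import pipeline

# Placeholder repo id; substitute the actual hub path of this checkpoint.
asr = pipeline("automatic-speech-recognition", model="your-username/whisper-small-eu")
print(asr("sample.mp3")["text"])
```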
2640ee1155f247fe781ad635c88b25db
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2635 | 0.92 | 1000 | 0.3264 | 31.9754 |
| 0.1492 | 1.84 | 2000 | 0.2668 | 25.7403 |
| 0.0707 | 2.76 | 3000 | 0.2595 | 24.4859 |
| 0.03 | 3.68 | 4000 | 0.2666 | 23.9965 |
1e329aff793d37385ea33867c1f90379
apache-2.0
['generated_from_trainer']
false
distilled-mt5-small-0.05-0.5

This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset. It achieves the following results on the evaluation set:
- Loss: 2.8399
- Bleu: 7.0815
- Gen Len: 43.6583
c63c0a9e7f8afb0e46a536b028f3b277
mit
[]
false
SpaceBERT This is one of the 3 further pre-trained models from the SpaceTransformers family presented in [SpaceTransformers: Language Modeling for Space Systems](https://ieeexplore.ieee.org/document/9548078). The original Git repo is [strath-ace/smart-nlp](https://github.com/strath-ace/smart-nlp). The further pre-training corpus includes publications abstracts, books, and Wikipedia pages related to space systems. Corpus size is 14.3 GB. SpaceBERT was further pre-trained on this domain-specific corpus from [BERT-Base (uncased)](https://huggingface.co/bert-base-uncased). In our paper, it is then fine-tuned for a Concept Recognition task.
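A minimal fill-mask sketch for a BERT-style checkpoint like this one; the repo id below is a placeholder, as this excerpt does not state where SpaceBERT is hosted:
```python
from transformers import pipeline

# Placeholder repo id for the SpaceBERT checkpoint.
fill_mask = pipeline("fill-mask", model="your-org/spacebert")
print(fill_mask("The spacecraft attitude is controlled by reaction [MASK]."))
```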
df77a266c37e5c4492685b516de57904
mit
['generated_from_keras_callback']
false
juro95/xlm-roberta-finetuned-ner-0.6-ratio-and-samples

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.0415
- Validation Loss: 0.0722
- Epoch: 3
b2f9e1ec504e8dba114d7346d0764a9d
mit
['generated_from_keras_callback']
false
Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 105112, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
48162c3a95b64ff0fdb3b2b8341cddaf
mit
['generated_from_keras_callback']
false
Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.2391 | 0.1212 | 0 |
| 0.1048 | 0.0862 | 1 |
| 0.0644 | 0.0734 | 2 |
| 0.0415 | 0.0722 | 3 |
7f156863ab84b5bcbf342a6420c90b52
apache-2.0
['generated_from_trainer']
false
led-base-16384-100-MDS

This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 4.1425
- Rouge1: 16.7324
- Rouge2: 5.8501
- Rougel: 13.908
- Rougelsum: 13.8469
- Gen Len: 20.0
757845fcea295dc99cf83a34bc50c800
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
ff4842a53f719ab29f54314031793315
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 25 | 3.6187 | 15.1426 | 4.2468 | 13.4488 | 13.38 | 20.0 |
| No log | 2.0 | 50 | 3.9873 | 13.4341 | 3.3283 | 10.2739 | 10.8229 | 20.0 |
| No log | 3.0 | 75 | 4.0264 | 18.1891 | 5.3395 | 15.0797 | 15.3586 | 20.0 |
| No log | 4.0 | 100 | 4.0929 | 17.0091 | 5.5336 | 14.4381 | 14.5149 | 19.5 |
| No log | 5.0 | 125 | 4.1425 | 16.7324 | 5.8501 | 13.908 | 13.8469 | 20.0 |
fb658851eab366719048e0f62810b750
apache-2.0
['Text', 'Sentence Similarity', 'Sentence-Embedding', 'camembert-base']
false
This is a pre-trained sentence embedding model, state of the art for French sentence embeddings. The model is fine-tuned from the pre-trained [facebook/camembert-base](https://huggingface.co/camembert/camembert-base) using [Siamese BERT-Networks with 'sentence-transformers'](https://www.sbert.net/) on the [stsb](https://huggingface.co/datasets/stsb_multi_mt/viewer/fr/train) dataset.
3a55e6b7332c8f1ef99270076aadc153
apache-2.0
['Text', 'Sentence Similarity', 'Sentence-Embedding', 'camembert-base']
false
Usage

The model can be used directly (without a language model) as follows:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("dangvantuan/sentence-camembert-base")
sentences = [
    "Un avion est en train de décoller.",
    "Un homme joue d'une grande flûte.",
    "Un homme étale du fromage râpé sur une pizza.",
    "Une personne jette un chat au plafond.",
    "Une personne est en train de plier un morceau de papier.",
]
embeddings = model.encode(sentences)
```
524a03d875854b13ef65efe2bde0207e
apache-2.0
['Text', 'Sentence Similarity', 'Sentence-Embedding', 'camembert-base']
false
Evaluation

The model can be evaluated as follows on the French test data of stsb.
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.readers import InputExample
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator
from datasets import load_dataset

def convert_dataset(dataset):
    dataset_samples = []
    for df in dataset:
        score = float(df['similarity_score']) / 5.0
```
8e0539147e08ddc4b6905a646a153503
apache-2.0
['Text', 'Sentence Similarity', 'Sentence-Embedding', 'camembert-base']
false
```python
        # Normalize score to range 0 ... 1
        inp_example = InputExample(texts=[df['sentence1'], df['sentence2']], label=score)
        dataset_samples.append(inp_example)
    return dataset_samples
```
c0d8f9930d155949daeae9eaadeb8f62
apache-2.0
['Text', 'Sentence Similarity', 'Sentence-Embedding', 'camembert-base']
false
```python
# For the test set (assumed: the French test split of stsb_multi_mt, as linked above)
df_test = load_dataset("stsb_multi_mt", name="fr", split="test")
test_samples = convert_dataset(df_test)
test_evaluator = EmbeddingSimilarityEvaluator.from_input_examples(test_samples, name='sts-test')
test_evaluator(model, output_path="./")
```
**Test Result**: The performance is measured using Pearson and Spearman correlation:
- On dev

| Model | Pearson correlation | Spearman correlation | params |
6a71e38925af1f49deae0240e967c170
apache-2.0
['Text', 'Sentence Similarity', 'Sentence-Embedding', 'camembert-base']
false
| ------------- | ------------- | ------------- | ------------- |
| [dangvantuan/sentence-camembert-base](https://huggingface.co/dangvantuan/sentence-camembert-base) | 86.73 | 86.54 | 110M |
| [distiluse-base-multilingual-cased](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased) | 79.22 | 79.16 | 135M |

- On test

| Model | Pearson correlation | Spearman correlation |
| ------------- | ------------- | ------------- |
| [dangvantuan/sentence-camembert-base](https://huggingface.co/dangvantuan/sentence-camembert-base) | 82.36 | 81.64 |
| [distiluse-base-multilingual-cased](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased) | 78.62 | 77.48 |
fd7ec1f92ad645f50c668a980207226f
apache-2.0
['Text', 'Sentence Similarity', 'Sentence-Embedding', 'camembert-base']
false
Citation

@article{reimers2019sentence,
  title={Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks},
  author={Nils Reimers and Iryna Gurevych},
  journal={https://arxiv.org/abs/1908.10084},
  year={2019}
}

@article{martin2020camembert,
  title={CamemBERT: a Tasty French Language Model},
  author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
  journal={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
  year={2020}
}
f11b9316a0885e579740b4420c3e92a1
apache-2.0
['translation']
false
opus-mt-sv-war

* source languages: sv
* target languages: war
* OPUS readme: [sv-war](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-war/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-war/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-war/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-war/opus-2020-01-16.eval.txt)
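A short usage sketch, assuming the checkpoint follows the standard Helsinki-NLP hub naming (`Helsinki-NLP/opus-mt-sv-war`, unverified here):
```python
from transformers import MarianMTModel, MarianTokenizer

# Assumed hub id based on the standard Helsinki-NLP naming scheme.
name = "Helsinki-NLP/opus-mt-sv-war"
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

batch = tokenizer(["Jag älskar dig."], return_tensors="pt")  # "I love you" in Swedish
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```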
49159a949473c7e383f8282717c02d0d
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-timit-moaiz_exp2

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 3.1884
- Wer: 1.0
25b47c0f5e938d4a1ed1060359927510
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
78b6c5c0676e018b836f2ffd0ac4db6d
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 4.15 | 13.89 | 500 | 3.2020 | 1.0 |
| 3.1522 | 27.78 | 1000 | 3.1884 | 1.0 |
371307b763941dcd124b30e8308cd851
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-finetuned-stop-classification

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the audiofolder dataset. It achieves the following results on the evaluation set:
- Loss: 0.1647
- Accuracy: 0.9470
6c0b7138fbb78156438741a8b4085842
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
7cfbf723aa19406ec897feba89f50bce
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.671 | 0.98 | 26 | 0.5553 | 0.8347 |
| 0.3525 | 1.98 | 52 | 0.2647 | 0.9163 |
| 0.291 | 2.98 | 78 | 0.2474 | 0.9070 |
| 0.2733 | 3.98 | 104 | 0.1729 | 0.9439 |
| 0.2467 | 4.98 | 130 | 0.1647 | 0.9470 |
0080cc33d769d9d2f4374e9b8d21b45f
apache-2.0
['summarization', 'arabic', 'ar', 'mt5', 'Abstractive Summarization', 'generated_from_trainer']
false
mt5-base-arabic

This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the Arabic subset of the xlsum dataset. It achieves the following results on the evaluation set:
- Loss: 3.2742
- Rouge-1: 22.86
- Rouge-2: 10.31
- Rouge-l: 20.85
- Gen Len: 19.0
- Bertscore: 71.52
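A minimal summarization sketch; the repo id below is a placeholder, since the uploader is not named in this excerpt:
```python
from transformers import pipeline

# Placeholder repo id; point this at the actual fine-tuned checkpoint.
summarizer = pipeline("summarization", model="your-username/mt5-base-arabic")
article = "..."  # an Arabic news article goes here
print(summarizer(article, max_length=20)[0]["summary_text"])
```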
fde481a2bc19ee88dd34347c75524509
apache-2.0
['summarization', 'arabic', 'ar', 'mt5', 'Abstractive Summarization', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|
| 4.2331 | 1.0 | 1172 | 3.5051 | 18.54 | 6.63 | 16.77 | 19.0 | 70.28 |
| 3.7075 | 2.0 | 2344 | 3.3737 | 19.99 | 7.94 | 18.19 | 19.0 | 70.79 |
| 3.5132 | 3.0 | 3516 | 3.3171 | 20.76 | 8.57 | 18.96 | 19.0 | 70.95 |
| 3.3859 | 4.0 | 4688 | 3.2811 | 21.49 | 8.99 | 19.51 | 19.0 | 71.19 |
| 3.3012 | 5.0 | 5860 | 3.2742 | 21.79 | 9.18 | 19.77 | 19.0 | 71.25 |
5b9475bfcc38a69259aa3a2ccff242f1
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-ner

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.3100
- Precision: 0.9309
- Recall: 0.9435
- F1: 0.9371
- Accuracy: 0.9294
3e02181c68c362a5f79183e58174a97e
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 234 | 0.2362 | 0.9356 | 0.9484 | 0.9420 | 0.9335 |
| No log | 2.0 | 468 | 0.2854 | 0.9303 | 0.9425 | 0.9363 | 0.9282 |
| 0.2119 | 3.0 | 702 | 0.3100 | 0.9309 | 0.9435 | 0.9371 | 0.9294 |
4af184c104044d0a25a6df64f57d207f
mit
[]
false
gpt2-wechsel-uyghur

Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.

See the code here: https://github.com/CPJKU/wechsel

And the paper here: https://arxiv.org/abs/2112.06598
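A generation sketch under the assumption that the checkpoint is published under the WECHSEL authors' usual hub naming (`benjamin/gpt2-wechsel-uyghur`, unverified):
```python
from transformers import pipeline

# Assumed hub id following the WECHSEL release naming; verify before use.
generator = pipeline("text-generation", model="benjamin/gpt2-wechsel-uyghur")
prompt = "..."  # pass a Uyghur-script prompt here
print(generator(prompt, max_new_tokens=30))
```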
a32e01c6671f079c28d45f3518d0e84d
mit
[]
false
Citation

Please cite WECHSEL as
```
@misc{minixhofer2021wechsel,
    title={WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models},
    author={Benjamin Minixhofer and Fabian Paischer and Navid Rekabsaz},
    year={2021},
    eprint={2112.06598},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
83fd7a44f2a6de2dcae44b6505c07c21
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'hy']
false
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HY-AM dataset. It achieves the following results on the evaluation set:
- Loss: 0.5891
- Wer: 0.6569

**Note**: If you aim for best performance, use [this model](https://huggingface.co/arampacha/wav2vec2-xls-r-300m-hy). It is trained using the noisy student procedure and achieves considerably better results.
386078c2eaa1524ea73218573a319d4e
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'hy']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1200
- mixed_precision_training: Native AMP
f9c242dbe7ebe3f80a3c64790a93d1bf
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'hy']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 9.167 | 16.67 | 100 | 3.5599 | 1.0 |
| 3.2645 | 33.33 | 200 | 3.1771 | 1.0 |
| 3.1509 | 50.0 | 300 | 3.1321 | 1.0 |
| 3.0757 | 66.67 | 400 | 2.8594 | 1.0 |
| 2.5274 | 83.33 | 500 | 1.5286 | 0.9797 |
| 1.6826 | 100.0 | 600 | 0.8058 | 0.7974 |
| 1.2868 | 116.67 | 700 | 0.6713 | 0.7279 |
| 1.1262 | 133.33 | 800 | 0.6308 | 0.7034 |
| 1.0408 | 150.0 | 900 | 0.6056 | 0.6745 |
| 0.9617 | 166.67 | 1000 | 0.5891 | 0.6569 |
| 0.9196 | 183.33 | 1100 | 0.5913 | 0.6432 |
| 0.8853 | 200.0 | 1200 | 0.5924 | 0.6347 |
abf5fe33067350868b28109795b2df49
creativeml-openrail-m
['anime', 'manga', 'manhwa', 'webtoon']
false
<h1>The goal of this repo is to</h1>
<ul>
<li>Capture webtoon characters' unique characteristics</li>
<li>Get a variety of poses, gestures and actions without damaging too many characteristics</li>
</ul>
<h3>For the LoRA inference</h3>
<ul>
<li>The current LoRA checkpoints are under development; instructions will be added soon</li>
<li>For those who want to try it out, I recommend <b>Midnight Mixers</b> as the base model</li>
<li><b>512 (width) x 640 (height)</b></li>
<li><b>50 steps / 7 cfg</b>. Steps below 40 yield poor quality</li>
<li><b>LoRA weight in the 0.5–0.6 range</b></li>
</ul>
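A sketch of those settings with `diffusers`; all ids and file names below are placeholders (the card names a base model but gives no hub id, and the LoRA file name is not stated):
```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder ids: substitute the actual base model and this repo's LoRA weights.
pipe = StableDiffusionPipeline.from_pretrained("path/to/midnight-mixer-base", torch_dtype=torch.float16).to("cuda")
pipe.load_lora_weights("path/to/this-repo", weight_name="webtoon_lora.safetensors")

image = pipe(
    "a webtoon character, upper body portrait",
    width=512, height=640,                       # recommended resolution
    num_inference_steps=50, guidance_scale=7.0,  # recommended steps / cfg
    cross_attention_kwargs={"scale": 0.6},       # recommended LoRA weight (0.5-0.6)
).images[0]
```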
4218f868be89025710d72f31cae4c3aa
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased__hate_speech_offensive__train-32-7

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.8210
- Accuracy: 0.6305
a70ed8cbb67bbd40478e6ab96e392e20
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0989 | 1.0 | 19 | 1.0655 | 0.4 |
| 1.0102 | 2.0 | 38 | 0.9927 | 0.6 |
| 0.8063 | 3.0 | 57 | 0.9117 | 0.5 |
| 0.5284 | 4.0 | 76 | 0.8058 | 0.55 |
| 0.2447 | 5.0 | 95 | 0.8393 | 0.45 |
| 0.098 | 6.0 | 114 | 0.8438 | 0.6 |
| 0.0388 | 7.0 | 133 | 1.1901 | 0.45 |
| 0.0188 | 8.0 | 152 | 1.4429 | 0.45 |
| 0.0121 | 9.0 | 171 | 1.3648 | 0.4 |
| 0.0082 | 10.0 | 190 | 1.4768 | 0.4 |
| 0.0066 | 11.0 | 209 | 1.4830 | 0.45 |
| 0.0057 | 12.0 | 228 | 1.4936 | 0.45 |
| 0.0053 | 13.0 | 247 | 1.5649 | 0.4 |
| 0.0041 | 14.0 | 266 | 1.6306 | 0.4 |
d19a48fbd0a7ed867cf3497fa9a1212b
apache-2.0
['generated_from_trainer']
false
finetuning-sentiment-model-3000-samples

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set:
- Loss: 0.3041
- Accuracy: 0.87
- F1: 0.8696
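A quick inference sketch; the repo id below is a placeholder for wherever this checkpoint was pushed:
```python
from transformers import pipeline

# Placeholder repo id; substitute the actual hub path.
clf = pipeline("sentiment-analysis", model="your-username/finetuning-sentiment-model-3000-samples")
print(clf(["A moving and beautifully shot film.", "Two hours of my life I will never get back."]))
```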
ab1b9b4d33cef9cd9d7548fccb5dc8d3
apache-2.0
['setfit', 'sentence-transformers', 'text-classification']
false
fathyshalab/domain_transfer_clinic_credit_cards-massive_qa-roberta-large-v1-2-71

This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
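Loading the model for inference follows the standard SetFit pattern (a minimal sketch; install `setfit` first, repo id taken from the heading above):
```python
from setfit import SetFitModel

# Repo id taken from this card's heading.
model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_clinic_credit_cards-massive_qa-roberta-large-v1-2-71")
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst"])
print(preds)
```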
ab6db79a52dcb7e5acb537f0a4809678
apache-2.0
['classification', 'zero-shot']
false
Erlangshen-UniMC-Albert-235M-English

- Main Page: [Fengshenbang](https://fengshenbang-lm.com/)
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen/examples/unimc/)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
- API: [Fengshen-OpenAPI](https://fengshenbang-lm.com/open-api)
8b1c5596f4a08016a37f414c570a4d40
apache-2.0
['classification', 'zero-shot']
false
Model Taxonomy

| Demand | Task | Series | Model | Parameter | Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| General | NLU | Erlangshen | Albert | 235M | English |
4e6dc51d261d7cf4ba7e52fdd1d1e61a
apache-2.0
['classification', 'zero-shot']
false
Model Information

We propose a new paradigm for zero-shot learners that is input-agnostic, in the sense that it is compatible with any format and applicable to a range of language tasks, such as text classification, commonsense reasoning, coreference resolution, and sentiment analysis. Our approach converts zero-shot learning into multiple-choice tasks, avoiding the problems of commonly used large generative models such as FLAN. It not only adds generalization ability to the models, but also significantly reduces the number of parameters needed. We demonstrate that this approach leads to state-of-the-art performance on common language benchmarks, and produces satisfactory results on tasks such as natural language inference and text classification. For more details, please refer to our [paper](https://arxiv.org/abs/2210.08590) or [GitHub](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen/examples/unimc/).
9dc6a328582aa4ad8ed6bbcf6626bb28
apache-2.0
['classification', 'zero-shot']
false
Performance

**Zero-Shot Classification**

| Model | T0 11B | GLaM 60B | FLAN 137B | PaLM 540B | UniMC 235M |
|---------|--------|----------|-----------|-----------|------------|
| ANLI R1 | 43.6 | 40.9 | 47.7 | 48.4 | 52 |
| ANLI R2 | 38.7 | 38.2 | 43.9 | 44.2 | 44.4 |
| ANLI R3 | 41.3 | 40.9 | 47 | 45.7 | 47.8 |
| CB | 70.1 | 33.9 | 64.1 | 51.8 | 75.7 |
83d9dfabf29d90117e137ad6bfaf371c
apache-2.0
['classification', 'zero-shot']
false
Usage
```python3
import argparse
from fengshen.pipelines.multiplechoice import UniMCPipelines

total_parser = argparse.ArgumentParser("TASK NAME")
total_parser = UniMCPipelines.piplines_args(total_parser)
args = total_parser.parse_args()

pretrained_model_path = 'IDEA-CCNL/Erlangshen-UniMC-Albert-235M-English'
args.language = 'english'
args.learning_rate = 2e-5
args.max_length = 512
args.max_epochs = 3
args.batchsize = 8
args.default_root_dir = './'
model = UniMCPipelines(args, model_path=pretrained_model_path)

train_data = []
dev_data = []
test_data = [{
    "texta": "it 's just incredibly dull .",
    "textb": "",
    "question": "What is sentiment of follow review?",
    "choice": ["it's great", "it's terrible"],
    "answer": "",
    "label": 0,
    "id": 19
}]

if args.train:
    model.train(train_data, dev_data)
result = model.predict(test_data)
```
a815d56eacf7ddea619e30b1f0a75cf9
mit
[]
false
XGLM-4.5B XGLM-4.5B is a multilingual autoregressive language model (with 4.5 billion parameters) trained on a balanced corpus of a diverse set of 134 languages. It was introduced in the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin\*, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li\* (\*Equal Contribution). The original implementation was released in [this repository](https://github.com/pytorch/fairseq/tree/main/examples/xglm).
3352c1a9a3eafe3c70bdf53d029e8891
mit
[]
false
Example (COPA)

The following snippet shows how to evaluate our models (GPT-3 style, zero-shot) on the Choice of Plausible Alternatives (COPA) task, using examples in English, Chinese, and Haitian Creole.

```python
import torch
import torch.nn.functional as F
from transformers import XGLMTokenizer, XGLMForCausalLM

tokenizer = XGLMTokenizer.from_pretrained("facebook/xglm-4.5B")
model = XGLMForCausalLM.from_pretrained("facebook/xglm-4.5B")

data_samples = {
    'en': [
        {
            "premise": "I wanted to conserve energy.",
            "choice1": "I swept the floor in the unoccupied room.",
            "choice2": "I shut off the light in the unoccupied room.",
            "question": "effect",
            "label": "1"
        },
        {
            "premise": "The flame on the candle went out.",
            "choice1": "I blew on the wick.",
            "choice2": "I put a match to the wick.",
            "question": "cause",
            "label": "0"
        }
    ],
    'zh': [
        {
            "premise": "我想节约能源。",
            "choice1": "我在空着的房间里扫了地板。",
            "choice2": "我把空房间里的灯关了。",
            "question": "effect",
            "label": "1"
        },
        {
            "premise": "蜡烛上的火焰熄灭了。",
            "choice1": "我吹灭了灯芯。",
            "choice2": "我把一根火柴放在灯芯上。",
            "question": "cause",
            "label": "0"
        }
    ],
    'ht': [
        {
            "premise": "M te vle konsève enèji.",
            "choice1": "Mwen te fin baleye chanm lib la.",
            "choice2": "Mwen te femen limyè nan chanm lib la.",
            "question": "effect",
            "label": "1"
        },
        {
            "premise": "Flam bouji a te etenn.",
            "choice1": "Mwen te soufle bouji a.",
            "choice2": "Mwen te limen mèch bouji a.",
            "question": "cause",
            "label": "0"
        }
    ]
}

def get_logprobs(prompt):
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids, output_ids = inputs["input_ids"], inputs["input_ids"][:, 1:]
    outputs = model(**inputs, labels=input_ids)
    logits = outputs.logits
    logprobs = torch.gather(F.log_softmax(logits, dim=2), 2, output_ids.unsqueeze(2))
    return logprobs

# Zero-shot COPA: the predicted alternative is the one whose continuation
# receives the higher total log-probability under the model.
def COPA_eval(premise, alternative1, alternative2):
    lprob1 = get_logprobs(premise + "\n" + alternative1).sum()
    lprob2 = get_logprobs(premise + "\n" + alternative2).sum()
    return 0 if lprob1 > lprob2 else 1

for lang, examples in data_samples.items():
    for idx, example in enumerate(examples):
        predict = COPA_eval(example["premise"], example["choice1"], example["choice2"])
        print(f'{lang}-{idx}', predict, example['label'])
```
c7e7e7e18dcea1603ebb0c9feb0a41d2
apache-2.0
['generated_from_trainer']
false
rte

This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE RTE dataset. It achieves the following results on the evaluation set:
- Loss: 0.8396
- Accuracy: 0.6679
1334cd76bbeac05a749d9bca28f4de9c
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
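For reference, a minimal sketch of how this configuration maps onto `transformers.TrainingArguments`; the output directory is a placeholder, and the original training script is not reproduced here:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a hypothetical path.
training_args = TrainingArguments(
    output_dir="./rte-mobilebert",
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10.0,
)
```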
2ce6f338aaa92a91f52fe068cb274fac
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-recipe-ar

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.0529
- F1: 0.9856
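A minimal usage sketch for token classification with this checkpoint; the model identifier below is a placeholder (substitute the actual Hub repo id or a local path), and the input line is illustrative:

```python
from transformers import pipeline

# Placeholder identifier: point this at the actual Hub repo or a local checkpoint.
ner = pipeline(
    "token-classification",
    model="path/to/xlm-roberta-base-finetuned-recipe-ar",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

# An Arabic recipe ingredient line (illustrative input).
print(ner("كوب واحد من الدقيق وملعقتان كبيرتان من السكر"))
```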
aab6b5c4cbbe01776c3bb3a800bb7c0b
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4605 | 1.0 | 74 | 0.1084 | 0.9609 |
| 0.1105 | 2.0 | 148 | 0.0563 | 0.9809 |
| 0.0696 | 3.0 | 222 | 0.0500 | 0.9851 |
| 0.0512 | 4.0 | 296 | 0.0529 | 0.9856 |
103c45a7820582c7a16f75ae9fb4c170
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Evaluation

The model can be evaluated as follows on the Indonesian test data of Common Voice.

```python
import re

import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "id", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("ayameRushia/wav2vec2-large-xlsr-indonesia-demo")
model = Wav2Vec2ForCTC.from_pretrained("ayameRushia/wav2vec2-large-xlsr-indonesia-demo")
model.to("cuda")

chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
1a65b64be9adb04c60d4c4faf00b62bd
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
We need to read the audio files as arrays.

```python
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
```
d70b5c09b1e5207607636d65a8d00773
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Run inference on the test set and compute the word error rate (WER).

```python
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: WER = 20.072720 %
c568a38326085d7aabd99012ff9bbc07
apache-2.0
['translation']
false
he-es

* source group: Hebrew
* target group: Spanish
* OPUS readme: [heb-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-spa/README.md)
* model: transformer
* source language(s): heb
* target language(s): spa
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-12-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-spa/opus-2020-12-10.zip)
* test set translations: [opus-2020-12-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-spa/opus-2020-12-10.test.txt)
* test set scores: [opus-2020-12-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-spa/opus-2020-12-10.eval.txt)
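For reference, a minimal usage sketch with the transformers Marian port; the Hub id `Helsinki-NLP/opus-mt-he-es` is assumed to be the ported checkpoint of these weights:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-he-es"  # assumed Hub id for this checkpoint
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a Hebrew sentence into Spanish.
batch = tokenizer(["שלום עולם"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```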
99ba415a06119a9f297b88cc8d810371
apache-2.0
['translation']
false
System Info:
- hf_name: he-es
- source_languages: heb
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['he', 'es']
- src_constituents: ('Hebrew', {'heb'})
- tgt_constituents: ('Spanish', {'spa'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: heb-spa
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-spa/opus-2020-12-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-spa/opus-2020-12-10.test.txt
- src_alpha3: heb
- tgt_alpha3: spa
- chrF2_score: 0.689
- bleu: 51.3
- brevity_penalty: 0.97
- ref_len: 14213.0
- src_name: Hebrew
- tgt_name: Spanish
- train_date: 2020-12-10 00:00:00
- src_alpha2: he
- tgt_alpha2: es
- prefer_old: False
- short_pair: he-es
- helsinki_git_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96
- transformers_git_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
- port_machine: LM0-400-22516.local
- port_time: 2020-12-11-09:15
a71078aeb16c6da5e02569a76f9edf9d
apache-2.0
['summarization', 'translation']
false
PreTraining

The model was pre-trained on a **multi-task mixture of unsupervised (1.) and supervised tasks (2.)**. The following datasets were used for (1.) and (2.); a sketch of the supervised text-to-text input format follows the list.

1. **Datasets used for the unsupervised denoising objective**:
   - [C4](https://huggingface.co/datasets/c4)
   - [Wiki-DPR](https://huggingface.co/datasets/wiki_dpr)
2. **Datasets used for the supervised text-to-text language modeling objective**:
   - Sentence acceptability judgment
     - CoLA [Warstadt et al., 2018](https://arxiv.org/abs/1805.12471)
   - Sentiment analysis
     - SST-2 [Socher et al., 2013](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf)
   - Paraphrasing/sentence similarity
     - MRPC [Dolan and Brockett, 2005](https://aclanthology.org/I05-5002)
     - STS-B [Cer et al., 2017](https://arxiv.org/abs/1708.00055)
     - QQP [Iyer et al., 2017](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)
   - Natural language inference
     - MNLI [Williams et al., 2017](https://arxiv.org/abs/1704.05426)
     - QNLI [Rajpurkar et al., 2016](https://arxiv.org/abs/1606.05250)
     - RTE [Dagan et al., 2005](https://link.springer.com/chapter/10.1007/11736790_9)
     - CB [De Marneff et al., 2019](https://semanticsarchive.net/Archive/Tg3ZGI2M/Marneffe.pdf)
   - Sentence completion
     - COPA [Roemmele et al., 2011](https://www.researchgate.net/publication/221251392_Choice_of_Plausible_Alternatives_An_Evaluation_of_Commonsense_Causal_Reasoning)
   - Word sense disambiguation
     - WIC [Pilehvar and Camacho-Collados, 2018](https://arxiv.org/abs/1808.09121)
   - Question answering
     - MultiRC [Khashabi et al., 2018](https://aclanthology.org/N18-1023)
     - ReCoRD [Zhang et al., 2018](https://arxiv.org/abs/1810.12885)
     - BoolQ [Clark et al., 2019](https://arxiv.org/abs/1905.10044)
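To make the supervised text-to-text objective concrete, here is a minimal illustrative sketch of how a GLUE example is serialized with a task prefix and a verbalized target before being fed to the model. The helper function is hypothetical; the prefix string follows the conventions described in the T5 paper:

```python
# Each supervised task is cast as text-to-text: prepend a task prefix to the
# input and verbalize the label as the target string.
def serialize_sst2(sentence, label):
    source = "sst2 sentence: " + sentence
    target = "positive" if label == 1 else "negative"
    return source, target

src, tgt = serialize_sst2("the movie was wonderful", 1)
print(src)  # sst2 sentence: the movie was wonderful
print(tgt)  # positive
```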
0df8cfd10467a3c551d6ce7702ec7e26
apache-2.0
['summarization', 'translation']
false
Paper

For more information, please take a look at the original paper.

Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)

Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*

**Abstract**

Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67)
460c5db4f45e821afed5890c91633a40
apache-2.0
['translation']
false
zho-fin

* source group: Chinese
* target group: Finnish
* OPUS readme: [zho-fin](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-fin/README.md)
* model: transformer-align
* source language(s): cmn_Bopo cmn_Hani cmn_Latn nan_Hani yue yue_Hani
* target language(s): fin
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-fin/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-fin/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-fin/opus-2020-06-17.eval.txt)
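A minimal usage sketch via the translation pipeline; the Hub id `Helsinki-NLP/opus-mt-zh-fi` is assumed to be the ported checkpoint of these weights, matching the `short_pair: zh-fi` entry in the System Info below:

```python
from transformers import pipeline

# Assumed Hub id for the ported checkpoint of this model.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-zh-fi")

# Translate a Chinese sentence into Finnish.
print(translator("你好,世界!"))
```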
5e276053ef60d2586f3df4ab6ac7898d
apache-2.0
['translation']
false
System Info:
- hf_name: zho-fin
- source_languages: zho
- target_languages: fin
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-fin/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'fi']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'fin'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-fin/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-fin/opus-2020-06-17.test.txt
- src_alpha3: zho
- tgt_alpha3: fin
- short_pair: zh-fi
- chrF2_score: 0.579
- bleu: 35.1
- brevity_penalty: 0.935
- ref_len: 1847.0
- src_name: Chinese
- tgt_name: Finnish
- train_date: 2020-06-17
- src_alpha2: zh
- tgt_alpha2: fi
- prefer_old: False
- long_pair: zho-fin
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
a57b599eac3ef9249a1e77f1ae61d583
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.9484 | 0.19 | 500 | 7.8474 |
| 7.7968 | 0.39 | 1000 | 7.7020 |
| 7.6992 | 0.58 | 1500 | 7.6949 |
| 7.656 | 0.77 | 2000 | 7.6922 |
| 7.68 | 0.97 | 2500 | 7.6863 |
| 7.5952 | 1.16 | 3000 | 7.6523 |
| 7.6441 | 1.36 | 3500 | 7.6523 |
| 7.6178 | 1.55 | 4000 | 7.6128 |
| 7.5977 | 1.74 | 4500 | 7.6556 |
| 7.6087 | 1.94 | 5000 | 7.5990 |
| 7.5734 | 2.13 | 5500 | 7.5997 |
| 7.566 | 2.32 | 6000 | 7.5961 |
| 7.5715 | 2.52 | 6500 | 7.5505 |
| 7.5604 | 2.71 | 7000 | 7.5788 |
| 7.5749 | 2.9 | 7500 | 7.5916 |
8dca02ab5229f2c5f6f347b7bb543611
apache-2.0
['generated_from_trainer']
false
tiny-mlm-glue-cola

This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: nan

Note that the final evaluation loss is NaN, which indicates the run became numerically unstable.
a0d6c719fea59986962760ed47408d24
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.2428 | 0.47 | 500 | 3.7383 |
| 4.0764 | 0.94 | 1000 | 3.6771 |
| 3.8781 | 1.4 | 1500 | 3.5846 |
| 3.8168 | 1.87 | 2000 | 3.6091 |
| 3.6486 | 2.34 | 2500 | 3.6647 |
| 3.7452 | 2.81 | 3000 | nan |
0a7fdf4cad50aa9436363bd268369e4c
mit
['generated_from_trainer']
false
roberta-base.CEBaB_confounding.observational.sa.5-class.seed_42

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the OpenTable OPENTABLE dataset. It achieves the following results on the evaluation set:
- Loss: 0.7697
- Accuracy: 0.7191
- Macro-f1: 0.7025
- Weighted-macro-f1: 0.7145
9714f4d26f1d69d80e5941b0fd8f8819