Dataset schema: license — string (2–30 chars) · tags — string (2–513 chars) · is_nc — bool (1 class) · readme_section — string (201–597k chars) · hash — string (32 chars)
gpl-3.0
['object-detection', 'yolo', 'autogenerated-modelcard']
false
Citation <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed]
72ac8c7f9b3cd2993c2b6f054af39c7b
apache-2.0
[]
false
distilbert-base-en-cased We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages. Our versions produce exactly the same representations as the original model, preserving the original accuracy. For more information, please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
31cf5d5645b32e0ec20b8674ebe04a0b
apache-2.0
[]
false
How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-cased") model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-cased") ``` To generate other smaller versions of multilingual transformers, please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
7b5ad54d2830d73e94f78dbf983c917a
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.7375
6bae5b23c9ffdc7ec8d2196b47a55b8b
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.4419 | 1.0 | 557 | 1.7242 | | 1.2397 | 2.0 | 1114 | 1.6714 | | 0.9066 | 3.0 | 1671 | 1.7375 |
391ec2924f9119c12cb49f4233f54a7c
apache-2.0
['generated_from_trainer']
false
demo_sentiment_42 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.6332 - F1: 0.7114
bd1a9aa80a5fd62155216fa1474ec03e
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8.62486660723695e-06 - train_batch_size: 64 - eval_batch_size: 64 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4
93fbc834e98efd39f0878e54fbc0b591
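A quick illustration of what `lr_scheduler_type: linear` in the hyperparameters above means: the learning rate decays linearly from its initial value to zero over the total number of optimizer steps. A plain-Python sketch of that schedule (assuming 713 optimizer steps per epoch, so 2852 steps over the 4 epochs; warmup defaults to zero since none is listed):

```python
def linear_lr(step, total_steps, initial_lr, warmup_steps=0):
    """Linear schedule: ramp up during warmup, then decay to 0 at total_steps."""
    if step < warmup_steps:
        return initial_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return initial_lr * remaining / max(1, total_steps - warmup_steps)

initial_lr = 8.62486660723695e-06  # learning_rate from the card
total_steps = 4 * 713              # num_epochs * assumed steps per epoch

lr_start = linear_lr(0, total_steps, initial_lr)               # full learning rate
lr_mid = linear_lr(total_steps // 2, total_steps, initial_lr)  # half decayed
lr_end = linear_lr(total_steps, total_steps, initial_lr)       # fully decayed to 0
```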
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.7592 | 1.0 | 713 | 0.6509 | 0.6834 | | 0.6389 | 2.0 | 1426 | 0.6318 | 0.7011 | | 0.5647 | 3.0 | 2139 | 0.6320 | 0.7041 | | 0.5391 | 4.0 | 2852 | 0.6332 | 0.7114 |
bb02d5c46fa19d043441662733b15146
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2235 - Accuracy: 0.9265 - F1: 0.9268
b7c5f2702d9fb0ca3f590dc00ff71405
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8101 | 1.0 | 250 | 0.3177 | 0.9045 | 0.9010 | | 0.2472 | 2.0 | 500 | 0.2235 | 0.9265 | 0.9268 |
75fe32a4e997f690f6e266a207e8d9c7
apache-2.0
['generated_from_trainer']
false
t5-base-asqa-ob This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the [ASQA](https://huggingface.co/datasets/din0s/asqa) dataset. It achieves the following results on the evaluation set: - Loss: 1.7356 - Rougelsum: 12.0879
1a54465bebbfc1acdee713e151dac8af
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP
7a3a21795dfb57cd3f39a11e877654a3
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:---------:| | No log | 1.0 | 355 | 1.8545 | 11.6549 | | 2.4887 | 2.0 | 710 | 1.8050 | 11.7533 | | 1.9581 | 3.0 | 1065 | 1.7843 | 11.8327 | | 1.9581 | 4.0 | 1420 | 1.7722 | 11.9442 | | 1.9252 | 5.0 | 1775 | 1.7648 | 11.9331 | | 1.8853 | 6.0 | 2130 | 1.7567 | 11.9788 | | 1.8853 | 7.0 | 2485 | 1.7519 | 12.0300 | | 1.8512 | 8.0 | 2840 | 1.7483 | 12.0225 | | 1.8328 | 9.0 | 3195 | 1.7451 | 12.0402 | | 1.8115 | 10.0 | 3550 | 1.7436 | 12.0444 | | 1.8115 | 11.0 | 3905 | 1.7419 | 12.0850 | | 1.7878 | 12.0 | 4260 | 1.7408 | 12.1047 | | 1.774 | 13.0 | 4615 | 1.7394 | 12.0839 | | 1.774 | 14.0 | 4970 | 1.7390 | 12.0910 | | 1.7787 | 15.0 | 5325 | 1.7381 | 12.0880 | | 1.7632 | 16.0 | 5680 | 1.7380 | 12.1088 | | 1.7623 | 17.0 | 6035 | 1.7370 | 12.1046 | | 1.7623 | 18.0 | 6390 | 1.7368 | 12.0997 | | 1.7508 | 19.0 | 6745 | 1.7359 | 12.0902 | | 1.7597 | 20.0 | 7100 | 1.7356 | 12.0879 |
db225b5f13557591a6a27d69124f9a92
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Small Slovak - Robust This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 sk dataset. It achieves the following results on the evaluation set: - Loss: 0.7397 - Wer: 43.6221
b26aa46fa09a93f9f9438444d7b1cd0e
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0232 | 14.29 | 1000 | 0.7425 | 51.8801 | | 0.0083 | 28.57 | 2000 | 0.7698 | 48.4888 | | 0.0006 | 42.86 | 3000 | 0.7640 | 47.5964 | | 0.0005 | 57.14 | 4000 | 0.7649 | 44.8953 | | 0.0002 | 71.43 | 5000 | 0.7440 | 44.3598 |
b0a736e91d76234de5097aebd5e98591
apache-2.0
['automatic-speech-recognition', 'ru']
false
exp_w2v2t_ru_r-wav2vec2_s869 Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
246d6757f25ef6f1bede6164f43897a6
apache-2.0
['generated_from_trainer', 'hf-asr-leaderboard']
false
wav2vec2-large-xls-r-300m-tr This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.2841 - Wer: 0.2904
40367e843d2e3361bca7a2dbbfba86ea
apache-2.0
['generated_from_trainer', 'hf-asr-leaderboard']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 7 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 14 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP
5fce67df2b68683cef80e3de5132730f
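As a sanity check on the numbers above: the reported `total_train_batch_size` is just `train_batch_size × gradient_accumulation_steps`, i.e. gradients from two forward/backward passes are accumulated before each optimizer update.

```python
train_batch_size = 7             # per-device batch size from the card
gradient_accumulation_steps = 2  # passes accumulated per optimizer step

# Effective batch size seen by each optimizer update
total_train_batch_size = train_batch_size * gradient_accumulation_steps
```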
apache-2.0
['generated_from_trainer', 'hf-asr-leaderboard']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.0805 | 4.03 | 1000 | 3.0333 | 1.0 | | 1.5733 | 8.06 | 2000 | 0.5545 | 0.5080 | | 0.6238 | 12.1 | 3000 | 0.3861 | 0.3977 | | 0.4535 | 16.13 | 4000 | 0.3253 | 0.3408 | | 0.3682 | 20.16 | 5000 | 0.3042 | 0.3177 | | 0.3302 | 24.19 | 6000 | 0.2950 | 0.3015 | | 0.2985 | 28.23 | 7000 | 0.2841 | 0.2904 |
08133e818b4fb66b8e2913ed6708fcfc
mit
['generated_from_keras_callback']
false
xlmrobertaenepochz This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.1485 - Train End Logits Accuracy: 0.6933 - Train Start Logits Accuracy: 0.6537 - Validation Loss: 0.9772 - Validation End Logits Accuracy: 0.7275 - Validation Start Logits Accuracy: 0.6976 - Epoch: 0
0bf9e6e5f7db565497cc2f09527786fd
mit
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5599, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32
3b16bafa8194025db01145ab64229196
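The Keras `PolynomialDecay` config above, with `power: 1.0` and `cycle: False`, reduces to a straight-line interpolation from `initial_learning_rate` down to `end_learning_rate` over `decay_steps`, clamped afterwards. A plain-Python sketch of that formula (an illustration of the schedule, not the TensorFlow implementation itself):

```python
def polynomial_decay(step, initial_lr=2e-05, end_lr=0.0, decay_steps=5599, power=1.0):
    """Keras-style PolynomialDecay with cycle=False: clamp step, then interpolate."""
    step = min(step, decay_steps)  # past decay_steps the rate stays at end_lr
    fraction = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * fraction ** power + end_lr

lr_first = polynomial_decay(0)        # starts at initial_learning_rate
lr_last = polynomial_decay(5599)      # reaches end_learning_rate
lr_past_end = polynomial_decay(10000) # clamped: stays at end_learning_rate
```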
mit
['generated_from_keras_callback']
false
Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 1.1485 | 0.6933 | 0.6537 | 0.9772 | 0.7275 | 0.6976 | 0 |
25f44ff9ec89b50576c0c33abaf5b6d6
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1664
cd73413c65f6fd4bf23e5644c51597ec
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2096 | 1.0 | 5533 | 1.1505 | | 0.952 | 2.0 | 11066 | 1.1238 | | 0.7347 | 3.0 | 16599 | 1.1664 |
6900926e56229b8ed8c970f727b5aca8
apache-2.0
['generated_from_trainer']
false
MIX2_en-ja_helsinki This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-jap](https://huggingface.co/Helsinki-NLP/opus-mt-en-jap) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6703
fccc9326eb4742e5badd94659ffb77cb
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 96 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP
e12faebeea9add9259674b78751defde
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 3.5357 | 0.02 | 4000 | 2.9519 | | 2.8601 | 0.04 | 8000 | 2.6962 | | 2.6183 | 0.06 | 12000 | 2.5156 | | 2.4731 | 0.08 | 16000 | 2.4312 | | 2.3731 | 0.1 | 20000 | 2.3575 | | 2.2964 | 0.11 | 24000 | 2.3319 | | 2.238 | 0.13 | 28000 | 2.2802 | | 2.1919 | 0.15 | 32000 | 2.2552 | | 2.1479 | 0.17 | 36000 | 2.2354 | | 2.1104 | 0.19 | 40000 | 2.2210 | | 2.0788 | 0.21 | 44000 | 2.1835 | | 2.0552 | 0.23 | 48000 | 2.1391 | | 2.0228 | 0.25 | 52000 | 2.1338 | | 2.0062 | 0.27 | 56000 | 2.1115 | | 1.9868 | 0.29 | 60000 | 2.1025 | | 1.9628 | 0.31 | 64000 | 2.1334 | | 1.9474 | 0.32 | 68000 | 2.0935 | | 1.9318 | 0.34 | 72000 | 2.1030 | | 1.9187 | 0.36 | 76000 | 2.0605 | | 1.9019 | 0.38 | 80000 | 2.0388 | | 1.8916 | 0.4 | 84000 | 2.0360 | | 1.8775 | 0.42 | 88000 | 2.0356 | | 1.8689 | 0.44 | 92000 | 2.0315 | | 1.8558 | 0.46 | 96000 | 2.0169 | | 1.8431 | 0.48 | 100000 | 2.0213 | | 1.8373 | 0.5 | 104000 | 2.0071 | | 1.8224 | 0.52 | 108000 | 2.0093 | | 1.8181 | 0.53 | 112000 | 1.9952 | | 1.8087 | 0.55 | 116000 | 1.9927 | | 1.7998 | 0.57 | 120000 | 1.9726 | | 1.7947 | 0.59 | 124000 | 1.9817 | | 1.7874 | 0.61 | 128000 | 1.9650 | | 1.7781 | 0.63 | 132000 | 1.9688 | | 1.7712 | 0.65 | 136000 | 1.9655 | | 1.7631 | 0.67 | 140000 | 1.9561 | | 1.7577 | 0.69 | 144000 | 1.9529 | | 1.7528 | 0.71 | 148000 | 1.9447 | | 1.746 | 0.73 | 152000 | 1.9700 | | 1.7386 | 0.74 | 156000 | 1.9413 | | 1.7329 | 0.76 | 160000 | 1.9329 | | 1.7285 | 0.78 | 164000 | 1.9289 | | 1.7227 | 0.8 | 168000 | 1.9337 | | 1.7186 | 0.82 | 172000 | 1.9263 | | 1.7116 | 0.84 | 176000 | 1.9407 | | 1.7072 | 0.86 | 180000 | 1.9059 | | 1.7032 | 0.88 | 184000 | 1.9380 | | 1.6932 | 0.9 | 188000 | 1.9183 | | 1.6921 | 0.92 | 192000 | 1.9131 | | 1.6875 | 0.94 | 196000 | 1.9180 | | 1.6846 | 0.96 | 200000 | 1.9040 | | 1.6797 | 0.97 | 204000 | 1.9089 | | 1.6725 | 0.99 | 208000 | 1.9024 | | 1.6589 | 1.01 | 212000 | 1.8909 | | 
1.6507 | 1.03 | 216000 | 1.8837 | | 1.6441 | 1.05 | 220000 | 1.8906 | | 1.6445 | 1.07 | 224000 | 1.8914 | | 1.6394 | 1.09 | 228000 | 1.8833 | | 1.6382 | 1.11 | 232000 | 1.8837 | | 1.6376 | 1.13 | 236000 | 1.8869 | | 1.6329 | 1.15 | 240000 | 1.8829 | | 1.6294 | 1.17 | 244000 | 1.8845 | | 1.6273 | 1.18 | 248000 | 1.8888 | | 1.6243 | 1.2 | 252000 | 1.8709 | | 1.6226 | 1.22 | 256000 | 1.8418 | | 1.6177 | 1.24 | 260000 | 1.8587 | | 1.6151 | 1.26 | 264000 | 1.8526 | | 1.6111 | 1.28 | 268000 | 1.8494 | | 1.6084 | 1.3 | 272000 | 1.8781 | | 1.6043 | 1.32 | 276000 | 1.8390 | | 1.6011 | 1.34 | 280000 | 1.8603 | | 1.5999 | 1.36 | 284000 | 1.8515 | | 1.5954 | 1.38 | 288000 | 1.8356 | | 1.5936 | 1.39 | 292000 | 1.8530 | | 1.5916 | 1.41 | 296000 | 1.8475 | | 1.5886 | 1.43 | 300000 | 1.8410 | | 1.5883 | 1.45 | 304000 | 1.8153 | | 1.5828 | 1.47 | 308000 | 1.8254 | | 1.582 | 1.49 | 312000 | 1.8139 | | 1.578 | 1.51 | 316000 | 1.8366 | | 1.5723 | 1.53 | 320000 | 1.8353 | | 1.5705 | 1.55 | 324000 | 1.8230 | | 1.5691 | 1.57 | 328000 | 1.8194 | | 1.5656 | 1.59 | 332000 | 1.8069 | | 1.566 | 1.6 | 336000 | 1.8204 | | 1.5604 | 1.62 | 340000 | 1.8307 | | 1.5573 | 1.64 | 344000 | 1.8209 | | 1.5547 | 1.66 | 348000 | 1.8320 | | 1.5545 | 1.68 | 352000 | 1.8179 | | 1.5519 | 1.7 | 356000 | 1.8323 | | 1.545 | 1.72 | 360000 | 1.8005 | | 1.5483 | 1.74 | 364000 | 1.8034 | | 1.5454 | 1.76 | 368000 | 1.7997 | | 1.5393 | 1.78 | 372000 | 1.8078 | | 1.5381 | 1.8 | 376000 | 1.8204 | | 1.5347 | 1.81 | 380000 | 1.8071 | | 1.5327 | 1.83 | 384000 | 1.7997 | | 1.529 | 1.85 | 388000 | 1.8012 | | 1.5287 | 1.87 | 392000 | 1.8028 | | 1.5273 | 1.89 | 396000 | 1.8103 | | 1.5194 | 1.91 | 400000 | 1.8008 | | 1.5197 | 1.93 | 404000 | 1.8004 | | 1.5218 | 1.95 | 408000 | 1.8024 | | 1.514 | 1.97 | 412000 | 1.7852 | | 1.5146 | 1.99 | 416000 | 1.7908 | | 1.5045 | 2.01 | 420000 | 1.7864 | | 1.4876 | 2.02 | 424000 | 1.7813 | | 1.4846 | 2.04 | 428000 | 1.7822 | | 1.4865 | 2.06 | 432000 | 1.7737 | | 1.4857 | 2.08 | 436000 | 
1.7668 | | 1.4825 | 2.1 | 440000 | 1.7681 | | 1.4828 | 2.12 | 444000 | 1.7685 | | 1.4821 | 2.14 | 448000 | 1.7636 | | 1.4778 | 2.16 | 452000 | 1.7778 | | 1.4803 | 2.18 | 456000 | 1.7834 | | 1.4766 | 2.2 | 460000 | 1.7801 | | 1.4741 | 2.22 | 464000 | 1.7601 | | 1.4705 | 2.23 | 468000 | 1.7665 | | 1.4739 | 2.25 | 472000 | 1.7604 | | 1.4694 | 2.27 | 476000 | 1.7803 | | 1.4665 | 2.29 | 480000 | 1.7835 | | 1.4668 | 2.31 | 484000 | 1.7670 | | 1.4605 | 2.33 | 488000 | 1.7629 | | 1.4626 | 2.35 | 492000 | 1.7612 | | 1.4627 | 2.37 | 496000 | 1.7612 | | 1.4569 | 2.39 | 500000 | 1.7557 | | 1.455 | 2.41 | 504000 | 1.7599 | | 1.4547 | 2.43 | 508000 | 1.7569 | | 1.453 | 2.44 | 512000 | 1.7589 | | 1.4515 | 2.46 | 516000 | 1.7679 | | 1.4501 | 2.48 | 520000 | 1.7574 | | 1.4446 | 2.5 | 524000 | 1.7526 | | 1.4456 | 2.52 | 528000 | 1.7506 | | 1.4445 | 2.54 | 532000 | 1.7484 | | 1.4428 | 2.56 | 536000 | 1.7447 | | 1.439 | 2.58 | 540000 | 1.7468 | | 1.441 | 2.6 | 544000 | 1.7609 | | 1.4358 | 2.62 | 548000 | 1.7498 | | 1.4318 | 2.64 | 552000 | 1.7592 | | 1.4276 | 2.65 | 556000 | 1.7452 | | 1.4317 | 2.67 | 560000 | 1.7500 | | 1.4277 | 2.69 | 564000 | 1.7392 | | 1.4259 | 2.71 | 568000 | 1.7351 | | 1.4239 | 2.73 | 572000 | 1.7385 | | 1.4191 | 2.75 | 576000 | 1.7487 | | 1.4204 | 2.77 | 580000 | 1.7392 | | 1.4176 | 2.79 | 584000 | 1.7372 | | 1.4147 | 2.81 | 588000 | 1.7347 | | 1.4154 | 2.83 | 592000 | 1.7085 | | 1.4134 | 2.85 | 596000 | 1.7103 | | 1.4091 | 2.87 | 600000 | 1.7124 | | 1.4091 | 2.88 | 604000 | 1.7369 | | 1.406 | 2.9 | 608000 | 1.7142 | | 1.4028 | 2.92 | 612000 | 1.7376 | | 1.4019 | 2.94 | 616000 | 1.7201 | | 1.4018 | 2.96 | 620000 | 1.7230 | | 1.3959 | 2.98 | 624000 | 1.7206 | | 1.3985 | 3.0 | 628000 | 1.7183 | | 1.3681 | 3.02 | 632000 | 1.7283 | | 1.3668 | 3.04 | 636000 | 1.7330 | | 1.3687 | 3.06 | 640000 | 1.7187 | | 1.3681 | 3.08 | 644000 | 1.7163 | | 1.3687 | 3.09 | 648000 | 1.7249 | | 1.364 | 3.11 | 652000 | 1.7283 | | 1.364 | 3.13 | 656000 | 1.7091 | | 1.3652 | 3.15 | 
660000 | 1.7030 | | 1.3623 | 3.17 | 664000 | 1.7058 | | 1.3604 | 3.19 | 668000 | 1.7101 | | 1.3598 | 3.21 | 672000 | 1.7104 | | 1.3577 | 3.23 | 676000 | 1.7028 | | 1.3574 | 3.25 | 680000 | 1.7023 | | 1.3546 | 3.27 | 684000 | 1.7197 | | 1.3549 | 3.29 | 688000 | 1.7045 | | 1.3534 | 3.3 | 692000 | 1.6990 | | 1.3511 | 3.32 | 696000 | 1.6971 | | 1.3504 | 3.34 | 700000 | 1.6894 | | 1.346 | 3.36 | 704000 | 1.6820 | | 1.3467 | 3.38 | 708000 | 1.6920 | | 1.3461 | 3.4 | 712000 | 1.6897 | | 1.3425 | 3.42 | 716000 | 1.6962 | | 1.34 | 3.44 | 720000 | 1.6864 | | 1.3408 | 3.46 | 724000 | 1.6860 | | 1.3387 | 3.48 | 728000 | 1.6924 | | 1.3377 | 3.5 | 732000 | 1.6919 | | 1.3378 | 3.51 | 736000 | 1.6858 | | 1.334 | 3.53 | 740000 | 1.6816 | | 1.3347 | 3.55 | 744000 | 1.6867 | | 1.3307 | 3.57 | 748000 | 1.6859 | | 1.3316 | 3.59 | 752000 | 1.6896 | | 1.3257 | 3.61 | 756000 | 1.6824 | | 1.3222 | 3.63 | 760000 | 1.6819 | | 1.3247 | 3.65 | 764000 | 1.6809 | | 1.3207 | 3.67 | 768000 | 1.6775 | | 1.3227 | 3.69 | 772000 | 1.6807 | | 1.3203 | 3.71 | 776000 | 1.6750 | | 1.3203 | 3.72 | 780000 | 1.6758 | | 1.316 | 3.74 | 784000 | 1.6787 | | 1.3147 | 3.76 | 788000 | 1.6747 | | 1.3146 | 3.78 | 792000 | 1.6718 | | 1.3137 | 3.8 | 796000 | 1.6744 | | 1.3143 | 3.82 | 800000 | 1.6733 | | 1.3123 | 3.84 | 804000 | 1.6754 | | 1.3069 | 3.86 | 808000 | 1.6734 | | 1.3122 | 3.88 | 812000 | 1.6742 | | 1.3074 | 3.9 | 816000 | 1.6742 | | 1.3006 | 3.92 | 820000 | 1.6709 | | 1.308 | 3.93 | 824000 | 1.6714 | | 1.3063 | 3.95 | 828000 | 1.6727 | | 1.3036 | 3.97 | 832000 | 1.6711 | | 1.3048 | 3.99 | 836000 | 1.6703 |
4cb68df889d2f01c7b634edfea2753ad
mit
[]
false
GPT-2 Tokenizer with unmerged digits A fork of the GPT-2 tokenizer, which **removes multi-digit tokens**: ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('cyrilzhang/gpt2-numfix') tokenizer('123.45')
4700840e3f7e7f6bcf65e6788686d045
mit
[]
false
'123 pigeon' ``` - This is for my investigations into the arithmetic capabilities of large language models. There is no model here, only a tokenizer. - [PaLM](https://arxiv.org/abs/2204.02311) does this. I think it's very reasonable. - Many models (most notably, [GPT-3](https://arxiv.org/abs/2005.14165)) don't do this, because they use the GPT-2 tokenizer.
1378133885e19e3a4657d278913230ec
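The behaviour described above — multi-digit tokens removed, so every digit is tokenized on its own — can be mimicked with a toy regex pre-tokenizer in plain Python (an illustration of the idea only, not the fork's actual vocabulary surgery):

```python
import re

def split_digits(text):
    """Split runs of digits into single-digit pieces; leave other runs intact."""
    # Each digit matches alone; any run of non-digits stays one piece.
    return re.findall(r"\d|\D+", text)

tokens = split_digits("123.45")      # every digit becomes its own piece
tokens2 = split_digits("123 pigeon") # non-digit run kept together
```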
mit
['generated_from_trainer']
false
xlnet-base-cased_fold_3_binary_v1 This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8649 - F1: 0.8044
9287d8db261afc18bb387cc457e1b62a
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 289 | 0.4483 | 0.8000 | | 0.4228 | 2.0 | 578 | 0.4264 | 0.8040 | | 0.4228 | 3.0 | 867 | 0.5341 | 0.8056 | | 0.2409 | 4.0 | 1156 | 0.9077 | 0.8103 | | 0.2409 | 5.0 | 1445 | 1.1069 | 0.7889 | | 0.1386 | 6.0 | 1734 | 1.0288 | 0.8093 | | 0.0817 | 7.0 | 2023 | 1.2477 | 0.8049 | | 0.0817 | 8.0 | 2312 | 1.5915 | 0.7872 | | 0.0465 | 9.0 | 2601 | 1.5323 | 0.8035 | | 0.0465 | 10.0 | 2890 | 1.4351 | 0.7989 | | 0.0376 | 11.0 | 3179 | 1.4639 | 0.7916 | | 0.0376 | 12.0 | 3468 | 1.6027 | 0.7956 | | 0.0234 | 13.0 | 3757 | 1.7860 | 0.7931 | | 0.0109 | 14.0 | 4046 | 1.8567 | 0.7934 | | 0.0109 | 15.0 | 4335 | 1.8294 | 0.8053 | | 0.0115 | 16.0 | 4624 | 1.7799 | 0.7971 | | 0.0115 | 17.0 | 4913 | 1.5935 | 0.8000 | | 0.0142 | 18.0 | 5202 | 1.8136 | 0.8066 | | 0.0142 | 19.0 | 5491 | 1.7718 | 0.8063 | | 0.0124 | 20.0 | 5780 | 1.8581 | 0.8053 | | 0.0083 | 21.0 | 6069 | 1.8523 | 0.8056 | | 0.0083 | 22.0 | 6358 | 1.8408 | 0.8035 | | 0.0045 | 23.0 | 6647 | 1.8347 | 0.8040 | | 0.0045 | 24.0 | 6936 | 1.8683 | 0.8067 | | 0.0005 | 25.0 | 7225 | 1.8649 | 0.8044 |
5653c73a95561d38d3a6212c4d4251e8
cc-by-4.0
['espnet', 'audio', 'audio-to-audio', 'vocoder']
false
Details ``` batch_size: 64 discriminator_params: follow_official_norm: true period_discriminator_params: bias: true channels: 32 downsample_scales: - 3 - 3 - 3 - 3 - 1 in_channels: 1 kernel_sizes: - 5 - 3 max_downsample_channels: 1024 nonlinear_activation: LeakyReLU nonlinear_activation_params: negative_slope: 0.1 out_channels: 1 use_spectral_norm: false use_weight_norm: true periods: - 2 - 3 - 5 - 7 - 11 ```
a4eb986c57022e6274af8c54cac659e5
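A side note on the config above: if the `downsample_scales` are applied as successive convolution strides (an assumption; check the ESPnet source for the exact architecture), they multiply out to the discriminator's overall temporal downsampling factor:

```python
from math import prod

downsample_scales = [3, 3, 3, 3, 1]       # from the config above
overall_factor = prod(downsample_scales)  # combined stride of the stacked conv layers
```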
apache-2.0
['translation']
false
opus-mt-nso-es * source languages: nso * target languages: es * OPUS readme: [nso-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-es/opus-2020-01-16.eval.txt)
5ea646ad667df0b2294590ef0bc2d161
apache-2.0
['generated_from_trainer']
false
binary-classification This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.3009 - Accuracy: 0.8968
c96d9fe01cf434078413070be83fae76
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.175 | 1.0 | 4210 | 0.3009 | 0.8968 |
7e4359e3791dae2d066b975125bc943d
mit
['Dutch', 'Flemish', 'RoBERTa', 'RobBERT']
false
<p align="center"> <img src="https://github.com/iPieter/RobBERT/raw/master/res/robbert_2022_logo_with_name.png" alt="RobBERT-2022: Updating a Dutch Language Model to Account for Evolving Language Use" width="75%"> </p>
d4a8e7694f8cffb1f0570d033846269d
mit
['Dutch', 'Flemish', 'RoBERTa', 'RobBERT']
false
RobBERT-2022: Updating a Dutch Language Model to Account for Evolving Language Use. RobBERT-2022 is the latest release of the [Dutch RobBERT model](https://pieter.ai/robbert/). It further pretrained the original [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base) model on the 2022 version of the OSCAR corpus. Thanks to this more recent dataset, this [DTAI-KULeuven/robbert-2022-dutch-base](https://huggingface.co/DTAI-KULeuven/robbert-2022-dutch-base) model shows increased performance on several tasks related to recent events, e.g. COVID-19-related tasks. We also found that for some tasks that do not contain more recent information than 2019, the original [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base) RobBERT model can still outperform this newer one. The original RobBERT model was released in January 2020. Dutch has evolved a lot since then; for example, the COVID-19 pandemic introduced a wide range of new words that were suddenly used daily, and many other world facts that the original model considered true have since changed. To account for this and other changes in usage, we release a new Dutch BERT model trained on data from 2022: RobBERT 2022. More in-depth information about RobBERT-2022 can be found in our [blog post](https://pieter.ai/robbert-2022/), [our paper](http://arxiv.org/abs/2211.08192), [the original RobBERT paper](https://arxiv.org/abs/2001.06286) and [the RobBERT Github repository](https://github.com/iPieter/RobBERT).
b6aaea0555665b85f21756ce0c8c5ceb
mit
['Dutch', 'Flemish', 'RoBERTa', 'RobBERT']
false
How to use RobBERT-2022 and RobBERT both use the [RoBERTa](https://arxiv.org/abs/1907.11692) architecture and pre-training, but with a Dutch tokenizer and training data. RoBERTa is the robustly optimized English BERT model, making it even more powerful than the original BERT model. Given this same architecture, RobBERT can easily be fine-tuned and used for inference with [code to finetune RoBERTa](https://huggingface.co/transformers/model_doc/roberta.html) models and most code written for BERT models, e.g. as provided by the [HuggingFace Transformers](https://huggingface.co/transformers/) library. By default, RobBERT-2022 has the masked language model head used in training. This can be used as a zero-shot way to fill masks in sentences. It can be tested out for free on [RobBERT's hosted inference API of Huggingface](https://huggingface.co/pdelobelle/robbert-v2-dutch-base?text=De+hoofdstad+van+Belgi%C3%AB+is+%3Cmask%3E.). You can also create a new prediction head for your own task by using any of HuggingFace's [RoBERTa-runners](https://huggingface.co/transformers/v2.7.0/examples.html
ec2cc161c759897e95fb1f0c34439141
mit
['Dutch', 'Flemish', 'RoBERTa', 'RobBERT']
false
language-model-training), [their fine-tuning notebooks](https://huggingface.co/transformers/v4.1.1/notebooks.html) by changing the model name to `DTAI-KULeuven/robbert-2022-dutch-base`. ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("DTAI-KULeuven/robbert-2022-dutch-base") model = AutoModelForSequenceClassification.from_pretrained("DTAI-KULeuven/robbert-2022-dutch-base") ``` You can then use most of [HuggingFace's BERT-based notebooks](https://huggingface.co/transformers/v4.1.1/notebooks.html) for finetuning RobBERT-2022 on your type of Dutch language dataset.
462ca645e65fad856d2f8387e132dbd8
mit
['Dutch', 'Flemish', 'RoBERTa', 'RobBERT']
false
Comparison of Available Dutch BERT models There is a wide variety of Dutch BERT-based models available for fine-tuning on your tasks. Here's a quick summary to find the one that suits your need: - [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base): The RobBERT model has for years been the best performing BERT-like model for most language tasks. It is trained on a large Dutch webcrawled dataset (OSCAR) and uses the superior [RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta) architecture, which robustly optimized the original [BERT model](https://huggingface.co/docs/transformers/model_doc/bert). - [DTAI-KULeuven/robbertje-1-gb-merged](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-merged): The RobBERTje model is a distilled version of RobBERT, about half the size and four times faster at inference. This can help deploy more scalable language models for your language task. - [DTAI-KULeuven/robbert-2022-dutch-base](https://huggingface.co/DTAI-KULeuven/robbert-2022-dutch-base): RobBERT-2022 is a RobBERT model further pre-trained on the OSCAR 2022 dataset. It is helpful for tasks that rely on words and/or information about more recent events. There's also the [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) "BERTje" model. This model uses the outdated basic BERT architecture and is trained on a smaller corpus of clean Dutch texts. Thanks to RobBERT's more recent architecture as well as its larger and more real-world-like training corpus, most researchers and practitioners seem to achieve higher performance on their language tasks with the RobBERT model.
0f1fb186faee58c8ef83bc23b4749117
mit
['Dutch', 'Flemish', 'RoBERTa', 'RobBERT']
false
Our Performance Evaluation Results All experiments are described in more detail in our [paper](https://arxiv.org/abs/2001.06286), with the code in [our GitHub repository](https://github.com/iPieter/RobBERT).
f2b2940a77ec745446925857ca7757ba
mit
['Dutch', 'Flemish', 'RoBERTa', 'RobBERT']
false
Sentiment analysis Predicting whether a review is positive or negative using the [Dutch Book Reviews Dataset](https://github.com/benjaminvdb/110kDBRD). | Model | Accuracy [%] | |-------------------|--------------------------| | ULMFiT | 93.8 | | BERTje | 93.0 | | RobBERT v2 | 94.4 | | RobBERT 2022 | **95.1** |
91a5e78b072f0aa86489d538811eb0a1
mit
['Dutch', 'Flemish', 'RoBERTa', 'RobBERT']
false
Die/Dat (coreference resolution) We measured how well the models are able to do coreference resolution by predicting whether "die" or "dat" should be filled into a sentence. For this, we used the [EuroParl corpus](https://www.statmt.org/europarl/).
2a5ec4031fe4ff235ca9e162fed2b82d
mit
['Dutch', 'Flemish', 'RoBERTa', 'RobBERT']
false
Finetuning on whole dataset | Model | Accuracy [%] | F1 [%] | |-------------------|--------------------------|--------------| | [Baseline](https://arxiv.org/abs/2001.02943) (LSTM) | | 75.03 | | mBERT | 98.285 | 98.033 | | BERTje | 98.268 | 98.014 | | RobBERT v2 | **99.232** | **99.121** | | RobBERT 2022 | 97.8 | |
d33c908e72a0c9a54347600267d49638
mit
['Dutch', 'Flemish', 'RoBERTa', 'RobBERT']
false
Finetuning on 10K examples We also measured the performance using only 10K training examples. This experiment clearly illustrates that RobBERT outperforms other models when there is little data available. | Model | Accuracy [%] | F1 [%] | |-------------------|--------------------------|--------------| | mBERT | 92.157 | 90.898 | | BERTje | 93.096 | 91.279 | | RobBERT v2 | **97.816** | **97.514** |
0bfe4ef2b446e78fe3c569ec91c3ae00
mit
['Dutch', 'Flemish', 'RoBERTa', 'RobBERT']
false
Using zero-shot word masking task Since BERT models are pre-trained using the word masking task, we can use this to predict whether "die" or "dat" is more likely. This experiment shows that RobBERT has internalised more information about Dutch than other models. | Model | Accuracy [%] | |-------------------|--------------------------| | ZeroR | 66.70 | | mBERT | 90.21 | | BERTje | 94.94 | | RobBERT v2 | **98.75** |
540f054ed67c017bf37230612f730b52
mit
['Dutch', 'Flemish', 'RoBERTa', 'RobBERT']
false
Part-of-Speech Tagging Using the [Lassy UD dataset](https://universaldependencies.org/treebanks/nl_lassysmall/index.html). | Model | Accuracy [%] | |-------------------|--------------------------| | Frog | 91.7 | | mBERT | **96.5** | | BERTje | 96.3 | | RobBERT v2 | 96.4 | | RobBERT 2022 | 96.1 |
b3faa7eaead3946850f69bb4a9bb471f
mit
['Dutch', 'Flemish', 'RoBERTa', 'RobBERT']
false
Credits and citation This project is created by [Pieter Delobelle](https://people.cs.kuleuven.be/~pieter.delobelle), [Thomas Winters](https://thomaswinters.be) and [Bettina Berendt](https://people.cs.kuleuven.be/~bettina.berendt/). If you would like to cite our paper or model, you can use the following BibTeX: ``` @inproceedings{delobelle2022robbert2022, doi = {10.48550/ARXIV.2211.08192}, url = {https://arxiv.org/abs/2211.08192}, author = {Delobelle, Pieter and Winters, Thomas and Berendt, Bettina}, keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {RobBERT-2022: Updating a Dutch Language Model to Account for Evolving Language Use}, venue = {arXiv}, year = {2022}, } @inproceedings{delobelle2020robbert, title = "{R}ob{BERT}: a {D}utch {R}o{BERT}a-based {L}anguage {M}odel", author = "Delobelle, Pieter and Winters, Thomas and Berendt, Bettina", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.292", doi = "10.18653/v1/2020.findings-emnlp.292", pages = "3255--3265" } ```
45d69f53c56687e5aa121af53acf8410
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
nagisa Dreambooth model trained by birdaz with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook

Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)

Sample pictures of this concept:
15988e9c6bd251d373da52e69d528ac1
cc-by-sa-4.0
['japanese', 'question-answering', 'dependency-parsing']
false
Model Description

This is a DeBERTa(V2) model pretrained on 青空文庫 for dependency-parsing (head-detection on long-unit-words) as question-answering, derived from [deberta-large-japanese-aozora](https://huggingface.co/KoichiYasuoka/deberta-large-japanese-aozora) and [UD_Japanese-GSDLUW](https://github.com/UniversalDependencies/UD_Japanese-GSDLUW). Use [MASK] inside `context` to avoid ambiguity when specifying a multiple-used word as `question`.
f922f21ad8d0fd41d07fe105666bba29
cc-by-sa-4.0
['japanese', 'question-answering', 'dependency-parsing']
false
How to Use

```py
from transformers import AutoTokenizer,AutoModelForQuestionAnswering,QuestionAnsweringPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-large-japanese-aozora-ud-head")
model=AutoModelForQuestionAnswering.from_pretrained("KoichiYasuoka/deberta-large-japanese-aozora-ud-head")
qap=QuestionAnsweringPipeline(tokenizer=tokenizer,model=model,align_to_words=False)
print(qap(question="国語",context="全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```

or (with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/))

```py
class TransformersUD(object):
  def __init__(self,bert):
    import os
    from transformers import (AutoTokenizer,AutoModelForQuestionAnswering,
      AutoModelForTokenClassification,AutoConfig,TokenClassificationPipeline)
    self.tokenizer=AutoTokenizer.from_pretrained(bert)
    self.model=AutoModelForQuestionAnswering.from_pretrained(bert)
    x=AutoModelForTokenClassification.from_pretrained
    if os.path.isdir(bert):
      d,t=x(os.path.join(bert,"deprel")),x(os.path.join(bert,"tagger"))
    else:
      from transformers.utils import cached_file
      c=AutoConfig.from_pretrained(cached_file(bert,"deprel/config.json"))
      d=x(cached_file(bert,"deprel/pytorch_model.bin"),config=c)
      s=AutoConfig.from_pretrained(cached_file(bert,"tagger/config.json"))
      t=x(cached_file(bert,"tagger/pytorch_model.bin"),config=s)
    self.deprel=TokenClassificationPipeline(model=d,tokenizer=self.tokenizer,
      aggregation_strategy="simple")
    self.tagger=TokenClassificationPipeline(model=t,tokenizer=self.tokenizer)
  def __call__(self,text):
    import numpy,torch,ufal.chu_liu_edmonds
    w=[(t["start"],t["end"],t["entity_group"]) for t in self.deprel(text)]
    z,n={t["start"]:t["entity"].split("|") for t in self.tagger(text)},len(w)
    r,m=[text[s:e] for s,e,p in w],numpy.full((n+1,n+1),numpy.nan)
    v,c=self.tokenizer(r,add_special_tokens=False)["input_ids"],[]
    for i,t in enumerate(v):
      q=[self.tokenizer.cls_token_id]+t+[self.tokenizer.sep_token_id]
      c.append([q]+v[0:i]+[[self.tokenizer.mask_token_id]]+v[i+1:]+[[q[-1]]])
    b=[[len(sum(x[0:j+1],[])) for j in range(len(x))] for x in c]
    with torch.no_grad():
      d=self.model(input_ids=torch.tensor([sum(x,[]) for x in c]),
        token_type_ids=torch.tensor([[0]*x[0]+[1]*(x[-1]-x[0]) for x in b]))
    s,e=d.start_logits.tolist(),d.end_logits.tolist()
    for i in range(n):
      for j in range(n):
        m[i+1,0 if i==j else j+1]=s[i][b[i][j]]+e[i][b[i][j+1]-1]
    h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
    if [0 for i in h if i==0]!=[0]:
      i=([p for s,e,p in w]+["root"]).index("root")
      j=i+1 if i<n else numpy.nanargmax(m[:,0])
      m[0:j,0]=m[j+1:,0]=numpy.nan
      h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
ffc100d564434a585cce6fec07141b38
cc-by-sa-4.0
['japanese', 'question-answering', 'dependency-parsing']
false
    u="# text = "+text.replace("\n"," ")+"\n"
    for i,(s,e,p) in enumerate(w,1):
      p="root" if h[i]==0 else "dep" if p=="root" else p
      u+="\t".join([str(i),r[i-1],"_",z[s][0][2:],"_","|".join(z[s][1:]),
        str(h[i]),p,"_","_" if i<n and e<w[i][0] else "SpaceAfter=No"])+"\n"
    return u+"\n"

nlp=TransformersUD("KoichiYasuoka/deberta-large-japanese-aozora-ud-head")
print(nlp("全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```
c5d28f2f8586e7d3c79f5c6050bcba4a
apache-2.0
['translation']
false
opus-mt-fi-yap

* source languages: fi
* target languages: yap
* OPUS readme: [fi-yap](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-yap/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-yap/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-yap/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-yap/opus-2020-01-08.eval.txt)
75e5bb2454046d67221c2439e35f2cf0
apache-2.0
['generated_from_trainer']
false
Graphcore/roberta-base-squad2

Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools that enable maximum efficiency to train and run models on Graphcore's IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).

Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset, and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
c8f8630c76c51f3be3b083b696a2c56d
apache-2.0
['generated_from_trainer']
false
Model description

RoBERTa builds on the BERT pretraining approach and improves on it by carefully evaluating a number of design decisions of BERT pretraining, finding that BERT was significantly undertrained. It improves performance by training the model longer, with bigger batches over more data, removing the next-sentence prediction objective, training on longer sequences, and dynamically changing the mask pattern applied to the training data. As a result, it achieved state-of-the-art results on GLUE, RACE and SQuAD.

Paper link: [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/pdf/1907.11692.pdf)
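"Dynamically changing the mask pattern" means a fresh random mask is drawn every time a sequence is seen, instead of fixing one mask during preprocessing as original BERT did. A rough sketch of the idea only, not RoBERTa's actual implementation:

```python
import random

MASK = "<mask>"

def dynamic_mask(tokens, prob=0.15, rng=random):
    """Draw a fresh random mask over ~15% of tokens on every call."""
    return [MASK if rng.random() < prob else t for t in tokens]

tokens = ["the", "cat", "sat", "on", "the", "mat"] * 100
rng = random.Random(0)
masked_a = dynamic_mask(tokens, rng=rng)
masked_b = dynamic_mask(tokens, rng=rng)
# Two passes over the same sequence see different mask patterns.
print(masked_a != masked_b)  # → True
```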
6a2b5e322bb61f5a6e81a147d19ab08d
apache-2.0
['generated_from_trainer']
false
Training procedure

Trained on 16 Graphcore Mk2 IPUs using [optimum-graphcore](https://github.com/huggingface/optimum-graphcore).

Command line:

```
python examples/question-answering/run_qa.py \
  --ipu_config_name Graphcore/roberta-base-ipu \
  --model_name_or_path roberta-base \
  --dataset_name squad_v2 \
  --version_2_with_negative \
  --do_train \
  --do_eval \
  --num_train_epochs 3 \
  --per_device_train_batch_size 4 \
  --per_device_eval_batch_size 2 \
  --pod_type pod16 \
  --learning_rate 7e-5 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --seed 1984 \
  --lr_scheduler_type linear \
  --loss_scaling 64 \
  --weight_decay 0.01 \
  --warmup_ratio 0.2 \
  --logging_steps 1 \
  --save_steps -1 \
  --dataloader_num_workers 64 \
  --output_dir roberta-base-squad2 \
  --overwrite_output_dir \
  --push_to_hub
```
b67e0e0c82a7a8e9b1c1d18c98d8242e
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 1984
- distributed_type: IPU
- total_train_batch_size: 256
- total_eval_batch_size: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 3.0
- training precision: Mixed Precision
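With `lr_scheduler_warmup_ratio: 0.2`, the learning rate climbs linearly to the peak over the first 20% of training steps and then decays linearly to zero. A small sketch of that schedule, using a made-up total step count and the generic linear-with-warmup rule rather than the exact trainer internals:

```python
def linear_warmup_lr(step, total_steps, peak_lr, warmup_ratio=0.2):
    """Linear warmup to peak_lr, then linear decay to zero."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * (total_steps - step) / (total_steps - warmup_steps)

total, peak = 1000, 7e-05  # hypothetical step count, peak LR from above
print(linear_warmup_lr(100, total, peak))   # halfway through warmup: half the peak LR
print(linear_warmup_lr(1000, total, peak))  # → 0.0 at the end of training
```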
890ea70d814b6b7658f8568502ba7a7b
apache-2.0
['generated_from_trainer']
false
Training results

```
***** train metrics *****
  epoch                    = 3.0
  train_loss               = 0.9982
  train_runtime            = 0:04:44.21
  train_samples            = 131823
  train_samples_per_second = 1391.43
  train_steps_per_second   = 5.425

***** eval metrics *****
  epoch                  = 3.0
  eval_HasAns_exact      = 78.1208
  eval_HasAns_f1         = 84.6569
  eval_HasAns_total      = 5928
  eval_NoAns_exact       = 82.0353
  eval_NoAns_f1          = 82.0353
  eval_NoAns_total       = 5945
  eval_best_exact        = 80.0809
  eval_best_exact_thresh = 0.0
  eval_best_f1           = 83.3442
  eval_best_f1_thresh    = 0.0
  eval_exact             = 80.0809
  eval_f1                = 83.3442
  eval_samples           = 12165
  eval_total             = 11873
```
9b2e6a4b0aa02afc0d193e192c96128f
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Wav2Vec2-Large-XLSR-53-Punjabi

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Punjabi using the [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz.
52c1f1cb57130eb00b4d2f583c09ccbf
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "pa-IN", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("danurahul/wav2vec2-large-xlsr-pa-IN")
model = Wav2Vec2ForCTC.from_pretrained("danurahul/wav2vec2-large-xlsr-pa-IN")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
```
d3346c0e56af61f15747f00451e569ad
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Evaluation

The model can be evaluated as follows on the Punjabi test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "pa-IN", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("danurahul/wav2vec2-large-xlsr-pa-IN")
model = Wav2Vec2ForCTC.from_pretrained("danurahul/wav2vec2-large-xlsr-pa-IN")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'

resampler = torchaudio.transforms.Resample(48_000, 16_000)
7146fdb2915afd8a80eb12ac669e8f12
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, "", batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 100 %
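The WER reported above is the word-level edit distance (substitutions + insertions + deletions) divided by the number of reference words. A compact sketch of the metric itself, independent of the `load_metric("wer")` implementation:

```python
def wer(reference, hypothesis):
    r, h = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(r)][len(h)] / len(r)

print(wer("the cat sat", "the cat sat"))  # → 0.0
print(wer("the cat sat", "a dog ran"))   # every word wrong → 1.0
```

A WER of 100% (as in the test result above) means the number of word errors equals the number of reference words.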
26aaec9b971fb0e81912406142380404
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'wildcard']
false
DreamBooth model for the bird concept trained by Someman on the Someman/danphe dataset.

This is a Stable Diffusion model fine-tuned on the bird concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of bird danphe**

This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
30a3311d3cdea8e8d117f4cf99a33cbf
apache-2.0
['tapas', 'TapasModel']
false
TAPAS small model

This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_inter_masklm_small_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas). This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training. It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table).

The other (non-default) version which can be used is the one with absolute position embeddings:

- `revision="no_reset"`, which corresponds to `tapas_inter_masklm_small`

Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by the Hugging Face team and contributors.
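"Resetting the position index at every cell" can be illustrated with a toy sketch: given each token's cell assignment, the position id restarts from 0 whenever the cell changes. This is an illustration of the idea only, not TAPAS's actual embedding code:

```python
def reset_positions(cell_ids):
    """Position ids that restart at 0 whenever the cell changes."""
    positions, prev, pos = [], None, 0
    for cid in cell_ids:
        pos = pos + 1 if cid == prev else 0
        positions.append(pos)
        prev = cid
    return positions

# Three cells with 2, 3 and 1 tokens respectively.
print(reset_positions([0, 0, 1, 1, 1, 2]))  # → [0, 1, 0, 1, 2, 0]
```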
199dcb498f0991d8cf382cfb501fbed9
openrail
[]
false
This model is the outcome of an experiment: training https://huggingface.co/facebook/opt-1.3b from scratch for just 8B tokens in fp16, fp32 and bf16, which allows comparing the resulting models when they are used to train a multimodal model. Of course, it can be used for any other purpose; just be aware that these models are very undertrained. Most language models are trained for about 300B tokens, while this one saw just 8B.

The 3 repositories are:

- https://huggingface.co/HuggingFaceM4/opt-1.3b-fp16-8b-samples
- https://huggingface.co/HuggingFaceM4/opt-1.3b-fp32-8b-samples
- https://huggingface.co/HuggingFaceM4/opt-1.3b-bf16-8b-samples
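Why compare fp16, fp32 and bf16 at all? bf16 keeps fp32's exponent range but only ~8 bits of mantissa, while fp16 has more mantissa bits but a far smaller range (max finite value ≈ 65504), which matters for training stability. A stdlib-only sketch, with bf16 simulated by truncating an fp32 to its top 16 bits (a simplification of real round-to-nearest hardware):

```python
import struct

def to_bf16(x):
    """Truncate an fp32 to bfloat16 precision (keep the top 16 bits)."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

def fits_fp16(x):
    try:
        struct.pack("<e", x)  # 'e' is IEEE 754 half precision
        return True
    except OverflowError:
        return False

print(fits_fp16(70000.0))  # → False (overflows fp16's range)
print(to_bf16(70000.0))    # → 69632.0 (in range, but coarse precision)
```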
1831873fc57d8208a59f06a75e2ce3aa
openrail
[]
false
The training

Get transformers:

```
git clone https://github.com/huggingface/transformers
cd transformers
```

Prepare an initialized opt-1.3b model:

```
cat << EOT > prep-bf16.py
from transformers import AutoConfig, AutoModel, AutoTokenizer
import torch

mname = "facebook/opt-1.3b"

config = AutoConfig.from_pretrained(mname)
model = AutoModel.from_config(config, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(mname)

path = "opt-1.3b-bf16"

model.save_pretrained(path)
tokenizer.save_pretrained(path)
EOT
```

Run:

```
python prep-bf16.py
```

Train from scratch on a single 8x 80GB A100 node on the `realnewslike` subset of https://huggingface.co/datasets/c4:

```
git clone https://github.com/huggingface/transformers
cd transformers

PYTHONPATH="src" python -m torch.distributed.run \
    --nproc_per_node=8 \
    --nnode=1 \
    --node_rank=0 \
    --master_addr=127.0.0.1 \
    --master_port=9901 \
    examples/pytorch/language-modeling/run_clm.py \
    --bf16 \
    --tf32 1 \
    --seed 42 \
    --dataset_name c4 \
    --dataset_config_name realnewslike \
    --model_name_or_path opt-1.3b-bf16 \
    --per_device_train_batch_size 6 \
    --per_device_eval_batch_size 6 \
    --gradient_accumulation_steps 2 \
    --do_train \
    --logging_steps 5 \
    --save_steps 1000 \
    --eval_steps 1000 \
    --weight_decay 0.1 \
    --num_train_epochs 1 \
    --adam_beta1 0.9 \
    --adam_beta2 0.95 \
    --learning_rate 0.0002 \
    --lr_scheduler_type linear \
    --warmup_steps 1000 \
    --report_to tensorboard \
    --output_dir saved \
    --logging_dir tb \
    --log_level warning \
    --preprocessing_num_workers 32
```

The training took about 40h.
e0402fcf7ecdd86f065b7f4176b49dde
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-common-voice-fa-demo-colab

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 3.0558
- Wer: 1.0
3c1fcf7477d44a31e554aa9011947172
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
- mixed_precision_training: Native AMP
10b9db8326a102fa712d7193a6e610ba
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 5.1626 | 0.3 | 100 | 4.0692 | 1.0 |
| 5.1776 | 0.6 | 200 | 3.6640 | 1.0 |
| 3.6628 | 0.9 | 300 | 3.3832 | 1.0 |
| 3.2022 | 1.2 | 400 | 3.3492 | 1.0 |
| 3.1714 | 1.5 | 500 | 3.3215 | 1.0 |
| 3.0689 | 1.8 | 600 | 3.0806 | 1.0 |
| 3.1478 | 2.1 | 700 | 3.0624 | 1.0 |
| 3.1818 | 2.4 | 800 | 3.0777 | 1.0 |
| 3.159 | 2.7 | 900 | 3.0558 | 1.0 |
29836f3e24462686c02433f83eba46ad
apache-2.0
['automatic-speech-recognition', 'de']
false
exp_w2v2r_de_vp-100k_age_teens-10_sixties-0_s362

Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
dbd331a6b8877a80f6f8649a8c213f1f
apache-2.0
['translation']
false
war-eng

* source group: Waray (Philippines)
* target group: English
* OPUS readme: [war-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/war-eng/README.md)
* model: transformer-align
* source language(s): war
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/war-eng/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/war-eng/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/war-eng/opus-2020-06-16.eval.txt)
773a40d35c63424935f81d117d3f9f48
apache-2.0
['translation']
false
System Info:

- hf_name: war-eng
- source_languages: war
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/war-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['war', 'en']
- src_constituents: {'war'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/war-eng/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/war-eng/opus-2020-06-16.test.txt
- src_alpha3: war
- tgt_alpha3: eng
- short_pair: war-en
- chrF2_score: 0.308
- bleu: 12.3
- brevity_penalty: 1.0
- ref_len: 11345.0
- src_name: Waray (Philippines)
- tgt_name: English
- train_date: 2020-06-16
- src_alpha2: war
- tgt_alpha2: en
- prefer_old: False
- long_pair: war-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
16d586a073cc44dfc7ede0f5f735eafc
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 1000
dec928cd7ab7d96303a29f968591f188
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
finetuned on dark, moody, "victorian" imagery (ノ◕ヮ◕)ノ*:・゚✧

[<img src="https://colab.research.google.com/assets/colab-badge.svg">](https://colab.research.google.com/drive/13E3i6_Z1BWd3e6f71-TNd5bk8eGqaeZf?usp=sharing)

![1](https://i.im.ge/2022/11/16/S1gs6P.darkvictorian-2.jpg)

v1 was trained on SD 1.4, v2 on SD 1.5. check the pdf for examples with different prompts & settings. comparisons.zip has steps vs cfg scale x/y plots for euler_a and lms.

use the tokens "darkvictorian artstyle" in your prompt to use the style.
ad99b5f68d413d8313c83bbc8c2dc75b
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
random samples:

![samples](https://i.im.ge/2022/11/16/S1gaV1.samples.jpg)

<a href='https://ko-fi.com/S6S6FUYKY' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi3.png?v=3' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
467f6cbd564f8d699638ebcb86cce12a
mit
['roberta', 'cloze', 'distractor', 'generation']
false
Model description

This model is a Candidate Set Generator in **"CDGP: Automatic Cloze Distractor Generation based on Pre-trained Language Model", Findings of EMNLP 2022**. Its inputs are the stem and the answer, and its output is a candidate set of distractors. It is fine-tuned on the [**CLOTH**](https://www.cs.cmu.edu/~glai1/data/cloth/) dataset based on the [**roberta-base**](https://huggingface.co/roberta-base) model.

For more details, you can see our **paper** or [**GitHub**](https://github.com/AndyChiangSH/CDGP).
6001628d9c18fd5b6b834f47e91169d7
mit
['roberta', 'cloze', 'distractor', 'generation']
false
How to use?

1. Download the model with Hugging Face transformers.

```python
from transformers import RobertaTokenizer, RobertaForMaskedLM, pipeline

tokenizer = RobertaTokenizer.from_pretrained("AndyChiang/cdgp-csg-roberta-cloth")
csg_model = RobertaForMaskedLM.from_pretrained("AndyChiang/cdgp-csg-roberta-cloth")
```

2. Create an unmasker.

```python
unmasker = pipeline("fill-mask", tokenizer=tokenizer, model=csg_model, top_k=10)
```

3. Use the unmasker to generate the candidate set of distractors.

```python
sent = "I feel <mask> now. </s> happy"
cs = unmasker(sent)
print(cs)
```
5e0093f5f6ca51405546ce4924453f6f
mit
['roberta', 'cloze', 'distractor', 'generation']
false
Dataset

This model is fine-tuned on the [CLOTH](https://www.cs.cmu.edu/~glai1/data/cloth/) dataset, which is a collection of nearly 100,000 cloze questions from middle school and high school English exams. The detail of the CLOTH dataset is shown below.

| Number of questions | Train | Valid | Test |
| ------------------- | ----- | ----- | ----- |
| Middle school | 22056 | 3273 | 3198 |
| High school | 54794 | 7794 | 8318 |
| Total | 76850 | 11067 | 11516 |

You can also use the [dataset](https://huggingface.co/datasets/AndyChiang/cloth) we have already cleaned.
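The split sizes above are internally consistent (middle school + high school = total for each of the three splits); a quick check:

```python
# Split sizes copied from the table above: (train, valid, test).
splits = {
    "Middle school": (22056, 3273, 3198),
    "High school": (54794, 7794, 8318),
    "Total": (76850, 11067, 11516),
}

consistent = all(
    splits["Middle school"][i] + splits["High school"][i] == splits["Total"][i]
    for i in range(3)
)
print(consistent)  # → True
```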
27447a4689e8ab894ca43daad10bba00
mit
['roberta', 'cloze', 'distractor', 'generation']
false
Training hyperparameters

The following hyperparameters were used during training:
- Pre-trained language model: [roberta-base](https://huggingface.co/roberta-base)
- Optimizer: adam
- Learning rate: 0.0001
- Max length of input: 64
- Batch size: 64
- Epoch: 1
- Device: NVIDIA® Tesla T4 in Google Colab
81a6636bb1e592e24085e0fa74540450
mit
['roberta', 'cloze', 'distractor', 'generation']
false
Testing

The evaluations of this model as a Candidate Set Generator in CDGP are as follows:

| P@1 | F1@3 | F1@10 | MRR | NDCG@10 |
| ----- | ---- | ----- | ----- | ------- |
| 10.50 | 9.83 | 10.25 | 20.42 | 28.17 |
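MRR and NDCG@10 above are standard ranking metrics over the generated candidate list. Minimal sketches, assuming a single gold answer per question and binary relevance (an assumption about the exact evaluation protocol, which the card does not spell out):

```python
import math

def mrr(ranked_lists, golds):
    """Mean reciprocal rank of the gold item (0 if absent)."""
    total = 0.0
    for ranked, gold in zip(ranked_lists, golds):
        total += 1 / (ranked.index(gold) + 1) if gold in ranked else 0.0
    return total / len(golds)

def ndcg_at_10(ranked, gold):
    """Binary-relevance NDCG@10: the ideal DCG is 1 (gold at rank 1)."""
    for i, item in enumerate(ranked[:10]):
        if item == gold:
            return 1 / math.log2(i + 2)
    return 0.0

print(mrr([["a", "b", "c"], ["x", "y"]], ["b", "z"]))  # → 0.25
print(ndcg_at_10(["a", "b", "c"], "b"))               # 1/log2(3) ≈ 0.631
```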
8fa5b126bd1d9fea83fa94403abd431a
mit
['roberta', 'cloze', 'distractor', 'generation']
false
Candidate Set Generator | Models | CLOTH | DGen | | ----------- | ----------------------------------------------------------------------------------- | -------------------------------------------------------------------------------- | | **BERT** | [cdgp-csg-bert-cloth](https://huggingface.co/AndyChiang/cdgp-csg-bert-cloth) | [cdgp-csg-bert-dgen](https://huggingface.co/AndyChiang/cdgp-csg-bert-dgen) | | **SciBERT** | [cdgp-csg-scibert-cloth](https://huggingface.co/AndyChiang/cdgp-csg-scibert-cloth) | [cdgp-csg-scibert-dgen](https://huggingface.co/AndyChiang/cdgp-csg-scibert-dgen) | | **RoBERTa** | [*cdgp-csg-roberta-cloth*](https://huggingface.co/AndyChiang/cdgp-csg-roberta-cloth) | [cdgp-csg-roberta-dgen](https://huggingface.co/AndyChiang/cdgp-csg-roberta-dgen) | | **BART** | [cdgp-csg-bart-cloth](https://huggingface.co/AndyChiang/cdgp-csg-bart-cloth) | [cdgp-csg-bart-dgen](https://huggingface.co/AndyChiang/cdgp-csg-bart-dgen) |
68a2bb5e927a635b2d5f6d522ef2d4bb
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-cola

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set:

- Loss: 0.7501
- Matthews Correlation: 0.5309
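The Matthews correlation reported above is CoLA's standard metric; it is computed from the binary confusion matrix and ranges from -1 to 1, with 0 at chance level. A minimal sketch:

```python
import math

def matthews_corr(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(matthews_corr(tp=50, tn=40, fp=5, fn=5))   # strong agreement, ≈ 0.798
print(matthews_corr(tp=25, tn=25, fp=25, fn=25)) # → 0.0 (chance level)
```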
da3df4e12933496fe7225ebe0791d313
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5286 | 1.0 | 535 | 0.5067 | 0.4301 |
| 0.3469 | 2.0 | 1070 | 0.5216 | 0.4802 |
| 0.2343 | 3.0 | 1605 | 0.6431 | 0.5002 |
| 0.1753 | 4.0 | 2140 | 0.7501 | 0.5309 |
| 0.1251 | 5.0 | 2675 | 0.8695 | 0.5222 |
31ed2e1976ffe40223172082e046730d
apache-2.0
['generated_from_trainer']
false
all-roberta-large-v1-work-7-16-5

This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 2.3586
- Accuracy: 0.3689
a7936eb2e12ec066f4a1dff5433eb54b
apache-2.0
['translation']
false
opus-mt-fr-tll

* source languages: fr
* target languages: tll
* OPUS readme: [fr-tll](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-tll/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-tll/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tll/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tll/opus-2020-01-16.eval.txt)
2d867905307360bc7943fa156f999bd8
apache-2.0
['generated_from_trainer']
false
Article_100v5_NER_Model_3Epochs_UNAUGMENTED

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article100v5_wikigold_split dataset. It achieves the following results on the evaluation set:

- Loss: 0.5958
- Precision: 0.0241
- Recall: 0.0005
- F1: 0.0010
- Accuracy: 0.7822
9f567795eee5a1b619dcfa4cfe93edf0
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 13 | 0.7298 | 0.0 | 0.0 | 0.0 | 0.7816 |
| No log | 2.0 | 26 | 0.6272 | 0.0 | 0.0 | 0.0 | 0.7816 |
| No log | 3.0 | 39 | 0.5958 | 0.0241 | 0.0005 | 0.0010 | 0.7822 |
b82e36ccd6fa330402a90342fd3511fc
cc-by-sa-4.0
['japanese', 'pos', 'dependency-parsing']
false
Model Description

This is a RoBERTa model pretrained on 青空文庫 texts for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [roberta-large-japanese-aozora](https://huggingface.co/KoichiYasuoka/roberta-large-japanese-aozora) and [UD_Japanese-GSDLUW](https://github.com/UniversalDependencies/UD_Japanese-GSDLUW).
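"Using `goeswith` for subwords" means non-initial subword tokens are attached to the preceding token with the goeswith relation and can then be merged back into one word. A toy sketch of that merge step (an illustration of the labeling scheme, not this model's actual decoding code):

```python
def merge_goeswith(tokens, deprels):
    """Merge each token labelled 'goeswith' into the preceding word."""
    words = []
    for token, rel in zip(tokens, deprels):
        if rel == "goeswith" and words:
            words[-1] += token
        else:
            words.append(token)
    return words

# "挿し絵" split into two subwords; the second carries goeswith.
tokens = ["挿し", "絵", "が"]
deprels = ["obj", "goeswith", "case"]
print(merge_goeswith(tokens, deprels))  # → ['挿し絵', 'が']
```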
416efba49eeacbf5adbf31df8ba6eba3
cc-by-sa-4.0
['japanese', 'pos', 'dependency-parsing']
false
    u="# text = "+text+"\n"
    v=[(s,e) for s,e in w["offset_mapping"] if s<e]
    for i,(s,e) in enumerate(v,1):
      q=self.model.config.id2label[p[i,h[i]]].split("|")
      u+="\t".join([str(i),text[s:e],"_",q[0],"_","|".join(q[1:-1]),str(h[i]),q[-1],"_","_" if i<len(v) and e<v[i][0] else "SpaceAfter=No"])+"\n"
    return u+"\n"

nlp=UDgoeswith("KoichiYasuoka/roberta-large-japanese-aozora-ud-goeswith")
print(nlp("全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```

with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/). Or without ufal.chu-liu-edmonds:

```
from transformers import pipeline
nlp=pipeline("universal-dependencies","KoichiYasuoka/roberta-large-japanese-aozora-ud-goeswith",trust_remote_code=True,aggregation_strategy="simple")
print(nlp("全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```
b038e7972fc1ce5bcb4ca506e4eeec1f
mit
['generated_from_trainer']
false
deberta-v3-xsmall-indonesia-squadv2

This model is a fine-tuned version of [microsoft/deberta-v3-xsmall](https://huggingface.co/microsoft/deberta-v3-xsmall) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 1.4182
178b0c4705c535bfd6f1bfd34a508aff
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 4.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
9ba705a91374d04f47861d076950352e
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.6078 | 1.0 | 13505 | 1.5331 |
| 1.4216 | 2.0 | 27010 | 1.4344 |
| 1.2017 | 3.0 | 40515 | 1.4182 |
340dcf3efeba8b698fd75f2b60fcdc22
mit
['generated_from_trainer']
false
Evaluation Results

```
{'exact': 55.34646711872568,
 'f1': 67.22757187614371,
 'total': 24923,
 'HasAns_exact': 55.34646711872568,
 'HasAns_f1': 67.22757187614371,
 'HasAns_total': 24923,
 'best_exact': 55.34646711872568,
 'best_exact_thresh': 0.0,
 'best_f1': 67.22757187614371,
 'best_f1_thresh': 0.0}
```
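The `exact` and `f1` scores above follow the SQuAD convention: exact match on the normalized answer string, and token-overlap F1 between prediction and gold answer. A simplified sketch using only lowercasing and whitespace tokenization (the official SQuAD script also strips punctuation and articles):

```python
from collections import Counter

def squad_f1(prediction, gold):
    """Token-overlap F1 between a predicted and a gold answer span."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# Partial answer: precise but incomplete → F1 between 0 and 1.
print(squad_f1("Antonio Albreu", "Antonio Albreu dan Franscisco Serrao"))  # ≈ 0.571
```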
81567a228c8c0d7a0e2ad8a2df10ab7c
mit
['generated_from_trainer']
false
Simple Usage

```
from transformers import pipeline

qa_pipeline = pipeline(
    "question-answering",
    model="asaduas/deberta-v3-xsmall-indonesia-squadv2",
    tokenizer="asaduas/deberta-v3-xsmall-indonesia-squadv2"
)

qa_pipeline(
    {
        'context': "Pada tahun 1512 juga Afonso de Albuquerque mengirim Antonio Albreu dan Franscisco Serrao untuk memimpin armadanya mencari jalan ke tempat asal rempah-rempah di Maluku. Sepanjang perjalanan, mereka singgah di Madura, Bali, dan Lombok. Dengan menggunakan nakhoda-nakhoda Jawa, armada itu tiba di Kepulauan Banda, terus menuju Aibku Utara sampai tiba di Ternate.",
        'question': "Siapa yang dikirim oleh Afonso de Albuquerque Pada tahun 1512?"
    }
)
```
dadd6be11721ee7fbffb0ccf771252c3
apache-2.0
['generated_from_trainer']
false
t5_large_epoch_1_comve_triple

This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 3.5605
8e68a51fb6333f927aa4c0f135ae821a
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
2d060386918cec5b1f6b923b5dce0af8
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 4 | 4.1923 |
| No log | 2.0 | 8 | 3.5605 |
c19ca338d87a19a546bbd85f5c833d82
apache-2.0
['translation']
false
zho-eng

* source group: Chinese
* target group: English
* OPUS readme: [zho-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-eng/README.md)
* model: transformer
* source language(s): cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant gan lzh lzh_Hans nan wuu yue yue_Hans yue_Hant
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-eng/opus-2020-07-17.zip)
* test set translations: [opus-2020-07-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-eng/opus-2020-07-17.test.txt)
* test set scores: [opus-2020-07-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-eng/opus-2020-07-17.eval.txt)
ad4399ea103fc24dfe7faae60fe26175
apache-2.0
['translation']
false
System Info:
- hf_name: zho-eng
- source_languages: zho
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'en']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-eng/opus-2020-07-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-eng/opus-2020-07-17.test.txt
- src_alpha3: zho
- tgt_alpha3: eng
- short_pair: zh-en
- chrF2_score: 0.5479999999999999
- bleu: 36.1
- brevity_penalty: 0.948
- ref_len: 82826.0
- src_name: Chinese
- tgt_name: English
- train_date: 2020-07-17
- src_alpha2: zh
- tgt_alpha2: en
- prefer_old: False
- long_pair: zho-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
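The `bleu` and `brevity_penalty` fields above are related by BLEU's standard length penalty, BP = min(1, exp(1 - ref_len / output_len)), which discounts system outputs shorter than the reference. A minimal sketch of that formula; the system output length of roughly 78,600 tokens is inferred from the reported penalty, not stated in the card:

```python
import math

def brevity_penalty(output_len: int, ref_len: float) -> float:
    """BLEU brevity penalty: 1.0 for outputs at least as long as the
    reference, exponentially discounted for shorter ones."""
    if output_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / output_len)

# With ref_len = 82826 from the card, a penalty near 0.948 implies the
# system translations totalled roughly 78,600 tokens.
print(round(brevity_penalty(78627, 82826.0), 3))
```

Because BP never exceeds 1, the reported BLEU of 36.1 would have been slightly higher had the system produced reference-length output.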
a4e059df382367d1caf89254bccfa891
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
Avatar

Dreambooth model trained by yugkha3 with the [buildspace DreamBooth](https://colab.research.google.com/github/buildspace/diffusers/blob/main/examples/dreambooth/DreamBooth_Stable_Diffusion.ipynb) notebook.

Build your own using the [AI Avatar project](https://buildspace.so/builds/ai-avatar)! To get started, head over to the [project dashboard](https://buildspace.so/p/build-ai-avatars).

Sample pictures of this concept:
39ba72c43c76dcd57ce4c56877c9486e