Dataset schema:
- license: string (2–30 chars)
- tags: string (2–513 chars)
- is_nc: bool (1 class)
- readme_section: string (201–597k chars)
- hash: string (32 chars)
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 1.8267 | 1.0 | 23927 | 1.6689 | 24.4634 | 11.7413 | 20.2154 | 23.0875 | 18.9993 |
| 1.81 | 2.0 | 47854 | 1.6614 | 24.5589 | 11.8509 | 20.3011 | 23.1768 | 19.0 |
da8dd5c87979a2f2baff2c0e103f0ae9
apache-2.0
['generated_from_keras_callback']
false
Imene/vit-base-patch16-224-in21k-wi2

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 2.9892
- Train Accuracy: 0.5568
- Train Top-3-accuracy: 0.8130
- Validation Loss: 3.0923
- Validation Accuracy: 0.4280
- Validation Top-3-accuracy: 0.7034
- Epoch: 4
14a2558c78b50cdb64edb0eea178dcd8
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
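The PolynomialDecay config above (initial_learning_rate=3e-05, decay_steps=500, end_learning_rate=0.0, power=1.0, cycle=False) is simply a linear ramp from 3e-05 to 0 over 500 steps. A minimal sketch in plain Python of what the Keras schedule computes (not the Keras class itself):

```python
def polynomial_decay(step, initial_lr=3e-05, decay_steps=500, end_lr=0.0, power=1.0):
    """Learning rate at a given step under Keras-style PolynomialDecay."""
    step = min(step, decay_steps)  # cycle=False: clamp past decay_steps
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr

print(polynomial_decay(0))    # 3e-05 at the start
print(polynomial_decay(250))  # halfway: 1.5e-05
print(polynomial_decay(500))  # fully decayed: 0.0
```

With power=1.0 the curve is a straight line; other powers bend the decay toward the start or end of training.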
b028d2c5e3badf861ede3c34610c86c4
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 3.8488 | 0.0720 | 0.1713 | 3.7116 | 0.1564 | 0.3617 | 0 |
| 3.5246 | 0.2703 | 0.4898 | 3.4122 | 0.3217 | 0.5732 | 1 |
| 3.2493 | 0.4150 | 0.6827 | 3.2232 | 0.3880 | 0.6633 | 2 |
| 3.0840 | 0.5002 | 0.7670 | 3.1275 | 0.4255 | 0.6921 | 3 |
| 2.9892 | 0.5568 | 0.8130 | 3.0923 | 0.4280 | 0.7034 | 4 |
d64837037f661d99d09e5bff4261d932
apache-2.0
['whisper-event', 'generated_from_trainer']
false
kpriyanshu256/whisper-large-v2-as-600-32-1e-05-bn-Assamese

This model is a fine-tuned version of [kpriyanshu256/whisper-large-v2-as-600-32-1e-05-bn](https://huggingface.co/kpriyanshu256/whisper-large-v2-as-600-32-1e-05-bn) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set:
- Loss: 0.2637
- Wer: 21.6928
b296eb6f3d1ea1bda23d57bf64ef6aab
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 200
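The total_train_batch_size above is a derived quantity, not a flag that was set directly: it is the per-device batch size multiplied by the gradient-accumulation steps (there is no multi-GPU factor in this run). A one-line sanity check:

```python
# Effective batch size = per-device batch size x gradient accumulation steps.
train_batch_size = 4
gradient_accumulation_steps = 8
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 32, matching the value reported above
```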
e16d7547b2fbf235bb1755519029f74c
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:---:|:---:|:---:|:---:|:---:|
| 0.1915 | 1.1 | 50 | 0.2129 | 26.3851 |
| 0.0639 | 3.06 | 100 | 0.2305 | 23.0825 |
| 0.0192 | 5.03 | 150 | 0.2391 | 22.0538 |
| 0.0041 | 6.13 | 200 | 0.2637 | 21.6928 |
2bc174498fa09ad48803d2d20f37b63a
mit
[]
false
<h1>Transformer Encoder for Social Science (TESS)</h1>

TESS is a deep neural network model intended for social science NLP tasks, developed by Haosen Ge, In Young Park, Xuancheng Qian, and Grace Zeng. In two validation tests, TESS outperforms BERT and RoBERTa by 16.7% on average, especially when the number of training samples is limited (<1,000 training instances). These results demonstrate the advantage of TESS on social science text-processing tasks. GitHub: [TESS](https://github.com/haosenge/TESS).

<h2>Training Corpus</h2>

| TEXT | SOURCE |
| ------------- | ------------- |
| Preferential Trade Agreements | ToTA |
| Congressional Bills | Kornilova and Eidelman (2019) |
| UNGA Resolutions | UN |
| Firms' Annual Reports | Loughran and McDonald (2016) |
| U.S. Court Opinions | Caselaw Access Project |

The model was trained on 4 NVIDIA A100 GPUs for 120K steps.
f0a503799a83c760ca96108235996dec
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-cola

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set:
- Loss: 0.7758
- Matthews Correlation: 0.5259
ef5fe161ede83bece6e7ec8a435cc026
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:---:|:---:|:---:|:---:|:---:|
| 0.1926 | 1.0 | 535 | 0.7758 | 0.5259 |
71497df5d13c4b213a2484248e418e23
apache-2.0
['generated_from_trainer']
false
BERT_Mod_7_Squad

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad_v2 dataset. It achieves the following results on the evaluation set:
- Loss: 1.0928
f6e62aba980dbb839f168739efdf7d11
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
049a7bee5a055f2055f94637e1b1cdda
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:---:|:---:|:---:|:---:|
| 1.189 | 1.0 | 4089 | 1.2196 |
| 1.0312 | 2.0 | 8178 | 1.0691 |
| 0.8954 | 3.0 | 12267 | 1.0928 |
44aa0b42d6a82a42a4761d7b454b2a8a
apache-2.0
['automatic-speech-recognition', 'es']
false
exp_w2v2t_es_unispeech-ml_s952

Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16 kHz. This model was fine-tuned with the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
a30a7f811c266e28ae9ded653e5fdeea
apache-2.0
['generated_from_trainer']
false
all-roberta-large-v1-work-4-16-5-oos

This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 2.3586
- Accuracy: 0.3689
ec9542f7c7c74067398989dc6bfd39d1
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-timit-demo-colab

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.4475
- Wer: 0.3400
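The Wer figure above is word error rate: the word-level edit distance (substitutions + insertions + deletions) between hypothesis and reference, divided by the number of reference words. A minimal illustrative implementation, not the metric library the trainer actually used:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # 1 deletion / 6 words
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is why the first epochs of the table below report values above 1.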
a09047c034ec6c85292a6ae8dbaf172b
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:---:|:---:|:---:|:---:|:---:|
| 3.6929 | 4.0 | 500 | 2.4485 | 1.0009 |
| 0.9441 | 8.0 | 1000 | 0.4848 | 0.4758 |
| 0.3016 | 12.0 | 1500 | 0.4464 | 0.4016 |
| 0.1715 | 16.0 | 2000 | 0.4666 | 0.3765 |
| 0.1277 | 20.0 | 2500 | 0.4340 | 0.3515 |
| 0.1082 | 24.0 | 3000 | 0.4544 | 0.3495 |
| 0.0819 | 28.0 | 3500 | 0.4475 | 0.3400 |
95b688f55191e4b67b4394051ceb5dc9
apache-2.0
['image-classification', 'other-image-classification', 'generated_from_trainer']
false
vit-base-beans-demo-v3

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set:
- Loss: 0.0645
- Accuracy: 0.9850
5ec21cace4c97bc7f8151b9528cbb82a
apache-2.0
['image-classification', 'other-image-classification', 'generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
4bdd42952666aeb840535cd9cb112b83
apache-2.0
['image-classification', 'other-image-classification', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| 0.0397 | 1.54 | 100 | 0.0645 | 0.9850 |
cb30534c76058ea63b67b346c1493883
apache-2.0
['generated_from_trainer']
false
recipe-lr8e06-wd0.01-bs32

This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.2753
- Rmse: 0.5246
- Mse: 0.2753
- Mae: 0.4184
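The three regression metrics above are not independent: RMSE is the square root of MSE (MAE is computed separately from absolute errors). A quick consistency check on the reported values:

```python
import math

mse = 0.2753   # reported Mse (equal to the loss for an MSE objective)
rmse = 0.5246  # reported Rmse
# RMSE should equal sqrt(MSE) up to the 4-decimal rounding in the card.
print(math.sqrt(mse))  # ~0.5247
assert abs(math.sqrt(mse) - rmse) < 1e-3
```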
f76ca7fb19b2b8d6a76ea2a57cc20cf4
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
c606c5e39431487b6d506691e6c02bc9
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.2769 | 1.0 | 623 | 0.2774 | 0.5266 | 0.2774 | 0.4296 |
| 0.2745 | 2.0 | 1246 | 0.2739 | 0.5233 | 0.2739 | 0.4145 |
| 0.2733 | 3.0 | 1869 | 0.2752 | 0.5246 | 0.2752 | 0.4215 |
| 0.2722 | 4.0 | 2492 | 0.2744 | 0.5238 | 0.2744 | 0.4058 |
| 0.2714 | 5.0 | 3115 | 0.2758 | 0.5251 | 0.2758 | 0.4232 |
| 0.2705 | 6.0 | 3738 | 0.2753 | 0.5246 | 0.2753 | 0.4184 |
ba4744ad9a4cef5694adfd666714331e
apache-2.0
['generated_from_trainer']
false
fnet-large-finetuned-cola-copy4

This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE COLA dataset. It achieves the following results on the evaluation set:
- Loss: 0.6500
- Matthews Correlation: 0.0
a8385138f5dc69f662d7a188b3b0bd73
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- num_epochs: 3.0
e4df249c288dd6213f87c03c22fb1339
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:---:|:---:|:---:|:---:|:---:|
| 0.6345 | 1.0 | 2138 | 0.6611 | 0.0 |
| 0.6359 | 2.0 | 4276 | 0.6840 | 0.0 |
| 0.6331 | 3.0 | 6414 | 0.6500 | 0.0 |
4c89c3a66f124eb6eb13592beba5b0bc
mit
['generated_from_keras_callback']
false
nandysoham16/Canadian_Armed_Forces-clustered

This model is a fine-tuned version of [nandysoham16/0-clustered_aug](https://huggingface.co/nandysoham16/0-clustered_aug) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.5493
- Train End Logits Accuracy: 0.8611
- Train Start Logits Accuracy: 0.7812
- Validation Loss: 0.3839
- Validation End Logits Accuracy: 1.0
- Validation Start Logits Accuracy: 0.8000
- Epoch: 0
64c9e93e4980ec11556d47ec7717bd79
mit
['generated_from_keras_callback']
false
Training results

| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.5493 | 0.8611 | 0.7812 | 0.3839 | 1.0 | 0.8000 | 0 |
6440015a96b22264aea9300b083e4175
apache-2.0
['automatic-speech-recognition', 'th']
false
exp_w2v2t_th_vp-100k_s403

Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition on Thai using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16 kHz. This model was fine-tuned with the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
f42adcd96878f831423457a49175251b
apache-2.0
[]
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(0.95, 0.999), weight_decay=1e-06 and epsilon=1e-08
- lr_scheduler: cosine
- lr_warmup_steps: 500
- ema_inv_gamma: 1.0
- ema_power: 0.75
- ema_max_decay: 0.9999
- mixed_precision: fp16
2b1a14d4f01c6b2e9fe2304ad31ad232
apache-2.0
['generated_from_trainer']
false
wav2vec2-xls-r-300m-cv8-es

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set:
- eval_loss: 0.2115
- eval_wer: 0.1931
- eval_runtime: 859.964
- eval_samples_per_second: 17.954
- eval_steps_per_second: 2.244
- epoch: 6.97
- step: 50000
88069ca21498a301d56084092b170254
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-de

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set:
- Loss: 0.1368
- F1: 0.8599
f30530e6c4f84745b2a41bdaa05e77f7
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:---:|:---:|:---:|:---:|:---:|
| 0.2618 | 1.0 | 525 | 0.1748 | 0.8134 |
| 0.1274 | 2.0 | 1050 | 0.1398 | 0.8461 |
| 0.0817 | 3.0 | 1575 | 0.1368 | 0.8599 |
ba1a5f8dcc82f9741d37c23dda8f60f5
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:---:|:---:|:---:|:---:|
| No log | 1.0 | 249 | 3.1538 |
| No log | 2.0 | 498 | 2.6796 |
| 4.0415 | 3.0 | 747 | 2.5939 |
79997273a6811e9afc880adc2026e82b
apache-2.0
['automatic-speech-recognition', 'es']
false
exp_w2v2r_es_vp-100k_age_teens-8_sixties-2_s284

Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16 kHz. This model was fine-tuned with the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
fad571f4ae41d105f81482b23378e2ea
other
['generated_from_trainer']
false
dalio-all-io-125m-3-epoch

This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the AlekseyKorshuk/dalio-all-io dataset. It achieves the following results on the evaluation set:
- Loss: 2.7656
- Accuracy: 0.0497
92e45e04472eafce445ba9b3494739ea
other
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3.0
2123c4089e43e3f14c94b439aa827030
other
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| 3.1406 | 0.03 | 1 | 3.0762 | 0.0451 |
| 3.074 | 0.07 | 2 | 3.0762 | 0.0451 |
| 3.0557 | 0.1 | 3 | 3.0762 | 0.0451 |
| 3.2166 | 0.14 | 4 | 3.0176 | 0.0457 |
| 3.0989 | 0.17 | 5 | 2.9922 | 0.0460 |
| 3.0732 | 0.21 | 6 | 2.9746 | 0.0464 |
| 3.0867 | 0.24 | 7 | 2.9629 | 0.0463 |
| 2.979 | 0.28 | 8 | 2.9512 | 0.0467 |
| 3.1838 | 0.31 | 9 | 2.9414 | 0.0467 |
| 2.9399 | 0.34 | 10 | 2.9336 | 0.0467 |
| 2.926 | 0.38 | 11 | 2.9258 | 0.0471 |
| 3.2144 | 0.41 | 12 | 2.9199 | 0.0473 |
| 2.978 | 0.45 | 13 | 2.9141 | 0.0474 |
| 3.0076 | 0.48 | 14 | 2.9082 | 0.0476 |
| 2.9897 | 0.52 | 15 | 2.9023 | 0.0477 |
| 2.8831 | 0.55 | 16 | 2.8945 | 0.0479 |
| 2.9749 | 0.59 | 17 | 2.8867 | 0.0479 |
| 2.9431 | 0.62 | 18 | 2.8828 | 0.0478 |
| 3.0498 | 0.66 | 19 | 2.8770 | 0.0479 |
| 2.9409 | 0.69 | 20 | 2.8711 | 0.0479 |
| 2.96 | 0.72 | 21 | 2.8672 | 0.0480 |
| 3.0767 | 0.76 | 22 | 2.8633 | 0.0478 |
| 2.772 | 0.79 | 23 | 2.8594 | 0.0479 |
| 3.0574 | 0.83 | 24 | 2.8535 | 0.0480 |
| 2.8137 | 0.86 | 25 | 2.8496 | 0.0480 |
| 2.8872 | 0.9 | 26 | 2.8438 | 0.0483 |
| 3.0085 | 0.93 | 27 | 2.8398 | 0.0484 |
| 2.9165 | 0.97 | 28 | 2.8359 | 0.0485 |
| 2.8525 | 1.0 | 29 | 2.8340 | 0.0486 |
| 2.7759 | 1.03 | 30 | 2.8301 | 0.0485 |
| 2.7312 | 1.07 | 31 | 2.8281 | 0.0485 |
| 2.6641 | 1.1 | 32 | 2.8262 | 0.0487 |
| 2.7896 | 1.14 | 33 | 2.8242 | 0.0486 |
| 2.7878 | 1.17 | 34 | 2.8223 | 0.0487 |
| 2.4028 | 1.21 | 35 | 2.8203 | 0.0487 |
| 2.5618 | 1.24 | 36 | 2.8184 | 0.0488 |
| 2.6697 | 1.28 | 37 | 2.8164 | 0.0488 |
| 2.6333 | 1.31 | 38 | 2.8145 | 0.0487 |
| 2.4897 | 1.34 | 39 | 2.8125 | 0.0486 |
| 2.4908 | 1.38 | 40 | 2.8105 | 0.0487 |
| 2.6926 | 1.41 | 41 | 2.8086 | 0.0488 |
| 2.6602 | 1.45 | 42 | 2.8066 | 0.0489 |
| 2.8054 | 1.48 | 43 | 2.8047 | 0.0489 |
| 2.5532 | 1.52 | 44 | 2.8047 | 0.0490 |
| 2.4756 | 1.55 | 45 | 2.8027 | 0.0491 |
| 2.6123 | 1.59 | 46 | 2.8008 | 0.0491 |
| 2.5117 | 1.62 | 47 | 2.7988 | 0.0490 |
| 2.5552 | 1.66 | 48 | 2.7969 | 0.0490 |
| 2.5122 | 1.69 | 49 | 2.7949 | 0.0490 |
| 2.5593 | 1.72 | 50 | 2.7930 | 0.0491 |
| 2.5759 | 1.76 | 51 | 2.7910 | 0.0491 |
| 2.5535 | 1.79 | 52 | 2.7891 | 0.0493 |
| 2.6531 | 1.83 | 53 | 2.7871 | 0.0494 |
| 2.5701 | 1.86 | 54 | 2.7852 | 0.0495 |
| 2.6621 | 1.9 | 55 | 2.7832 | 0.0497 |
| 2.532 | 1.93 | 56 | 2.7812 | 0.0496 |
| 2.5928 | 1.97 | 57 | 2.7793 | 0.0497 |
| 2.5486 | 2.0 | 58 | 2.7754 | 0.0497 |
| 2.5009 | 2.03 | 59 | 2.7734 | 0.0497 |
| 2.4346 | 2.07 | 60 | 2.7734 | 0.0498 |
| 2.3259 | 2.1 | 61 | 2.7715 | 0.0497 |
| 2.3569 | 2.14 | 62 | 2.7695 | 0.0498 |
| 2.5898 | 2.17 | 63 | 2.7695 | 0.0498 |
| 2.3657 | 2.21 | 64 | 2.7676 | 0.0498 |
| 2.4875 | 2.24 | 65 | 2.7676 | 0.0498 |
| 2.4392 | 2.28 | 66 | 2.7676 | 0.0497 |
| 2.3595 | 2.31 | 67 | 2.7656 | 0.0497 |
| 2.4757 | 2.34 | 68 | 2.7656 | 0.0498 |
| 2.4617 | 2.38 | 69 | 2.7656 | 0.0498 |
| 2.3376 | 2.41 | 70 | 2.7656 | 0.0499 |
| 2.3129 | 2.45 | 71 | 2.7656 | 0.0498 |
| 2.5703 | 2.48 | 72 | 2.7656 | 0.0498 |
| 2.3491 | 2.52 | 73 | 2.7656 | 0.0498 |
| 2.3484 | 2.55 | 74 | 2.7656 | 0.0498 |
| 2.3782 | 2.59 | 75 | 2.7656 | 0.0497 |
| 2.4033 | 2.62 | 76 | 2.7656 | 0.0498 |
| 2.3821 | 2.66 | 77 | 2.7656 | 0.0498 |
| 2.39 | 2.69 | 78 | 2.7656 | 0.0498 |
| 2.3984 | 2.72 | 79 | 2.7656 | 0.0497 |
| 2.3936 | 2.76 | 80 | 2.7656 | 0.0498 |
| 2.4414 | 2.79 | 81 | 2.7656 | 0.0497 |
| 2.4727 | 2.83 | 82 | 2.7656 | 0.0497 |
| 2.3192 | 2.86 | 83 | 2.7656 | 0.0497 |
| 2.4365 | 2.9 | 84 | 2.7656 | 0.0497 |
| 2.5042 | 2.93 | 85 | 2.7656 | 0.0497 |
| 2.4746 | 2.97 | 86 | 2.7656 | 0.0497 |
| 2.5383 | 3.0 | 87 | 2.7656 | 0.0497 |
392bd61a588bc7457871c263a8c383c9
apache-2.0
['generated_from_trainer']
false
DistilBERT-POWO_MGH_Growth_Form_Finetuned

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.2182
4799505a524eba63c4479f15e652083a
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:---:|:---:|:---:|:---:|
| 0.2379 | 1.0 | 2054 | 0.2241 |
| 0.2098 | 2.0 | 4108 | 0.2173 |
| 0.2168 | 3.0 | 6162 | 0.2182 |
367394a705e54e18c54f216d9a50d58d
apache-2.0
['generated_from_trainer']
false
distilbert_add_GLUE_Experiment_logit_kd_rte_96

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE RTE dataset. It achieves the following results on the evaluation set:
- Loss: 0.4234
- Accuracy: 0.4729
1ffe2a581d9ad8bd6f6ba17679fd9acd
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| 0.4604 | 1.0 | 10 | 0.4429 | 0.4729 |
| 0.4358 | 2.0 | 20 | 0.4328 | 0.4729 |
| 0.4282 | 3.0 | 30 | 0.4290 | 0.4729 |
| 0.4246 | 4.0 | 40 | 0.4269 | 0.4729 |
| 0.4227 | 5.0 | 50 | 0.4252 | 0.4729 |
| 0.4204 | 6.0 | 60 | 0.4243 | 0.4729 |
| 0.4191 | 7.0 | 70 | 0.4238 | 0.4729 |
| 0.4185 | 8.0 | 80 | 0.4235 | 0.4729 |
| 0.4175 | 9.0 | 90 | 0.4234 | 0.4729 |
| 0.4164 | 10.0 | 100 | 0.4235 | 0.4729 |
| 0.418 | 11.0 | 110 | 0.4236 | 0.4729 |
| 0.4169 | 12.0 | 120 | 0.4236 | 0.4729 |
| 0.4173 | 13.0 | 130 | 0.4238 | 0.4729 |
| 0.4168 | 14.0 | 140 | 0.4239 | 0.4729 |
81cf1e3297c301369604d3d0bf43ca78
apache-2.0
['generated_from_keras_callback']
false
kasrahabib/20_propogated

This model is a fine-tuned version of [kasrahabib/XXX08_02_23__-bucket-finetunned](https://huggingface.co/kasrahabib/XXX08_02_23__-bucket-finetunned) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.0504
- Validation Loss: 0.1528
- Epoch: 9
2d697bbaf075f62f2c90b5bdd5730fa0
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7660, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
c2ca07384747c955bd8e991fc6786f06
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Validation Loss | Epoch |
|:---:|:---:|:---:|
| 0.2492 | 0.1740 | 0 |
| 0.1527 | 0.1501 | 1 |
| 0.1092 | 0.1582 | 2 |
| 0.0879 | 0.1568 | 3 |
| 0.0774 | 0.1577 | 4 |
| 0.0689 | 0.1513 | 5 |
| 0.0597 | 0.1598 | 6 |
| 0.0600 | 0.1536 | 7 |
| 0.0526 | 0.1519 | 8 |
| 0.0504 | 0.1528 | 9 |
524302f678007e485f4b5b9e4f940477
apache-2.0
['generated_from_keras_callback']
false
alk/t5-small-finetuned-cnn_dailymail-en-es

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 1.9163
- Validation Loss: 1.7610
- Epoch: 3
00ccb1de4fa0d940dc16e38a9ed899ef
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 71776, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
0f7fe2394c5c2a0a5ac44eea7160c534
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Validation Loss | Epoch |
|:---:|:---:|:---:|
| 1.9945 | 1.7837 | 0 |
| 1.9478 | 1.7694 | 1 |
| 1.9278 | 1.7646 | 2 |
| 1.9163 | 1.7610 | 3 |
b3158b30aeb63261577c354c92a8b366
apache-2.0
['generated_from_trainer']
false
tiny-mlm-glue-mnli-target-glue-qnli

This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-mnli](https://huggingface.co/muhtasham/tiny-mlm-glue-mnli) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.4695
- Accuracy: 0.7814
3d7ddd1f7b7b593343683a11c9ccc074
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| 0.6034 | 0.15 | 500 | 0.5431 | 0.7335 |
| 0.5403 | 0.31 | 1000 | 0.5253 | 0.7459 |
| 0.5174 | 0.46 | 1500 | 0.4953 | 0.7659 |
| 0.5137 | 0.61 | 2000 | 0.5259 | 0.7483 |
| 0.511 | 0.76 | 2500 | 0.4814 | 0.7750 |
| 0.5032 | 0.92 | 3000 | 0.4670 | 0.7847 |
| 0.4901 | 1.07 | 3500 | 0.4525 | 0.7904 |
| 0.4798 | 1.22 | 4000 | 0.4679 | 0.7836 |
| 0.4667 | 1.37 | 4500 | 0.4752 | 0.7798 |
| 0.4736 | 1.53 | 5000 | 0.4695 | 0.7814 |
3ec6dc6ed8c46a47ae00f870a76bb272
unlicense
[]
false
Recommendations

* The most recommended model is BAD 0.3.
* BA 0.1 is the closest to semi-realistic; BAD 0.5 is very realistic but shows a lot of the smudging typical of dream-style models.
* Recommended prompts: detailed face, restore face
* Recommended negatives: (worst quality, low quality:1.4), (loli, child, infant, baby:1.3), accessories
707df9969cfd935cb97a38bc2e0e7db4
mit
['generated_from_trainer']
false
model_dir

This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.0380
- Pearson: 0.9399
e320e99e776f63c2ebf10767dade3cf0
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 128
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
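The cosine schedule with warmup ratio 0.1 above means the learning rate climbs linearly over the first 10% of optimizer steps and then follows half a cosine down toward 0. A plain-Python sketch of that shape (assuming the standard Transformers warmup-then-cosine form; total_steps here is illustrative):

```python
import math

def warmup_cosine_lr(step, total_steps, peak_lr=8e-05, warmup_ratio=0.1):
    """Linear warmup for warmup_ratio of training, then cosine decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

total = 120  # illustrative total step count
print(warmup_cosine_lr(0, total))      # 0.0 at the first step
print(warmup_cosine_lr(12, total))     # peak (8e-05) right after warmup
print(warmup_cosine_lr(total, total))  # decayed to ~0 at the end
```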
018977d71652801ccd65b2fb9990d80b
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Pearson |
|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 12 | 0.2773 | 0.7230 |
| No log | 2.0 | 24 | 0.1120 | 0.7812 |
| No log | 3.0 | 36 | 0.1090 | 0.8638 |
| No log | 4.0 | 48 | 0.0613 | 0.9163 |
| No log | 5.0 | 60 | 0.0447 | 0.9409 |
| No log | 6.0 | 72 | 0.0356 | 0.9402 |
| No log | 7.0 | 84 | 0.0368 | 0.9359 |
| No log | 8.0 | 96 | 0.0408 | 0.9295 |
| No log | 9.0 | 108 | 0.0397 | 0.9382 |
| No log | 10.0 | 120 | 0.0380 | 0.9399 |
cd70313dd926cc62136494f2e2f2c681
mit
['text-classification', 'generated_from_trainer']
false
xnli_xlm_r_base_only_en_automodel_single_gpu

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xnli dataset. It achieves the following results on the evaluation set:
- Loss: 1.0986
- Accuracy: 0.3333
42fad2cba2e12be077cf4abbcd47488f
mit
['text-classification', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| 1.1064 | 0.04 | 1000 | 1.1003 | 0.3333 |
| 1.1042 | 0.08 | 2000 | 1.1006 | 0.3333 |
| 1.1049 | 0.12 | 3000 | 1.0992 | 0.3333 |
| 1.1037 | 0.16 | 4000 | 1.1019 | 0.3333 |
| 1.1037 | 0.2 | 5000 | 1.0986 | 0.3333 |
| 1.1028 | 0.24 | 6000 | 1.1014 | 0.3333 |
| 1.1044 | 0.29 | 7000 | 1.1059 | 0.3333 |
| 1.102 | 0.33 | 8000 | 1.1000 | 0.3333 |
| 1.1022 | 0.37 | 9000 | 1.1012 | 0.3333 |
| 1.1019 | 0.41 | 10000 | 1.0995 | 0.3333 |
| 1.1018 | 0.45 | 11000 | 1.0990 | 0.3333 |
| 1.103 | 0.49 | 12000 | 1.1018 | 0.3333 |
| 1.1016 | 0.53 | 13000 | 1.0989 | 0.3333 |
| 1.1021 | 0.57 | 14000 | 1.0995 | 0.3333 |
| 1.1012 | 0.61 | 15000 | 1.1026 | 0.3333 |
| 1.1012 | 0.65 | 16000 | 1.1000 | 0.3333 |
| 1.1018 | 0.69 | 17000 | 1.0992 | 0.3333 |
| 1.1004 | 0.73 | 18000 | 1.0996 | 0.3333 |
| 1.101 | 0.77 | 19000 | 1.0987 | 0.3333 |
| 1.1011 | 0.81 | 20000 | 1.1001 | 0.3333 |
| 1.1006 | 0.86 | 21000 | 1.0991 | 0.3333 |
| 1.1006 | 0.9 | 22000 | 1.1028 | 0.3333 |
| 1.1003 | 0.94 | 23000 | 1.0988 | 0.3333 |
| 1.1006 | 0.98 | 24000 | 1.0987 | 0.3333 |
| 1.1008 | 1.02 | 25000 | 1.0995 | 0.3333 |
| 1.1011 | 1.06 | 26000 | 1.0987 | 0.3333 |
| 1.1003 | 1.1 | 27000 | 1.0987 | 0.3333 |
| 1.1002 | 1.14 | 28000 | 1.1020 | 0.3333 |
| 1.1 | 1.18 | 29000 | 1.0988 | 0.3333 |
| 1.1002 | 1.22 | 30000 | 1.0995 | 0.3333 |
| 1.1001 | 1.26 | 31000 | 1.0989 | 0.3333 |
| 1.1001 | 1.3 | 32000 | 1.0986 | 0.3333 |
| 1.0999 | 1.34 | 33000 | 1.0989 | 0.3333 |
| 1.1004 | 1.39 | 34000 | 1.0987 | 0.3333 |
| 1.0993 | 1.43 | 35000 | 1.0989 | 0.3333 |
| 1.1003 | 1.47 | 36000 | 1.0989 | 0.3333 |
| 1.0999 | 1.51 | 37000 | 1.0991 | 0.3333 |
| 1.0999 | 1.55 | 38000 | 1.0993 | 0.3333 |
| 1.0994 | 1.59 | 39000 | 1.0993 | 0.3333 |
| 1.0994 | 1.63 | 40000 | 1.0989 | 0.3333 |
| 1.0999 | 1.67 | 41000 | 1.0988 | 0.3333 |
| 1.0995 | 1.71 | 42000 | 1.0996 | 0.3333 |
| 1.1003 | 1.75 | 43000 | 1.0987 | 0.3333 |
| 1.0996 | 1.79 | 44000 | 1.0987 | 0.3333 |
| 1.0996 | 1.83 | 45000 | 1.0990 | 0.3333 |
| 1.0994 | 1.87 | 46000 | 1.0990 | 0.3333 |
| 1.0992 | 1.91 | 47000 | 1.1000 | 0.3333 |
| 1.0992 | 1.96 | 48000 | 1.0989 | 0.3333 |
| 1.0991 | 2.0 | 49000 | 1.0991 | 0.3333 |
| 1.099 | 2.04 | 50000 | 1.0987 | 0.3333 |
| 1.0992 | 2.08 | 51000 | 1.0987 | 0.3333 |
| 1.0995 | 2.12 | 52000 | 1.0988 | 0.3333 |
| 1.0994 | 2.16 | 53000 | 1.0989 | 0.3333 |
| 1.0994 | 2.2 | 54000 | 1.0989 | 0.3333 |
| 1.0993 | 2.24 | 55000 | 1.0988 | 0.3333 |
| 1.0988 | 2.28 | 56000 | 1.0986 | 0.3333 |
| 1.0995 | 2.32 | 57000 | 1.0986 | 0.3333 |
| 1.0991 | 2.36 | 58000 | 1.0988 | 0.3333 |
| 1.0989 | 2.4 | 59000 | 1.0987 | 0.3333 |
| 1.0991 | 2.44 | 60000 | 1.0990 | 0.3333 |
| 1.0992 | 2.49 | 61000 | 1.0989 | 0.3333 |
| 1.0992 | 2.53 | 62000 | 1.0987 | 0.3333 |
| 1.0989 | 2.57 | 63000 | 1.0986 | 0.3333 |
| 1.099 | 2.61 | 64000 | 1.0987 | 0.3333 |
| 1.0991 | 2.65 | 65000 | 1.0986 | 0.3333 |
| 1.0991 | 2.69 | 66000 | 1.0986 | 0.3333 |
| 1.0991 | 2.73 | 67000 | 1.0987 | 0.3333 |
| 1.0986 | 2.77 | 68000 | 1.0987 | 0.3333 |
| 1.0992 | 2.81 | 69000 | 1.0986 | 0.3333 |
| 1.0989 | 2.85 | 70000 | 1.0986 | 0.3333 |
| 1.099 | 2.89 | 71000 | 1.0987 | 0.3333 |
| 1.0989 | 2.93 | 72000 | 1.0986 | 0.3333 |
| 1.0989 | 2.97 | 73000 | 1.0986 | 0.3333 |
71f1ef3cd8d40d1ad20c27be98aac3c5
apache-2.0
['generated_from_trainer']
false
t5-base-pointer-mtop

This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the mtop dataset. It achieves the following results on the evaluation set:
- Loss: 0.1131
- Exact Match: 0.7199
a92ff815801909156dce760f83223b9c
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Exact Match |
|:---:|:---:|:---:|:---:|:---:|
| 1.7749 | 6.65 | 200 | 0.5892 | 0.0031 |
| 0.6021 | 13.33 | 400 | 0.5160 | 0.0139 |
| 0.6044 | 19.98 | 600 | 0.4080 | 0.0532 |
| 0.3302 | 26.65 | 800 | 0.1865 | 0.3620 |
| 0.1483 | 33.33 | 1000 | 0.1267 | 0.5105 |
| 0.0768 | 39.98 | 1200 | 0.1131 | 0.5298 |
| 0.0525 | 46.65 | 1400 | 0.1219 | 0.5414 |
| 0.0801 | 53.33 | 1600 | 0.1186 | 0.5275 |
| 0.0331 | 59.98 | 1800 | 0.1306 | 0.5423 |
| 0.0254 | 66.65 | 2000 | 0.1396 | 0.5396 |
| 0.0168 | 73.33 | 2200 | 0.1560 | 0.5436 |
| 0.0129 | 79.98 | 2400 | 0.1659 | 0.5494 |
| 0.0105 | 86.65 | 2600 | 0.1699 | 0.5423 |
| 0.0088 | 93.33 | 2800 | 0.1742 | 0.5472 |
| 0.0077 | 99.98 | 3000 | 0.1775 | 0.5468 |
860c13935783790fe2306b08ba819a7f
apache-2.0
['generated_from_trainer']
false
swin-tiny-patch4-window7-224-finetuned-brainTumorData This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
a200f15068179c7c9c7489c76f51a830
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 4
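The gradient-accumulation and warmup settings above combine as follows — a minimal plain-Python sketch, where `steps_per_epoch` is an assumed placeholder since the dataset size is not stated:

```python
# Hedged sketch (illustrative names): how the effective batch size and
# warmup steps follow from the listed hyperparameters.
train_batch_size = 32
gradient_accumulation_steps = 4
# gradients are accumulated over 4 micro-batches before each optimizer step
total_train_batch_size = train_batch_size * gradient_accumulation_steps  # 128, as reported

num_epochs = 4
steps_per_epoch = 500                   # hypothetical; depends on dataset size
total_steps = steps_per_epoch * num_epochs
warmup_steps = int(0.1 * total_steps)   # lr_scheduler_warmup_ratio: 0.1
```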
e2909760e6a9b88b111fbca1e8176d4a
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event', 'xlsr-fine-tuning-week']
false
wav2vec2-xls-r-300m-west-slavic-cv8 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the Common Voice 8 dataset of five similar languages with similar scripts: Czech, Slovak, Polish, Slovenian and Upper Sorbian. Training and validation sets were concatenated and shuffled. The evaluation set used during training was concatenated from the respective test sets and shuffled, while limiting each language to at most 2000 samples. During training, a WER of approximately 70 was achieved on this set.
6c06ca4553df9b852c278599f813d62e
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event', 'xlsr-fine-tuning-week']
false
Evaluation script ``` python eval.py --model_id comodoro/wav2vec2-xls-r-300m-west-slavic-cv8 --dataset mozilla-foundation/common_voice_8_0 --split test --config {lang} ```
24e449de2f055134d9db6a6092f7cd90
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event', 'xlsr-fine-tuning-week']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 50 - mixed_precision_training: Native AMP
b9131518fbd50e62ad03bf6ad52baf5f
openrail
['generated_from_trainer']
false
santacoder-finetuned-the-stack-bash-3 This model is a fine-tuned version of [bigcode/santacoder](https://huggingface.co/bigcode/santacoder) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: nan
e5169b4fbbffbb18fc98d561e8087e72
openrail
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 5000 - mixed_precision_training: Native AMP
121febce6e08eb2c6b6625f63b3d73f7
openrail
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0 | 0.1 | 500 | nan | | 0.0 | 0.2 | 1000 | nan | | 0.0 | 0.3 | 1500 | nan | | 0.0 | 0.4 | 2000 | nan | | 0.0 | 0.5 | 2500 | nan | | 0.0 | 0.6 | 3000 | nan | | 0.0 | 0.7 | 3500 | nan | | 0.0 | 0.8 | 4000 | nan | | 0.0 | 0.9 | 4500 | nan | | 0.0 | 1.0 | 5000 | nan |
801739f64c131f78a91fa313a949a0d9
apache-2.0
[]
false
Turkish Multi-label Intent Classification RoBERTa A multi-label RoBERTa model trained on data labeled to address the needs of earthquake victims. Evaluation results are given below. **Evaluation** - 'eval_loss': 0.18568251545368838, - 'eval_runtime': 2.7693, - 'eval_samples_per_second': 254.935, - 'eval_steps_per_second': 8.305, - 'epoch': 3.0 **Classification Report** ``` precision recall f1-score support Alakasiz 0.95 0.87 0.91 781 Barinma 0.86 0.52 0.65 234 Elektronik 0.00 0.00 0.00 171 Giysi 0.89 0.25 0.39 122 Kurtarma 0.86 0.78 0.82 472 Lojistik 0.00 0.00 0.00 123 Saglik 0.78 0.05 0.09 148 Su 0.92 0.11 0.20 96 Yagma 0.00 0.00 0.00 19 Yemek 0.94 0.42 0.58 158 micro avg 0.91 0.55 0.69 2324 macro avg 0.62 0.30 0.36 2324 weighted avg 0.78 0.55 0.61 2324 samples avg 0.69 0.63 0.65 2324 ```
0ac762aef0a2d35254454e1bb23122e7
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-health_facts This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the health_fact dataset. It achieves the following results on the evaluation set: - Loss: 1.1227 - Accuracy: 0.6285 - F1: 0.6545
7af581da52bb157fcd82a5dbadca6d68
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 1.1367 | 1.0 | 154 | 0.9423 | 0.5560 | 0.6060 | | 0.9444 | 2.0 | 308 | 0.9267 | 0.5733 | 0.6170 | | 0.8248 | 3.0 | 462 | 0.9483 | 0.5832 | 0.6256 | | 0.7213 | 4.0 | 616 | 1.0119 | 0.5815 | 0.6219 | | 0.608 | 5.0 | 770 | 1.1227 | 0.6285 | 0.6545 |
c62d9e766aa35b05e84ad3c223f54995
cc-by-sa-4.0
['text-classification', 'hate-speech']
false
roberta-base-frenk-hate Text classification model based on [`roberta-base`](https://huggingface.co/roberta-base) and fine-tuned on the [FRENK dataset](https://www.clarin.si/repository/xmlui/handle/11356/1433) comprising LGBT and migrant hate speech. Only the English subset of the data was used for fine-tuning, and the dataset was relabeled for binary classification (offensive or acceptable).
f55582d30a0287be3ed9fa6748b0aa34
cc-by-sa-4.0
['text-classification', 'hate-speech']
false
Fine-tuning hyperparameters Fine-tuning was performed with `simpletransformers`. Beforehand a brief hyperparameter optimisation was performed and the presumed optimal hyperparameters are: ```python model_args = { "num_train_epochs": 6, "learning_rate": 3e-6, "train_batch_size": 69} ```
50e3f1b8b9b092618f18184172aeebec
cc-by-sa-4.0
['text-classification', 'hate-speech']
false
Performance The same pipeline was run with two other transformer models and `fasttext` for comparison. Accuracy and macro F1 score were recorded for each of the 6 fine-tuning sessions and analyzed post festum. | model | average accuracy | average macro F1| |---|---|---| |roberta-base-frenk-hate|0.7915|0.7785| |xlm-roberta-large |0.7904|0.77876| |xlm-roberta-base |0.7577|0.7402| |fasttext|0.725 |0.707 | From the recorded accuracies and macro F1 scores, p-values were also calculated: Comparison with `xlm-roberta-base`: | test | accuracy p-value | macro F1 p-value| | --- | --- | --- | |Wilcoxon|0.00781|0.00781| |Mann-Whitney U-test|0.00108|0.00108| |Student t-test | 1.35e-08 | 1.05e-07| Comparison with `xlm-roberta-large` yielded inconclusive results. `roberta-base` has average accuracy 0.7915, while `xlm-roberta-large` has average accuracy of 0.7904. If macro F1 scores were to be compared, `roberta-base` actually has a lower average than `xlm-roberta-large`: 0.77852 vs 0.77876 respectively. The same statistical tests were performed with the premise that `roberta-base` has greater metrics, and the results are given below. | test | accuracy p-value | macro F1 p-value| | --- | --- | --- | |Wilcoxon|0.188|0.406| |Mann-Whitney|0.375|0.649| |Student t-test | 0.681| 0.934| With the reversed premise (i.e., that `xlm-roberta-large` has greater metrics) the Wilcoxon p-value for macro F1 scores reaches 0.656, the Mann-Whitney p-value is 0.399, and of course the Student p-value stays the same. It was therefore concluded that the performance of the two models is not statistically significantly different from one another.
877eeaaa1d0c46ff80146c97595ab011
cc-by-sa-4.0
['text-classification', 'hate-speech']
false
Use examples ```python from simpletransformers.classification import ClassificationModel model_args = { "num_train_epochs": 6, "learning_rate": 3e-6, "train_batch_size": 69} model = ClassificationModel( "roberta", "5roop/roberta-base-frenk-hate", use_cuda=True, args=model_args ) predictions, logit_output = model.predict(["Build the wall", "Build the wall of trust"] ) predictions ```
8a229779f1d3b382e1c14e4e8cceec39
cc-by-sa-4.0
['text-classification', 'hate-speech']
false
Citation If you use the model, please cite the following paper on which the original model is based: ``` @article{DBLP:journals/corr/abs-1907-11692, author = {Yinhan Liu and Myle Ott and Naman Goyal and Jingfei Du and Mandar Joshi and Danqi Chen and Omer Levy and Mike Lewis and Luke Zettlemoyer and Veselin Stoyanov}, title = {RoBERTa: {A} Robustly Optimized {BERT} Pretraining Approach}, journal = {CoRR}, volume = {abs/1907.11692}, year = {2019}, url = {http://arxiv.org/abs/1907.11692}, archivePrefix = {arXiv}, eprint = {1907.11692}, timestamp = {Thu, 01 Aug 2019 08:59:33 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1907-11692.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` and the dataset used for fine-tuning: ``` @misc{ljubešić2019frenk, title={The FRENK Datasets of Socially Unacceptable Discourse in Slovene and English}, author={Nikola Ljubešić and Darja Fišer and Tomaž Erjavec}, year={2019}, eprint={1906.02045}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/1906.02045} } ```
4da0f0a2820ec068773b19b98d4af87b
apache-2.0
['generated_from_trainer']
false
bert-base-uncased-issues-128 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2526
fcaf140b45db13a8bac1bc0d22516a79
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 16 - mixed_precision_training: Native AMP
667db5c171ffad19769567b74760accf
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1071 | 1.0 | 291 | 1.6964 | | 1.6421 | 2.0 | 582 | 1.4279 | | 1.4853 | 3.0 | 873 | 1.3924 | | 1.4014 | 4.0 | 1164 | 1.3701 | | 1.3388 | 5.0 | 1455 | 1.1944 | | 1.283 | 6.0 | 1746 | 1.2795 | | 1.2394 | 7.0 | 2037 | 1.2671 | | 1.2014 | 8.0 | 2328 | 1.2084 | | 1.1668 | 9.0 | 2619 | 1.1783 | | 1.14 | 10.0 | 2910 | 1.2076 | | 1.1277 | 11.0 | 3201 | 1.2081 | | 1.1053 | 12.0 | 3492 | 1.1628 | | 1.0819 | 13.0 | 3783 | 1.2544 | | 1.0763 | 14.0 | 4074 | 1.1695 | | 1.0634 | 15.0 | 4365 | 1.1157 | | 1.0637 | 16.0 | 4656 | 1.2526 |
ea73aab5b1a193c180ce7fe7b0234098
mit
['huggan', 'gan', 'image-to-image', 'huggingnft', 'nft', 'image', 'images']
false
Model description CycleGAN for unpaired image-to-image translation. Given two image domains A and B, the following components are trained end-to-end to translate between the domains: - A generator A to B, named G_AB, conditioned on an image from A - A generator B to A, named G_BA, conditioned on an image from B - A domain classifier D_A, associated with G_AB - A domain classifier D_B, associated with G_BA At inference time, G_AB or G_BA is used to translate images, respectively A to B or B to A. In the general setting, this technique provides style transfer between the selected image domains A and B: a translation generated by G_AB from a domain-A image resembles the distribution of images in domain B, and vice versa for the generator G_BA. Within this framework, the technique has been used to perform style transfer between NFT collections. One collection is selected as domain A, another as domain B, and the CycleGAN provides forward and backward translation between them. This has been shown to allow high-quality translation even in the absence of paired sample/ground-truth data. In particular, the model performs well with stationary backgrounds (no drastic texture changes in the appearance of backgrounds), as it is capable of recognizing the attributes of each of the elements of an NFT collection. An attribute can be a variation in the type of worn fashion items such as sunglasses, earrings, and clothes, or a face or body attribute, with respect to a common template model of the given NFT collection.
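The cycle-consistency idea underlying this setup can be sketched with toy stand-in generators — the arithmetic functions below are illustrative assumptions, not the trained networks:

```python
# Framework-free sketch of CycleGAN cycle consistency.
def G_AB(x):
    return x + 1.0  # hypothetical translator: domain A -> domain B

def G_BA(x):
    return x - 1.0  # hypothetical translator: domain B -> domain A

def cycle_consistency(x, forward, backward):
    # L1 reconstruction error after a full cycle (A -> B -> A or B -> A -> B)
    return abs(backward(forward(x)) - x)

# Perfectly inverse generators reconstruct the input exactly, so both
# cycle losses are zero in this toy case.
loss_a = cycle_consistency(3.0, G_AB, G_BA)  # A -> B -> A
loss_b = cycle_consistency(5.0, G_BA, G_AB)  # B -> A -> B
```

Training pushes the real generators toward this ideal while the domain classifiers enforce that translations look like the target collection.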
0b9083546cdba493bbff09e4f07e4ead
mit
['huggan', 'gan', 'image-to-image', 'huggingnft', 'nft', 'image', 'images']
false
How to use ```python import torch from PIL import Image from huggan.pytorch.cyclegan.modeling_cyclegan import GeneratorResNet from torchvision import transforms as T from torchvision.transforms import Compose, Resize, ToTensor, Normalize from torchvision.utils import make_grid from huggingface_hub import hf_hub_download, file_download from accelerate import Accelerator import json def load_lightweight_model(model_name): file_path = file_download.hf_hub_download( repo_id=model_name, filename="config.json" ) config = json.loads(open(file_path).read()) organization_name, name = model_name.split("/") model = Trainer(**config, organization_name=organization_name, name=name) model.load(use_cpu=True) model.accelerator = Accelerator() return model def get_concat_h(im1, im2): dst = Image.new('RGB', (im1.width + im2.width, im1.height)) dst.paste(im1, (0, 0)) dst.paste(im2, (im1.width, 0)) return dst n_channels = 3 image_size = 256 input_shape = (image_size, image_size) transform = Compose([ T.ToPILImage(), T.Resize(input_shape), ToTensor(), Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)), ])
8bb1345669cc1fdf223448a038d89add
mit
['huggan', 'gan', 'image-to-image', 'huggingnft', 'nft', 'image', 'images']
false
B = Translator(GAN(A)) translator = GeneratorResNet.from_pretrained(f'huggingnft/{model_name}', input_shape=(n_channels, image_size, image_size), num_residual_blocks=9)
0e9cb3865deeff21c274befee2efd383
mit
['huggan', 'gan', 'image-to-image', 'huggingnft', 'nft', 'image', 'images']
false
load the GAN generator of source images that will be translated by the translation model model = load_lightweight_model(f"huggingnft/{model_name.split('__2__')[0]}") collectionA = model.generate_app( num=timestamped_filename(), nrow=nrows, checkpoint=-1, types="default" )[1]
d335e54973b24d8af2a219580fbf619f
mit
['huggan', 'gan', 'image-to-image', 'huggingnft', 'nft', 'image', 'images']
false
translate the resized collectionA to collectionB collectionB = translator(input) out_transform = T.ToPILImage() results = [] for collA_image, collB_image in zip(input, collectionB): results.append( get_concat_h(out_transform(make_grid(collA_image, nrow=1, normalize=True)), out_transform(make_grid(collB_image, nrow=1, normalize=True))) ) ```
62455ca5cf4cb164d7a9bb9fc32891e3
mit
['huggan', 'gan', 'image-to-image', 'huggingnft', 'nft', 'image', 'images']
false
Limitations and bias Translation between collections provides exceptional output images when the NFT collections portray their subjects in the same way. If the backgrounds vary too much within either of the collections, performance degrades or many more training iterations are required to achieve acceptable results.
1a03dc2c371fd6dc88cee7ccbb07ef55
mit
['huggan', 'gan', 'image-to-image', 'huggingnft', 'nft', 'image', 'images']
false
Training data The CycleGAN model is trained on an unpaired dataset of samples from two selected NFT collections: collectionA and collectionB. To this end, the two collections are loaded by means of the load_dataset function from the huggingface datasets library, as follows. A list of all available collections is available at [huggingNFT](https://huggingface.co/huggingnft) ```python from datasets import load_dataset collectionA = load_dataset("huggingnft/COLLECTION_A") collectionB = load_dataset("huggingnft/COLLECTION_B") ```
b68ef883ac3598060b27d8ef5a5f2e51
mit
['huggan', 'gan', 'image-to-image', 'huggingnft', 'nft', 'image', 'images']
false
Preprocessing The following transformations are applied to each input sample of collectionA and collectionB. The input size is fixed to RGB images of height, width = 256, 256 ```python n_channels = 3 image_size = 256 input_shape = (image_size, image_size) transform = Compose([ T.ToPILImage(), T.Resize(input_shape), ToTensor(), Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)), ]) ```
166e522aabc6270cd85311b8780ede6e
mit
['huggan', 'gan', 'image-to-image', 'huggingnft', 'nft', 'image', 'images']
false
Hyperparameters The following configuration has been kept fixed for all translation models: - learning rate 0.0002 - number of epochs 200 - learning rate decay activation at epoch 80 - number of residual blocks of the cyclegan 9 - cycle loss weight 10.0 - identity loss weight 5.0 - optimizer ADAM with beta1 0.5 and beta2 0.999 - batch size 8 - NO mixed precision training
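A minimal sketch of how the fixed cycle and identity weights above would combine into the generator objective — the loss values passed in are illustrative placeholders, not measurements from this training run:

```python
# Hedged sketch: weighted generator objective with the fixed weights above.
cycle_loss_weight = 10.0
identity_loss_weight = 5.0

def generator_objective(adv_loss, cycle_loss, identity_loss):
    # adversarial term + weighted cycle-consistency + weighted identity term
    return (adv_loss
            + cycle_loss_weight * cycle_loss
            + identity_loss_weight * identity_loss)

# Placeholder loss values purely for illustration.
total = generator_objective(adv_loss=1.0, cycle_loss=0.2, identity_loss=0.1)
```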
fba9295a8ca367438bb686ce48cb2f34
mit
['huggan', 'gan', 'image-to-image', 'huggingnft', 'nft', 'image', 'images']
false
Training reports [Cryptopunks to boreapeyachtclub](https://wandb.ai/chris1nexus/experiments--experiments_cyclegan_punk_to_apes_HQ--0/reports/CycleGAN-training-report--VmlldzoxODUxNzQz?accessToken=vueurpbhd2i8n347j880yakggs0sqdf7u0hpz3bpfsbrxcmk1jk4obg18f6wfk9w) [Boreapeyachtclub to mutant-ape-yacht-club](https://wandb.ai/chris1nexus/experiments--my_paperspace_boredapeyachtclub__2__mutant-ape-yacht-club--11/reports/CycleGAN-training-report--VmlldzoxODUxNzg4?accessToken=jpyviwn7kdf5216ycrthwp6l8t3heb0lt8djt7dz12guu64qnpdh3ekecfcnoahu)
a71892fabeae586b29d269a8d5a67e09
mit
['huggan', 'gan', 'image-to-image', 'huggingnft', 'nft', 'image', 'images']
false
Generated Images In the provided images, row0 and row2 represent real images from the respective collections. Row1 is the translation of the immediate above images in row0 by means of the G_AB translation model. Row3 is the translation of the immediate above images in row2 by means of the G_BA translation model. Visualization over the training iterations for [boreapeyachtclub to mutant-ape-yacht-club](https://wandb.ai/chris1nexus/experiments--my_paperspace_boredapeyachtclub__2__mutant-ape-yacht-club--11/reports/Shared-panel-22-04-15-08-04-99--VmlldzoxODQ0MDI3?accessToken=45m3kxex5m3rpev3s6vmrv69k3u9p9uxcsp2k90wvbxwxzlqbqjqlnmgpl9265c0) Visualization over the training iterations for [Cryptopunks to boreapeyachtclub](https://wandb.ai/chris1nexus/experiments--experiments_cyclegan_punk_to_apes_HQ--0/reports/Shared-panel-22-04-17-11-04-83--VmlldzoxODUxNjk5?accessToken=o25si6nflp2xst649vt6ayt56bnb95mxmngt1ieso091j2oazmqnwaf4h78vc2tu)
d7d82554c36388a4ce127547adcffca3
mit
['huggan', 'gan', 'image-to-image', 'huggingnft', 'nft', 'image', 'images']
false
References ```bibtex @misc{https://doi.org/10.48550/arxiv.1703.10593, doi = {10.48550/ARXIV.1703.10593}, url = {https://arxiv.org/abs/1703.10593}, author = {Zhu, Jun-Yan and Park, Taesung and Isola, Phillip and Efros, Alexei A.}, keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks}, publisher = {arXiv}, year = {2017}, copyright = {arXiv.org perpetual, non-exclusive license} } ```
f442392f19552314ac1d207ae50b630e
creativeml-openrail-m
['text-to-image']
false
drag_queen_Shangela on Stable Diffusion via Dreambooth trained on the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
ec51bc8fcd270fa53d073bd868e70731
creativeml-openrail-m
['text-to-image']
false
Model by chrisin2d This is the Stable Diffusion model fine-tuned on the drag_queen_Shangela concept, taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt(s)`: **** You can also train your own concepts and upload them to the library by using [the fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb). You can run your new concept via the A1111 Colab: [Fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts) Sample pictures of this concept:
de5dabab7f94032ca9c14bb77d2873cf
apache-2.0
['multiberts', 'multiberts-seed_1', 'multiberts-seed_1-step_700k']
false
MultiBERTs, Intermediate Checkpoint - Seed 1, Step 700k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model
fda2b0329de14e4c615f1577f1deabc5
apache-2.0
['multiberts', 'multiberts-seed_1', 'multiberts-seed_1-step_700k']
false
How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_700k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_700k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_700k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_700k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ```
341a58d10a4848d55ec1b152b6152e38
apache-2.0
[]
false
doc2query/msmarco-german-mt5-base-v1 This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)). It can be used for: - **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example of how to use docT5query with Pyserini. - **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. In our [GPL-Paper](https://arxiv.org/abs/2112.07577) / [GPL Example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html
d1c3975a93e36094cf8d13a4bef95ad7
apache-2.0
[]
false
gpl-generative-pseudo-labeling) we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
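The document-expansion step described above can be sketched as follows; `expand_document` is a hypothetical helper for illustration, not part of any search library:

```python
# docT5query-style expansion: append the generated queries to the passage
# before BM25 indexing, so their terms count as document terms.
def expand_document(passage, generated_queries):
    return passage + " " + " ".join(generated_queries)

expanded = expand_document(
    "Python is a high-level programming language.",
    ["what is python", "python programming language definition"],
)
# `expanded` would then be indexed in Elasticsearch/OpenSearch/Lucene
# in place of the original passage.
```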
62800460204278538aacd001ef7ac0f6
apache-2.0
[]
false
Usage ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM import torch model_name = 'doc2query/msmarco-german-mt5-base-v1' tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSeq2SeqLM.from_pretrained(model_name) text = "Python ist eine universelle, üblicherweise interpretierte, höhere Programmiersprache. Sie hat den Anspruch, einen gut lesbaren, knappen Programmierstil zu fördern. So werden beispielsweise Blöcke nicht durch geschweifte Klammern, sondern durch Einrückungen strukturiert." def create_queries(para): input_ids = tokenizer.encode(para, return_tensors='pt') with torch.no_grad():
12f70ff7a5d308936ec9d953f9744919
apache-2.0
[]
false
Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality sampling_outputs = model.generate( input_ids=input_ids, max_length=64, do_sample=True, top_p=0.95, top_k=10, num_return_sequences=5 )
60634c0e4b72b54aa27a4532b6752ea1
apache-2.0
[]
false
Here we use beam search. It generates better quality queries, but with less diversity beam_outputs = model.generate( input_ids=input_ids, max_length=64, num_beams=5, no_repeat_ngram_size=2, num_return_sequences=5, early_stopping=True ) print("Paragraph:") print(para) print("\nBeam Outputs:") for i in range(len(beam_outputs)): query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True) print(f'{i + 1}: {query}') print("\nSampling Outputs:") for i in range(len(sampling_outputs)): query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True) print(f'{i + 1}: {query}') create_queries(text) ``` **Note:** `model.generate()` is non-deterministic for top_p/top_k sampling. It produces different queries each time you run it.
f6aa352ce09c956345b5faa5c1b489cf
apache-2.0
[]
false
Training This model was obtained by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see the `train_script.py` in this repository. The input text was truncated to 320 word pieces. Output text was generated up to 64 word pieces. This model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
f118421748aadba781c0ca05fed96259
apache-2.0
['generated_from_trainer']
false
wav2vec2_xlsr50k_english_phoneme This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on [the TIMIT dataset](https://catalog.ldc.upenn.edu/LDC93s1). It achieves the following results on the evaluation set: - Loss: 0.5783 - Cer: 0.1178
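The reported CER is conventionally computed as the character-level Levenshtein distance divided by the reference length — a standard-definition sketch, not the exact evaluation script used for this model:

```python
# Character error rate via dynamic-programming Levenshtein distance.
def edit_distance(ref, hyp):
    dp = list(range(len(hyp) + 1))  # distances for the empty reference prefix
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,        # hypothesis misses a reference char
                dp[j - 1] + 1,    # hypothesis has an extra char
                prev + (r != h),  # substitution (free if chars match)
            )
    return dp[-1]

def cer(reference, hypothesis):
    # edit operations needed, normalized by reference length
    return edit_distance(reference, hypothesis) / len(reference)
```

In practice a library such as `jiwer` is often used for this metric rather than a hand-rolled implementation.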
4cda6563e792b6d547a0c0f14588583d