Dataset schema:

| column | dtype | string lengths |
|:---|:---|:---|
| license | string | 2–30 |
| tags | string | 2–513 |
| is_nc | bool | 1 class |
| readme_section | string | 201–597k |
| hash | string | 32–32 |
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
bfbe5b5bae95bcf65688cda43b90e322
apache-2.0
['generated_from_trainer']
false
edos-2023-baseline-distilbert-base-uncased-label_sexist

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.4852
- F1: 0.7874
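A minimal inference sketch for this checkpoint; the Hub namespace is not stated in this card, so the model id below is a placeholder:

```python
from transformers import pipeline

# placeholder repo id -- substitute the actual Hub path of this checkpoint
classifier = pipeline(
    "text-classification",
    model="<org>/edos-2023-baseline-distilbert-base-uncased-label_sexist",
)
print(classifier("An example sentence to score."))
```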
a7016c1dd9b22f3439aa073d195d2bfd
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 8
- mixed_precision_training: Native AMP
b9130b014407e8d6631324eb6ad3f8f8
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4199 | 1.14 | 400 | 0.3911 | 0.7571 |
| 0.293 | 2.29 | 800 | 0.3778 | 0.7899 |
| 0.2348 | 3.43 | 1200 | 0.4102 | 0.7894 |
| 0.1895 | 4.57 | 1600 | 0.4417 | 0.7835 |
| 0.1392 | 5.71 | 2000 | 0.4852 | 0.7874 |
06eee3d957d442cee831a2d85176cb34
apache-2.0
['generated_from_trainer']
false
model_syllable_onSet1

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1815
- 0 Precision: 1.0
- 0 Recall: 0.9677
- 0 F1-score: 0.9836
- 0 Support: 31
- 1 Precision: 0.9545
- 1 Recall: 1.0
- 1 F1-score: 0.9767
- 1 Support: 21
- 2 Precision: 1.0
- 2 Recall: 1.0
- 2 F1-score: 1.0
- 2 Support: 30
- 3 Precision: 1.0
- 3 Recall: 1.0
- 3 F1-score: 1.0
- 3 Support: 16
- Accuracy: 0.9898
- Macro avg Precision: 0.9886
- Macro avg Recall: 0.9919
- Macro avg F1-score: 0.9901
- Macro avg Support: 98
- Weighted avg Precision: 0.9903
- Weighted avg Recall: 0.9898
- Weighted avg F1-score: 0.9898
- Weighted avg Support: 98
- Wer: 0.7883
- Mtrix: [[0, 1, 2, 3], [0, 30, 1, 0, 0], [1, 0, 21, 0, 0], [2, 0, 0, 30, 0], [3, 0, 0, 0, 16]]
2fd348d18bc0e3eaf0431a99178b5413
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 70
- mixed_precision_training: Native AMP
5d440deb2b73680e1e06c9c3f48bdd60
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | 0 Precision | 0 Recall | 0 F1-score | 0 Support | 1 Precision | 1 Recall | 1 F1-score | 1 Support | 2 Precision | 2 Recall | 2 F1-score | 2 Support | 3 Precision | 3 Recall | 3 F1-score | 3 Support | Accuracy | Macro avg Precision | Macro avg Recall | Macro avg F1-score | Macro avg Support | Weighted avg Precision | Weighted avg Recall | Weighted avg F1-score | Weighted avg Support | Wer | Mtrix |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1.6949 | 4.16 | 100 | 1.6177 | 1.0 | 0.2581 | 0.4103 | 31 | 0.0 | 0.0 | 0.0 | 21 | 0.3333 | 1.0 | 0.5 | 30 | 0.0 | 0.0 | 0.0 | 16 | 0.3878 | 0.3333 | 0.3145 | 0.2276 | 98 | 0.4184 | 0.3878 | 0.2828 | 98 | 0.9655 | [[0, 1, 2, 3], [0, 8, 0, 23, 0], [1, 0, 0, 21, 0], [2, 0, 0, 30, 0], [3, 0, 0, 16, 0]] |
| 1.5778 | 8.33 | 200 | 1.3535 | 1.0 | 0.2581 | 0.4103 | 31 | 0.0 | 0.0 | 0.0 | 21 | 0.3333 | 1.0 | 0.5 | 30 | 0.0 | 0.0 | 0.0 | 16 | 0.3878 | 0.3333 | 0.3145 | 0.2276 | 98 | 0.4184 | 0.3878 | 0.2828 | 98 | 0.9655 | [[0, 1, 2, 3], [0, 8, 0, 23, 0], [1, 0, 0, 21, 0], [2, 0, 0, 30, 0], [3, 0, 0, 16, 0]] |
| 1.2861 | 12.49 | 300 | 1.0938 | 1.0 | 0.2581 | 0.4103 | 31 | 0.0 | 0.0 | 0.0 | 21 | 0.3333 | 1.0 | 0.5 | 30 | 0.0 | 0.0 | 0.0 | 16 | 0.3878 | 0.3333 | 0.3145 | 0.2276 | 98 | 0.4184 | 0.3878 | 0.2828 | 98 | 0.9655 | [[0, 1, 2, 3], [0, 8, 0, 23, 0], [1, 0, 0, 21, 0], [2, 0, 0, 30, 0], [3, 0, 0, 16, 0]] |
| 0.954 | 16.65 | 400 | 0.9480 | 1.0 | 0.2581 | 0.4103 | 31 | 0.0 | 0.0 | 0.0 | 21 | 0.3333 | 1.0 | 0.5 | 30 | 0.0 | 0.0 | 0.0 | 16 | 0.3878 | 0.3333 | 0.3145 | 0.2276 | 98 | 0.4184 | 0.3878 | 0.2828 | 98 | 0.9655 | [[0, 1, 2, 3], [0, 8, 0, 23, 0], [1, 0, 0, 21, 0], [2, 0, 0, 30, 0], [3, 0, 0, 16, 0]] |
| 0.8849 | 20.82 | 500 | 0.9231 | 1.0 | 0.2581 | 0.4103 | 31 | 0.0 | 0.0 | 0.0 | 21 | 0.3333 | 1.0 | 0.5 | 30 | 0.0 | 0.0 | 0.0 | 16 | 0.3878 | 0.3333 | 0.3145 | 0.2276 | 98 | 0.4184 | 0.3878 | 0.2828 | 98 | 0.9655 | [[0, 1, 2, 3], [0, 8, 0, 23, 0], [1, 0, 0, 21, 0], [2, 0, 0, 30, 0], [3, 0, 0, 16, 0]] |
| 0.8674 | 24.98 | 600 | 0.8767 | 1.0 | 0.2581 | 0.4103 | 31 | 0.0 | 0.0 | 0.0 | 21 | 0.3333 | 1.0 | 0.5 | 30 | 0.0 | 0.0 | 0.0 | 16 | 0.3878 | 0.3333 | 0.3145 | 0.2276 | 98 | 0.4184 | 0.3878 | 0.2828 | 98 | 0.9655 | [[0, 1, 2, 3], [0, 8, 0, 23, 0], [1, 0, 0, 21, 0], [2, 0, 0, 30, 0], [3, 0, 0, 16, 0]] |
| 0.7921 | 29.16 | 700 | 0.7519 | 1.0 | 0.9677 | 0.9836 | 31 | 0.9545 | 1.0 | 0.9767 | 21 | 1.0 | 1.0 | 1.0 | 30 | 1.0 | 1.0 | 1.0 | 16 | 0.9898 | 0.9886 | 0.9919 | 0.9901 | 98 | 0.9903 | 0.9898 | 0.9898 | 98 | 1.0 | [[0, 1, 2, 3], [0, 30, 1, 0, 0], [1, 0, 21, 0, 0], [2, 0, 0, 30, 0], [3, 0, 0, 0, 16]] |
| 0.7851 | 33.33 | 800 | 0.8212 | 1.0 | 0.9032 | 0.9492 | 31 | 0.84 | 1.0 | 0.9130 | 21 | 1.0 | 1.0 | 1.0 | 30 | 1.0 | 0.9375 | 0.9677 | 16 | 0.9592 | 0.96 | 0.9602 | 0.9575 | 98 | 0.9657 | 0.9592 | 0.9600 | 98 | 1.0 | [[0, 1, 2, 3], [0, 28, 3, 0, 0], [1, 0, 21, 0, 0], [2, 0, 0, 30, 0], [3, 0, 1, 0, 15]] |
| 0.7657 | 37.49 | 900 | 0.7504 | 1.0 | 0.9677 | 0.9836 | 31 | 0.9130 | 1.0 | 0.9545 | 21 | 1.0 | 1.0 | 1.0 | 30 | 1.0 | 0.9375 | 0.9677 | 16 | 0.9796 | 0.9783 | 0.9763 | 0.9765 | 98 | 0.9814 | 0.9796 | 0.9798 | 98 | 1.0 | [[0, 1, 2, 3], [0, 30, 1, 0, 0], [1, 0, 21, 0, 0], [2, 0, 0, 30, 0], [3, 0, 1, 0, 15]] |
| 0.688 | 41.65 | 1000 | 0.6897 | 1.0 | 1.0 | 1.0 | 31 | 0.9130 | 1.0 | 0.9545 | 21 | 1.0 | 0.9667 | 0.9831 | 30 | 1.0 | 0.9375 | 0.9677 | 16 | 0.9796 | 0.9783 | 0.9760 | 0.9763 | 98 | 0.9814 | 0.9796 | 0.9798 | 98 | 0.7008 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 21, 0, 0], [2, 0, 1, 29, 0], [3, 0, 1, 0, 15]] |
| 0.4415 | 45.82 | 1100 | 0.1917 | 1.0 | 1.0 | 1.0 | 31 | 1.0 | 1.0 | 1.0 | 21 | 1.0 | 1.0 | 1.0 | 30 | 1.0 | 1.0 | 1.0 | 16 | 1.0 | 1.0 | 1.0 | 1.0 | 98 | 1.0 | 1.0 | 1.0 | 98 | 0.6974 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 21, 0, 0], [2, 0, 0, 30, 0], [3, 0, 0, 0, 16]] |
| 0.3074 | 49.98 | 1200 | 0.1865 | 1.0 | 0.9677 | 0.9836 | 31 | 0.9545 | 1.0 | 0.9767 | 21 | 1.0 | 1.0 | 1.0 | 30 | 1.0 | 1.0 | 1.0 | 16 | 0.9898 | 0.9886 | 0.9919 | 0.9901 | 98 | 0.9903 | 0.9898 | 0.9898 | 98 | 0.6686 | [[0, 1, 2, 3], [0, 30, 1, 0, 0], [1, 0, 21, 0, 0], [2, 0, 0, 30, 0], [3, 0, 0, 0, 16]] |
| 0.2069 | 54.16 | 1300 | 0.1821 | 1.0 | 0.9677 | 0.9836 | 31 | 0.9545 | 1.0 | 0.9767 | 21 | 1.0 | 1.0 | 1.0 | 30 | 1.0 | 1.0 | 1.0 | 16 | 0.9898 | 0.9886 | 0.9919 | 0.9901 | 98 | 0.9903 | 0.9898 | 0.9898 | 98 | 0.7043 | [[0, 1, 2, 3], [0, 30, 1, 0, 0], [1, 0, 21, 0, 0], [2, 0, 0, 30, 0], [3, 0, 0, 0, 16]] |
| 0.1791 | 58.33 | 1400 | 0.1866 | 1.0 | 0.9677 | 0.9836 | 31 | 0.9130 | 1.0 | 0.9545 | 21 | 1.0 | 0.9667 | 0.9831 | 30 | 1.0 | 1.0 | 1.0 | 16 | 0.9796 | 0.9783 | 0.9836 | 0.9803 | 98 | 0.9814 | 0.9796 | 0.9799 | 98 | 0.6893 | [[0, 1, 2, 3], [0, 30, 1, 0, 0], [1, 0, 21, 0, 0], [2, 0, 1, 29, 0], [3, 0, 0, 0, 16]] |
| 0.1717 | 62.49 | 1500 | 0.1839 | 1.0 | 0.9677 | 0.9836 | 31 | 0.9545 | 1.0 | 0.9767 | 21 | 1.0 | 1.0 | 1.0 | 30 | 1.0 | 1.0 | 1.0 | 16 | 0.9898 | 0.9886 | 0.9919 | 0.9901 | 98 | 0.9903 | 0.9898 | 0.9898 | 98 | 0.7848 | [[0, 1, 2, 3], [0, 30, 1, 0, 0], [1, 0, 21, 0, 0], [2, 0, 0, 30, 0], [3, 0, 0, 0, 16]] |
| 0.1571 | 66.65 | 1600 | 0.1799 | 1.0 | 0.9677 | 0.9836 | 31 | 0.9545 | 1.0 | 0.9767 | 21 | 1.0 | 1.0 | 1.0 | 30 | 1.0 | 1.0 | 1.0 | 16 | 0.9898 | 0.9886 | 0.9919 | 0.9901 | 98 | 0.9903 | 0.9898 | 0.9898 | 98 | 0.7929 | [[0, 1, 2, 3], [0, 30, 1, 0, 0], [1, 0, 21, 0, 0], [2, 0, 0, 30, 0], [3, 0, 0, 0, 16]] |
7acf056f7d45cc74ef83923ee3a4791a
apache-2.0
['whisper-event', 'generated_from_trainer']
false
whisper-base-uk

This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set:
- eval_loss: 1.3201
- eval_wer: 10.2869
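A minimal transcription sketch with the standard transformers ASR pipeline; the Hub namespace is not given in this card, so the model id and audio file name are placeholders:

```python
from transformers import pipeline

# placeholder repo id -- substitute the actual Hub path of this checkpoint
asr = pipeline("automatic-speech-recognition", model="<org>/whisper-base-uk")
# Whisper expects 16 kHz mono audio; the pipeline resamples common formats
print(asr("sample_uk.wav")["text"])
```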
6d5f789d97e5322d5b58f3663cc872b6
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
190bf89651be4795c55ca31709151b0e
apache-2.0
['automatic-speech-recognition', 'en']
false
exp_w2v2r_en_xls-r_accent_us-5_england-5_s334

Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
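A minimal usage sketch with the HuggingSound tool mentioned above; the Hub namespace and audio file paths are assumptions:

```python
from huggingsound import SpeechRecognitionModel

# placeholder repo id -- substitute the actual Hub path of this checkpoint
model = SpeechRecognitionModel("<org>/exp_w2v2r_en_xls-r_accent_us-5_england-5_s334")

# input audio must be sampled at 16 kHz, as the card notes
audio_paths = ["/path/to/sample1.wav", "/path/to/sample2.wav"]
transcriptions = model.transcribe(audio_paths)
```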
7eb3a35720500e41998c078faa9befe5
other
['generated_from_keras_callback']
false
TheNateTCY/fulltrain_optmodel

This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 1.8560
- Validation Loss: 1.2171
- Epoch: 0
036cc92afd3191597846eeb680218574
other
['generated_from_keras_callback']
false
Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 8375, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
14e408ae7b30a6a3442118629e857ae4
apache-2.0
['tabular-classification', 'baseline-trainer']
false
Baseline Model trained on irisg444_4c0 to apply classification on Species

**Metrics of the best model:**

| metric | value |
|:---|---:|
| accuracy | 0.953333 |
| recall_macro | 0.953333 |
| precision_macro | 0.956229 |
| f1_macro | 0.953216 |

Name: LogisticRegression(class_weight='balanced', max_iter=1000), dtype: float64

**See model plot below:**
527a4ae4344dfc9d8e342f55061d9e4c
apache-2.0
['tabular-classification', 'baseline-trainer']
false
d3ed564c952ec8960b4337166b2512fb
apache-2.0
['tabular-classification', 'baseline-trainer']
false
    EasyPreprocessor(types=
                   continuous  dirty_float  ...  free_string  useless
    SepalLengthCm        True        False  ...        False    False
    SepalWidthCm         True        False  ...        False    False
    PetalLengthCm        True        False  ...        False    False
    PetalWidthCm         True        False  ...        False    False
    [4 rows x 7 columns])
0ae40a15e737d25be07b1a098a1ebde8
apache-2.0
['tabular-classification', 'baseline-trainer']
false
**In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook. On GitHub, the HTML representation is unable to render; please try loading this page with nbviewer.org.**
10a5838bc43a55812f31de59ecb52fad
apache-2.0
['tabular-classification', 'baseline-trainer']
false
    Pipeline(steps=[('easypreprocessor',
                     EasyPreprocessor(types=
                                    continuous  dirty_float  ...  free_string  useless
                      SepalLengthCm       True        False  ...        False    False
                      SepalWidthCm        True        False  ...        False    False
                      PetalLengthCm       True        False  ...        False    False
                      PetalWidthCm        True        False  ...        False    False
                      [4 rows x 7 columns])),
                    ('logisticregression',
                     LogisticRegression(C=1, class_weight='balanced', max_iter=1000))])
70fd586d778a7927731e4415f5c0ab36
apache-2.0
['tabular-classification', 'baseline-trainer']
false
**Disclaimer:** This model is trained with the dabl library as a baseline; for better results, use [AutoTrain](https://huggingface.co/autotrain).

**Logs of training**, including the models tried in the process, can be found in logs.txt
add4638b3f16a699897cbc9fe8cf0202
apache-2.0
['translation']
false
opus-mt-fr-bi

* source languages: fr
* target languages: bi
* OPUS readme: [fr-bi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-bi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-bi/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-bi/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-bi/opus-2020-01-20.eval.txt)
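A short translation sketch with MarianMT; the `Helsinki-NLP/opus-mt-fr-bi` repo id follows the usual naming of these OPUS-MT releases, and the example sentence is an assumption:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fr-bi"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# translate a French sentence to Bislama
batch = tokenizer(["Bonjour, comment allez-vous ?"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.decode(translated[0], skip_special_tokens=True))
```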
314113b76182d036e94d42113e8d644f
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased__hate_speech_offensive__train-32-4

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.7384
- Accuracy: 0.724
907c2ef0f4a96c59105a10b7f1d5f464
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1013 | 1.0 | 19 | 1.0733 | 0.55 |
| 1.0226 | 2.0 | 38 | 1.0064 | 0.65 |
| 0.8539 | 3.0 | 57 | 0.8758 | 0.75 |
| 0.584 | 4.0 | 76 | 0.6941 | 0.7 |
| 0.2813 | 5.0 | 95 | 0.5151 | 0.7 |
| 0.1122 | 6.0 | 114 | 0.4351 | 0.8 |
| 0.0432 | 7.0 | 133 | 0.4896 | 0.85 |
| 0.0199 | 8.0 | 152 | 0.5391 | 0.85 |
| 0.0126 | 9.0 | 171 | 0.5200 | 0.85 |
| 0.0085 | 10.0 | 190 | 0.5622 | 0.85 |
| 0.0069 | 11.0 | 209 | 0.5950 | 0.85 |
| 0.0058 | 12.0 | 228 | 0.6015 | 0.85 |
| 0.0053 | 13.0 | 247 | 0.6120 | 0.85 |
| 0.0042 | 14.0 | 266 | 0.6347 | 0.85 |
| 0.0039 | 15.0 | 285 | 0.6453 | 0.85 |
| 0.0034 | 16.0 | 304 | 0.6660 | 0.85 |
70f673b91cb2ee8e7aa030969366b5fb
apache-2.0
['automatic-speech-recognition', 'es']
false
exp_w2v2t_es_no-pretraining_s807 Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
cd21c65865e8fd83bdd46b1342431e2b
mit
['conversational']
false
DialoGPT Trained on the Speech of a Game Character

This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on a game character, Joshua from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script). I built a Discord AI chatbot based on this model. [Check out my GitHub repo.](https://github.com/RuolinZheng08/twewy-discord-chatbot)

Chat with the model:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")
model = AutoModelWithLMHead.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")
```
957e1ff94ac90c92c81d9b404a8ca068
mit
['conversational']
false
```python
# generate a response while limiting the total chat history to 1000 tokens
chat_history_ids = model.generate(
    bot_input_ids,
    max_length=200,
    pad_token_id=tokenizer.eos_token_id,
    no_repeat_ngram_size=3,
    do_sample=True,
    top_k=100,
    top_p=0.7,
    temperature=0.8
)
```
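The `generate` call above assumes a `bot_input_ids` tensor built from the running conversation. A minimal chat loop in the spirit of the standard DialoGPT example (the four-turn count and the printed bot label are arbitrary):

```python
import torch

for step in range(4):
    # encode the new user input and append the end-of-sequence token
    new_user_input_ids = tokenizer.encode(
        input(">> User: ") + tokenizer.eos_token, return_tensors="pt"
    )
    # append the new input to the chat history (if any) to build the model input
    bot_input_ids = (
        torch.cat([chat_history_ids, new_user_input_ids], dim=-1)
        if step > 0
        else new_user_input_ids
    )
    # generate a response with the sampling settings shown above
    chat_history_ids = model.generate(
        bot_input_ids,
        max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8,
    )
    # print only the newly generated tokens
    print("JoshuaBot:", tokenizer.decode(
        chat_history_ids[0, bot_input_ids.shape[-1]:], skip_special_tokens=True
    ))
```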
be4745473152e9c4abed922d91529a6a
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-de-fr

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1608
- F1: 0.8593
b66dd2f6aee0fd8751da8ca68e2fe6b8
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2888 | 1.0 | 715 | 0.1779 | 0.8233 |
| 0.1437 | 2.0 | 1430 | 0.1570 | 0.8497 |
| 0.0931 | 3.0 | 2145 | 0.1608 | 0.8593 |
bff39ae636dd966f0c4ae244d77c4dfb
apache-2.0
['generated_from_trainer']
false
finetuning-sentiment-model-3000-samples

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set:
- Loss: 0.2913
- Accuracy: 0.88
- F1: 0.8808
4c174aca373cff5e335538c006e8c899
apache-2.0
['generated_from_trainer', 'fnet-bert-base-comparison']
false
bert-base-cased-finetuned-stsb

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE STSB dataset. It achieves the following results on the evaluation set:
- Loss: 0.4861
- Pearson: 0.8926
- Spearmanr: 0.8898
- Combined Score: 0.8912

The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
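A minimal scoring sketch; the Hub namespace is not stated in this card, so the model id is a placeholder, and STS-B is treated as single-logit regression on roughly a 0–5 similarity scale:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# placeholder repo id -- substitute the actual Hub path of this checkpoint
name = "<org>/bert-base-cased-finetuned-stsb"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("A man is playing a guitar.", "Someone plays a guitar.",
                   return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # semantic similarity score
print(score)
```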
8e9d1b89cc2687a35bf34e10f9f15680
apache-2.0
['generated_from_trainer', 'fnet-bert-base-comparison']
false
Training procedure

This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
078b71aaaa5b1ad9d4ed315e837689de
apache-2.0
['generated_from_trainer', 'fnet-bert-base-comparison']
false
```bash
#!/usr/bin/bash

python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name stsb \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir bert-base-cased-finetuned-stsb \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
d6a7aa4216f7117a87a2cc901cd9bfc4
apache-2.0
['generated_from_trainer', 'fnet-bert-base-comparison']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
63658491ba435f3e49dc512e8638f72d
apache-2.0
['generated_from_trainer', 'fnet-bert-base-comparison']
false
Training results

| Training Loss | Epoch | Step | Combined Score | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:--------------:|:---------------:|:-------:|:---------:|
| 1.1174 | 1.0 | 360 | 0.8816 | 0.5000 | 0.8832 | 0.8800 |
| 0.3835 | 2.0 | 720 | 0.8901 | 0.4672 | 0.8915 | 0.8888 |
| 0.2388 | 3.0 | 1080 | 0.8912 | 0.4861 | 0.8926 | 0.8898 |
6ec0a4a840fb7df8d5a940d9bdc80d0f
cc-by-4.0
['translation', 'opus-mt-tc']
false
opus-mt-tc-big-es-zle

Neural machine translation model for translating from Spanish (es) to East Slavic languages (zle).

This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).

* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)

```
@inproceedings{tiedemann-thottingal-2020-opus,
    title = "{OPUS}-{MT} {--} Building open translation services for the World",
    author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
    booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
    month = nov,
    year = "2020",
    address = "Lisboa, Portugal",
    publisher = "European Association for Machine Translation",
    url = "https://aclanthology.org/2020.eamt-1.61",
    pages = "479--480",
}

@inproceedings{tiedemann-2020-tatoeba,
    title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
    author = {Tiedemann, J{\"o}rg},
    booktitle = "Proceedings of the Fifth Conference on Machine Translation",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.wmt-1.139",
    pages = "1174--1182",
}
```
e57e1597a0e9c9e4dd76fb9e43936d57
cc-by-4.0
['translation', 'opus-mt-tc']
false
Model info

* Release: 2022-03-23
* source language(s): spa
* target language(s): bel rus ukr
* valid target language labels: >>bel<< >>rus<< >>ukr<<
* model: transformer-big
* data: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807_transformer-big_2022-03-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-zle/opusTCv20210807_transformer-big_2022-03-23.zip)
* more information released models: [OPUS-MT spa-zle README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-zle/README.md)
* more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)

This is a multilingual translation model with multiple target languages. A sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>bel<<`
1d424581220af8f5c81f72b605d376a9
cc-by-4.0
['translation', 'opus-mt-tc']
false
Usage

A short example code:

```python
from transformers import MarianMTModel, MarianTokenizer

src_text = [
    ">>rus<< Su novela se vendió bien.",
    ">>ukr<< Quiero ir a Corea del Norte."
]

model_name = "pytorch-models/opus-mt-tc-big-es-zle"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))

for t in translated:
    print(tokenizer.decode(t, skip_special_tokens=True))
```
dd103ca0339371531f0c97fe9944726b
cc-by-4.0
['translation', 'opus-mt-tc']
false
```
Я хочу поїхати до Північної Кореї.
```

You can also use OPUS-MT models with the transformers pipelines, for example:

```python
from transformers import pipeline

pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-es-zle")
print(pipe(">>rus<< Su novela se vendió bien."))
```
572dd9b95350c9420dd5f84a88b94b2d
cc-by-4.0
['translation', 'opus-mt-tc']
false
Benchmarks

* test set translations: [opusTCv20210807_transformer-big_2022-03-23.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-zle/opusTCv20210807_transformer-big_2022-03-23.test.txt)
* test set scores: [opusTCv20210807_transformer-big_2022-03-23.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-zle/opusTCv20210807_transformer-big_2022-03-23.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)

| langpair | testset | chr-F | BLEU | #sent | #words |
54d1355fc6953e53f57c06a486071ae9
cc-by-4.0
['translation', 'opus-mt-tc']
false
|----------|---------|-------|-------|-------|--------|
| spa-bel | tatoeba-test-v2021-08-07 | 0.54506 | 27.5 | 205 | 1259 |
| spa-rus | tatoeba-test-v2021-08-07 | 0.68523 | 49.0 | 10506 | 69242 |
| spa-ukr | tatoeba-test-v2021-08-07 | 0.63502 | 42.3 | 10115 | 54544 |
| spa-rus | flores101-devtest | 0.49913 | 20.2 | 1012 | 23295 |
| spa-ukr | flores101-devtest | 0.47772 | 17.4 | 1012 | 22810 |
| spa-rus | newstest2012 | 0.52436 | 24.6 | 3003 | 64790 |
| spa-rus | newstest2013 | 0.54249 | 26.9 | 3000 | 58560 |
a7aa96d64593099b597552a8b5d53661
apache-2.0
['question-answering']
false
Model description

This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation. In this version, the following pre-trained models were used:

* Student: `distilbert-base-uncased`
* Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
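A sketch of the task-specific distillation objective described above, not the exact training code: the temperature `T` and mixing weight `alpha` are assumed hyperparameters, and `hard_loss` stands for the usual SQuAD span-prediction cross-entropy:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, hard_loss, T=2.0, alpha=0.5):
    """Blend the task loss with a KL term pulling the student toward the teacher."""
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients keep a comparable magnitude across temperatures
    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```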
b61324a447420f363f34209ecca1d742
apache-2.0
['question-answering']
false
Training data

This model was trained on the SQuAD v1.1 dataset, which can be obtained from the `datasets` library as follows:

```python
from datasets import load_dataset
squad = load_dataset('squad')
```
2e72869c5aadad444f4397c76b351fe1
apache-2.0
['question-answering']
false
Eval results

|                  | Exact Match | F1   |
|------------------|-------------|------|
| DistilBERT paper | 79.1        | 86.9 |
| Ours             | 78.4        | 86.5 |

The scores were calculated using the `squad` metric from `datasets`.
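For reference, a minimal sketch of computing that metric; the example prediction and reference are made up:

```python
from datasets import load_metric

squad_metric = load_metric("squad")
predictions = [{"id": "0", "prediction_text": "Denver Broncos"}]
references = [{"id": "0",
               "answers": {"text": ["Denver Broncos"], "answer_start": [177]}}]
# returns a dict like {'exact_match': 100.0, 'f1': 100.0}
print(squad_metric.compute(predictions=predictions, references=references))
```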
70c22dc706382792c026801c82ba1e59
apache-2.0
['question-answering']
false
BibTeX entry and citation info

```bibtex
@misc{sanh2020distilbert,
    title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
    author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
    year={2020},
    eprint={1910.01108},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
669eb29c469faf333ac66f139724ead9
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-becasv3-1

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the becasv3 dataset. It achieves the following results on the evaluation set:
- Loss: 3.1086
3afd47fc4bb2060197d4b846a3e55c1f
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
8510dfdcd5e2abd7c9662c0feaafd433
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 8 | 5.1063 |
| No log | 2.0 | 16 | 4.4615 |
| No log | 3.0 | 24 | 3.9351 |
| No log | 4.0 | 32 | 3.5490 |
| No log | 5.0 | 40 | 3.3299 |
| No log | 6.0 | 48 | 3.2148 |
| No log | 7.0 | 56 | 3.1292 |
| No log | 8.0 | 64 | 3.1086 |
d48fef8daf813fe347dc6d3eca990c12
mit
['generated_from_trainer']
false
clinical-finetuned-AgitationModel

This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.9746
- Accuracy: 0.88
- Precision: 0.9178
- Recall: 0.9178
- F1: 0.9178
ebbd61cf9abd4f54f0c775aca674be3a
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
478785d1e9524ab05e70926183cada2a
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.0949 | 1.0 | 50 | 1.0393 | 0.85 | 0.8816 | 0.9178 | 0.8993 |
| 0.0475 | 2.0 | 100 | 1.0619 | 0.85 | 0.8816 | 0.9178 | 0.8993 |
| 0.0149 | 3.0 | 150 | 0.9746 | 0.88 | 0.9178 | 0.9178 | 0.9178 |
23dbb2a1a060d82af9052ffce9f1acb8
apache-2.0
['generated_from_trainer']
false
new-test-model2

This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.1040
- Precision: 0.9722
- Recall: 0.9757
- F1: 0.9739
- Accuracy: 0.9808
8bd5038ecd82d29d827c12deddacc02c
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
f9116a2dc106793943a93a59befb6503
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 151 | 0.1819 | 0.9360 | 0.9405 | 0.9382 | 0.9540 |
| No log | 2.0 | 302 | 0.1196 | 0.9637 | 0.9639 | 0.9638 | 0.9703 |
| No log | 3.0 | 453 | 0.1322 | 0.9614 | 0.9682 | 0.9648 | 0.9711 |
| 0.2764 | 4.0 | 604 | 0.1071 | 0.9677 | 0.9725 | 0.9701 | 0.9763 |
| 0.2764 | 5.0 | 755 | 0.1084 | 0.9709 | 0.9766 | 0.9737 | 0.9790 |
| 0.2764 | 6.0 | 906 | 0.1015 | 0.9717 | 0.9739 | 0.9728 | 0.9791 |
| 0.0342 | 7.0 | 1057 | 0.1208 | 0.9686 | 0.9727 | 0.9706 | 0.9785 |
| 0.0342 | 8.0 | 1208 | 0.1068 | 0.9680 | 0.9752 | 0.9716 | 0.9798 |
| 0.0342 | 9.0 | 1359 | 0.1028 | 0.9719 | 0.9743 | 0.9731 | 0.9807 |
| 0.0129 | 10.0 | 1510 | 0.1040 | 0.9722 | 0.9757 | 0.9739 | 0.9808 |
8c6e031739344c11ab485d2b725b6120
apache-2.0
['translation']
false
opus-mt-hy-en

* source languages: hy
* target languages: en
* OPUS readme: [hy-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hy-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/hy-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hy-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hy-en/opus-2019-12-18.eval.txt)
e8789f54bcb5b5971bf90c393996861c
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition', 'speech separation']
false
Demo: How to use in ESPnet2

Follow the [CHiME-7 DASR installation instructions](https://github.com/espnet/espnet/blob/master/egs2/chime7_task1/asr1/README.md) if you haven't done that already.

```bash
cd espnet
git checkout 15646109f254de8b39bbe310827d617da5ac858d
```
105260d47726145f372d4c820d87ff93
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition', 'speech separation']
false
```bash
# follow installation instruction for CHiME-7 DASR recipe
# https://github.com/espnet/espnet/blob/master/egs2/chime7_task1/asr1/README.md
./run.sh --decode-only 1 --use-pretrained popcornell/chime7_task1_asr1_baseline --ngpu PUT YOURS
```
4b12573d828bd85aa487533de0b89035
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition', 'speech separation']
false
Environments

- date: `Wed Feb 8 23:41:28 UTC 2023`
- python version: `3.9.2 (default, Mar 3 2021, 20:02:32) [GCC 7.3.0]`
- espnet version: `espnet 202301`
- pytorch version: `pytorch 1.13.1+cu116`
- Git hash: ``
- Commit date: ``
53f45e33c46a5bc958fbb28d805de548
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition', 'speech separation']
false
ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_transformer_wavlm_lr1e-4_specaugm_accum1_preenc128_warmup20k.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_transformer_wavlm_lr1e-4_specaugm_accum1_preenc128_warmup20k_raw_en_bpe500_batch_size640_scheduler_confwarmup_steps8000_max_epoch8_optim_conflr0.000500000000_sp ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 5 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 44341 dist_launcher: null multiprocessing_distributed: true unused_parameters: true sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 8 patience: 4 val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 5 nbest_averaging_interval: 0 grad_clip: 5 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true create_graph_in_tensorboard: false use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: - frontend.upstream num_iters_per_epoch: null batch_size: 640 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_en_bpe500_sp/train/speech_shape - exp/asr_stats_raw_en_bpe500_sp/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_en_bpe500_sp/valid/speech_shape - exp/asr_stats_raw_en_bpe500_sp/valid/text_shape.bpe batch_type: folded valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/kaldi/train_all_mdm_ihm_rvb_gss_sp/wav.scp - speech - sound - - dump/raw/kaldi/train_all_mdm_ihm_rvb_gss_sp/text - text - text valid_data_path_and_name_and_type: - - dump/raw/kaldi/chime6/dev/gss/wav.scp - speech - sound - - dump/raw/kaldi/chime6/dev/gss/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null exclude_weight_decay: false exclude_weight_decay_conf: {} optim: adam optim_conf: lr: 0.0005 scheduler: warmuplr scheduler_conf: warmup_steps: 8000 token_list: - <blank> - <unk> - s - '''' - ▁i - t - ▁it - ▁a - e - ▁you - ▁the - ▁like - ▁yeah - a - d - ▁and - m - ▁that - ▁to - n - i - y - ing - o - u - ▁so - p - ▁of - ▁in - re - ▁was - c - r - ▁just - er - ▁know - ▁oh - ed - ▁but - ▁ummm - ▁we - l - ▁no - ▁they - ▁have - ▁do - g - ▁he - k - ll - ▁uhhh - ▁don - ▁for - h - ▁what - ▁be - ar - ▁is - ▁there - '-' - ▁s - ▁this - in - b - ▁ - en - ▁on - ▁p - ▁can - al - ▁not - w - ▁my - ▁one - ic - f - ▁or - ▁really - ▁go - ▁right - ▁me - an - ▁w - or - le - ▁f - ▁think - ▁okay - ▁all - ▁then - ▁with - ▁are - ▁get - it - ▁t - ▁st - ve - ▁hmmm - ▁g - ▁if - ce - 'on' - ▁she - ▁good - ▁e - es - ▁well - v - ▁re - th - ter - ch - ▁out - ▁up - ly - ▁b - ▁ma - il - ▁would - ▁at - ▁want - ▁mean - ▁ch - ▁your - ▁people - ur - ▁how - ▁k - ▁co - ▁about - ▁tr - ▁ba - ▁kind - ▁when - ▁mi - ▁because - ro - ▁had - ▁ho - ▁gonna - ▁time - ▁more - ▁got - ▁some - ▁two - ▁did - ▁see - ▁now - 
▁pa - ra - ▁de - ▁lot - ▁actually - ▁o - ▁too - ate - ▁here - ▁cuz - ▁sp - ▁where - ▁going - ▁j - ▁from - ▁bo - ▁them - ▁bu - ▁put - ▁thing - ng - ▁were - ▁n - ▁sh - ▁work - el - ▁something - ▁se - ▁say - ke - ow - ▁ca - ▁fa - ▁need - sh - ▁di - ▁po - ▁make - la - ▁br - ▁v - ▁an - ▁who - ion - ▁y - ▁look - ▁didn - ▁could - ▁little - ver - ▁c - ▁mo - ▁much - ▁very - ir - ▁sa - ▁play - ▁pretty - ▁been - ▁d - ▁other - ▁year - and - ▁mm - ▁stuff - ▁dr - ▁why - ▁con - ▁su - ▁back - ▁ex - ting - ▁take - ▁li - ▁even - ▁should - ▁her - ally - lo - ation - ▁way - ▁guess - ▁has - z - ▁three - ry - ▁ha - ies - is - x - ▁ro - ▁yes - ▁th - ▁use - ▁down - ous - ▁over - ▁probably - ▁guys - ▁maybe - ▁still - ▁cr - ▁which - ▁nice - und - ▁sure - ▁l - ▁off - ▁la - ▁cu - est - ▁any - ▁fi - ▁these - ▁ra - ▁went - ▁things - ment - ▁doing - ▁day - ▁un - ▁lo - ▁da - ▁only - igh - ▁come - ▁big - ▁those - ▁wanna - ▁bit - ▁never - ▁us - ol - ▁though - ▁first - ive - ▁their - ▁let - ▁start - ▁his - ▁four - ▁le - ▁eat - ist - ▁school - us - ▁into - ▁yep - uck - ▁than - ▁him - ▁hi - ▁also - ▁five - side - ▁new - ▁comp - ▁cool - ▁talk - ▁said - ▁pro - ▁r - ▁always - ▁ri - ▁cl - ▁long - able - ▁sc - ▁gra - ▁by - ▁friend - age - ▁different - ▁live - ▁doesn - ▁place - ▁sorry - ▁will - ▁feel - ▁does - ▁part - ▁wait - ▁six - ▁watch - ▁anything - ▁man - ▁our - ▁car - ▁huh - ▁whatever - ▁last - ▁give - ▁ten - ▁before - ▁thought - ▁after - ▁game - ▁card - ▁fl - ▁every - cause - ▁same - ▁around - ▁cook - ▁week - ▁hu - ▁everything - ▁fine - ▁many - ▁qu - ▁read - ▁tea - ough - ance - ▁turn - ▁wow - ▁fun - ▁hard - ▁great - ▁love - ▁remember - ▁twenty - ▁whole - ▁happen - ▁seven - ▁keep - ▁food - ▁most - j - ▁might - ▁thank - ▁move - ▁job - ▁eight - ▁mu - ▁sort - ▁better - port - ▁another - ful - ▁point - ▁show - ▁again - ▁high - ize - ▁house - ▁home - ▁person - ▁old - ▁end - ▁through - ▁pick - ▁else - ▁guy - ▁app - ▁find - ▁nine - ▁hand - ▁kid - ▁interesting - ▁city - ▁called - ▁tell - ▁half - ▁name - ▁definitely - ▁made - ▁exactly - ▁came - ▁wood - ▁funny - ▁basically - ▁count - ▁usually - ▁help - ▁someone - ▁already - ▁dunno - ▁enough - ction - ▁own - ▁weird - ▁next - ▁hundred - ▁small - ▁money - ▁couple - ▁while - ▁close - ▁movie - ▁sometimes - ▁everyone - ▁away - ▁true - ▁super - ▁cheese - ▁class - ▁night - ▁life - ▁leave - ▁plan - ▁water - ▁left - ▁thirty - ▁family - ▁phone - ▁build - ▁room - ▁month - ▁open - ▁idea - ▁second - ▁dude - ▁music - ▁each - ▁learn - ▁girl - ▁together - ▁under - ▁run - ▁chicken - ▁having - ▁either - ▁almost - ▁crazy - ▁book - ▁sauce - ▁supposed - ▁course - ▁speak - ▁awesome - ▁anyway - ▁throw - ▁finish - ▁world - ▁reason - ▁check - ▁least - ▁parents - ▁everybody - ▁change - '&' - ä - '
0d34aaf127123d26ac97710d43943c50
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition', 'speech separation']
false
' - ñ - â - é - ü - ']' - q - î - <sos/eos> init: xavier_uniform input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: null zero_infinity: true joint_net_conf: null use_preprocessor: true token_type: bpe bpemodel: data/en_token_list/bpe_unigram500/bpe.model non_linguistic_symbols: data/nlsyms.txt cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' short_noise_thres: 0.5 aux_ctc_tasks: [] frontend: s3prl frontend_conf: frontend_conf: upstream: wavlm_large download_dir: ./hub multilayer_feature: true fs: 16k specaug: specaug specaug_conf: apply_time_warp: false time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: false freq_mask_width_range: - 0 - 150 num_freq_mask: 4 apply_time_mask: true time_mask_width_ratio_range: - 0.0 - 0.15 num_time_mask: 3 normalize: utterance_mvn normalize_conf: {} model: espnet model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false extract_feats_in_collect_stats: false preencoder: linear preencoder_conf: input_size: 1024 output_size: 128 dropout: 0.2 encoder: transformer encoder_conf: output_size: 256 attention_heads: 4 linear_units: 2048 num_blocks: 12 dropout_rate: 0.1 attention_dropout_rate: 0.0 input_layer: conv2d2 normalize_before: true postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: input_layer: embed attention_heads: 4 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.0 self_attention_dropout_rate: 0.0 src_attention_dropout_rate: 0.0 preprocessor: default preprocessor_conf: {} required: - output_dir - token_list version: '202301' distributed: true ``` </details>
fb986e828c070452d82d0fa451c336bf
apache-2.0
['generated_from_trainer']
false
test1000v2

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.7873
- Wer: 0.6162
e0b209039caf8825023981e9350044d9
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
7bdbd55ce5ca8202f4f9eed47b0c6f11
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 7.7913 | 3.22 | 100 | 3.3481 | 1.0 |
| 3.3831 | 6.44 | 200 | 3.3229 | 1.0 |
| 3.3778 | 9.67 | 300 | 3.3211 | 1.0 |
| 3.3671 | 12.89 | 400 | 3.2973 | 1.0 |
| 3.3528 | 16.13 | 500 | 3.1349 | 1.0 |
| 1.8611 | 19.35 | 600 | 0.7873 | 0.6162 |
b580b506d01d8bd4ff8570f48b7c8155
mit
['exbert']
false
Paper Abstract:

Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequently, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, a ColD Fusion-based model outperforms RoBERTa by 2.45 points on average without any changes to the architecture.
b08bdaf5cdc42edd7716f6d701f4f5b0
mit
['exbert']
false
How to use

The best way to use this model is to fine-tune it on your own task, but you can also extract features directly. To get the features of a given text in PyTorch:

```python
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = RobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow:

```python
from transformers import RobertaTokenizer, TFRobertaModel

tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
7ded885fce97538f1d2d766d3ab2f53e
mit
['exbert']
false
Evaluation results

See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html). When fine-tuned on downstream tasks, this model achieves the following results:
074e57a45f8add8f0a536eac8c057306
mit
['exbert']
false
BibTeX entry and citation info

```bibtex
@article{ColDFusion,
  author    = {Shachar Don-Yehiya and Elad Venezian and Colin Raffel and Noam Slonim and Yoav Katz and Leshem Choshen},
  title     = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning},
  journal   = {CoRR},
  volume    = {abs/2212.01378},
  year      = {2022},
  url       = {https://arxiv.org/abs/2212.01378},
  archivePrefix = {arXiv},
  eprint    = {2212.01378},
}
```

<a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion">
  <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
5eb753caa7446ba88997960337b87fd2
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
koja Dreambooth model trained by Kurapka with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook

Test the concept via the A1111 Colab: [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)

Or you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)

Sample pictures of this concept:
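Beyond the notebooks, a minimal `diffusers` sketch for running this concept locally; the repo id, prompt, and GPU availability are assumptions:

```python
import torch
from diffusers import StableDiffusionPipeline

# placeholder repo id -- substitute the actual Hub path of this Dreambooth model
pipe = StableDiffusionPipeline.from_pretrained("<user>/koja", torch_dtype=torch.float16)
pipe = pipe.to("cuda")  # assumes a CUDA GPU is available

# prompt with the trained concept token
image = pipe("a photo of koja", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("koja.png")
```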
2c793be0a02562cce21d438b96357658
apache-2.0
['automatic-speech-recognition', 'sv-SE']
false
exp_w2v2t_sv-se_r-wav2vec2_s418 Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
9a442d52f550a9965a4a56c0736c0fee
unknown
['stable-diffusion', 'text-to-image']
false
.safetensor model for the automatic1111 webui.

Strange_Dedication_v3 is an improvement on Strange_Dedication_v2 using Anything_v4.5. It is better at the cutesexyrobutts style without needing a trigger. It is also good at shiny_skin, shiny_clothes, and artistic backgrounds. I have only used it with "vae-ft-mse-840000-ema-pruned", CLIP-Skip 1, and danbooru tags. Lately I have started using the negative embedding "bad-hands-5" (by an unknown author?), which was used for the example images as well. If you work with those, you should be able to prompt images like these (prompts in the .png metadata):

![00009-20230122002457-6cda57b672.png](https://huggingface.co/MortalSage/Strange_Dedication/resolve/main/Strange_Dedication_v3%20examples/SFW/00009-20230122002457-6cda57b672.png)
![00009-20230121201847-6cda57b672.png](https://huggingface.co/MortalSage/Strange_Dedication/resolve/main/Strange_Dedication_v3%20examples/SFW/00009-20230121201847-6cda57b672.png)
![00007-20230122001758-6cda57b672.png](https://huggingface.co/MortalSage/Strange_Dedication/resolve/main/Strange_Dedication_v3%20examples/SFW/00007-20230122001758-6cda57b672.png)
![00002-20230122000635-6cda57b672.png](https://huggingface.co/MortalSage/Strange_Dedication/resolve/main/Strange_Dedication_v3%20examples/SFW/00002-20230122000635-6cda57b672.png)

v2 version

Strange_Dedication_v2 is a model mix I made for myself. It is based mostly on two models, which specialise in the artists cutesexyrobutts and free_style (yohan1754). I have added some different danbooru and r34 models to increase affinity with uncommon prompts; I can't specify which exactly. I have only used it with "vae-ft-mse-840000-ema-pruned", CLIP-Skip 1, and danbooru tags. If you work with those, you should be able to prompt good images; check out the example folder as well (prompts in the .png metadata). The cutesexyrobutts style model had the trigger "by_cutesexyrobutts", which still works.
9b107d017a9674c092158e4f7c3571ec
creativeml-openrail-m
['text-to-image', 'stable-diffusion', 'gakki']
false
legal & risk ⚠️

It is prohibited to use this model for commercial purposes or in any scenario involving illegal acts or purposes.

Sample pictures of this concept:

![0](https://huggingface.co/Sa1i/gakki-mix/resolve/main/sample_images/00986-2977967196.png)
![1](https://huggingface.co/Sa1i/gakki-mix/resolve/main/sample_images/00997-2275133157.png)
![2](https://huggingface.co/Sa1i/gakki-mix/resolve/main/sample_images/01002-3229456781.png)
934895e587a12357e692b5c6237f6ba8
apache-2.0
['generated_from_keras_callback']
false
whisper_havest_0015

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 4.5508
- Train Accuracy: 0.0121
- Train Do Wer: 1.0
- Validation Loss: 4.7620
- Validation Accuracy: 0.0121
- Validation Do Wer: 1.0
- Epoch: 14
fd5ad5d70cbc0c17215302d532ce9c0e
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
ed0e9bb766998d5716ae1931a9b275df
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Train Accuracy | Train Do Wer | Validation Loss | Validation Accuracy | Validation Do Wer | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 9.9191 | 0.0046 | 1.0 | 8.5836 | 0.0067 | 1.0 | 0 |
| 8.0709 | 0.0083 | 1.0 | 7.4667 | 0.0089 | 1.0 | 1 |
| 7.1652 | 0.0100 | 1.0 | 6.8204 | 0.0112 | 1.0 | 2 |
| 6.7196 | 0.0114 | 1.0 | 6.5192 | 0.0114 | 1.0 | 3 |
| 6.4115 | 0.0115 | 1.0 | 6.2357 | 0.0115 | 1.0 | 4 |
| 6.1085 | 0.0115 | 1.0 | 5.9657 | 0.0115 | 1.0 | 5 |
| 5.8206 | 0.0115 | 1.0 | 5.7162 | 0.0115 | 1.0 | 6 |
| 5.5567 | 0.0115 | 1.0 | 5.4963 | 0.0115 | 1.0 | 7 |
| 5.3223 | 0.0116 | 1.0 | 5.3096 | 0.0116 | 1.0 | 8 |
| 5.1222 | 0.0117 | 1.0 | 5.1600 | 0.0117 | 1.0 | 9 |
| 4.9580 | 0.0117 | 1.0 | 5.0391 | 0.0118 | 1.0 | 10 |
| 4.8251 | 0.0119 | 1.0 | 4.9427 | 0.0118 | 1.0 | 11 |
| 4.7171 | 0.0119 | 1.0 | 4.8691 | 0.0119 | 1.0 | 12 |
| 4.6284 | 0.0121 | 1.0 | 4.8123 | 0.0120 | 1.0 | 13 |
| 4.5508 | 0.0121 | 1.0 | 4.7620 | 0.0121 | 1.0 | 14 |
0c9642d039f05a5d979c622c7ab8629c
gpl-3.0
['electra', 'tagalog', 'filipino']
false
ELECTRA Tagalog Small Uncased Generator

Tagalog ELECTRA model pretrained with a large corpus scraped from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community.

This is the generator model used to sample synthetic text and pretrain the discriminator. Only use this model for retraining and mask-filling. For the actual model for downstream tasks, please refer to the discriminator models.
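Since the card says to use this generator only for mask-filling (and retraining), a minimal sketch; the Hub namespace and the Tagalog example sentence are assumptions:

```python
from transformers import pipeline

# placeholder repo id -- substitute the actual Hub path of this generator checkpoint
fill_mask = pipeline("fill-mask", model="<org>/electra-tagalog-small-uncased-generator")
# the pipeline replaces the tokenizer's mask token with candidate words
print(fill_mask("kumain kami ng [MASK] kaninang umaga."))
```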
043d1299ffbb5eb40a02bf4588b9714f
gpl-3.0
['electra', 'tagalog', 'filipino']
false
Citations

All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:

```
@inproceedings{cruz2021exploiting,
  title={Exploiting News Article Structure for Automatic Corpus Generation of Entailment Datasets},
  author={Cruz, Jan Christian Blaise and Resabal, Jose Kristian and Lin, James and Velasco, Dan John and Cheng, Charibeth},
  booktitle={Pacific Rim International Conference on Artificial Intelligence},
  pages={86--99},
  year={2021},
  organization={Springer}
}
```
98c8cd28972e108a2891ee9d35eccb1f
apache-2.0
['generated_from_trainer']
false
nba_pbp_distilgpt2

This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on text files containing play-by-play descriptions of games played by the Boston Celtics and Golden State Warriors during the 2021-22 NBA season. It achieves the following results on the evaluation set:
- Loss: 0.6324
- Accuracy: 0.8117
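A minimal generation sketch; the Hub namespace and the play-by-play prompt are assumptions:

```python
from transformers import pipeline

# placeholder repo id -- substitute the actual Hub path of this checkpoint
generator = pipeline("text-generation", model="<org>/nba_pbp_distilgpt2")

# seed the model with a play-by-play style prompt and sample a continuation
out = generator("Tatum makes 26-foot three point jumper",
                max_length=40, do_sample=True)
print(out[0]["generated_text"])
```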
a2a13d2378039df174d55c18cdb4401d
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
37b549f9cb3f9b65b3370395f58f47d7
apache-2.0
['generated_from_trainer']
false
small-mlm-glue-qqp-custom-tokenizer

This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 5.0065
ca1e94ec4d13ff12054f54189267efd7
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3631 | 0.4 | 500 | 5.9145 |
| 5.6422 | 0.8 | 1000 | 5.8224 |
| 5.4368 | 1.2 | 1500 | 5.6172 |
| 5.1539 | 1.6 | 2000 | 5.4872 |
| 5.0641 | 2.0 | 2500 | 5.5369 |
| 4.9495 | 2.4 | 3000 | 5.3466 |
| 4.8947 | 2.8 | 3500 | 5.4592 |
| 4.9081 | 3.2 | 4000 | 5.3328 |
| 4.7214 | 3.6 | 4500 | 5.3746 |
| 4.7341 | 4.0 | 5000 | 5.3417 |
| 4.6482 | 4.4 | 5500 | 5.2731 |
| 4.628 | 4.8 | 6000 | 5.2716 |
| 4.5801 | 5.2 | 6500 | 5.1364 |
| 4.4967 | 5.6 | 7000 | 5.2167 |
| 4.4984 | 6.0 | 7500 | 5.2133 |
| 4.4255 | 6.4 | 8000 | 5.1228 |
| 4.4459 | 6.8 | 8500 | 5.1664 |
| 4.3732 | 7.2 | 9000 | 5.0800 |
| 4.2546 | 7.6 | 9500 | 5.0616 |
| 4.351 | 8.0 | 10000 | 5.1500 |
| 4.2365 | 8.4 | 10500 | 5.0903 |
| 4.2224 | 8.8 | 11000 | 5.0041 |
| 4.2549 | 9.2 | 11500 | 5.0711 |
| 4.1108 | 9.6 | 12000 | 5.1525 |
| 4.1366 | 10.0 | 12500 | 5.0065 |
76623b9a43eafbcc608b7dbd93c33a5d
apache-2.0
['automatic-speech-recognition', 'de']
false
exp_w2v2r_de_xls-r_age_teens-8_sixties-2_s945

Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
b8fd0221d92f875aa66e5106d1cb7754
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-squad

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set:
- Loss: 1.1547
01ee8b557cfe53d4ecb706fb3e7090d5
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2164 | 1.0 | 5533 | 1.1486 |
| 0.9546 | 2.0 | 11066 | 1.1251 |
| 0.7573 | 3.0 | 16599 | 1.1547 |
d8c99812008c24b049de3dec504be50b
apache-2.0
['text-classfication', 'int8', 'Intel® Neural Compressor', 'PostTrainingDynamic', 'onnx']
false
PyTorch This is an INT8 PyTorch model quantized with [huggingface/optimum-intel](https://github.com/huggingface/optimum-intel) through the usage of [Intel® Neural Compressor](https://github.com/intel/neural-compressor). The original fp32 model comes from the fine-tuned model [Intel/bert-base-uncased-mrpc](https://huggingface.co/Intel/bert-base-uncased-mrpc).
351d4f81e741dc7bf03f133246dc6846
apache-2.0
['text-classfication', 'int8', 'Intel® Neural Compressor', 'PostTrainingDynamic', 'onnx']
false
Load with optimum:
```python
from optimum.intel.neural_compressor.quantization import IncQuantizedModelForSequenceClassification

int8_model = IncQuantizedModelForSequenceClassification.from_pretrained(
    'Intel/bert-base-uncased-mrpc-int8-dynamic',
)
```
f1b5117cd91336c26dd98272c208bf14
apache-2.0
['text-classfication', 'int8', 'Intel® Neural Compressor', 'PostTrainingDynamic', 'onnx']
false
ONNX This is an INT8 ONNX model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor). The original fp32 model comes from the fine-tuned model [Intel/bert-base-uncased-mrpc](https://huggingface.co/Intel/bert-base-uncased-mrpc).
475454570558e60e11688fddd3a59672
apache-2.0
['text-classfication', 'int8', 'Intel® Neural Compressor', 'PostTrainingDynamic', 'onnx']
false
Load ONNX model:
```python
from optimum.onnxruntime import ORTModelForSequenceClassification

model = ORTModelForSequenceClassification.from_pretrained('Intel/bert-base-uncased-mrpc-int8-dynamic')
```
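Once loaded, the quantized ONNX model can be used like any other `transformers` sequence-classification model, for example through a pipeline. This is a sketch that assumes the matching tokenizer is hosted alongside the model; the MRPC-style sentence pair is invented.

```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = 'Intel/bert-base-uncased-mrpc-int8-dynamic'
model = ORTModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)  # assumes tokenizer files are in the repo

# MRPC is a paraphrase task: classify whether two sentences mean the same thing.
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier({"text": "The company posted record profits.",
                  "text_pair": "Profits hit an all-time high."}))
```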
6d374f58e178b42c441a0cd1a8a067ae
apache-2.0
['generated_from_trainer']
false
InstructDial Instruction tuning is an emergent paradigm in NLP wherein natural language instructions are leveraged with language models to induce zero-shot performance on unseen tasks. Instructions have been shown to enable good performance on unseen tasks and datasets in both large and small language models. Dialogue is an especially interesting area to explore instruction tuning because dialogue systems perform multiple kinds of tasks related to language (e.g., natural language understanding and generation, domain-specific interaction), yet instruction tuning has not been systematically explored for dialogue-related tasks. We introduce InstructDial, an instruction tuning framework for dialogue, which consists of a repository of 48 diverse dialogue tasks in a unified text-to-text format created from 59 openly available dialogue datasets. Next, we explore cross-task generalization ability on models tuned on InstructDial across diverse dialogue tasks. Our analysis reveals that InstructDial enables good zero-shot performance on unseen datasets and tasks such as dialogue evaluation and intent detection, and even better performance in a few-shot setting. To ensure that models adhere to instructions, we introduce novel meta-tasks. We establish benchmark zero-shot and few-shot performance of models trained using the proposed framework on multiple dialogue tasks. [Paper](https://arxiv.org/abs/2205.12673) · [GitHub](https://github.com/prakharguptaz/Instructdial)
50b4681dc54aa3ec1d1395eb67170505
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 9 - eval_batch_size: 9 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 72 - total_eval_batch_size: 72 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0
4c857c5700de0fc4c0b638c0041cc1e6
mit
['huggingnft', 'nft', 'huggan', 'gan', 'image', 'images', 'unconditional-image-generation']
false
Model description LightWeight GAN model for unconditional generation. NFT collection available [here](https://opensea.io/collection/alpacadabraz). Dataset is available [here](https://huggingface.co/datasets/huggingnft/alpacadabraz). Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft). Project repository: [link](https://github.com/AlekseyKorshuk/huggingnft). [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingnft?style=social)](https://github.com/AlekseyKorshuk/huggingnft)
74273bb849a98af0f67d7f52b7b43462
mit
['huggingnft', 'nft', 'huggan', 'gan', 'image', 'images', 'unconditional-image-generation']
false
About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingnft?style=social)](https://github.com/AlekseyKorshuk/huggingnft)
2a551b65551c8cee12a719add6afaa5f
apache-2.0
['flair', 'Text Classification', 'token-classification', 'sequence-tagger-model']
false
Arabic NER Model for the AQMAR dataset Training was conducted over 86 epochs with a learning rate starting at 0.3 and decayed down to 2e-05, a batch size of 48, and stacked fastText and Flair forward and backward embeddings.
cebe1a13a46e0a232352d05beae6a59a
apache-2.0
['flair', 'Text Classification', 'token-classification', 'sequence-tagger-model']
false
Results:
- F1-score (micro) 0.9323
- F1-score (macro) 0.9272

| Class | True Positives | False Positives | False Negatives | Precision | Recall | class-F1 |
|------|-----|----|----|---------|--------|----------|
| LOC | 164 | 7 | 13 | 0.9591 | 0.9266 | 0.9425 |
| MISC | 398 | 22 | 37 | 0.9476 | 0.9149 | 0.9310 |
| ORG | 65 | 6 | 9 | 0.9155 | 0.8784 | 0.8966 |
| PER | 199 | 13 | 13 | 0.9387 | 0.9387 | 0.9387 |

---
0fddb3c1f870e9512b91fa4adb591331
apache-2.0
['flair', 'Text Classification', 'token-classification', 'sequence-tagger-model']
false
Usage
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load the trained Arabic NER tagger
arTagger = SequenceTagger.load('megantosh/flair-arabic-MSA-aqmar')

sentence = Sentence('George Washington went to Washington .')
arSentence = Sentence('عمرو عادلي أستاذ للاقتصاد السياسي المساعد في الجامعة الأمريكية بالقاهرة .')

# predict NER tags for both sentences
arTagger.predict(sentence)
arTagger.predict(arSentence)

# print the sentences with the predicted spans
print(sentence.to_tagged_string())
print(arSentence.to_tagged_string())
```
9c0059bcbb20861d3b0b8201bf313a5b
apache-2.0
['flair', 'Text Classification', 'token-classification', 'sequence-tagger-model']
false
Model Configuration
```python
(embeddings): StackedEmbeddings(
  (list_embedding_0): WordEmbeddings('ar')
  (list_embedding_1): FlairEmbeddings(
    (lm): LanguageModel(
      (drop): Dropout(p=0.1, inplace=False)
      (encoder): Embedding(7125, 100)
      (rnn): LSTM(100, 2048)
      (decoder): Linear(in_features=2048, out_features=7125, bias=True)
    )
  )
  (list_embedding_2): FlairEmbeddings(
    (lm): LanguageModel(
      (drop): Dropout(p=0.1, inplace=False)
      (encoder): Embedding(7125, 100)
      (rnn): LSTM(100, 2048)
      (decoder): Linear(in_features=2048, out_features=7125, bias=True)
    )
  )
)
(word_dropout): WordDropout(p=0.05)
(locked_dropout): LockedDropout(p=0.5)
(embedding2nn): Linear(in_features=4396, out_features=4396, bias=True)
(rnn): LSTM(4396, 256, batch_first=True, bidirectional=True)
(linear): Linear(in_features=512, out_features=14, bias=True)
(beta): 1.0
(weights): None
(weight_tensor) None

2021-03-31 22:19:50,654 ----------------------------------------------------------------------------------------------------
2021-03-31 22:19:50,654 Corpus: "Corpus: 3025 train + 336 dev + 373 test sentences"
2021-03-31 22:19:50,654 ----------------------------------------------------------------------------------------------------
2021-03-31 22:19:50,654 Parameters:
2021-03-31 22:19:50,654  - learning_rate: "0.3"
2021-03-31 22:19:50,654  - mini_batch_size: "48"
2021-03-31 22:19:50,654  - patience: "3"
2021-03-31 22:19:50,654  - anneal_factor: "0.5"
2021-03-31 22:19:50,654  - max_epochs: "150"
2021-03-31 22:19:50,654  - shuffle: "True"
2021-03-31 22:19:50,654  - train_with_dev: "False"
2021-03-31 22:19:50,654  - batch_growth_annealing: "False"
2021-03-31 22:19:50,655 ------------------------------------
```
Because the right-to-left script is rendered in a left-to-right context, some formatting errors may occur, and your code might appear like [this](https://ibb.co/ky20Lnq) (link accessed on 2020-10-27).
3c3d47b724a8477aaf981812151c69c2
apache-2.0
['flair', 'Text Classification', 'token-classification', 'sequence-tagger-model']
false
Citation *If you use this model, please consider citing [this work](https://www.researchgate.net/publication/358956953_Sequence_Labeling_Architectures_in_Diglossia_-_a_case_study_of_Arabic_and_its_dialects):*
```latex
@unpublished{MMHU21,
  author = "M. Megahed",
  title  = "Sequence Labeling Architectures in Diglossia",
  year   = {2021},
  doi    = "10.13140/RG.2.2.34961.10084",
  url    = {https://www.researchgate.net/publication/358956953_Sequence_Labeling_Architectures_in_Diglossia_-_a_case_study_of_Arabic_and_its_dialects}
}
```
292dd785708b072d9c5a1b0ce026f18e
apache-2.0
['generated_from_trainer']
false
CTEBMSP_ANAT_DISO This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0909 - Anat Precision: 0.7522 - Anat Recall: 0.7147 - Anat F1: 0.7330 - Anat Number: 361 - Diso Precision: 0.8915 - Diso Recall: 0.8919 - Diso F1: 0.8917 - Diso Number: 2645 - Overall Precision: 0.8755 - Overall Recall: 0.8706 - Overall F1: 0.8731 - Overall Accuracy: 0.9873
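Since the model tags anatomy (ANAT) and disease (DISO) mentions in Spanish clinical text, a token-classification pipeline with entity grouping is the natural way to run it. The sketch below uses a placeholder repo id and an invented example sentence.

```python
from transformers import pipeline

# Placeholder repo id -- replace with the actual location of this checkpoint.
ner = pipeline(
    "token-classification",
    model="your-username/CTEBMSP_ANAT_DISO",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

# Invented Spanish clinical sentence for illustration.
text = "El paciente presenta dolor abdominal y antecedentes de diabetes mellitus."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```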
9341482384c90286ac255c361b827da2
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8
deb4d4b174a250db5bd6f631abee0b88
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Anat Precision | Anat Recall | Anat F1 | Anat Number | Diso Precision | Diso Recall | Diso F1 | Diso Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------------:|:-----------:|:-------:|:-----------:|:--------------:|:-----------:|:-------:|:-----------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.0592 | 1.0 | 2133 | 0.0506 | 0.6950 | 0.4986 | 0.5806 | 361 | 0.8635 | 0.8609 | 0.8622 | 2645 | 0.8484 | 0.8174 | 0.8326 | 0.9843 |
| 0.0323 | 2.0 | 4266 | 0.0583 | 0.7899 | 0.6039 | 0.6845 | 361 | 0.8780 | 0.8817 | 0.8798 | 2645 | 0.8697 | 0.8483 | 0.8589 | 0.9858 |
| 0.0201 | 3.0 | 6399 | 0.0580 | 0.6565 | 0.7147 | 0.6844 | 361 | 0.8598 | 0.8764 | 0.8680 | 2645 | 0.8339 | 0.8570 | 0.8453 | 0.9851 |
| 0.0121 | 4.0 | 8532 | 0.0758 | 0.7240 | 0.6759 | 0.6991 | 361 | 0.8976 | 0.8752 | 0.8863 | 2645 | 0.8776 | 0.8513 | 0.8642 | 0.9863 |
| 0.0078 | 5.0 | 10665 | 0.0814 | 0.7219 | 0.7119 | 0.7169 | 361 | 0.8776 | 0.8975 | 0.8875 | 2645 | 0.8595 | 0.8752 | 0.8673 | 0.9862 |
| 0.0031 | 6.0 | 12798 | 0.0974 | 0.7599 | 0.6399 | 0.6947 | 361 | 0.8895 | 0.8915 | 0.8905 | 2645 | 0.8761 | 0.8613 | 0.8686 | 0.9867 |
| 0.002 | 7.0 | 14931 | 0.0980 | 0.7143 | 0.6787 | 0.6960 | 361 | 0.8813 | 0.8957 | 0.8884 | 2645 | 0.8624 | 0.8696 | 0.8660 | 0.9860 |
| 0.0005 | 8.0 | 17064 | 0.0909 | 0.7522 | 0.7147 | 0.7330 | 361 | 0.8915 | 0.8919 | 0.8917 | 2645 | 0.8755 | 0.8706 | 0.8731 | 0.9873 |
fff440d4203fa73dbbf54c137d1b116c
apache-2.0
['generated_from_trainer']
false
bert-finetuned-ner-20percent This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6513 - Precision: 0.5252 - Recall: 0.6562 - F1: 0.5834 - Accuracy: 0.8044
a378918f5353f936aa1c2f57d6d0631b
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 2022 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3
69d2ddcf700eb15a31b59e16efe648a9
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 15 | 0.9155 | 0.3511 | 0.4264 | 0.3851 | 0.7353 |
| No log | 2.0 | 30 | 0.7116 | 0.4845 | 0.6321 | 0.5485 | 0.7898 |
| No log | 3.0 | 45 | 0.6513 | 0.5252 | 0.6562 | 0.5834 | 0.8044 |
956c41091aa0f545c6b70d6bea579bce
mit
['generated_from_trainer']
false
Klassifizierung-Heizung This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0936 - F1: 0.9859
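Being a fine-tuned `bert-base-german-cased` sequence classifier, the model can be queried through a text-classification pipeline. The repo id and the German example sentence below are assumptions for illustration only; the card does not list the label set.

```python
from transformers import pipeline

# Placeholder repo id -- replace with the actual location of this checkpoint.
classifier = pipeline("text-classification", model="your-username/Klassifizierung-Heizung")

# Invented heating-related complaint; prints whatever label the classifier assigns.
print(classifier("Die Heizung im Wohnzimmer wird nicht warm."))
```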
f69e325d937340169e4c532b033ad3d5