Columns:
license: string, length 2–30
tags: string, length 2–513
is_nc: bool, 1 class
readme_section: string, length 201–597k
hash: string, length 32
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP
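The effective batch size and the linear schedule above follow directly from the listed values; a small plain-Python sketch (the helper names are illustrative, not from the training script):

```python
def effective_batch_size(per_device_batch, grad_accum_steps, num_devices=1):
    """Total examples contributing to one optimizer step."""
    return per_device_batch * grad_accum_steps * num_devices

def linear_schedule_lr(step, base_lr=1e-05, warmup_steps=500, total_steps=5000):
    """Linear warmup to base_lr, then linear decay to zero (HF-trainer style)."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

# train_batch_size=32 with gradient_accumulation_steps=2 gives the
# reported total_train_batch_size of 64:
assert effective_batch_size(32, 2) == 64
```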
9ab02de3b60f4696011600b48b8e2b39
apache-2.0
['generated_from_trainer']
false
distilroberta-base-finetuned-aumet-lm This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.9210
3a9e4fa030b86b1448cb85c9b382b7ba
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 203 | 3.0614 | | No log | 2.0 | 406 | 2.9287 | | 2.9507 | 3.0 | 609 | 2.8713 |
a96445e0ccea47f7c28a73855e0596d6
mit
['generated_from_trainer']
false
bert_base_tcm_0.7 This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0128 - Criterio Julgamento Precision: 0.8235 - Criterio Julgamento Recall: 0.9032 - Criterio Julgamento F1: 0.8615 - Criterio Julgamento Number: 93 - Data Sessao Precision: 0.7324 - Data Sessao Recall: 0.9286 - Data Sessao F1: 0.8189 - Data Sessao Number: 56 - Modalidade Licitacao Precision: 0.9415 - Modalidade Licitacao Recall: 0.9769 - Modalidade Licitacao F1: 0.9589 - Modalidade Licitacao Number: 346 - Numero Exercicio Precision: 0.9486 - Numero Exercicio Recall: 0.9486 - Numero Exercicio F1: 0.9486 - Numero Exercicio Number: 175 - Objeto Licitacao Precision: 0.5352 - Objeto Licitacao Recall: 0.6909 - Objeto Licitacao F1: 0.6032 - Objeto Licitacao Number: 55 - Valor Objeto Precision: 0.8 - Valor Objeto Recall: 0.8649 - Valor Objeto F1: 0.8312 - Valor Objeto Number: 37 - Overall Precision: 0.8680 - Overall Recall: 0.9318 - Overall F1: 0.8987 - Overall Accuracy: 0.9966
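The overall precision/recall/F1 reported above are consistent with micro-averaging over the entity types; a sketch with made-up counts (the (tp, fp, fn) triples below are illustrative, not the model's actual confusion statistics):

```python
def micro_prf(counts):
    """counts: iterable of (tp, fp, fn) per entity type -> (precision, recall, f1)."""
    tp = sum(c[0] for c in counts)
    fp = sum(c[1] for c in counts)
    fn = sum(c[2] for c in counts)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Toy example with two entity types:
p, r, f1 = micro_prf([(80, 10, 5), (40, 10, 15)])
```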
cf0c488e0953ea1962b34f36a5b4f86d
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Criterio Julgamento Precision | Criterio Julgamento Recall | Criterio Julgamento F1 | Criterio Julgamento Number | Data Sessao Precision | Data Sessao Recall | Data Sessao F1 | Data Sessao Number | Modalidade Licitacao Precision | Modalidade Licitacao Recall | Modalidade Licitacao F1 | Modalidade Licitacao Number | Numero Exercicio Precision | Numero Exercicio Recall | Numero Exercicio F1 | Numero Exercicio Number | Objeto Licitacao Precision | Objeto Licitacao Recall | Objeto Licitacao F1 | Objeto Licitacao Number | Valor Objeto Precision | Valor Objeto Recall | Valor Objeto F1 | Valor Objeto Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:-----------------------------:|:--------------------------:|:----------------------:|:--------------------------:|:---------------------:|:------------------:|:--------------:|:------------------:|:------------------------------:|:---------------------------:|:-----------------------:|:---------------------------:|:--------------------------:|:-----------------------:|:-------------------:|:-----------------------:|:--------------------------:|:-----------------------:|:-------------------:|:-----------------------:|:----------------------:|:-------------------:|:---------------:|:-------------------:|:-----------------:|:--------------:|:----------:|:----------------:| | 0.0267 | 1.0 | 2332 | 0.0175 | 0.8333 | 0.9140 | 0.8718 | 93 | 0.6825 | 0.7679 | 0.7227 | 56 | 0.9342 | 0.9855 | 0.9592 | 346 | 0.9194 | 0.9771 | 0.9474 | 175 | 0.4154 | 0.4909 | 0.45 | 55 | 0.5 | 0.7568 | 0.6022 | 37 | 0.8303 | 0.9121 | 0.8693 | 0.9954 | | 0.0211 | 2.0 | 4664 | 0.0158 | 0.7154 | 0.9462 | 0.8148 | 93 | 0.7812 | 0.8929 | 0.8333 | 56 | 0.9319 | 0.9884 | 0.9593 | 346 | 0.9605 | 0.9714 | 0.9659 | 175 | 0.4 | 0.6545 | 0.4966 | 55 | 0.8293 | 0.9189 | 0.8718 | 37 | 0.8353 | 0.9449 | 0.8867 | 0.9956 | | 0.0127 | 3.0 | 6996 | 0.0157 | 0.8218 | 0.8925 | 0.8557 | 93 | 0.8254 | 0.9286 | 0.8739 | 56 | 0.9522 | 0.9798 | 0.9658 | 346 | 0.96 | 0.96 | 0.96 | 175 | 0.5735 | 0.7091 | 0.6341 | 55 | 0.6857 | 0.6486 | 0.6667 | 37 | 0.8835 | 0.9252 | 0.9038 | 0.9957 | | 0.0074 | 4.0 | 9328 | 0.0128 | 0.8235 | 0.9032 | 0.8615 | 93 | 0.7324 | 0.9286 | 0.8189 | 56 | 0.9415 | 0.9769 | 0.9589 | 346 | 0.9486 | 0.9486 | 0.9486 | 175 | 0.5352 | 0.6909 | 0.6032 | 55 | 0.8 | 0.8649 | 0.8312 | 37 | 0.8680 | 0.9318 | 0.8987 | 0.9966 | | 0.0065 | 5.0 | 11660 | 0.0177 | 0.8113 | 0.9247 | 0.8643 | 93 | 0.675 | 0.9643 | 0.7941 | 56 | 0.9444 | 0.9827 | 0.9632 | 346 | 0.9392 | 0.9714 | 0.9551 | 175 | 0.5075 | 0.6182 | 0.5574 | 55 | 0.7674 | 0.8919 | 0.825 | 37 | 0.8566 | 0.9409 | 0.8968 | 0.9958 | | 0.005 | 6.0 | 13992 | 0.0161 | 0.8485 | 0.9032 | 0.875 | 93 | 0.7164 | 0.8571 | 0.7805 | 56 | 0.9496 | 0.9798 | 0.9644 | 346 | 0.9556 | 0.9829 | 0.9690 | 175 | 0.6290 | 0.7091 | 0.6667 | 55 | 0.8108 | 0.8108 | 0.8108 | 37 | 0.8878 | 0.9344 | 0.9105 | 0.9967 | | 0.0039 | 7.0 | 16324 | 0.0185 | 0.8925 | 0.8925 | 0.8925 | 93 | 0.7812 | 0.8929 | 0.8333 | 56 | 0.9602 | 0.9769 | 0.9685 | 346 | 0.9607 | 0.9771 | 0.9688 | 175 | 0.5224 | 0.6364 | 0.5738 | 55 | 0.8378 | 0.8378 | 0.8378 | 37 | 0.8951 | 0.9291 | 0.9118 | 0.9966 | | 0.0035 | 8.0 | 18656 | 0.0188 | 0.8431 | 0.9247 | 0.8821 | 93 | 0.7903 | 0.875 | 0.8305 | 56 | 0.9571 | 0.9682 | 0.9626 | 346 | 0.9605 | 0.9714 | 0.9659 | 175 | 0.6981 | 0.6727 | 0.6852 | 55 | 0.8462 | 0.8919 | 0.8684 | 37 | 0.9068 | 0.9318 | 0.9191 | 0.9969 | | 0.0017 | 9.0 | 20988 | 0.0207 | 0.8529 | 0.9355 | 0.8923 | 93 | 0.7727 | 0.9107 | 0.8361 | 56 | 0.9630 | 0.9769 | 0.9699 | 346 | 0.9605 | 0.9714 | 0.9659 | 175 | 0.7143 | 0.6364 | 0.6731 | 55 | 0.8462 | 0.8919 | 0.8684 | 37 | 0.9107 | 0.9370 | 0.9237 | 0.9968 | | 0.002 | 10.0 | 23320 | 0.0191 | 0.8614 | 0.9355 | 0.8969 | 93 | 0.7647 | 0.9286 | 0.8387 | 56 | 0.9549 | 0.9798 | 0.9672 | 346 | 0.9553 | 0.9771 | 0.9661 | 175 | 0.6167 | 0.6727 | 0.6435 | 55 | 0.825 | 0.8919 | 0.8571 | 37 | 0.8954 | 0.9436 | 0.9188 | 0.9968 |
718407416a6c9ec25c6a2d5489c4d110
mit
['sklearn', 'skops', 'tabular-classification']
false
Hyperparameters The model is trained with the hyperparameters below. <details> <summary> Click to expand </summary> | Hyperparameter | Value | |--------------------------|-----------------------------------------------------------------------------------------------| | memory | | | steps | [('imputer', SimpleImputer()), ('scaler', StandardScaler()), ('model', LogisticRegression())] | | verbose | False | | imputer | SimpleImputer() | | scaler | StandardScaler() | | model | LogisticRegression() | | imputer__add_indicator | False | | imputer__copy | True | | imputer__fill_value | | | imputer__missing_values | nan | | imputer__strategy | mean | | imputer__verbose | 0 | | scaler__copy | True | | scaler__with_mean | True | | scaler__with_std | True | | model__C | 1.0 | | model__class_weight | | | model__dual | False | | model__fit_intercept | True | | model__intercept_scaling | 1 | | model__l1_ratio | | | model__max_iter | 100 | | model__multi_class | auto | | model__n_jobs | | | model__penalty | l2 | | model__random_state | | | model__solver | lbfgs | | model__tol | 0.0001 | | model__verbose | 0 | | model__warm_start | False | </details>
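The table above describes a standard scikit-learn pipeline; a minimal reconstruction on synthetic data (only the pipeline steps and the listed hyperparameters come from the table, the data here is made up):

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Steps and (default) hyperparameters as listed in the table above.
pipe = Pipeline(steps=[
    ("imputer", SimpleImputer(strategy="mean", missing_values=np.nan)),
    ("scaler", StandardScaler(with_mean=True, with_std=True)),
    ("model", LogisticRegression(C=1.0, penalty="l2", solver="lbfgs", max_iter=100)),
])

# Tiny synthetic dataset with a missing value to exercise the imputer.
X = np.array([[1.0, 2.0], [2.0, np.nan], [3.0, 4.0], [4.0, 5.0]])
y = np.array([0, 0, 1, 1])
pipe.fit(X, y)
preds = pipe.predict(X)
```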
d04291668b3cacd36a5a109150708016
mit
['sklearn', 'skops', 'tabular-classification']
false
sk-e60317e1-ee5c-4f4d-98a6-92332ba1474b input.sk-hidden--visually {border: 0;clip: rect(1px 1px 1px 1px);clip: rect(1px, 1px, 1px, 1px);height: 1px;margin: -1px;overflow: hidden;padding: 0;position: absolute;width: 1px;}
22f4d2507a6c302f2ac41b631efd4b56
mit
['sklearn', 'skops', 'tabular-classification']
false
sk-e60317e1-ee5c-4f4d-98a6-92332ba1474b div.sk-dashed-wrapped {border: 1px dashed gray;margin: 0 0.4em 0.5em 0.4em;box-sizing: border-box;padding-bottom: 0.4em;background-color: white;position: relative;}
7a9cd0e6d387d53126260be0ad4cab83
mit
['sklearn', 'skops', 'tabular-classification']
false
sk-e60317e1-ee5c-4f4d-98a6-92332ba1474b div.sk-container {/* jupyter's `normalize.less` sets `[hidden] { display: none; }` but bootstrap.min.css set `[hidden] { display: none !important; }` so we also need the `!important` here to be able to override the default hidden behavior on the sphinx rendered scikit-learn.org. See: https://github.com/scikit-learn/scikit-learn/issues/21755 */display: inline-block !important;position: relative;}
d057dde4a08020572ab8ac19c60e4955
mit
['sklearn', 'skops', 'tabular-classification']
false
sk-e60317e1-ee5c-4f4d-98a6-92332ba1474b div.sk-text-repr-fallback {display: none;}</style><div id="sk-e60317e1-ee5c-4f4d-98a6-92332ba1474b" class="sk-top-container" style="overflow: auto;"><div class="sk-text-repr-fallback"><pre>Pipeline(steps=[(&
a0ff108ce14932aae4392309947576f1
mit
['sklearn', 'skops', 'tabular-classification']
false
x27;, LogisticRegression())])</pre><b>Please rerun this cell to show the HTML repr or trust the notebook.</b></div><div class="sk-container" hidden><div class="sk-item sk-dashed-wrapped"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="6aee50d2-d0d7-437e-8e9b-bd1121de94e7" type="checkbox" ><label for="6aee50d2-d0d7-437e-8e9b-bd1121de94e7" class="sk-toggleable__label sk-toggleable__label-arrow">Pipeline</label><div class="sk-toggleable__content"><pre>Pipeline(steps=[(&
cba91a16720f673ae45ca13790597dae
mit
['sklearn', 'skops', 'tabular-classification']
false
x27;, LogisticRegression())])</pre></div></div></div><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="ac5b7f88-9a16-4c90-8fcb-2a4f833cadf1" type="checkbox" ><label for="ac5b7f88-9a16-4c90-8fcb-2a4f833cadf1" class="sk-toggleable__label sk-toggleable__label-arrow">SimpleImputer</label><div class="sk-toggleable__content"><pre>SimpleImputer()</pre></div></div></div><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="65ce6721-e323-4189-a9bd-e373e248f0f7" type="checkbox" ><label for="65ce6721-e323-4189-a9bd-e373e248f0f7" class="sk-toggleable__label sk-toggleable__label-arrow">StandardScaler</label><div class="sk-toggleable__content"><pre>StandardScaler()</pre></div></div></div><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="2328c6c4-413e-46ed-b597-1b88227e45a5" type="checkbox" ><label for="2328c6c4-413e-46ed-b597-1b88227e45a5" class="sk-toggleable__label sk-toggleable__label-arrow">LogisticRegression</label><div class="sk-toggleable__content"><pre>LogisticRegression()</pre></div></div></div></div></div></div></div>
6fa150130d37d695b8940ebfe0570c8f
apache-2.0
['generated_from_trainer']
false
bert_base_yc_recipe_30 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0000
1ecc986c737b2318345f586e63924be7
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 - mixed_precision_training: Native AMP
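The optimizer line above (Adam with betas=(0.9, 0.999) and epsilon=1e-08) refers to the standard Adam update; a single-parameter sketch for illustration (the function name is illustrative, not from the training code):

```python
def adam_step(param, grad, m, v, t, lr=2e-05, beta1=0.9, beta2=0.999, eps=1e-08):
    """One Adam update for a scalar parameter; returns (new_param, m, v)."""
    m = beta1 * m + (1 - beta1) * grad            # first-moment EMA
    v = beta2 * v + (1 - beta2) * grad * grad     # second-moment EMA
    m_hat = m / (1 - beta1 ** t)                  # bias correction
    v_hat = v / (1 - beta2 ** t)
    return param - lr * m_hat / (v_hat ** 0.5 + eps), m, v

# First step (t=1) from param=1.0 with gradient 0.5:
p, m, v = adam_step(param=1.0, grad=0.5, m=0.0, v=0.0, t=1)
```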
05043f0163dc924aee40bc8156f2d29e
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 121 | 0.0001 | | No log | 2.0 | 242 | 0.0000 | | No log | 3.0 | 363 | 0.0000 | | No log | 4.0 | 484 | 0.0000 | | 0.0465 | 5.0 | 605 | 0.0000 | | 0.0465 | 6.0 | 726 | 0.0000 | | 0.0465 | 7.0 | 847 | 0.0000 | | 0.0465 | 8.0 | 968 | 0.0000 | | 0.0 | 9.0 | 1089 | 0.0000 | | 0.0 | 10.0 | 1210 | 0.0000 | | 0.0 | 11.0 | 1331 | 0.0000 | | 0.0 | 12.0 | 1452 | 0.0000 | | 0.0 | 13.0 | 1573 | 0.0000 | | 0.0 | 14.0 | 1694 | 0.0000 | | 0.0 | 15.0 | 1815 | 0.0000 | | 0.0 | 16.0 | 1936 | 0.0000 | | 0.0 | 17.0 | 2057 | 0.0000 | | 0.0 | 18.0 | 2178 | 0.0000 | | 0.0 | 19.0 | 2299 | 0.0000 | | 0.0 | 20.0 | 2420 | 0.0000 | | 0.0 | 21.0 | 2541 | 0.0000 | | 0.0 | 22.0 | 2662 | 0.0000 | | 0.0 | 23.0 | 2783 | 0.0000 | | 0.0 | 24.0 | 2904 | 0.0000 | | 0.0 | 25.0 | 3025 | 0.0000 | | 0.0 | 26.0 | 3146 | 0.0000 | | 0.0 | 27.0 | 3267 | 0.0000 | | 0.0 | 28.0 | 3388 | 0.0000 | | 0.0 | 29.0 | 3509 | 0.0000 | | 0.0 | 30.0 | 3630 | 0.0000 |
a773c56415dca0a3d520efd8252534f3
mit
['generated_from_trainer']
false
xlnet-base-cased-fine-Disaster-Tweets-Part3 This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3924 - Accuracy: 0.8468 - F1: 0.8467
fe8ae2e57a2990228a44183e0c9cd7dc
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2 - mixed_precision_training: Native AMP
3785258b7f4bc68b1f28e666699a7fc9
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 203 | 0.4457 | 0.8257 | 0.8253 | | No log | 2.0 | 406 | 0.3924 | 0.8468 | 0.8467 |
f1619a5b22990914887c84c3e4f2214c
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
This model transcribes speech in the lowercase Italian alphabet including spaces, and was trained on a composite dataset comprising 487 hours of Italian speech. It is a "large" variant of Conformer-Transducer, with around 120 million parameters. See the [model architecture](
cac5f4e84566651fbc86c31318b39b75
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
NVIDIA NeMo: Training To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version. ``` pip install nemo_toolkit['all'] ```
0827ecc3cd1295e093066cc1314515ea
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
Transcribing many audio files ```shell python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py pretrained_name="nvidia/stt_it_conformer_transducer_large" audio_dir="<DIRECTORY CONTAINING AUDIO FILES>" ```
dc8c59119abf84a529828c9e19d0a639
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
Model Architecture The Conformer-Transducer model is an autoregressive variant of the Conformer model [1] for automatic speech recognition that uses Transducer loss/decoding instead of CTC loss. More details on this model can be found here: [Conformer-Transducer Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html).
7c97efd0b45499b805c98d85f676dc89
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
Training The NeMo toolkit [3] was used for training these models for several hundred epochs. These models are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_transducer/speech_to_text_rnnt_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/conformer/conformer_transducer_bpe.yaml). The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).
58ced5615aaaa52e6a7517a3d6f04b64
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
Datasets All the models in this collection are trained on a composite dataset (NeMo ASRSET) comprising 487 hours of Italian speech: - Mozilla Common Voice 11.0 (Italian) - 220 hours after data cleaning - Multilingual LibriSpeech (Italian) - 214 hours after data cleaning - VoxPopuli transcribed subset (Italian) - 53 hours after data cleaning
4526143dabe5805db5590f4e5a0eab9b
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
Performance The list of available models in this collection is shown in the following table. Performance of the ASR models is reported in terms of Word Error Rate (WER%) with greedy decoding. | Version | Tokenizer | Vocabulary Size | MCV 11.0 Dev | MCV 11.0 Test | MLS Dev | MLS Test | VoxPopuli Dev | VoxPopuli Test | Train Dataset | |---------|-----------------------|-----------------|--------------|---------------|---------|----------|---------------|----------------|--------------------| | 1.13.0 | SentencePiece Unigram | 1024 | 4.80 | 5.24 | 14.62 | 12.18 | 12.00 | 15.15 | NeMo ASRSET It 2.0 |
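WER% in the table is the word-level edit distance between hypothesis and reference, divided by the number of reference words; a minimal sketch:

```python
def wer(reference, hypothesis):
    """Word Error Rate in percent: (substitutions + insertions + deletions) / reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)
```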
0cd90a7b115946fc2d2b45c9b4bd13d8
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
Limitations Since this model was trained on publicly available speech datasets, its performance might degrade for speech that includes technical terms or vernacular the model has not been trained on. The model might also perform worse for accented speech.
33d00c6fc8caeffc67b1f672dae2ade0
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
NVIDIA Riva: Deployment [NVIDIA Riva](https://developer.nvidia.com/riva) is an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, at the edge, and embedded. Additionally, Riva provides: * World-class out-of-the-box accuracy for the most common languages, with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours * Best-in-class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization * Streaming speech recognition, Kubernetes-compatible scaling, and enterprise-grade support Although this model isn’t supported yet by Riva, the [list of supported models is here](https://huggingface.co/models?other=Riva). Check out [Riva live demo](https://developer.nvidia.com/riva
ac971f9e3d7cdc0fbe332ef8e78eb00d
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
References - [1] [Conformer: Convolution-augmented Transformer for Speech Recognition](https://arxiv.org/abs/2005.08100) - [2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece) - [3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
e47085ed824a52b2ab8d33acd65fcc95
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
Licence License to use this model is covered by the [CC-BY-4.0 License](https://creativecommons.org/licenses/by/4.0/legalcode) unless another License/Terms Of Use/EULA is clearly specified. By downloading the public and release version of the model, you accept the terms and conditions of the [CC-BY-4.0 License](https://creativecommons.org/licenses/by/4.0/legalcode).
7a1cf81d1485536a6484fcc3b92fdcc9
apache-2.0
['text-to-image', 'dalle-mini']
false
This is the [dalle-mini/dalle-mini](https://huggingface.co/dalle-mini/dalle-mini) text-to-image model, fine-tuned on 120k <title, image> pairs from the [Medium](https://medium.com) blogging platform. The full dataset can be found on Kaggle: [Medium Articles Dataset (128k): Metadata + Images](https://www.kaggle.com/datasets/succinctlyai/medium-data). The goal of this model is to probe the ability of text-to-image models to operate on text prompts that are abstract (as Medium titles usually are), as opposed to concrete descriptions of the envisioned visual scene. [More context here](https://medium.com/@turc.raluca/fine-tuning-dall-e-mini-craiyon-to-generate-blogpost-images-32903cc7aa52).
60f18dc78bb219284882a19fd01b35e0
apache-2.0
['translation']
false
opus-mt-es-pis * source languages: es * target languages: pis * OPUS readme: [es-pis](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-pis/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-pis/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-pis/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-pis/opus-2020-01-16.eval.txt)
21517e422c17c073b74e3a82792e98f5
mit
['deberta', 'fill-mask']
false
DeBERTa: Decoding-enhanced BERT with Disentangled Attention [DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB of training data. Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates. This is the DeBERTa V2 xlarge model with 24 layers and a hidden size of 1536. It has 900M total parameters and was trained on 160GB of raw data.
3ec95cfab8718af1f6ba333edbb0e308
mit
['deberta', 'fill-mask']
false
Fine-tuning on NLU tasks We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks. | Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B | |---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------| | | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S | | BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- | | RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- | | XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- | | [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 | | [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7| | [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9| |**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** | --------
ab247c49f51d58627b9b035d588a32ce
mit
['deberta', 'fill-mask']
false
Notes. - <sup>1</sup> Following RoBERTa, for RTE, MRPC, and STS-B, we fine-tune the tasks starting from [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), and [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results on SST-2/QQP/QNLI/SQuADv2 also improve slightly when starting from MNLI fine-tuned models; however, we only report numbers fine-tuned from the pretrained base models for those 4 tasks. - <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, you need to specify **--sharded_ddp** ```bash cd transformers/examples/text-classification/ export TASK_NAME=mrpc python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \ --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 4 \ --learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16 ```
92f4f64f2d92a10986f62b7cf999b2ff
mit
['deberta', 'fill-mask']
false
Citation If you find DeBERTa useful for your work, please cite the following paper: ``` latex @inproceedings{ he2021deberta, title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION}, author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen}, booktitle={International Conference on Learning Representations}, year={2021}, url={https://openreview.net/forum?id=XPZIaotutsD} } ```
33522ed5480eb2e826356bc19e59e8b1
wtfpl
[]
false
SOTA SOTA (short for Sign Of The Apocalypse) is a model pretrained on all atoms in the observable universe. It achieves state-of-the-art results on every task known to humans, including those in future generations. It was introduced in the paper [_SOTA is All You Need_](https://twitter.com/wellingmax/status/1542384014279016448?s=20&t=HOS51HLCzmPR2Xyz2Opqvw) and first released [via Twitter](https://twitter.com/josh_tobin_/status/1544371187051941890?s=20&t=Nsf8hYQKfWBSsY_XU23NDQ). Disclaimer: this model is not to be confused with the closely related, but fictitious [AGI model](https://github.com/google/agi).
e000e127d6bdf27d3dd51836683e7720
wtfpl
[]
false
Model description SOTA is a Transformer model pretrained on atomic sequences in a self-supervised fashion. Since all atoms in the Universe were used for training, no humans were available to provide the labels. By learning to predict the next atom in a sequence, SOTA is able to learn an inner representation of physics that can be used to solve all downstream tasks.
1058d01674795eceb3016d327c114062
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-checkpoint-11.1 This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-10](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-10) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 1.0173 - Wer: 0.3350
1882e3d7bb4fbc705dd4a0a0d590de3f
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.2788 | 1.52 | 1000 | 0.5776 | 0.3410 | | 0.2277 | 3.04 | 2000 | 0.6148 | 0.3465 | | 0.1772 | 4.56 | 3000 | 0.6497 | 0.3497 | | 0.1528 | 6.08 | 4000 | 0.6786 | 0.3430 | | 0.1285 | 7.6 | 5000 | 0.6779 | 0.3489 | | 0.1104 | 9.12 | 6000 | 0.7417 | 0.3528 | | 0.0965 | 10.64 | 7000 | 0.7956 | 0.3477 | | 0.0914 | 12.16 | 8000 | 0.7994 | 0.3570 | | 0.082 | 13.68 | 9000 | 0.8690 | 0.3510 | | 0.0788 | 15.2 | 10000 | 0.8569 | 0.3526 | | 0.0727 | 16.72 | 11000 | 0.8885 | 0.3440 | | 0.0656 | 18.24 | 12000 | 0.9586 | 0.3476 | | 0.0608 | 19.76 | 13000 | 0.9317 | 0.3495 | | 0.0588 | 21.28 | 14000 | 0.9809 | 0.3449 | | 0.0547 | 22.8 | 15000 | 0.9552 | 0.3421 | | 0.0519 | 24.32 | 16000 | 0.9782 | 0.3380 | | 0.0474 | 25.84 | 17000 | 0.9923 | 0.3386 | | 0.046 | 27.36 | 18000 | 0.9984 | 0.3347 | | 0.045 | 28.88 | 19000 | 1.0173 | 0.3350 |
c19d27a6615179ba6f7447ca128d6e68
apache-2.0
['generated_from_keras_callback']
false
twitter-emotion-classifier-BERT This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1487 - Train Sparse Categorical Accuracy: 0.9374 - Validation Loss: 0.1447 - Validation Sparse Categorical Accuracy: 0.9390 - Epoch: 1
04d07ad89da20ad2e4475e974c486d84
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch | |:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:| | 0.5268 | 0.8156 | 0.2002 | 0.9265 | 0 | | 0.1487 | 0.9374 | 0.1447 | 0.9390 | 1 |
2d7c0573078a27b7a3406d666e243fd1
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 100 - mixed_precision_training: Native AMP
ba78b02feeea3418b3c2685727d5f056
apache-2.0
['automatic-speech-recognition', 'fr']
false
exp_w2v2r_fr_xls-r_accent_france-2_belgium-8_s55 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
44f3f55056444281396a5cd63e058b29
mit
['generated_from_trainer']
false
deberta-v3-large__sst2__train-16-3 This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6286 - Accuracy: 0.7068
42383cbfb1409ee6dc097b32dd3a7f0f
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6955 | 1.0 | 7 | 0.7370 | 0.2857 | | 0.6919 | 2.0 | 14 | 0.6855 | 0.4286 | | 0.6347 | 3.0 | 21 | 0.5872 | 0.7143 | | 0.4016 | 4.0 | 28 | 0.6644 | 0.7143 | | 0.3097 | 5.0 | 35 | 0.5120 | 0.7143 | | 0.0785 | 6.0 | 42 | 0.5845 | 0.7143 | | 0.024 | 7.0 | 49 | 0.6951 | 0.7143 | | 0.0132 | 8.0 | 56 | 0.8972 | 0.7143 | | 0.0037 | 9.0 | 63 | 1.5798 | 0.7143 | | 0.0034 | 10.0 | 70 | 1.5178 | 0.7143 | | 0.003 | 11.0 | 77 | 1.3511 | 0.7143 | | 0.0012 | 12.0 | 84 | 1.1346 | 0.7143 | | 0.0007 | 13.0 | 91 | 0.9752 | 0.7143 | | 0.0008 | 14.0 | 98 | 0.8531 | 0.7143 | | 0.0007 | 15.0 | 105 | 0.8149 | 0.7143 |
69026856ed6223a058f652ad56dbd2cc
apache-2.0
['automatic-speech-recognition', 'ar']
false
exp_w2v2t_ar_vp-it_s284 Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ar)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
4dfac7f750ffa9c728b553a585c2bb1c
apache-2.0
['translation']
false
opus-mt-loz-fi * source languages: loz * target languages: fi * OPUS readme: [loz-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/loz-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/loz-fi/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/loz-fi/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/loz-fi/opus-2020-01-09.eval.txt)
1c0e6dbac76d59c77a83457593058695
apache-2.0
['translation']
false
epo-nld * source group: Esperanto * target group: Dutch * OPUS readme: [epo-nld](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-nld/README.md) * source language(s): epo * target language(s): nld * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-nld/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-nld/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-nld/opus-2020-06-16.eval.txt)
e2173c5d37b7b1cf1f483ba564bd0299
apache-2.0
['translation']
false
System Info: - hf_name: epo-nld - source_languages: epo - target_languages: nld - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-nld/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['eo', 'nl'] - src_constituents: {'epo'} - tgt_constituents: {'nld'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-nld/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-nld/opus-2020-06-16.test.txt - src_alpha3: epo - tgt_alpha3: nld - short_pair: eo-nl - chrF2_score: 0.337 - bleu: 15.3 - brevity_penalty: 0.8640000000000001 - ref_len: 78770.0 - src_name: Esperanto - tgt_name: Dutch - train_date: 2020-06-16 - src_alpha2: eo - tgt_alpha2: nl - prefer_old: False - long_pair: epo-nld - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
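The `brevity_penalty` and `ref_len` entries above are BLEU internals. As an illustration of how they relate, the penalty can be reproduced from the reported numbers (the derived candidate length below is an estimate inverted from the reported values, not a reported figure):

```python
import math

def brevity_penalty(candidate_len, ref_len):
    """BLEU brevity penalty (Papineni et al., 2002): 1.0 when the candidate
    is at least as long as the reference, exp(1 - r/c) otherwise."""
    if candidate_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / candidate_len)

# Inverting the reported numbers (ref_len 78770, brevity_penalty ~0.864)
# suggests the system output totalled roughly 68,700 tokens:
ref_len = 78770
cand_len = round(ref_len / (1.0 - math.log(0.864)))
print(cand_len)
print(round(brevity_penalty(cand_len, ref_len), 3))  # 0.864
```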
cec116e88ccae2383cd46a6da46971cd
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Whisper Small It - Gianluca Ruberto This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.393979 - Wer: 22.108985
0bd37002a6a751e72e9a480d745e21e1
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training and evaluation data Data used for training is the initial 10% of train and validation of [Italian Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/viewer/it/train) 11.0 from Mozilla Foundation. The dataset used for evaluation is the initial 10% of test of Italian Common Voice.
5d1627b4e5241bb7aae7eb66053684ce
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.2545 | 0.95 | 1000 | 0.3872 | 24.8891 | | 0.129 | 1.91 | 2000 | 0.3682 | 22.1991 | | 0.0534 | 2.86 | 3000 | 0.3771 | 22.4695 | | 0.0302 | 3.82 | 4000 | 0.3940 | 22.1090 |
eff3673f270d9b04a0cd32696bb76d19
mit
[]
false
GC4LM: A Colossal (Biased) language model for German This repository presents a colossal (and biased) language model for German trained on the recently released ["German colossal, clean Common Crawl corpus"](https://german-nlp-group.github.io/projects/gc4-corpus.html) (GC4), with a total dataset size of ~844GB. --- **Disclaimer**: the presented and trained language models in this repository are for **research only** purposes. The GC4 corpus - that was used for training - contains crawled texts from the internet. Thus, the language models can be considered as highly biased, resulting in a model that encodes stereotypical associations along gender, race, ethnicity and disability status. Before using and working with the released checkpoints, it is highly recommended to read: [On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?](https://faculty.washington.edu/ebender/papers/Stochastic_Parrots.pdf) from Emily M. Bender, Timnit Gebru, Angelina McMillan-Major and Shmargaret Shmitchell. The aim of the released checkpoints is to boost research on large pre-trained language models for German, especially for identifying biases and how to prevent them, as most research is currently done only for English. --- Please use the new GitHub Discussions feature in order to discuss or present further research questions. Feel free to use `
e7384f16de11946fa63536f72ae550b6
openrail++
['stable-diffusion', 'sygil-diffusion', 'text-to-image', 'sygil-devs', 'finetune', 'stable-diffusion-1.5']
false
About the model ----------------- This model is a fine-tune of Stable Diffusion, trained on the [Imaginary Network Expanded Dataset](https://github.com/Sygil-Dev/INE-dataset), with the big advantage of allowing the use of multiple namespaces (labeled tags) to control various parts of the final generation. While current models are usually prone to “context errors” and need substantial negative prompting to set them on the right track, the use of namespaces in this model (e.g. “species:seal” or “studio:dc”) stops it from misinterpreting a seal as the singer Seal, or DC Comics as Washington DC. This model is also able to understand other languages besides English; currently it can partially understand prompts in Chinese, Japanese and Spanish. More training is already being done so that the model fully understands those languages and handles them just as well as English prompts. As the model is fine-tuned on a wide variety of content, it’s able to generate many types of images and compositions, and easily outperforms the original model when it comes to portraits, architecture, reflections, fantasy, concept art, anime, landscapes and a lot more, without being hyper-specialized like other community fine-tunes that are currently available. **Note: The prompt engineering techniques needed are slightly different from other fine-tunes and the original Stable Diffusion model, so while you can still use your favorite prompts, for best results you might need to tweak them to make use of namespaces. A more detailed guide will be available later on, but the tags and namespaces found in the [Dataset Explorer](https://huggingface.co/spaces/Sygil/INE-dataset-explorer) should be able to start you off on the right track.** If you find our work useful, please consider supporting us on [OpenCollective](https://opencollective.com/sygil_dev)!
This model is still in its infancy and is meant to be constantly updated and trained with more and more data as time goes by, so feel free to give us feedback on our [Discord Server](https://discord.gg/UjXFsf6mTu) or in the discussions section on Hugging Face. We plan to improve it with more and better tags in the future, so any help is always welcome 😛 [![Join the Discord Server](https://badgen.net/discord/members/fTtcufxyHQ?icon=discord)](https://discord.gg/UjXFsf6mTu)
c26824aa588da58f72b7beab75ac34ba
openrail++
['stable-diffusion', 'sygil-diffusion', 'text-to-image', 'sygil-devs', 'finetune', 'stable-diffusion-1.5']
false
Examples Using the [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Sygil Diffusion in a simple and efficient manner. ```bash pip install diffusers transformers accelerate scipy safetensors ``` Running the pipeline (if you don't swap the scheduler it will run with the default DDIM, in this example we are swapping it to DPMSolverMultistepScheduler): ```python import torch from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler model_id = "Sygil/Sygil-Diffusion"
2e1a6a89cb34ac89bb5acf6430102465
openrail++
['stable-diffusion', 'sygil-diffusion', 'text-to-image', 'sygil-devs', 'finetune', 'stable-diffusion-1.5']
false
Use the DPMSolverMultistepScheduler (DPM-Solver++) scheduler here instead pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) pipe = pipe.to("cuda") prompt = "a beautiful illustration of a fantasy forest" image = pipe(prompt).images[0] image.save("fantasy_forest_illustration.png") ``` **Notes**: - Despite not being a dependency, we highly recommend installing [xformers](https://github.com/facebookresearch/xformers) for memory-efficient attention (better performance). - If you have low GPU RAM available, make sure to add a `pipe.enable_attention_slicing()` after sending the pipeline to `cuda` for less VRAM usage (at the cost of speed).
a8dc83bff149cd89a7ee65afd789af73
openrail++
['stable-diffusion', 'sygil-diffusion', 'text-to-image', 'sygil-devs', 'finetune', 'stable-diffusion-1.5']
false
Stable: - [Sygil Diffusion v0.1](https://huggingface.co/Sygil/Sygil-Diffusion/blob/main/sygil-diffusion-v0.1.ckpt): Trained on Stable Diffusion 1.5 for 800,000 steps. - [Sygil Diffusion v0.2](https://huggingface.co/Sygil/Sygil-Diffusion/blob/main/sygil-diffusion-v0.2.ckpt): Resumed from Sygil Diffusion v0.1 and trained for a total of 1.77 million steps. - [Sygil Diffusion v0.3](https://huggingface.co/Sygil/Sygil-Diffusion/blob/main/sygil-diffusion-v0.3.ckpt): Resumed from Sygil Diffusion v0.2 and trained for a total of 2.01 million steps. - [Sygil Diffusion v0.4](https://huggingface.co/Sygil/Sygil-Diffusion/blob/main/sygil-diffusion-v0.4.ckpt): Resumed from Sygil Diffusion v0.3 and trained for a total of 2.37 million steps.
926861bf33896e328f47bf5401cc2cfe
openrail++
['stable-diffusion', 'sygil-diffusion', 'text-to-image', 'sygil-devs', 'finetune', 'stable-diffusion-1.5']
false
Beta: - No active beta right now. Note: Checkpoints under the Beta section are updated daily, or at least 3-4 times a week; this is usually the equivalent of 1-2 training sessions. This is done until they are stable enough to be moved into a proper release, usually every 1 or 2 weeks. While the beta checkpoints can be used as they are, only the latest version is kept in the repo; older checkpoints are removed when a new one is uploaded, to keep the repo clean. The Hugging Face inference API as well as the diffusers library will always use the latest beta checkpoint in the diffusers format. For special cases we might create additional repositories to keep a copy of the diffusers model, e.g. when a model uses a different Stable Diffusion model as its base (Stable Diffusion 1.5 vs 2.1).
4c8fc34f4d93b1bd0f1a4bc9aed65625
openrail++
['stable-diffusion', 'sygil-diffusion', 'text-to-image', 'sygil-devs', 'finetune', 'stable-diffusion-1.5']
false
Training **Training Data**: The model was trained on the following dataset: - [Imaginary Network Expanded Dataset](https://github.com/Sygil-Dev/INE-dataset) dataset. **Hardware and others** - **Hardware:** 1 x Nvidia RTX 3050 8GB GPU - **Hours Trained:** 857 hours approximately. - **Optimizer:** AdamW - **Adam Beta 1**: 0.9 - **Adam Beta 2**: 0.999 - **Adam Weight Decay**: 0.01 - **Adam Epsilon**: 1e-8 - **Gradient Checkpointing**: True - **Gradient Accumulations**: 400 - **Batch:** 1 - **Learning Rate:** 1e-7 - **Learning Rate Scheduler:** cosine_with_restarts - **Learning Rate Warmup Steps:** 10,000 - **Lora unet Learning Rate**: 1e-7 - **Lora Text Encoder Learning Rate**: 1e-7 - **Resolution**: 512 pixels - **Total Training Steps:** 2,370,200 Note: For the learning rate I'm testing something new. After switching from the `constant` scheduler to `cosine_with_restarts` once v0.3 was released, I noticed it practically uses the optimal learning rate while minimizing the loss value. So, when each training session finishes, I start the next session from the learning rate shown for the last few steps of the previous one, which makes it decrease at a constant rate over time. When I add a lot of data to the training dataset at once, I move the learning rate back to 1e-7; the scheduler then lowers it again as it learns from the new data. This keeps the training from overfitting, or from using a learning rate so low that the model stops learning anything new for a while. Developed by: [ZeroCool94](https://github.com/ZeroCool940711) at [Sygil-Dev](https://github.com/Sygil-Dev/)
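With a batch of 1 and 400 gradient accumulations, the optimizer effectively steps on the mean gradient of 400 samples. A toy sketch of that bookkeeping (scalar weight and made-up gradients — not the actual training loop):

```python
def sgd_with_accumulation(grads, lr=1e-7, accum_steps=400):
    """Plain SGD on a scalar weight, applying one optimizer step per
    `accum_steps` micro-batch gradients. Averaging the buffered gradients
    makes this equivalent to an effective batch of
    batch_size * accum_steps (here 1 * 400 = 400)."""
    w = 0.0
    buffer = 0.0
    updates = 0
    for i, g in enumerate(grads, start=1):
        buffer += g / accum_steps       # accumulate the running mean
        if i % accum_steps == 0:
            w -= lr * buffer            # one real optimizer step
            buffer = 0.0
            updates += 1
    return w, updates

# 800 made-up unit gradients -> exactly two optimizer steps
w, updates = sgd_with_accumulation([1.0] * 800)
print(updates)  # 2
```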
418bf55bf218af42c4ef3d7ceebcf990
openrail++
['stable-diffusion', 'sygil-diffusion', 'text-to-image', 'sygil-devs', 'finetune', 'stable-diffusion-1.5']
false
Community Contributions: - [Kevin Turner (keturn)](https://huggingface.co/keturn): created the [INE-dataset-explorer](https://huggingface.co/spaces/Sygil/INE-dataset-explorer) space for better browsing of the INE dataset. *This model card is based on the [Stable Diffusion v1](https://github.com/CompVis/stable-diffusion/blob/main/Stable_Diffusion_v1_Model_Card.md) and [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
a03523f3a2f497733ad67583fedd1987
openrail++
['stable-diffusion', 'sygil-diffusion', 'text-to-image', 'sygil-devs', 'finetune', 'stable-diffusion-1.5']
false
License This model is open access and available to all, with a CreativeML Open RAIL++-M License further specifying rights and usage. [Please read the full license here](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL)
04ea052984f3b0cd2c1ef9bd2c7dcf36
apache-2.0
['generated_from_keras_callback']
false
long-t5-tglobal-base This model is a fine-tuned version of [google/long-t5-tglobal-base](https://huggingface.co/google/long-t5-tglobal-base) on an unknown dataset. It achieves the following results on the evaluation set:
1c5cfbf7acb565a24c1bf1edf9cfaf19
apache-2.0
['automatic-speech-recognition', 'ja']
false
exp_w2v2t_ja_xlsr-53_s705 Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
1be5b1c2c6f0632374ad818c4733f3ae
apache-2.0
['generated_from_trainer']
false
distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.6424
93f7a88b5245d776f3edf241829fe2a1
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7598 | 1.0 | 2334 | 3.6654 | | 3.6321 | 2.0 | 4668 | 3.6453 | | 3.6076 | 3.0 | 7002 | 3.6424 |
f33edab2c0a8ad15ad579ed8e0e6d78d
mit
['generated_from_trainer']
false
roberta-base.CEBaB_confounding.price_food_ambiance_negative.absa.5-class.seed_43 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the OpenTable OPENTABLE-ABSA dataset. It achieves the following results on the evaluation set: - Loss: 0.4429 - Accuracy: 0.8778 - Macro-f1: 0.8771 - Weighted-macro-f1: 0.8779
69b183c6c2c630c351d04b3604a654fc
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP
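For reference, a single Adam update with the betas and epsilon listed above follows Kingma & Ba's bias-corrected form. A scalar toy sketch (not the trainer's actual optimizer code):

```python
import math

def adam_step(w, g, m, v, t, lr=3e-5, beta1=0.9, beta2=0.999, eps=1e-8):
    """One bias-corrected Adam update (Kingma & Ba, 2015) using the betas
    and epsilon listed above. Scalar toy version for illustration."""
    m = beta1 * m + (1 - beta1) * g          # first-moment estimate
    v = beta2 * v + (1 - beta2) * g * g      # second-moment estimate
    m_hat = m / (1 - beta1 ** t)             # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v

# The first step on any nonzero gradient moves the weight by roughly lr,
# regardless of the gradient's magnitude:
w, m, v = adam_step(0.0, 1.0, 0.0, 0.0, t=1)
print(round(w, 8))  # -3e-05
```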
95c4b4d54fec77a4cbf7261efac9a544
apache-2.0
['bert', 'biomedical']
false
Automatic Biomedical Term Clustering by Learning Fine-grained Term Representations. CODER++ ``` @misc{https://doi.org/10.48550/arxiv.2204.00391, doi = {10.48550/ARXIV.2204.00391}, url = {https://arxiv.org/abs/2204.00391}, author = {Zeng, Sihang and Yuan, Zheng and Yu, Sheng}, title = {Automatic Biomedical Term Clustering by Learning Fine-grained Term Representations}, publisher = {arXiv}, year = {2022} } ```
fa90f9387e9f7051c0a80a8fb14b2751
cc-by-4.0
['answer extraction']
false
Model Card of `lmqg/mbart-large-cc25-koquad-ae` This model is fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) for answer extraction on the [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
775e19ebb4b27277e7978ad92ab5c0b4
cc-by-4.0
['answer extraction']
false
Overview - **Language model:** [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) - **Language:** ko - **Training data:** [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
97f46c40255b8285467be190015a7867
cc-by-4.0
['answer extraction']
false
model prediction answers = model.generate_a("1990년 영화 《 남부군 》에서 단역으로 영화배우 첫 데뷔에 이어 같은 해 KBS 드라마 《지구인》에서 단역으로 출연하였고 이듬해 MBC 《여명의 눈동자》를 통해 단역으로 출연하였다.") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/mbart-large-cc25-koquad-ae") output = pipe("또한 스피어스는 많은 새로운 여성 아티스트들에게 영향을 끼쳤는데, 대표적으로 데미 로바토, 케이티 페리, 크리스티니아 드바지, 레이디 가가, 리틀 부츠, 셀레나 고메즈 & 더씬, 픽시 로트 이 있다. 2007년 비욘세 놀스는 Total Request Live와의 인터뷰에서 '나는 브리트니를 사랑하고 팬이에요. 특히 새 앨범 Blackout을 좋아해요'라고 말했다. 린제이 로한은 '언제나 브리트니 스피어스에게 영감을 받는다. 학창시절 그녀처럼 타블로이드에 오르기를 꿈꿔왔다'고 말하며 롤 모델로 꼽았다. 스피어스는 현대 음악가들에게 음악적 영감으로 언급되기도 했다. <hl> 마일리 사이러스는 자신의 히트곡 Party in the U.S.A. 가 브리트니에게 영감과 영향을 받은 곡이라고 밝혔다. <hl> 베리 매닐로우의 앨범 15 Minutes 역시 브리트니에게 영감을 얻었다고 언급되었다.") ```
7dabce16704fb51a93732f3a7fc4be64
cc-by-4.0
['answer extraction']
false
Evaluation - ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-koquad-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_koquad.default.json) | | Score | Type | Dataset | |:-----------------|--------:|:--------|:-----------------------------------------------------------------| | AnswerExactMatch | 79.92 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | AnswerF1Score | 86.7 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | BERTScore | 95.67 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | Bleu_1 | 76.79 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | Bleu_2 | 68.63 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | Bleu_3 | 57.06 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | Bleu_4 | 40.87 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | METEOR | 58.4 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | MoverScore | 94.72 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | ROUGE_L | 81.24 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
5e930932613df628434bb98207389208
cc-by-4.0
['answer extraction']
false
Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_koquad - dataset_name: default - input_types: ['paragraph_sentence'] - output_types: ['answer'] - prefix_types: None - model: facebook/mbart-large-cc25 - max_length: 512 - max_length_output: 32 - epoch: 10 - batch: 8 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 8 - label_smoothing: 0.0 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mbart-large-cc25-koquad-ae/raw/main/trainer_config.json).
7762067b8aad770d5a19f6501f04290e
apache-2.0
['automatic-speech-recognition', 'pt']
false
exp_w2v2t_pt_xls-r_s17 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
5a80b502cf8e6f8938e2758696f14942
cc-by-4.0
['espnet', 'audio', 'text-to-speech']
false
`kan-bayashi/vctk_tts_train_gst_conformer_fastspeech2_raw_phn_tacotron_g2p_en_no_space_train.loss.ave` ♻️ Imported from https://zenodo.org/record/4036264/ This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
29822325888122671fdaec6fb65a1e3a
apache-2.0
['automatic-speech-recognition', 'et']
false
exp_w2v2t_et_unispeech_s605 Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (et)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
010d106ee7c4b36341e99bbe812c2063
apache-2.0
[]
false
Model description **CAMeLBERT Mix SA Model** is a Sentiment Analysis (SA) model that was built by fine-tuning the [CAMeLBERT Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model. For the fine-tuning, we used the [ASTD](https://aclanthology.org/D15-1299.pdf), [ArSAS](http://lrec-conf.org/workshops/lrec2018/W30/pdf/22_W30.pdf), and [SemEval](https://aclanthology.org/S17-2088.pdf) datasets. Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
f790b32e04086a72b25d23a98cb6e78e
apache-2.0
[]
false
Intended uses You can use the CAMeLBERT Mix SA model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component (*recommended*) or as part of the transformers pipeline.
16e5692a1a67aec5fb5a8ddebb4d1725
apache-2.0
[]
false
How to use To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component: ```python >>> from camel_tools.sentiment import SentimentAnalyzer >>> sa = SentimentAnalyzer("CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment") >>> sentences = ['أنا بخير', 'أنا لست بخير'] >>> sa.predict(sentences) ['positive', 'negative'] ``` You can also use the SA model directly with a transformers pipeline: ```python >>> from transformers import pipeline >>> sa = pipeline('sentiment-analysis', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment') >>> sentences = ['أنا بخير', 'أنا لست بخير'] >>> sa(sentences) [{'label': 'positive', 'score': 0.9616648554801941}, {'label': 'negative', 'score': 0.9779177904129028}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually.
3fee5db1c024e1fa090d501182d33070
apache-2.0
['Quality Estimation', 'monotransquest', 'DA']
false
Using Pre-trained Models ```python import torch from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-da-any_en", num_labels=1, use_cuda=torch.cuda.is_available()) predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]]) print(predictions) ```
6aa0cb1505157b2ec55fce19665872f9
mit
['exbert']
false
GPT-2 Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at [this page](https://openai.com/blog/better-language-models/). Disclaimer: The team releasing GPT-2 also wrote a [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
56ee3cf8d5a595a5a327d06a71c3a32f
mit
['exbert']
false
How to use You can use this model directly with the `tf_transformers` library to run a forward pass and extract features: ```python from tf_transformers.models import GPT2Model from transformers import GPT2Tokenizer tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium') model = GPT2Model.from_pretrained("gpt2-medium") text = "Replace me by any text you'd like." inputs_tf = {} inputs = tokenizer(text, return_tensors='tf') inputs_tf["input_ids"] = inputs["input_ids"] outputs_tf = model(inputs_tf) ```
a8ef89d23048a6ddde06777f3c100727
mit
['exbert']
false
Out-of-scope use cases: > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases > that require the generated text to be true. > > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do > not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a > study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, > and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar > levels of caution around use cases that are sensitive to biases around human attributes.
fe228a707435a1e7a3a60b06e02343c5
apache-2.0
['generated_from_trainer']
false
t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 2.4789 - Rouge1: 28.2786 - Rouge2: 7.6957 - Rougel: 22.1976 - Rougelsum: 22.2034 - Gen Len: 18.8238
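The Rouge1/Rouge2/RougeL numbers above are n-gram and longest-common-subsequence overlap F-scores. A stripped-down sketch of ROUGE-1 F1 (the real `rouge_score` package also applies optional stemming and bootstrap aggregation over the eval set, omitted here):

```python
from collections import Counter

def rouge1_f(candidate: str, reference: str) -> float:
    """Unigram-overlap ROUGE-1 F1 between two texts. Simplified: no
    stemming, no aggregation -- illustration only."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())     # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f("the cat sat on the mat", "the cat lay on the mat"), 3))  # 0.833
```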
4fac949e88ee3a1477d3a52b6a339e63
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.7189 | 1.0 | 12753 | 2.4789 | 28.2786 | 7.6957 | 22.1976 | 22.2034 | 18.8238 |
52237f75bc6408c096ac805db18d8bcb
apache-2.0
['generated_from_trainer']
false
t5-small_corrector_15 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3416 - Rouge1: 34.7998 - Rouge2: 9.0842 - Rougel: 27.8188 - Rougelsum: 27.839 - Gen Len: 18.5561
35329939a4dc7b0384ab03686c47faea
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP
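The `linear` scheduler with no warmup decays the learning rate in a straight line from `2e-05` to zero over training. A minimal sketch (total_steps=11825 is an assumption based on this run's 5 epochs at 2365 steps each; transformers' `get_linear_schedule_with_warmup` with `num_warmup_steps=0` behaves this way):

```python
def linear_decay_lr(step, base_lr=2e-5, total_steps=11825):
    """Learning rate under a warmup-free `linear` scheduler: straight-line
    decay from base_lr to 0 over training. total_steps assumes
    5 epochs x 2365 steps for this run."""
    return base_lr * max(0.0, (total_steps - step) / total_steps)

print(linear_decay_lr(0))      # 2e-05
print(linear_decay_lr(11825))  # 0.0
```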
1d40dc3a982e0a71362873be1daf46e6
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 4.2274 | 1.0 | 2365 | 2.9386 | 10.1244 | 1.0024 | 9.1029 | 9.1104 | 18.5377 | | 2.7936 | 2.0 | 4730 | 2.0196 | 17.7168 | 3.0899 | 15.1305 | 15.1353 | 18.8883 | | 2.2678 | 3.0 | 7095 | 1.7072 | 26.8501 | 5.7804 | 22.0034 | 22.0213 | 18.839 | | 1.9029 | 4.0 | 9460 | 1.5254 | 32.9484 | 7.8531 | 26.4538 | 26.4749 | 18.502 | | 1.5936 | 5.0 | 11825 | 1.3416 | 34.7998 | 9.0842 | 27.8188 | 27.839 | 18.5561 |
9092009906b137b13e2726f613bdfcd7
apache-2.0
[]
false
Disclaimer Like most AI models, this classifier is not 100% accurate. Please do not take the results of this model as fact. The best version had 96% accuracy distinguishing images from aibooru from images from the imageboard sites. However, the success you have with this model will vary based on the images you are trying to classify. Here are some biases I have noticed from my testing: - Images on aibooru, the site the AI images were taken from, were high-quality AI generations. Low-quality AI generations have a higher chance of being misclassified - Textual inversions and hypernetworks increase the chance of misclassification
c292fb3caaecc5c9cd5299cfe2fcf2f5
apache-2.0
[]
false
Training This model was trained from microsoft/beit-base-patch16-224 for one epoch on 11 thousand images from imageboard sites, and 11 thousand images from aibooru. You can view the wandb run [here](https://wandb.ai/saltacc/huggingface/runs/2mp30x7j?workspace=user-saltacc).
766faed82acf933814db3dcaa532c26e
apache-2.0
[]
false
Use Case I don't intend for this model to be more accurate than humans at detecting AI art. I think the best use cases for this model are those where misclassification isn't a big deal, such as removing AI art from a training dataset.
067f38c1da654bc2d1cb4d11f0218942
apache-2.0
['translation']
false
vie-rus * source group: Vietnamese * target group: Russian * OPUS readme: [vie-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-rus/README.md) * source language(s): vie * target language(s): rus * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-rus/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-rus/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-rus/opus-2020-06-17.eval.txt)
14373c238d5c8fb5430b7607d8080179
apache-2.0
['translation']
false
System Info: - hf_name: vie-rus - source_languages: vie - target_languages: rus - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-rus/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['vi', 'ru'] - src_constituents: {'vie', 'vie_Hani'} - tgt_constituents: {'rus'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-rus/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-rus/opus-2020-06-17.test.txt - src_alpha3: vie - tgt_alpha3: rus - short_pair: vi-ru - chrF2_score: 0.331 - bleu: 16.9 - brevity_penalty: 0.878 - ref_len: 2207.0 - src_name: Vietnamese - tgt_name: Russian - train_date: 2020-06-17 - src_alpha2: vi - tgt_alpha2: ru - prefer_old: False - long_pair: vie-rus - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
573542ff8e50e5a2583fe74494966455
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased_cls_bbc-news

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.1140
- Accuracy: 0.976
bb32759b0ac68e8f10ee77a42d3be003
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 5
- mixed_precision_training: Native AMP
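A cosine scheduler with a 0.2 warmup ratio ramps the learning rate linearly to 4e-05 over the first 20% of steps, then decays it along a half-cosine to zero. A minimal plain-Python sketch of that shape (mirroring transformers' `get_cosine_schedule_with_warmup`; the 385 total steps are taken from this model's training log, and the function is an illustration, not the library implementation):

```python
import math

def cosine_lr_with_warmup(step, total_steps, base_lr=4e-5, warmup_ratio=0.2):
    """LR at a given step: linear warmup for the first `warmup_ratio`
    of training, then half-cosine decay from base_lr down to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

total = 385  # 5 epochs x 77 optimizer steps per epoch
print(cosine_lr_with_warmup(0, total))    # 0.0   (start of warmup)
print(cosine_lr_with_warmup(77, total))   # 4e-05 (peak: 77 == 385 * 0.2)
```

The cosine tail means most of the decay happens in the middle epochs, leaving a near-zero rate at the end of training.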
d5fec8fff5e8df351c1175352021fb20
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 77   | 0.2531          | 0.944    |
| No log        | 2.0   | 154  | 0.0971          | 0.973    |
| No log        | 3.0   | 231  | 0.0951          | 0.977    |
| No log        | 4.0   | 308  | 0.1166          | 0.975    |
| No log        | 5.0   | 385  | 0.1140          | 0.976    |
9dba5709cc9d3c7ed39c385ca98a74c8
mit
[]
false
paolo bonolis on Stable Diffusion

This is the `<paolo-bonolis>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as an `object`:

![<paolo-bonolis> 0](https://huggingface.co/sd-concepts-library/paolo-bonolis/resolve/main/concept_images/3.jpeg)
![<paolo-bonolis> 1](https://huggingface.co/sd-concepts-library/paolo-bonolis/resolve/main/concept_images/1.jpeg)
![<paolo-bonolis> 2](https://huggingface.co/sd-concepts-library/paolo-bonolis/resolve/main/concept_images/0.jpeg)
![<paolo-bonolis> 3](https://huggingface.co/sd-concepts-library/paolo-bonolis/resolve/main/concept_images/2.jpeg)
b75aa62e63ebaea498d7caa18c0b383d
apache-2.0
['generated_from_trainer']
false
distilbert-finetuned-fakenews

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.0049
- Accuracy: 0.9995
- F1: 0.9995
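Accuracy and F1 coincide here, which is typical when a binary classifier's false positives and false negatives are nearly balanced. A small sketch of how binary F1 follows from precision and recall, using made-up confusion counts (not this model's actual predictions):

```python
def binary_f1(tp: int, fp: int, fn: int) -> float:
    """F1 is the harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical confusion counts for a 2000-example eval set in which
# the classifier makes exactly one error of each kind.
tp, fp, fn, tn = 999, 1, 1, 999
accuracy = (tp + tn) / (tp + fp + fn + tn)
print(accuracy)                          # 0.999
print(round(binary_f1(tp, fp, fn), 4))   # 0.999
```

With symmetric errors like these, precision equals recall, so their harmonic mean (F1) lands on the same value as accuracy.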
a0fdf3ffcf37da207d2e79235bebe291
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0392        | 1.0   | 500  | 0.0059          | 0.999    | 0.999  |
| 0.002         | 2.0   | 1000 | 0.0047          | 0.9995   | 0.9995 |
| 0.0001        | 3.0   | 1500 | 0.0047          | 0.9995   | 0.9995 |
| 0.0001        | 4.0   | 2000 | 0.0049          | 0.9995   | 0.9995 |
| 0.0           | 5.0   | 2500 | 0.0049          | 0.9995   | 0.9995 |
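Training loss reaches 0.0 by the final epoch while validation loss bottoms out around epochs 2-3 and then creeps back up, a mild overfitting signature. A small sketch (pure Python, using the validation losses from this log) of selecting the earliest checkpoint with the lowest validation loss:

```python
# (epoch, validation_loss) pairs transcribed from the training log.
history = [(1, 0.0059), (2, 0.0047), (3, 0.0047), (4, 0.0049), (5, 0.0049)]

# min() with a key on the loss returns the first row achieving the
# minimum, i.e. the earliest best checkpoint when losses tie.
best_epoch, best_loss = min(history, key=lambda row: row[1])
print(best_epoch, best_loss)  # 2 0.0047
```

This is the same selection the Trainer performs when `load_best_model_at_end` is enabled with validation loss as the metric.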
ee2017dfd30b55481cca5745b86eabee
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
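Unlike the cosine variant, a linear scheduler with a 0.1 warmup ratio ramps to the peak rate over the first 10% of steps and then decays in a straight line to zero. A plain-Python sketch of the shape (mirroring transformers' `get_linear_schedule_with_warmup`; the 1000-step total is hypothetical, since the actual count depends on the dataset size):

```python
def linear_lr_with_warmup(step, total_steps, base_lr=3e-5, warmup_ratio=0.1):
    """Linear warmup over the first `warmup_ratio` of steps, then a
    straight-line decay from base_lr down to 0 at the final step."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    remaining = (total_steps - step) / (total_steps - warmup_steps)
    return base_lr * remaining

total = 1000  # hypothetical total optimizer steps
print(linear_lr_with_warmup(0, total))     # 0.0   (start of warmup)
print(linear_lr_with_warmup(100, total))   # 3e-05 (peak at end of warmup)
print(linear_lr_with_warmup(1000, total))  # 0.0   (end of training)
```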
2e5b12f975401c9f93c134b26882bcd1