license | tags | is_nc | readme_section | hash |
|---|---|---|---|---|
mit | ['generated_from_trainer'] | false | lilt-en-funsd This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on the funsd-layoutlmv3 dataset. It achieves the following results on the evaluation set: - Loss: 1.6459 - Answer: {'precision': 0.8831942789034565, 'recall': 0.9069767441860465, 'f1': 0.894927536231884, 'number': 817} - Header: {'precision': 0.6213592233009708, 'recall': 0.5378151260504201, 'f1': 0.5765765765765765, 'number': 119} - Question: {'precision': 0.8998178506375227, 'recall': 0.9173630454967502, 'f1': 0.9085057471264367, 'number': 1077} - Overall Precision: 0.8789 - Overall Recall: 0.8907 - Overall F1: 0.8848 - Overall Accuracy: 0.8068 | 3f780a796a244986bfededd8f9f952f3 |
mit | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 2000 - mixed_precision_training: Native AMP | c241f9f474da6b44afe9026f8d84e999 |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:| | 0.4201 | 10.53 | 200 | 0.8003 | {'precision': 0.8321995464852607, 'recall': 0.8984088127294981, 'f1': 0.8640376692171865, 'number': 817} | {'precision': 0.5714285714285714, 'recall': 0.5714285714285714, 'f1': 0.5714285714285714, 'number': 119} | {'precision': 0.8651079136690647, 'recall': 0.89322191272052, 'f1': 0.8789401553220649, 'number': 1077} | 0.8348 | 0.8763 | 0.8551 | 0.8104 | | 0.0376 | 21.05 | 400 | 1.3158 | {'precision': 0.8395904436860068, 'recall': 0.9033047735618115, 'f1': 0.8702830188679245, 'number': 817} | {'precision': 0.4785714285714286, 'recall': 0.5630252100840336, 'f1': 0.5173745173745175, 'number': 119} | {'precision': 0.8887814313346228, 'recall': 0.8532961931290622, 'f1': 0.8706774040738986, 'number': 1077} | 0.8397 | 0.8564 | 0.8480 | 0.7934 | | 0.0119 | 31.58 | 600 | 1.4791 | {'precision': 0.8752941176470588, 'recall': 0.9106487148102815, 'f1': 0.8926214757048591, 'number': 817} | {'precision': 0.5401459854014599, 'recall': 0.6218487394957983, 'f1': 0.578125, 'number': 119} | {'precision': 0.8818681318681318, 'recall': 0.8941504178272981, 'f1': 0.8879668049792531, 'number': 1077} | 0.8567 | 0.8847 | 0.8705 | 0.7961 | | 0.0061 | 42.11 | 800 | 1.5605 | {'precision': 0.8617886178861789, 'recall': 0.9082007343941249, 'f1': 0.8843861740166865, 'number': 817} | {'precision': 0.5963302752293578, 'recall': 0.5462184873949579, 'f1': 0.5701754385964912, 'number': 119} | {'precision': 0.8747763864042933, 'recall': 0.9080779944289693, 'f1': 0.8911161731207289, 'number': 1077} | 0.8549 | 0.8867 | 0.8705 | 0.7965 | | 0.0026 | 52.63 | 1000 | 1.5172 | {'precision': 0.8596491228070176, 'recall': 0.8996328029375765, 'f1': 0.8791866028708135, 'number': 817} | {'precision': 0.7176470588235294, 'recall': 0.5126050420168067, 'f1': 0.5980392156862744, 'number': 119} | {'precision': 0.8737864077669902, 'recall': 0.9192200557103064, 'f1': 0.8959276018099548, 'number': 1077} | 0.8616 | 0.8872 | 0.8742 | 0.8014 | | 0.0019 | 63.16 | 1200 | 1.6132 | {'precision': 0.8735224586288416, 'recall': 0.9045287637698899, 'f1': 0.888755261575466, 'number': 817} | {'precision': 0.6460176991150443, 'recall': 0.6134453781512605, 'f1': 0.6293103448275863, 'number': 119} | {'precision': 0.881508078994614, 'recall': 0.9117920148560817, 'f1': 0.8963943404837974, 'number': 1077} | 0.8654 | 0.8912 | 0.8781 | 0.8040 | | 0.0012 | 73.68 | 1400 | 1.6459 | {'precision': 0.8831942789034565, 'recall': 0.9069767441860465, 'f1': 0.894927536231884, 'number': 817} | {'precision': 0.6213592233009708, 'recall': 0.5378151260504201, 'f1': 0.5765765765765765, 'number': 119} | {'precision': 0.8998178506375227, 'recall': 0.9173630454967502, 'f1': 0.9085057471264367, 'number': 1077} | 0.8789 | 0.8907 | 0.8848 | 0.8068 | | 0.0005 | 84.21 | 1600 | 1.5619 | {'precision': 0.8602771362586605, 'recall': 0.9118727050183598, 'f1': 0.8853238265002972, 'number': 817} | {'precision': 
0.6631578947368421, 'recall': 0.5294117647058824, 'f1': 0.5887850467289719, 'number': 119} | {'precision': 0.8944494995450409, 'recall': 0.9127205199628597, 'f1': 0.9034926470588234, 'number': 1077} | 0.8694 | 0.8897 | 0.8795 | 0.8155 | | 0.0003 | 94.74 | 1800 | 1.6571 | {'precision': 0.8649592549476135, 'recall': 0.9094247246022031, 'f1': 0.886634844868735, 'number': 817} | {'precision': 0.6391752577319587, 'recall': 0.5210084033613446, 'f1': 0.5740740740740741, 'number': 119} | {'precision': 0.8971792538671519, 'recall': 0.9155060352831941, 'f1': 0.90625, 'number': 1077} | 0.8715 | 0.8897 | 0.8805 | 0.8098 | | 0.0003 | 105.26 | 2000 | 1.6731 | {'precision': 0.8672875436554133, 'recall': 0.9118727050183598, 'f1': 0.8890214797136038, 'number': 817} | {'precision': 0.62, 'recall': 0.5210084033613446, 'f1': 0.5662100456621004, 'number': 119} | {'precision': 0.9008264462809917, 'recall': 0.9108635097493036, 'f1': 0.9058171745152355, 'number': 1077} | 0.8730 | 0.8882 | 0.8806 | 0.8071 | | 5094eb04f917311f9b84548db59e1485 |
apache-2.0 | ['generated_from_trainer'] | false | emotion_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 1.3046 - Accuracy: 0.7938 | ea27567eb173e3ca49130e9edb1d9ae8 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 204 | 1.1915 | 0.7854 | | No log | 2.0 | 408 | 1.1624 | 0.7889 | | 0.0451 | 3.0 | 612 | 1.1865 | 0.7952 | | 0.0451 | 4.0 | 816 | 1.2653 | 0.7945 | | 0.0154 | 5.0 | 1020 | 1.3046 | 0.7938 | | a0881ffa822fa1538b6c2ad983aeb351 |
apache-2.0 | ['hf-asr-leaderboard', 'generated_from_trainer'] | false | Whisper Tiny Dutch 25 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.7024 - Wer: 42.0655 | aa6152a931d89098714c4b2bcc1a4fab |
apache-2.0 | ['hf-asr-leaderboard', 'generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 2000 - mixed_precision_training: Native AMP | 251376bf1a95e440355390283b704a2a |
apache-2.0 | ['hf-asr-leaderboard', 'generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.5563 | 0.78 | 500 | 0.7838 | 47.5002 | | 0.3949 | 1.56 | 1000 | 0.7301 | 43.9570 | | 0.2666 | 2.34 | 1500 | 0.7103 | 42.8426 | | 0.2307 | 3.12 | 2000 | 0.7024 | 42.0655 | | 594acc58562c873499991f811ad97424 |
mit | [] | false | Scarlet witch on Stable Diffusion This is the `<sw-mom>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`:     | 4aaf24a84ab2392b7de0c20bdb19656f |
mit | ['generated_from_keras_callback'] | false | ishaankul67/Adult_contemporary_music-clustered This model is a fine-tuned version of [nandysoham16/15-clustered_aug](https://huggingface.co/nandysoham16/15-clustered_aug) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3734 - Train End Logits Accuracy: 0.9167 - Train Start Logits Accuracy: 0.8889 - Validation Loss: 0.1582 - Validation End Logits Accuracy: 0.8571 - Validation Start Logits Accuracy: 1.0 - Epoch: 0 | 2b764caf38c93c0c0f7c4f40726e5038 |
mit | ['generated_from_keras_callback'] | false | Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 0.3734 | 0.9167 | 0.8889 | 0.1582 | 0.8571 | 1.0 | 0 | | 21a9f463a3071f394c413a459b8ea884 |
other | ['vision', 'image-segmentation', 'generated_from_trainer'] | false | segformer-b5-finetuned-magic-cards-230117-3 This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the andrewljohnson/magic_cards dataset. It achieves the following results on the evaluation set: - Loss: 0.0691 - Mean Iou: 0.6585 - Mean Accuracy: 0.9878 - Overall Accuracy: 0.9912 - Accuracy Unlabeled: nan - Accuracy Front: 0.9978 - Accuracy Back: 0.9777 - Iou Unlabeled: 0.0 - Iou Front: 0.9978 - Iou Back: 0.9777 | 96452cb86ddcf69391eecf3889fac826 |
other | ['vision', 'image-segmentation', 'generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 | 3c9d353a27dce4a40891a43b657e8585 |
other | ['vision', 'image-segmentation', 'generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Front | Accuracy Back | Iou Unlabeled | Iou Front | Iou Back | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:--------------:|:-------------:|:-------------:|:---------:|:--------:| | 1.2232 | 0.37 | 20 | 0.4691 | 0.6041 | 0.9201 | 0.9218 | nan | 0.9252 | 0.9150 | 0.0 | 0.9252 | 0.8870 | | 0.2718 | 0.74 | 40 | 0.1983 | 0.6509 | 0.9764 | 0.9785 | nan | 0.9826 | 0.9702 | 0.0 | 0.9826 | 0.9702 | | 0.255 | 1.11 | 60 | 0.0939 | 0.6524 | 0.9785 | 0.9794 | nan | 0.9812 | 0.9758 | 0.0 | 0.9812 | 0.9758 | | 0.1103 | 1.48 | 80 | 0.0682 | 0.6536 | 0.9804 | 0.9813 | nan | 0.9830 | 0.9779 | 0.0 | 0.9830 | 0.9779 | | 0.1373 | 1.85 | 100 | 0.1260 | 0.6631 | 0.9946 | 0.9961 | nan | 0.9989 | 0.9903 | 0.0 | 0.9989 | 0.9903 | | 0.0566 | 2.22 | 120 | 0.1558 | 0.6578 | 0.9868 | 0.9912 | nan | 0.9999 | 0.9736 | 0.0 | 0.9999 | 0.9736 | | 0.1535 | 2.59 | 140 | 0.1330 | 0.6558 | 0.9838 | 0.9883 | nan | 0.9973 | 0.9703 | 0.0 | 0.9973 | 0.9703 | | 0.0586 | 2.96 | 160 | 0.2317 | 0.6599 | 0.9899 | 0.9933 | nan | 1.0000 | 0.9798 | 0.0 | 1.0000 | 0.9798 | | 0.0727 | 3.33 | 180 | 0.1018 | 0.6586 | 0.9880 | 0.9919 | nan | 0.9995 | 0.9764 | 0.0 | 0.9995 | 0.9764 | | 0.3588 | 3.7 | 200 | 0.1151 | 0.6608 | 0.9912 | 0.9939 | nan | 0.9993 | 0.9831 | 0.0 | 0.9993 | 0.9831 | | 0.0463 | 4.07 | 220 | 0.0538 | 0.6610 | 0.9915 | 0.9934 | nan | 0.9969 | 0.9862 | 0.0 | 0.9969 | 0.9862 | | 0.046 | 4.44 | 240 | 0.1201 | 0.6581 | 0.9871 | 0.9912 | nan | 0.9991 | 0.9751 | 0.0 | 0.9991 | 0.9751 | | 0.0468 | 4.81 | 260 | 0.0691 | 0.6585 | 0.9878 | 0.9912 | nan | 0.9978 | 0.9777 | 0.0 | 0.9978 | 0.9777 | | 4edd947498d850f8f211716844fff070 |
apache-2.0 | ['whisper-event', 'generated_from_trainer'] | false | Whisper small Greek Farsipal and El Greco This model is a fine-tuned version of [emilios/whisper-sm-el-farsipal-e4](https://huggingface.co/emilios/whisper-sm-el-farsipal-e4) on the mozilla-foundation/common_voice_11_0,google/fleurs el,el_gr dataset. It achieves the following results on the evaluation set: - Loss: 0.4871 - Wer: 17.1991 | 452852ac65f4fc3c96646b80d6ef71aa |
apache-2.0 | ['whisper-event', 'generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 20000 | d83f8bee6013d8aa9059bf53947edf3c |
apache-2.0 | ['whisper-event', 'generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:-------:| | 0.1259 | 2.49 | 1000 | 0.4834 | 18.3692 | | 0.1002 | 4.49 | 2000 | 0.4604 | 17.8027 | | 0.1096 | 6.98 | 3000 | 0.4553 | 17.8770 | | 0.0885 | 9.46 | 4000 | 0.4551 | 17.9606 | | 0.0675 | 11.95 | 5000 | 0.4631 | 17.9049 | | 0.0675 | 14.44 | 6000 | 0.4619 | 17.9049 | | 0.0645 | 16.93 | 7000 | 0.4678 | 17.6727 | | 0.0535 | 19.41 | 8000 | 0.4685 | 17.6634 | | 0.039 | 21.49 | 9000 | 0.4746 | 17.6727 | | 0.0447 | 23.98 | 10000 | 0.4761 | 17.6634 | | 0.0393 | 26.46 | 11000 | 0.4792 | 17.7656 | | 0.0308 | 28.95 | 12000 | 0.4851 | 17.8678 | | 0.0301 | 31.44 | 13000 | 0.4846 | 17.4499 | | 0.031 | 33.93 | 14000 | 0.4849 | 17.8306 | | 0.0263 | 36.41 | 15000 | 0.4880 | 17.6170 | | 0.0256 | 38.9 | 16000 | 0.4871 | 17.1991 | | 0.0236 | 41.39 | 17000 | 0.4883 | 17.2641 | | 0.0195 | 43.88 | 18000 | 0.4880 | 17.5706 | | 0.0193 | 46.36 | 19000 | 0.4993 | 17.7285 | | 0.0161 | 48.85 | 20000 | 0.4968 | 17.8306 | | ac13baa07676d36caa0b795ecb7b4076 |
mit | ['generated_from_keras_callback'] | false | gabrielgmendonca/bert-base-portuguese-cased-finetuned-chico-xavier-finetuned-chico-xavier This model is a fine-tuned version of [gabrielgmendonca/bert-base-portuguese-cased-finetuned-chico-xavier](https://huggingface.co/gabrielgmendonca/bert-base-portuguese-cased-finetuned-chico-xavier) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.8630 - Validation Loss: 1.7215 - Epoch: 0 | d9eda5be786466bcec4abe5841c53f81 |
mit | ['generated_from_keras_callback'] | false | Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3430, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 | 577b76222f626d969eabcd4b546907ab |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 100 | 1fd96be01830cf9d3aeeaa25fac40330 |
apache-2.0 | ['generated_from_trainer'] | false | wav2vec2-base-finetuned-sentiment-mesd-v9 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3500 - Accuracy: 0.9154 | 6b184887fd009b0c477a41dbd189274b |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 40 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - num_epochs: 100 | b7b938f3e9d8e71e0c197944af08ce57 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.86 | 3 | 1.7825 | 0.1846 | | 1.9553 | 1.86 | 6 | 1.7212 | 0.4308 | | 1.9553 | 2.86 | 9 | 1.6164 | 0.3769 | | 2.002 | 3.86 | 12 | 1.4904 | 0.3769 | | 1.6191 | 4.86 | 15 | 1.4426 | 0.4385 | | 1.6191 | 5.86 | 18 | 1.3516 | 0.5231 | | 1.6209 | 6.86 | 21 | 1.2176 | 0.5538 | | 1.6209 | 7.86 | 24 | 1.1683 | 0.5692 | | 1.371 | 8.86 | 27 | 1.0885 | 0.5923 | | 1.1568 | 9.86 | 30 | 1.0152 | 0.6385 | | 1.1568 | 10.86 | 33 | 0.9289 | 0.6385 | | 1.1023 | 11.86 | 36 | 0.9141 | 0.6308 | | 1.1023 | 12.86 | 39 | 0.8526 | 0.6462 | | 0.9448 | 13.86 | 42 | 0.8420 | 0.6769 | | 0.7972 | 14.86 | 45 | 0.7976 | 0.6692 | | 0.7972 | 15.86 | 48 | 0.8192 | 0.7308 | | 0.7793 | 16.86 | 51 | 0.7108 | 0.7615 | | 0.7793 | 17.86 | 54 | 0.6712 | 0.7769 | | 0.6468 | 18.86 | 57 | 0.6684 | 0.7923 | | 0.5083 | 19.86 | 60 | 0.6922 | 0.7385 | | 0.5083 | 20.86 | 63 | 0.6148 | 0.7923 | | 0.4988 | 21.86 | 66 | 0.5846 | 0.7923 | | 0.4988 | 22.86 | 69 | 0.6050 | 0.8154 | | 0.4123 | 23.86 | 72 | 0.5506 | 0.7846 | | 0.3511 | 24.86 | 75 | 0.6095 | 0.7846 | | 0.3511 | 25.86 | 78 | 0.5916 | 0.8154 | | 0.3268 | 26.86 | 81 | 0.5912 | 0.8077 | | 0.3268 | 27.86 | 84 | 0.5142 | 0.8538 | | 0.3036 | 28.86 | 87 | 0.5492 | 0.8077 | | 0.3066 | 29.86 | 90 | 0.6007 | 0.8231 | | 0.3066 | 30.86 | 93 | 0.5748 | 0.8231 | | 0.2538 | 31.86 | 96 | 0.6027 | 0.7692 | | 0.2538 | 32.86 | 99 | 0.6979 | 0.7462 | | 0.2281 | 33.86 | 102 | 0.7002 | 0.7615 | | 0.2183 | 34.86 | 105 | 0.6650 | 0.7769 | | 0.2183 | 35.86 | 108 | 0.5192 | 0.8462 | | 0.2202 | 36.86 | 111 | 0.5389 | 0.8308 | | 0.2202 | 37.86 | 114 | 0.5050 | 0.8385 | | 0.1906 | 38.86 | 117 | 0.5722 | 0.7769 | | 0.154 | 39.86 | 120 | 0.5239 | 0.8308 | | 0.154 | 40.86 | 123 | 0.4448 | 0.8615 | | 0.1474 | 41.86 | 126 | 0.4623 | 0.8615 | | 0.1474 | 42.86 | 129 | 0.4282 | 0.8615 | | 0.1345 | 43.86 | 132 | 0.5087 | 0.8615 | | 0.1567 | 44.86 | 135 | 0.4859 | 0.8385 | | 0.1567 | 45.86 | 138 | 0.6603 | 0.8077 | | 0.1731 | 46.86 | 141 | 0.5379 | 0.8385 | | 0.1731 | 47.86 | 144 | 0.8666 | 0.7538 | | 0.1606 | 48.86 | 147 | 0.7518 | 0.8 | | 0.1484 | 49.86 | 150 | 0.5986 | 0.8385 | | 0.1484 | 50.86 | 153 | 0.6368 | 0.8231 | | 0.2256 | 51.86 | 156 | 0.4639 | 0.8692 | | 0.2256 | 52.86 | 159 | 0.5533 | 0.8462 | | 0.1178 | 53.86 | 162 | 0.5038 | 0.8615 | | 0.0815 | 54.86 | 165 | 0.5052 | 0.8692 | | 0.0815 | 55.86 | 168 | 0.4337 | 0.8846 | | 0.0998 | 56.86 | 171 | 0.4422 | 0.8769 | | 0.0998 | 57.86 | 174 | 0.4317 | 0.8692 | | 0.0855 | 58.86 | 177 | 0.4025 | 0.8923 | | 0.0962 | 59.86 | 180 | 0.4605 | 0.8769 | | 0.0962 | 60.86 | 183 | 0.4356 | 0.8769 | | 0.0763 | 61.86 | 186 | 0.4614 | 0.8769 | | 0.0763 | 62.86 | 189 | 0.4382 | 0.8846 | | 0.0902 | 63.86 | 192 | 0.4701 | 0.8692 | | 0.0654 | 64.86 | 195 | 0.4922 | 0.8692 | | 0.0654 | 65.86 | 198 | 0.5413 | 0.8538 | | 0.0651 | 66.86 | 201 | 0.5759 | 0.8615 | | 0.0651 | 67.86 | 204 | 0.4238 | 0.9 | | 0.0822 | 68.86 | 207 | 0.3500 | 0.9154 | | 0.0625 | 69.86 | 210 | 0.3878 | 0.8923 | | 0.0625 | 70.86 | 213 | 0.4952 | 0.8615 | | 0.0548 | 71.86 | 216 | 0.4544 | 0.8615 | | 0.0548 | 72.86 | 219 | 0.5497 | 0.8769 | | 0.054 | 73.86 | 222 | 0.4434 | 0.8846 | | 0.0543 | 74.86 | 225 | 0.4732 | 0.8769 | | 0.0543 | 75.86 | 228 | 0.4425 | 0.8923 | | 0.0881 | 76.86 | 231 | 0.4788 | 0.8769 | | 0.0881 | 77.86 | 234 | 0.5448 | 0.8769 | | 0.061 | 78.86 | 237 | 0.4221 | 0.9077 | | 0.0567 | 79.86 | 240 | 
0.4404 | 0.8769 | | 0.0567 | 80.86 | 243 | 0.4099 | 0.9 | | 0.052 | 81.86 | 246 | 0.5259 | 0.8769 | | 0.052 | 82.86 | 249 | 0.5874 | 0.8692 | | 0.0444 | 83.86 | 252 | 0.5555 | 0.8846 | | 0.0332 | 84.86 | 255 | 0.5156 | 0.8615 | | 0.0332 | 85.86 | 258 | 0.4564 | 0.8615 | | 0.0449 | 86.86 | 261 | 0.4826 | 0.8692 | | 0.0449 | 87.86 | 264 | 0.4726 | 0.8615 | | 0.0385 | 88.86 | 267 | 0.4206 | 0.8846 | | 0.0356 | 89.86 | 270 | 0.4050 | 0.8769 | | 0.0356 | 90.86 | 273 | 0.4161 | 0.8923 | | 0.0391 | 91.86 | 276 | 0.4100 | 0.9077 | | 0.0391 | 92.86 | 279 | 0.4047 | 0.9 | | 0.0249 | 93.86 | 282 | 0.4044 | 0.9 | | 0.0399 | 94.86 | 285 | 0.3968 | 0.8846 | | 0.0399 | 95.86 | 288 | 0.3802 | 0.9 | | 0.031 | 96.86 | 291 | 0.3689 | 0.9 | | 0.031 | 97.86 | 294 | 0.3616 | 0.9077 | | 0.036 | 98.86 | 297 | 0.3584 | 0.9077 | | 0.0386 | 99.86 | 300 | 0.3574 | 0.9077 | | dd6c169f678da6333623d69cf878ff48 |
mit | ['generated_from_trainer'] | false | pensive_keller This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets. | 2cbbccbe4917af9c98c404573ad17fe8 |
mit | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 3125 - mixed_precision_training: Native AMP | ddce5c01d343d81a7cfb75623ed771e3 |
mit | ['generated_from_trainer'] | false | Full config {'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True, 'skip_tokens': 1661599744}, 'generation': {'every_n_steps': 32, 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}, {'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prompts_path': 'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'every_n_steps': 32, 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': False, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'model_kwargs': {'revision': '81a1701e025d2c65ae6e8c2103df559071523ee0', 'value_head_config': {'is_detached': False}}, 'path_or_name': 'tomekkorbak/goofy_pasteur'}, 'objective': {'alpha': 0.5, 'beta': 10, 'name': 'AWR'}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 512, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'pensive_keller', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.001, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 3346, 
'save_strategy': 'steps', 'seed': 42, 'tokens_already_seen': 1661599744, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} | fa590a95b8493c7098cc88a8457d62dd |
apache-2.0 | ['generated_from_trainer'] | false | recipe-lr0.0001-wd0.02-bs64 This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2792 - Rmse: 0.5284 - Mse: 0.2792 - Mae: 0.4268 | 9bf4d1aa0f945269be3416cdf730c6ab |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:| | 0.2799 | 1.0 | 623 | 0.2789 | 0.5281 | 0.2789 | 0.4218 | | 0.2786 | 2.0 | 1246 | 0.2792 | 0.5284 | 0.2792 | 0.4268 | | 0.2785 | 3.0 | 1869 | 0.2792 | 0.5284 | 0.2792 | 0.4268 | | 809a1f1ab66549caa0a0c3696cbab8c0 |
cc-by-4.0 | [] | false | Icelandic ConvBERT-Small This model was pretrained on the [Icelandic Gigaword Corpus](http://igc.arnastofnun.is/), which contains approximately 1.69B tokens, using default settings. The model uses a Unigram tokenizer with a vocabulary size of 96,000. | 2e7880813c614d2be092af828eb91b09 |
cc-by-4.0 | [] | false | Acknowledgments This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC). This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture. | 92bec0a336745def8748a1b0fbe08354 |
apache-2.0 | ['exbert', 'multiberts', 'multiberts-seed-4'] | false | MultiBERTs Seed 4 Checkpoint 900k (uncased) Seed 4 intermediate checkpoint 900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint. The final checkpoint can be found at [multiberts-seed-4](https://hf.co/multiberts-seed-4). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani). | ac0dc310d2257bcf8c980195602ac029 |
apache-2.0 | ['exbert', 'multiberts', 'multiberts-seed-4'] | false | How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-900k') model = BertModel.from_pretrained("multiberts-seed-4-900k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` | a16871183eb62d4443c4357942057779 |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased-mlm-ta-local This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.0658 | 01ac82ff7d3947facffc4082f63ef107 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP | 93d4bb83d0671dce815fb78e0f782a39 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.4431 | 1.0 | 3125 | 2.1817 | | 2.2197 | 2.0 | 6250 | 2.0929 | | 2.1519 | 3.0 | 9375 | 2.0696 | | ef93f4db836607bedf3e6332c55ed4f7 |
mit | ['conversational'] | false | Large-Scale Pre-Training for Goal-Directed Dialog (GODEL) GODEL is a large-scale pre-trained model for goal-directed dialogs. It is parameterized with a Transformer-based encoder-decoder model and trained for response generation grounded in external text, which allows more effective fine-tuning on dialog tasks that require conditioning the response on information that is external to the current conversation (e.g., a retrieved document). The pre-trained model can be efficiently fine-tuned and adapted to accomplish a new dialog task with a handful of task-specific dialogs. The v1.1 model is trained on 551M multi-turn dialogs from Reddit discussion threads, and 5M instruction- and knowledge-grounded dialogs. | 481b4329f0bc765236e29cab23244ef9 |
mit | ['conversational'] | false | Multi-turn generation examples from an interactive environment: Chitchat example: > Instruction: given a dialog context, you need to response empathically. <br> > User: Does money buy happiness? <br> > Agent: It is a question. Money buys you a lot of things, but not enough to buy happiness. <br> > User: What is the best way to buy happiness ? <br> > Agent: Happiness is bought through your experience and not money. <br> Grounded response generation example: > Instruction: given a dialog context and related knowledge, you need to response safely based on the knowledge. <br> > Knowledge: The best Stardew Valley mods PCGamesN_0 / About SMAPI <br> > User: My favorite game is stardew valley. stardew valley is very fun. <br> > Agent: I love Stardew Valley mods, like PCGamesN_0 / About SMAPI. <br> Please find the information about preprocessing, training and full details of the GODEL in the [project webpage](https://aka.ms/GODEL). ArXiv paper: [https://arxiv.org/abs/2206.11309](https://arxiv.org/abs/2206.11309) | c1470cc8983204b91f225ce3d613db5f |
mit | ['conversational'] | false | How to use Now we are ready to try out how the model works as a chatting partner! ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("microsoft/GODEL-v1_1-base-seq2seq") model = AutoModelForSeq2SeqLM.from_pretrained("microsoft/GODEL-v1_1-base-seq2seq") def generate(instruction, knowledge, dialog): if knowledge != '': knowledge = '[KNOWLEDGE] ' + knowledge dialog = ' EOS '.join(dialog) query = f"{instruction} [CONTEXT] {dialog} {knowledge}" input_ids = tokenizer(f"{query}", return_tensors="pt").input_ids outputs = model.generate(input_ids, max_length=128, min_length=8, top_p=0.9, do_sample=True) output = tokenizer.decode(outputs[0], skip_special_tokens=True) return output | abb73a82eb04f91e9d85d9e9d29dc731 |
mit | ['conversational'] | false | instruction = 'Instruction: given a dialog context, you need to response empathically.' Leave the knowledge empty knowledge = '' dialog = [ 'Does money buy happiness?', 'It is a question. Money buys you a lot of things, but not enough to buy happiness.', 'What is the best way to buy happiness ?' ] response = generate(instruction, knowledge, dialog) print(response) ``` | 63ba9a652644738636328e5ff203dea1 |
mit | ['conversational'] | false | Citation If you use this code and data in your research, please cite our arXiv paper: ``` @misc{peng2022godel, author = {Peng, Baolin and Galley, Michel and He, Pengcheng and Brockett, Chris and Liden, Lars and Nouri, Elnaz and Yu, Zhou and Dolan, Bill and Gao, Jianfeng}, title = {GODEL: Large-Scale Pre-training for Goal-Directed Dialog}, howpublished = {arXiv}, year = {2022}, month = {June}, url = {https://www.microsoft.com/en-us/research/publication/godel-large-scale-pre-training-for-goal-directed-dialog/}, } ``` | f225237e74c10a03bc6da01849cb6bf9 |
apache-2.0 | ['automatic-speech-recognition', 'de'] | false | exp_w2v2r_de_vp-100k_accent_germany-0_austria-10_s756 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool. | 2cce04006c21f9a4ae341ef1d7655c23 |
apache-2.0 | ['generated_from_trainer'] | false | t5-small-finetuned-text2log-finetuned-nl-to-fol-finetuned-nl-to-fol-finetuned-nl-to-fol This model is a fine-tuned version of [anki08/t5-small-finetuned-text2log-finetuned-nl-to-fol-finetuned-nl-to-fol](https://huggingface.co/anki08/t5-small-finetuned-text2log-finetuned-nl-to-fol-finetuned-nl-to-fol) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1468 - Bleu: 30.3266 - Gen Len: 18.8824 | f786389f90cbac8d3b69a05a8a8e91c5 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP | c025b79cf38819a00d9848346485abde |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:| | No log | 1.0 | 17 | 0.1486 | 30.3537 | 18.8824 | | No log | 2.0 | 34 | 0.1474 | 30.2522 | 18.8824 | | No log | 3.0 | 51 | 0.1465 | 30.2522 | 18.8824 | | No log | 4.0 | 68 | 0.1461 | 30.2522 | 18.8824 | | No log | 5.0 | 85 | 0.1469 | 30.2522 | 18.8824 | | No log | 6.0 | 102 | 0.1457 | 29.8889 | 18.8824 | | No log | 7.0 | 119 | 0.1470 | 30.3537 | 18.8824 | | No log | 8.0 | 136 | 0.1469 | 30.3537 | 18.8824 | | No log | 9.0 | 153 | 0.1469 | 30.3266 | 18.8824 | | No log | 10.0 | 170 | 0.1468 | 30.3266 | 18.8824 | | 3a3891d87cae62def5b59001c04cd419 |
apache-2.0 | ['automatic-speech-recognition', 'ar'] | false | exp_w2v2t_ar_hubert_s947 Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition using the train split of [Common Voice 7.0 (ar)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool. | 7a4f5282894731fc3360a2b7b976f10e |
wtfpl | [] | false | One of the first embeddings I have created; it adds a horror atmosphere and monsters to an image. Download it into the embeddings folder and use it with "by HorrorByDave" or whatever you have renamed the embed. Samples (if Hugging Face keeps the PNG data then you can get the prompt by putting the sample into pnginfo): (sample images omitted) | 1eb175922dcb5b690d018d078e1da487 |
apache-2.0 | ['translation'] | false | opus-mt-fr-lu * source languages: fr * target languages: lu * OPUS readme: [fr-lu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-lu/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-lu/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lu/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lu/opus-2020-01-20.eval.txt) | 28f8f0b09a0c9107fe7d7ef3042b197c |
mit | ['generated_from_keras_callback'] | false | fourth_iteration_model This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: | 68119dff94bb27f668697589581995a3 |
mit | ['generated_from_keras_callback'] | false | Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 65805, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 | 0c8341612af5708aa5515f1fee7bbe99 |
apache-2.0 | ['automatic-speech-recognition', 'es'] | false | exp_w2v2r_es_vp-100k_gender_male-0_female-10_s33 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool. | 42da60087f04fbeb6dfe14406e8f2ec2 |
mit | ['deberta', 'deberta-v3', 'mdeberta', 'korean', 'pretraining'] | false | mDeBERTa-v3-base-kor-further > 💡 The project below was carried out by KPMG Lighthouse Korea. > KPMG Lighthouse Korea builds edge-technology NLP/Vision AI models to solve a variety of problems in the financial area. > https://kpmgkr.notion.site/ | 4b84726e5bfda6263c186446adf299e3 |
mit | ['deberta', 'deberta-v3', 'mdeberta', 'korean', 'pretraining'] | false | What is DeBERTa? - [DeBERTa](https://arxiv.org/abs/2006.03654) applies `Disentangled Attention` + `Enhanced Mask Decoder` to learn the positional information of words effectively. With this idea, unlike the absolute position embeddings used by BERT and RoBERTa, DeBERTa represents each word's relative position as learnable vectors during training, and as a result it showed better performance than BERT and RoBERTa. - [DeBERTa-v3](https://arxiv.org/abs/2111.09543) improved training efficiency by replacing the MLM (Masked Language Model) objective used in earlier versions with an ELECTRA-style pre-training method based on the RTD (Replaced Token Detection) task, and by applying Gradient-Disentangled Embedding Sharing. - To learn from rich Korean data with the DeBERTa architecture, `mDeBERTa-v3-base-kor-further` is a language model obtained by **further pre-training** Microsoft's `mDeBERTa-v3-base` on about 40GB of Korean data. | f6d5de219ba133a62a25b47d3225d202 |
mit | ['deberta', 'deberta-v3', 'mdeberta', 'korean', 'pretraining'] | false | How to Use - Requirements ``` pip install transformers pip install sentencepiece ``` - Huggingface Hub ```python from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained("mdeberta-v3-base-kor-further") | 7b477e1b17a7bbfab0144f6ab2c76103 |
mit | ['deberta', 'deberta-v3', 'mdeberta', 'korean', 'pretraining'] | false | Pre-trained Models - The model architecture is identical to the `mdeberta-v3-base` released by Microsoft. | | Vocabulary(K) | Backbone Parameters(M) | Hidden Size | Layers | Note | | --- | --- | --- | --- | --- | --- | | mdeberta-v3-base-kor-further (same as mdeberta-v3-base) | 250 | 86 | 768 | 12 | 250K new SPM vocab | | 6bda553b80ba340fde88e68a036bbcb8 |
mit | ['deberta', 'deberta-v3', 'mdeberta', 'korean', 'pretraining'] | false | Further Pretraining Details (MLM Task) - `mDeBERTa-v3-base-kor-further` was further pre-trained from `microsoft/mDeBERTa-v3-base` on about 40GB of Korean data using the MLM task. | | Max length | Learning Rate | Batch Size | Train Steps | Warm-up Steps | | --- | --- | --- | --- | --- | --- | | mdeberta-v3-base-kor-further | 512 | 2e-5 | 8 | 5M | 50k | | b673c956f23d37e95a4559269199c15c |
mit | ['deberta', 'deberta-v3', 'mdeberta', 'korean', 'pretraining'] | false | Datasets - About 40 GB of Korean data, including 모두의 말뭉치 (the Modu Corpus: newspapers, spoken, and written text), Korean Wikipedia, and National Petitions, was used for the additional pre-training. - Train: 10M lines, 5B tokens - Valid: 2M lines, 1B tokens - cf) The original mDeBERTa-v3 was trained, like XLM-R, on the [cc-100 dataset](https://data.statmt.org/cc-100/), of which the Korean portion is 54GB. | ec74969142b8858c7d80a64a30853d72 |
mit | ['deberta', 'deberta-v3', 'mdeberta', 'korean', 'pretraining'] | false | Fine-tuning on NLU Tasks - Base Model | Model | Size | NSMC(acc) | Naver NER(F1) | PAWS (acc) | KorNLI (acc) | KorSTS (spearman) | Question Pair (acc) | KorQuaD (Dev) (EM/F1) | Korean-Hate-Speech (Dev) (F1) | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | XLM-Roberta-Base | 1.03G | 89.03 | 86.65 | 82.80 | 80.23 | 78.45 | 93.80 | 64.70 / 88.94 | 64.06 | | mdeberta-base | 534M | 90.01 | 87.43 | 85.55 | 80.41 | **82.65** | 94.06 | 65.48 / 89.74 | 62.91 | | mdeberta-base-kor-further (Ours) | 534M | **90.52** | **87.87** | **85.85** | **80.65** | 81.90 | **94.98** | **66.07 / 90.35** | **68.16** | | ad652f7db4fc5791019e546ca03c1de6 |
mit | ['deberta', 'deberta-v3', 'mdeberta', 'korean', 'pretraining'] | false | Citation ``` @misc{he2021debertav3, title={DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing}, author={Pengcheng He and Jianfeng Gao and Weizhu Chen}, year={2021}, eprint={2111.09543}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ``` @inproceedings{ he2021deberta, title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION}, author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen}, booktitle={International Conference on Learning Representations}, year={2021}, url={https://openreview.net/forum?id=XPZIaotutsD} } ``` | df7355e08c2de5d3d5c364baa471a8dd |
mit | ['deberta', 'deberta-v3', 'mdeberta', 'korean', 'pretraining'] | false | Reference - [mDeBERTa-v3-base-kor-further](https://github.com/kpmg-kr/mDeBERTa-v3-base-kor-further) - [DeBERTa](https://github.com/microsoft/DeBERTa) - [Huggingface Transformers](https://github.com/huggingface/transformers) - [모두의 말뭉치](https://corpus.korean.go.kr/) - [Korpora: Korean Corpora Archives](https://github.com/ko-nlp/Korpora) - [sooftware/Korean PLM](https://github.com/sooftware/Korean-PLM) | 4207db3ee78f311fc5df5cbeb62cbacc |
mit | ['roberta-base', 'roberta-base-epoch_24'] | false | RoBERTa, Intermediate Checkpoint - Epoch 24 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before training) to make it possible to study the training dynamics of such models, and other possible use-cases. These models were trained as part of a work that studies how simple statistics of the data, such as co-occurrences, affect model predictions, as described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_24. | dc16b552fa7f3aae6d4a9275a7eb48ab |
mit | ['generated_from_trainer'] | false | deberta-v3-large__sst2__train-8-8 This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7414 - Accuracy: 0.5623 | 9749b3bb405faa4207ea5963c6fbf441 |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6597 | 1.0 | 3 | 0.7716 | 0.25 | | 0.6376 | 2.0 | 6 | 0.7802 | 0.25 | | 0.5857 | 3.0 | 9 | 0.6625 | 0.75 | | 0.4024 | 4.0 | 12 | 0.5195 | 0.75 | | 0.2635 | 5.0 | 15 | 0.4222 | 1.0 | | 0.1714 | 6.0 | 18 | 0.4410 | 0.5 | | 0.1267 | 7.0 | 21 | 0.7773 | 0.75 | | 0.0582 | 8.0 | 24 | 0.9070 | 0.75 | | 0.0374 | 9.0 | 27 | 0.9539 | 0.75 | | 0.0204 | 10.0 | 30 | 1.0507 | 0.75 | | 0.012 | 11.0 | 33 | 1.2802 | 0.5 | | 0.0086 | 12.0 | 36 | 1.4272 | 0.5 | | 0.0049 | 13.0 | 39 | 1.4803 | 0.5 | | 0.0039 | 14.0 | 42 | 1.4912 | 0.5 | | 0.0031 | 15.0 | 45 | 1.5231 | 0.5 | | 10de571333d586b7530cf4f2642e758f |
apache-2.0 | ['squad'] | false | Training data Fine-tuning was done based on the pre-trained model [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased). The training and dev datasets are our [Swedish translation of SQuAD v2](https://github.com/susumu2357/SQuAD_v2_sv). [Here](https://huggingface.co/datasets/susumu2357/squad_v2_sv) is the HuggingFace Datasets version. | 25c96bec876d8eb42f1c080ca9388f8a |
apache-2.0 | ['squad'] | false | Eval results ``` 'exact': 66.72642524202223 'f1': 70.11149581003404 'total': 11156 'HasAns_exact': 55.574745730186144 'HasAns_f1': 62.821693965983044 'HasAns_total': 5211 'NoAns_exact': 76.50126156433979 'NoAns_f1': 76.50126156433979 'NoAns_total': 5945 ``` | 111ae686a95cdb17deedd22bc10802d6 |
apache-2.0 | ['squad'] | false | BibTeX entry and citation info ```bibtex @misc{svSQuADbert, author = {Susumu Okazawa}, title = {Swedish BERT Fine-tuned on Swedish SQuAD 2.0}, year = {2021}, howpublished = {\url{https://huggingface.co/susumu2357/bert-base-swedish-squad2}}, } ``` | 7ae625f1e593ec4d2b8695414aec825c |
apache-2.0 | ['generated_from_trainer'] | false | SentimentClassifier This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the amazon_polarity dataset. It achieves the following results on the evaluation set: - Loss: 0.4425 - Accuracy: 0.91 - F1: 0.91 | 2e5058a4b1e02158bfdce0bbecaada94 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 | 8bac1586dcc2684c3e9324995d9bd0a1 |
apache-2.0 | ['generated_from_trainer'] | false | librispeech-100h-supervised-meta This model is a fine-tuned version of [Kuray107/librispeech-5h-supervised](https://huggingface.co/Kuray107/librispeech-5h-supervised) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0965 - Wer: 0.0330 | 609d35e33bb87138d91124748ec9b09c |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20 - mixed_precision_training: Native AMP | 84f7614f240b8e011d149d84cb5798ae |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.1131 | 1.12 | 1000 | 0.0755 | 0.0487 | | 0.0725 | 2.24 | 2000 | 0.0637 | 0.0404 | | 0.0539 | 3.36 | 3000 | 0.0661 | 0.0389 | | 0.0441 | 4.48 | 4000 | 0.0637 | 0.0371 | | 0.0379 | 5.61 | 5000 | 0.0675 | 0.0356 | | 0.0341 | 6.73 | 6000 | 0.0735 | 0.0360 | | 0.0295 | 7.85 | 7000 | 0.0737 | 0.0362 | | 0.0265 | 8.97 | 8000 | 0.0741 | 0.0350 | | 0.0244 | 10.09 | 9000 | 0.0779 | 0.0337 | | 0.0217 | 11.21 | 10000 | 0.0835 | 0.0343 | | 0.0203 | 12.33 | 11000 | 0.0785 | 0.0339 | | 0.0188 | 13.45 | 12000 | 0.0827 | 0.0344 | | 0.0179 | 14.57 | 13000 | 0.0875 | 0.0332 | | 0.0169 | 15.7 | 14000 | 0.0860 | 0.0330 | | 0.0158 | 16.82 | 15000 | 0.0954 | 0.0330 | | 0.0147 | 17.94 | 16000 | 0.0934 | 0.0329 | | 0.0148 | 19.06 | 17000 | 0.0965 | 0.0330 | | b6fa6c20e69abe7c56fd9cc1e2541afe |
apache-2.0 | ['generated_from_trainer'] | false | tiny-mlm-glue-rte-custom-tokenizer This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 7.3646 | 9f470873844b964ebbf8aa9b8366e9dc |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 7.71 | 1.6 | 500 | 7.1503 | | 6.8618 | 3.21 | 1000 | 7.2787 | | 6.816 | 4.81 | 1500 | 7.2543 | | 6.7094 | 6.41 | 2000 | 7.3646 | | 663a0cf88f7d73fa4554c22ac43ba65e |
cc-by-4.0 | ['question generation'] | false | Model Card of `lmqg/t5-large-subjqa-tripadvisor-qg` This model is a fine-tuned version of [lmqg/t5-large-squad](https://huggingface.co/lmqg/t5-large-squad) for the question generation task on the [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (dataset_name: tripadvisor) via [`lmqg`](https://github.com/asahi417/lm-question-generation). | d0d10ea906b29e61580adc1dcb958760 |
cc-by-4.0 | ['question generation'] | false | Overview - **Language model:** [lmqg/t5-large-squad](https://huggingface.co/lmqg/t5-large-squad) - **Language:** en - **Training data:** [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (tripadvisor) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) | 7893a9a420358978028d6f964c5f5a59 |
cc-by-4.0 | ['question generation'] | false | model prediction questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/t5-large-subjqa-tripadvisor-qg") output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ``` | 5f1e9fbcbd9ecc9999b98ba3a96647dd |
cc-by-4.0 | ['question generation'] | false | Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-large-subjqa-tripadvisor-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.tripadvisor.json) | | Score | Type | Dataset | |:-----------|--------:|:------------|:-----------------------------------------------------------------| | BERTScore | 94.46 | tripadvisor | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_1 | 26.44 | tripadvisor | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_2 | 17.84 | tripadvisor | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_3 | 9.13 | tripadvisor | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_4 | 5.35 | tripadvisor | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | METEOR | 27.45 | tripadvisor | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | MoverScore | 67.76 | tripadvisor | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | ROUGE_L | 27.69 | tripadvisor | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | 52ff50b560d4ec82e82d88d264483d54 |
cc-by-4.0 | ['question generation'] | false | Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_subjqa - dataset_name: tripadvisor - input_types: ['paragraph_answer'] - output_types: ['question'] - prefix_types: ['qg'] - model: lmqg/t5-large-squad - max_length: 512 - max_length_output: 32 - epoch: 1 - batch: 16 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 4 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/t5-large-subjqa-tripadvisor-qg/raw/main/trainer_config.json). | c55142d809026a876cb187ff60bd71f4 |
apache-2.0 | ['translation'] | false | This is a finetuning of a MarianMT pretrained on English-Chinese. The target language pair is English-Vietnamese. The first phase of training (mixed) is performed on a dataset containing both English-Chinese and English-Vietnamese sentences. The second phase of training (pure) is performed on a dataset containing only English-Vietnamese sentences. | 249266b671c3a44087e04eaadb96f93d |
apache-2.0 | ['translation'] | false | This token is needed to identify the target language input_sentence = "<2vi> " + sentence translated = model.generate(**tokenizer_en(input_sentence, return_tensors="pt", padding=True)) output_sentence = [tokenizer.decode(t, skip_special_tokens=True) for t in translated] ``` | 0f197f266fc5260f28af5dbccc74c479 |
apache-2.0 | ['translation'] | false | Training results MIXED | Epoch | Bleu | |:-----:|:-------:| | 1.0 | 26.2407 | | 2.0 | 32.6016 | | 3.0 | 35.4060 | | 4.0 | 36.6737 | | 5.0 | 37.3774 | PURE | Epoch | Bleu | |:-----:|:-------:| | 1.0 | 37.3169 | | 2.0 | 37.4407 | | 3.0 | 37.6696 | | 4.0 | 37.8765 | | 5.0 | 38.0105 | | 0b07f09dfa2c45deab2bd1ca1e3f5a8f |
mit | ['Image Translation'] | false | Citation Information ```bibtex @Article{Texler20-SIG, author = "Ond\v{r}ej Texler and David Futschik and Michal Ku\v{c}era and Ond\v{r}ej Jamri\v{s}ka and \v{S}\'{a}rka Sochorov\'{a} and Menglei Chai and Sergey Tulyakov and Daniel S\'{y}kora", title = "Interactive Video Stylization Using Few-Shot Patch-Based Training", journal = "ACM Transactions on Graphics", volume = "39", number = "4", pages = "73", year = "2020", } ``` | 74905be414baee473c74e36315e20c5a |
apache-2.0 | ['translation'] | false | opus-mt-sv-ilo * source languages: sv * target languages: ilo * OPUS readme: [sv-ilo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-ilo/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-ilo/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ilo/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ilo/opus-2020-01-16.eval.txt) | ee5f49871db1aa75c9baa5413b86fbf6 |
apache-2.0 | ['korean'] | false | KoELECTRA (Base Generator) Pretrained ELECTRA Language Model for Korean (`koelectra-base-generator`) For more detail, please see [original repository](https://github.com/monologg/KoELECTRA/blob/master/README_EN.md). | 6c1711f48fe9a67c57c519da33ff38f2 |
apache-2.0 | ['korean'] | false | Load model and tokenizer ```python >>> from transformers import ElectraModel, ElectraTokenizer >>> model = ElectraModel.from_pretrained("monologg/koelectra-base-generator") >>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-generator") ``` | 76c3f930973a76c68c1bc551112b3c0c |
apache-2.0 | ['korean'] | false | Tokenizer example ```python >>> from transformers import ElectraTokenizer >>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-generator") >>> tokenizer.tokenize("[CLS] 한국어 ELECTRA를 공유합니다. [SEP]") ['[CLS]', '한국어', 'E', ' | b3a3e5207aa1e33392c854d2fedd3983 |
apache-2.0 | ['korean'] | false | Example using ElectraForMaskedLM ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="monologg/koelectra-base-generator", tokenizer="monologg/koelectra-base-generator" ) print(fill_mask("나는 {} 밥을 먹었다.".format(fill_mask.tokenizer.mask_token))) ``` | f0732d433cdda7cfc93f3a1b24552ea5 |
creativeml-openrail-m | ['text-to-image', 'stable-diffusion'] | false | profile Dreambooth model trained by mastergruffly with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample pictures of this concept: | d80732cc8f1ab56b52b2b469ca10fd0d |
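A rough `diffusers` inference sketch for the Dreambooth concept above; both the repository id and the instance prompt are assumptions inferred from the card text, not verified values:

```python
# Hedged sketch: repo id and prompt token are guesses based on the card text.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "mastergruffly/profile",  # assumed repository id
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a portrait photo of profile person, studio lighting").images[0]
image.save("profile_sample.png")
```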
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | Demo: How to use in ESPnet2 Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html) if you haven't done that already. ```bash cd espnet git checkout bf8c8f00194bdfed8ca388d8b20d14791b7d270e pip install -e . cd egs2/voxforge/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model pyf98/voxforge_it_conformer_e12_linear2048 ``` <!-- Generated by scripts/utils/show_asr_result.sh --> | a5741504c550dc4f8403d8b77d4d38a1 |
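Besides the shell recipe above, decoding can usually be driven directly from Python; a minimal sketch, assuming `espnet`, `espnet_model_zoo`, and `soundfile` are installed in versions compatible with the environment listed in the next row:

```python
# Python-level decoding sketch (alternative to the run.sh recipe above).
import soundfile
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained("pyf98/voxforge_it_conformer_e12_linear2048")
speech, rate = soundfile.read("sample_it.wav")  # 16 kHz mono Italian audio
text, *_ = speech2text(speech)[0]               # best hypothesis
print(text)
```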
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | Environments - date: `Thu Dec 29 01:45:02 EST 2022` - python version: `3.9.15 (main, Nov 24 2022, 14:31:59) [GCC 11.2.0]` - espnet version: `espnet 202211` - pytorch version: `pytorch 1.12.1` - Git hash: `bf8c8f00194bdfed8ca388d8b20d14791b7d270e` - Commit date: `Wed Dec 28 22:43:13 2022 -0500` | 3c4398337f982d73e96ee4a93237f3ba |
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_asr_model_valid.acc.ave/dt_it|1035|12587|70.3|24.6|5.1|3.3|33.0|95.4| |decode_asr_asr_model_valid.acc.ave/et_it|1103|13699|72.4|22.5|5.1|2.9|30.5|91.5| | e407668edfeeb04fb78dc8613782f5af |
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_asr_model_valid.acc.ave/dt_it|1035|75494|92.9|3.9|3.2|1.8|8.9|95.4| |decode_asr_asr_model_valid.acc.ave/et_it|1103|81228|93.7|3.5|2.8|1.7|8.0|91.5| | 5cfb980c98eafd22942952ad8751ced5 |
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_conformer_e12_linear2048.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_conformer_e12_linear2048_raw_it_char_normalize_confnorm_varsFalse ngpu: 1 seed: 0 num_workers: 4 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 100 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: true log_interval: null use_matplotlib: true use_tensorboard: true create_graph_in_tensorboard: false use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 128 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_it_char/train/speech_shape - exp/asr_stats_raw_it_char/train/text_shape.char valid_shape_file: - exp/asr_stats_raw_it_char/valid/speech_shape - exp/asr_stats_raw_it_char/valid/text_shape.char batch_type: folded valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/tr_it/wav.scp - speech - sound - - dump/raw/tr_it/text - text - text valid_data_path_and_name_and_type: - - dump/raw/dt_it/wav.scp - speech - sound - - dump/raw/dt_it/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.002 scheduler: warmuplr scheduler_conf: warmup_steps: 10000 token_list: - <blank> - <unk> - <space> - A - E - I - O - R - N - L - S - T - C - D - U - M - P - V - G - F - H - B - Q - Z - '''' - Ò - À - È - Ú - X - W - Í - É - Y - K - J - '1' - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: null zero_infinity: true joint_net_conf: null use_preprocessor: true token_type: char bpemodel: null non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' short_noise_thres: 0.5 frontend: default frontend_conf: fs: 16k specaug: null specaug_conf: {} normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_it_char/train/feats_stats.npz norm_vars: false model: espnet model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false preencoder: null preencoder_conf: {} encoder: conformer encoder_conf: output_size: 256 attention_heads: 4 linear_units: 2048 num_blocks: 12 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.1 input_layer: conv2d normalize_before: true macaron_style: true 
rel_pos_type: latest pos_enc_layer_type: rel_pos selfattention_layer_type: rel_selfattn activation_type: swish use_cnn_module: true cnn_module_kernel: 31 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: attention_heads: 4 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.0 src_attention_dropout_rate: 0.0 preprocessor: default preprocessor_conf: {} required: - output_dir - token_list version: '202211' distributed: false ``` </details> | 7c3dc787542c0a99ab985304254bf552 |
apache-2.0 | [] | false | albert-small-kor-cross-encoder-v1 - A cross-encoder fine-tuned from the albert-small-kor-v1 model - This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. | 5e1ee3480b028b4620f1a2bd3e351b03 |
apache-2.0 | [] | false | Training - Trained in the order sts(10)-nli(3)-sts(10)-nli(3)-sts(10) (**no distillation training**) - STS: seed=111, epoch=10, lr=1e-4, eps=1e-6, warm_step=10%, max_seq_len=128, train_batch=128 (small model=32) (albert 13m/7G) [training code](https://github.com/kobongsoo/BERT/blob/master/sbert/cross-encoder/sbert-corossencoder-train-nli.ipynb) - NLI training: seed=111, epoch=3, lr=3e-5, eps=1e-8, warm_step=10%, max_seq_len=128, train_batch=64, eval_batch=64 (albert 2h/7G) [training code](https://github.com/kobongsoo/BERT/blob/master/sbert/cross-encoder/sbert-corossencoder-train-sts.ipynb) - [evaluation code](https://github.com/kobongsoo/BERT/blob/master/sbert/cross-encoder/sbert-crossencoder-test3.ipynb), [test code](https://github.com/kobongsoo/BERT/blob/master/sbert/cross-encoder/sbert-crossencoder-test.ipynb) - |Model |korsts|klue-sts|glue(stsb)|stsb_multi_mt(en)| |:--------|------:|--------:|--------------:|------------:| |**albert-small-kor-cross-encoder-v1** |0.8455 |0.8526 |0.8513 |0.7976| |klue-cross-encoder-v1 |0.8262 |0.8833 |0.8512 |0.7889| |kpf-cross-encoder-v1 |0.8799 |0.9133 |0.8626 |0.8027| | 8afa458d75d85ecd2f41515445bb636a |
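A minimal sketch of how such an STS-style CrossEncoder fine-tune might be set up with sentence-transformers, using the STS hyperparameters listed above; the base checkpoint id and the toy training pairs are assumptions, not the actual KorSTS/KLUE data or pipeline:

```python
# Sketch only: hyperparameters taken from the list above (epoch=10, lr=1e-4,
# eps=1e-6, warm_step=10%, batch=32 for the small model); data is a toy list.
import math
from torch.utils.data import DataLoader
from sentence_transformers import InputExample
from sentence_transformers.cross_encoder import CrossEncoder

train_samples = [
    InputExample(texts=["오늘 날씨가 좋다", "오늘 등산을 한다"], label=0.45),
    InputExample(texts=["오늘 날씨가 흐리다", "오늘 비가 내린다"], label=0.63),
]
train_dataloader = DataLoader(train_samples, shuffle=True, batch_size=32)

model = CrossEncoder("bongsoo/albert-small-kor-v1", num_labels=1)  # assumed base id
num_epochs = 10
warmup_steps = math.ceil(len(train_dataloader) * num_epochs * 0.1)
model.fit(
    train_dataloader=train_dataloader,
    epochs=num_epochs,
    warmup_steps=warmup_steps,
    optimizer_params={"lr": 1e-4, "eps": 1e-6},
)
```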
apache-2.0 | [] | false | Usage and Performance The pre-trained model can be used like this: ``` from sentence_transformers import CrossEncoder model = CrossEncoder('bongsoo/albert-small-kor-cross-encoder-v1') scores = model.predict([('오늘 날씨가 좋다', '오늘 등산을 한다'), ('오늘 날씨가 흐리다', '오늘 비가 내린다')]) print(scores) ``` ``` [0.45417202 0.6294121 ] ``` The model predicts a similarity score for each sentence pair passed to `predict`. You can also use this model without sentence_transformers, relying only on plain Transformers classes, as sketched below. | 2d5401994d194ba16a167a44ddd5fb2e |
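A hedged sketch of the plain-Transformers route mentioned above, using `AutoModelForSequenceClassification` (a cross-encoder is a sequence-classification head over sentence pairs); note that `CrossEncoder.predict` may additionally apply a sigmoid to these raw logits:

```python
# Scoring sentence pairs without sentence_transformers.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "bongsoo/albert-small-kor-cross-encoder-v1"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

features = tokenizer(
    ["오늘 날씨가 좋다", "오늘 날씨가 흐리다"],
    ["오늘 등산을 한다", "오늘 비가 내린다"],
    padding=True, truncation=True, return_tensors="pt",
)
with torch.no_grad():
    scores = model(**features).logits.squeeze(-1)  # raw logits, one per pair
print(scores)
```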
mit | ['m2m100-12B'] | false | M2M100 12B M2M100 is a multilingual encoder-decoder (seq-to-seq) model trained for Many-to-Many multilingual translation. It was introduced in this [paper](https://arxiv.org/abs/2010.11125) and first released in [this](https://github.com/pytorch/fairseq/tree/master/examples/m2m_100) repository. The model can directly translate between any of the 9,900 translation directions spanning 100 languages. To translate into a target language, the target language id must be forced as the first generated token; this is done by passing the `forced_bos_token_id` parameter to the `generate` method. *Note: `M2M100Tokenizer` depends on `sentencepiece`, so make sure to install it before running the example.* To install `sentencepiece`, run `pip install sentencepiece` ```python from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।" chinese_text = "生活就像一盒巧克力。" model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100-12B-last-ckpt") tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100-12B-last-ckpt") | 2ffbc8a1a2e6c7cfb4a7e3404cf34f0a |
mit | ['m2m100-12B'] | false | translate Hindi to French tokenizer.src_lang = "hi" encoded_hi = tokenizer(hi_text, return_tensors="pt") generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.get_lang_id("fr")) tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) | 6bf9eb781ce6fc2e44d66c8ccd64bede |
mit | ['m2m100-12B'] | false | translate Chinese to English tokenizer.src_lang = "zh" encoded_zh = tokenizer(chinese_text, return_tensors="pt") generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en")) tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) | cdd97ba019191f734996503777228f7d |
mit | ['m2m100-12B'] | false | Languages covered Afrikaans (af), Amharic (am), Arabic (ar), Asturian (ast), Azerbaijani (az), Bashkir (ba), Belarusian (be), Bulgarian (bg), Bengali (bn), Breton (br), Bosnian (bs), Catalan; Valencian (ca), Cebuano (ceb), Czech (cs), Welsh (cy), Danish (da), German (de), Greek (el), English (en), Spanish (es), Estonian (et), Persian (fa), Fulah (ff), Finnish (fi), French (fr), Western Frisian (fy), Irish (ga), Gaelic; Scottish Gaelic (gd), Galician (gl), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Croatian (hr), Haitian; Haitian Creole (ht), Hungarian (hu), Armenian (hy), Indonesian (id), Igbo (ig), Iloko (ilo), Icelandic (is), Italian (it), Japanese (ja), Javanese (jv), Georgian (ka), Kazakh (kk), Central Khmer (km), Kannada (kn), Korean (ko), Luxembourgish; Letzeburgesch (lb), Ganda (lg), Lingala (ln), Lao (lo), Lithuanian (lt), Latvian (lv), Malagasy (mg), Macedonian (mk), Malayalam (ml), Mongolian (mn), Marathi (mr), Malay (ms), Burmese (my), Nepali (ne), Dutch; Flemish (nl), Norwegian (no), Northern Sotho (ns), Occitan (post 1500) (oc), Oriya (or), Panjabi; Punjabi (pa), Polish (pl), Pushto; Pashto (ps), Portuguese (pt), Romanian; Moldavian; Moldovan (ro), Russian (ru), Sindhi (sd), Sinhala; Sinhalese (si), Slovak (sk), Slovenian (sl), Somali (so), Albanian (sq), Serbian (sr), Swati (ss), Sundanese (su), Swedish (sv), Swahili (sw), Tamil (ta), Thai (th), Tagalog (tl), Tswana (tn), Turkish (tr), Ukrainian (uk), Urdu (ur), Uzbek (uz), Vietnamese (vi), Wolof (wo), Xhosa (xh), Yiddish (yi), Yoruba (yo), Chinese (zh), Zulu (zu) | 03aff28e928b8390fb40e08c8615d17c |
mit | ['m2m100-12B'] | false | BibTeX entry and citation info ``` @misc{fan2020englishcentric, title={Beyond English-Centric Multilingual Machine Translation}, author={Angela Fan and Shruti Bhosale and Holger Schwenk and Zhiyi Ma and Ahmed El-Kishky and Siddharth Goyal and Mandeep Baines and Onur Celebi and Guillaume Wenzek and Vishrav Chaudhary and Naman Goyal and Tom Birch and Vitaliy Liptchinsky and Sergey Edunov and Edouard Grave and Michael Auli and Armand Joulin}, year={2020}, eprint={2010.11125}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` | a1c1c1e507af9bb66e2e99c062c5cbd5 |