license
stringlengths
2
30
tags
stringlengths
2
513
is_nc
bool
1 class
readme_section
stringlengths
201
597k
hash
stringlengths
32
32
cc-by-4.0
['question generation']
false
Model Card of `research-backup/t5-small-subjqa-vanilla-restaurants-qg`

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) for the question generation task on [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (dataset_name: restaurants) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
e842af0915fa7ead39da0925d46924ea
cc-by-4.0
['question generation']
false
Overview

- **Language model:** [t5-small](https://huggingface.co/t5-small)
- **Language:** en
- **Training data:** [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (restaurants)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
3415bb99dff1445c42061d1653fae438
cc-by-4.0
['question generation']
false
model prediction

```python
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```

- With `transformers`

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "research-backup/t5-small-subjqa-vanilla-restaurants-qg")
output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
40886ca6fc99b088bea7f7d0b0360cd2
cc-by-4.0
['question generation']
false
Evaluation

- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/t5-small-subjqa-vanilla-restaurants-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.restaurants.json)

| Metric     |   Score | Type        | Dataset                                                          |
|:-----------|--------:|:------------|:-----------------------------------------------------------------|
| BERTScore  |   12.27 | restaurants | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| Bleu_1     |    1.75 | restaurants | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| Bleu_2     |    0    | restaurants | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| Bleu_3     |    0    | restaurants | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| Bleu_4     |    0    | restaurants | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| METEOR     |    0.97 | restaurants | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| MoverScore |   49.45 | restaurants | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| ROUGE_L    |    0.73 | restaurants | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
2ef631039a4d2007b2998263efb00bc7
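For context on the table above: Bleu_1 is (roughly) clipped unigram precision, so near-zero Bleu_2 through Bleu_4 means generated and reference questions share almost no n-grams longer than single words. A toy sketch of the clipped-unigram-precision idea, not the lmqg evaluation code, and omitting the brevity penalty:

```python
from collections import Counter

def bleu1_precision(candidate, reference):
    """Clipped unigram precision: each candidate token counts at most
    as often as it appears in the reference."""
    cand, ref = Counter(candidate), Counter(reference)
    clipped = sum(min(n, ref[tok]) for tok, n in cand.items())
    return clipped / max(len(candidate), 1)

# Repeating "the" earns no extra credit beyond its count in the reference.
print(bleu1_precision(["the", "the", "cat"], ["the", "cat", "sat"]))  # 2/3
```

Full BLEU additionally multiplies a geometric mean of 1-gram through 4-gram precisions by a brevity penalty; this sketch covers only the unigram term.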
cc-by-4.0
['question generation']
false
Training hyperparameters

The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_subjqa
- dataset_name: restaurants
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: ['qg']
- model: t5-small
- max_length: 512
- max_length_output: 32
- epoch: 2
- batch: 32
- lr: 5e-05
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15

The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/t5-small-subjqa-vanilla-restaurants-qg/raw/main/trainer_config.json).
386c7b67178b2696d147e30e21ecda61
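Two of the hyperparameters above are worth spelling out: with batch 32 and gradient_accumulation_steps 4, each optimizer update effectively sees 128 examples, and label_smoothing 0.15 softens the one-hot targets. A minimal sketch of one common label-smoothing formulation (an illustration, not the lmqg training code):

```python
def smoothed_targets(num_classes, true_idx, eps=0.15):
    """Label smoothing: the true class keeps 1 - eps of the probability
    mass; eps is spread uniformly over all classes."""
    base = eps / num_classes
    t = [base] * num_classes
    t[true_idx] += 1.0 - eps
    return t

# batch 32 with gradient_accumulation_steps 4 -> 128 examples per update
effective_batch = 32 * 4
t = smoothed_targets(num_classes=5, true_idx=2)
print(effective_batch, sum(t))
```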
apache-2.0
['generated_from_trainer']
false
smalldata-microsoft-deberta-base-eng-only-sentiment-single-finetuned-memes

This model is a fine-tuned version of [jayantapaul888/twitter-data-microsoft-deberta-base-mnli-sentiment-finetuned-memes](https://huggingface.co/jayantapaul888/twitter-data-microsoft-deberta-base-mnli-sentiment-finetuned-memes) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.9308
- Accuracy: 0.8429
- Precision: 0.8588
- Recall: 0.8579
- F1: 0.8583
e11e174c256f0f72f331bcf760d8b40b
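As a sanity check on the metrics above, the reported F1 should be the harmonic mean of the reported precision and recall, assuming all three come from the same averaging scheme:

```python
def f1_score(precision, recall):
    """F1 as the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Using the figures reported above:
print(round(f1_score(0.8588, 0.8579), 4))  # ≈ 0.8583, matching the card
```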
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log        | 1.0   | 378  | 0.3307          | 0.8407   | 0.8682    | 0.8549 | 0.8541 |
| 0.353         | 2.0   | 756  | 0.3677          | 0.8518   | 0.8669    | 0.8656 | 0.8662 |
| 0.1726        | 3.0   | 1134 | 0.5219          | 0.8392   | 0.8570    | 0.8549 | 0.8548 |
| 0.0681        | 4.0   | 1512 | 0.7194          | 0.8414   | 0.8578    | 0.8566 | 0.8572 |
| 0.0681        | 5.0   | 1890 | 0.8617          | 0.8407   | 0.8573    | 0.8560 | 0.8565 |
| 0.0233        | 6.0   | 2268 | 0.9308          | 0.8429   | 0.8588    | 0.8579 | 0.8583 |
fe355796c5899bd67a337a3eb34013a3
apache-2.0
['webgpt', 'regression', 'reward-model']
false
Reward Model pretrained on openai/webgpt_comparison and human-feedback summaries. Unlike the other electra-large model, this model is trained using rank loss with one more dataset. On the validation dataset the results are much more stable than usual. You can refer to this [wandb](https://wandb.ai/theblackcat102/reward-model/runs/1d4e4oi2?workspace=) run for more details. Slightly better than the previous webgpt-only model: [electra-large](https://huggingface.co/theblackcat102/electra-large-webgpt-rm)
64e8e5c0848b675dfe03600866c1eebb
apache-2.0
['automatic-speech-recognition', 'es']
false
exp_w2v2r_es_vp-100k_accent_surpeninsular-8_nortepeninsular-2_s149

Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
37acb5933df3cd1b56b1ab718089956a
apache-2.0
['generated_from_trainer']
false
Goodreads_Books_Reviews_BERT_3

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 1.2441
60f7ecfc5e423ba921b08905c794a7aa
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
17a21f5b4d3081609c37eaa625621628
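The optimizer line above fully specifies Adam. A minimal sketch of a single Adam update with those betas and epsilon (an illustration, not the Hugging Face Trainer internals):

```python
import math

def adam_step(param, grad, m, v, step, lr=5e-05, b1=0.9, b2=0.999, eps=1e-08):
    """One Adam update with the betas and epsilon listed above."""
    m = b1 * m + (1 - b1) * grad          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - b1 ** step)          # bias correction
    v_hat = v / (1 - b2 ** step)
    param -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v

# On the very first step the bias-corrected update has magnitude ~lr,
# independent of the gradient's scale.
p, m, v = adam_step(param=1.0, grad=2.0, m=0.0, v=0.0, step=1)
print(1.0 - p)  # ≈ 5e-05
```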
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4298        | 1.0   | 675  | 1.0408          |
| 1.0215        | 2.0   | 1350 | 0.9826          |
| 0.6131        | 3.0   | 2025 | 1.0458          |
| 0.3825        | 4.0   | 2700 | 1.2441          |
ceb3d60f8de50399f4bd96e6a939ae52
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
Demo: How to use in ESPnet2

```bash
cd espnet
git checkout 0fae8113d99d092e7cbe4bcc48f9361e7012cff2
pip install -e .
cd egs2/slurp_mixture/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/Yen-Ju_Lu_spatilaizedslurp_asr_train_asr_conformer_transformer_valid.acc.best
```

<!-- Generated by scripts/utils/show_asr_result.sh -->
33d8f7e7983b69287f3260a07ddf6415
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
Environments

- date: `Tue Mar 29 04:17:37 UTC 2022`
- python version: `3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.9.0`
- Git hash: `0fae8113d99d092e7cbe4bcc48f9361e7012cff2`
- Commit date: `Thu Mar 24 07:54:19 2022 +0000`
8d78cf147432e0c827ba6d6e90c08ff8
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
WER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_asr_model_valid.acc.best/devel|8690|109017|63.9|21.1|15.0|2.0|38.1|75.4|
|inference_asr_model_valid.acc.best/test|6099|77315|69.0|17.4|13.5|1.8|32.8|68.9|
|inference_asr_model_valid.acc.best/test_ineube|6099|77315|77.8|12.0|10.2|1.5|23.6|59.4|
|inference_asr_model_valid.acc.best/test_qut|6099|77315|68.4|17.9|13.6|1.8|33.3|69.5|
|inference_asr_model_valid.acc.best/test_qut_ineube|6099|77315|78.0|11.9|10.2|1.4|23.4|59.3|
eedef1f62558fe5d0f482d0855fa3e71
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
CER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_asr_model_valid.acc.best/devel|8690|513265|79.6|9.3|11.0|3.6|23.9|75.4|
|inference_asr_model_valid.acc.best/test|6099|362039|82.6|7.6|9.8|3.0|20.4|68.9|
|inference_asr_model_valid.acc.best/test_ineube|6099|362039|87.5|4.9|7.6|2.1|14.6|59.4|
|inference_asr_model_valid.acc.best/test_qut|6099|362039|82.3|7.8|9.9|3.1|20.8|69.5|
|inference_asr_model_valid.acc.best/test_qut_ineube|6099|362039|87.5|4.9|7.6|2.1|14.6|59.3|
aead8f01f4ef2c938e9d793737e15c36
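In these ESPnet score tables, Err is the sum of the Sub, Del, and Ins rates (all percentages of the Wrd count), which makes the rows easy to cross-check:

```python
def total_error_rate(sub, dele, ins):
    """Overall WER/CER as the sum of per-type error rates (all in %)."""
    return sub + dele + ins

# devel WER row above: 21.1 Sub + 15.0 Del + 2.0 Ins = 38.1 Err
print(total_error_rate(21.1, 15.0, 2.0))
# devel CER row above: 9.3 Sub + 11.0 Del + 3.6 Ins = 23.9 Err
print(total_error_rate(9.3, 11.0, 3.6))
```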
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
ASR config <details><summary>expand</summary> ``` config: conf/train_asr.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_raw_en_word ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 3 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 35953 dist_launcher: null multiprocessing_distributed: true unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 50 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 48 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_en_word/train/speech_shape - exp/asr_stats_raw_en_word/train/text_shape.word valid_shape_file: - exp/asr_stats_raw_en_word/valid/speech_shape - exp/asr_stats_raw_en_word/valid/text_shape.word batch_type: folded valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train/wav.scp - speech - sound - - dump/raw/train/text - text - text valid_data_path_and_name_and_type: - - dump/raw/devel/wav.scp - speech - sound - - dump/raw/devel/text - 
text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.0002 scheduler: warmuplr scheduler_conf: warmup_steps: 25000 token_list: - <blank> - <unk> - ▁the - s - ▁to - ▁i - ▁me - ▁you - ▁what - ▁a - ▁is - a - ▁my - ▁please - y - '''' - ▁in - ing - ▁s - e - o - ▁for - i - ▁on - d - t - u - er - p - ▁of - es - re - l - ▁it - ▁p - le - ▁f - ▁m - ▁email - ▁d - m - ▁c - ▁b - st - r - n - ar - ▁t - ▁h - b - ▁that - c - ▁this - h - an - email_query - ▁play - ▁re - ▁do - ▁can - at - ▁have - g - ▁from - ▁and - en - email_sendemail - ▁olly - 'on' - ▁new - it - qa_factoid - calendar_set - ▁any - or - ▁g - ent - ▁how - ▁tell - ch - ▁not - ▁about - ▁at - ate - general_negate - f - ▁today - ▁e - ed - ▁list - ▁r - in - k - ic - social_post - ▁are - play_music - general_quirky - ▁l - al - v - ▁n - ▁be - ▁an - ▁st - et - ▁am - general_praise - ▁time - weather_query - ▁up - ▁check - calendar_query - ▁w - om - ur - ▁send - ▁with - ly - w - general_explain - ad - ▁th - news_query - ▁one - ▁emails - day - ▁sh - ce - ▁ - ▁last - ve - ▁he - z - ▁ch - ▁will - ▁set - ▁would - ▁was - x - general_repeat - ▁add - ▁again - ou - ▁ex - is - ct - general_affirm - general_confirm - ▁song - ▁next - ▁j - ▁meeting - um - ation - ▁turn - ▁did - if - ▁alarm - am - ▁like - datetime_query - ter - ▁remind - ▁o - qa_definition - ▁said - ▁calendar - ll - se - ers - ▁pr - th - ▁get - our - ▁need - ▁all - ot - ▁want - ▁off - and - ▁right - ▁de - ▁tr - ut - general_dontcare - as - ▁week - ▁tweet - ight - ir - ▁your - ▁event - ▁news - ▁se - ay - ion - ▁com - ▁there - ▁ye - ▁weather - un - ▁confirm - ld - calendar_remove - ▁y - ▁lights - ▁more - ▁v - play_radio - ▁does - ▁po - ▁now - id - email_querycontact - ▁show - ▁could - ery - op - ▁day - ▁pm - ▁music - ▁tomorrow - ▁train - ▁u - ine - ▁or - ange - qa_currency - ice - ▁contact - ▁just - ▁jo - ▁think - qa_stock - end - ss - ber - ▁tw - ▁command - ▁make - ▁no - ▁mo - pe - ▁find - 
general_commandstop - ▁when - social_query - ▁so - ong - ▁co - ant - ow - q - ▁much - ▁where - ue - ul - ri - ake - ap - ▁start - ▁mar - ▁by - one - ▁know - ▁wor - oo - ▁give - ▁let - ▁events - der - ▁ro - ▁pl - play_podcasts - art - us - ▁work - ▁current - ol - cooking_recipe - nt - ▁correct - transport_query - ia - ▁stock - ▁br - ive - ▁app - ▁two - ▁latest - lists_query - recommendation_events - ab - ▁go - ▁but - ook - ▁some - ke - alarm_set - play_audiobook - ▁k - ▁response - ▁wr - cast - ▁open - ▁cle - ▁done - ▁got - ▁ca - ite - ase - ▁thank - iv - ag - ah - ▁answer - ie - ▁five - ▁book - ▁rec - ore - ▁john - ist - ment - ▁appreci - ▁fri - ack - ▁remove - ated - ock - ree - j - ▁good - ▁many - orn - fe - ▁radio - ▁we - int - ▁facebook - ▁cl - ▁sev - ▁schedule - ard - ▁per - ▁li - ▁going - nd - ain - recommendation_locations - ▁post - lists_createoradd - ff - ▁su - red - iot_hue_lightoff - lists_remove - ▁ar - een - ▁say - ro - ▁volume - ▁le - ▁reply - ▁complaint - ▁delete - ▁out - lly - ame - ▁ne - ▁detail - ▁if - im - ▁happ - orr - ich - em - ▁ev - ction - ▁dollar - ▁as - alarm_query - audio_volume_mute - ac - music_query - ▁mon - ther - ▁thanks - cel - ▁who - ave - ▁service - ▁mail - ▁hear - ty - de - ▁si - ▁wh - ood - ell - ▁con - icket - ▁once - ound - ▁don - ▁loc - ▁light - ▁birthday - ▁inf - ffe - ▁has - ▁playlist - ort - el - ening - ▁us - ▁un - own - ▁inc - ai - ▁speak - age - ▁mess - ast - ci - ver - ▁ten - ▁underst - gh - audio_volume_up - ome - transport_ticket - ind - iot_hue_lightchange - iot_coffee - pp - ▁res - plain - io - lar - takeaway_query - ge - takeaway_order - email_addcontact - play_game - ak - ▁fa - transport_traffic - music_likeness - ▁rep - act - ust - transport_taxi - iot_hue_lightdim - ▁mu - ▁ti - ick - ▁ha - ould - general_joke - '1' - qa_maths - ▁lo - iot_cleaning - ill - her - iot_hue_lightup - pl - '2' - alarm_remove - orrect - ▁cont - mail - out - audio_volume_down - book - ail - recommendation_movies - ck - ▁man - ▁mus - ▁che 
- me - ume - ▁answ - datetime_convert - ▁late - iot_wemo_on - ▁twe - music_settings - iot_wemo_off - orre - ith - ▁tom - ▁fr - ere - ▁ad - xt - ▁ab - ank - general_greet - now - ▁meet - ▁curre - ▁respon - ▁ag - audio_volume_other - ink - ▁spe - iot_hue_lighton - ght - ▁rem - '?' - urn - ▁op - ▁complain - ▁comm - let - music_dislikeness - ove - ▁sch - ather - ▁rad - edule - ▁under - lease - ▁bir - erv - ▁birth - ▁face - ▁cur - sw - ▁serv - ek - aid - '9' - ▁vol - edu - '5' - cooking_query - lete - ▁joh - ▁det - firm - nder - '0' - _ - irm - '8' - '&' - list - pon - qa_query - '7' - '3' - '-' - N - A - M - E - ']' - '[' - ':' - reci - ▁doll - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true joint_net_conf: null model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false extract_feats_in_collect_stats: false use_preprocessor: true token_type: word bpemodel: null non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: default frontend_conf: fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 30 num_freq_mask: 2 apply_time_mask: true time_mask_width_range: - 0 - 40 num_time_mask: 2 normalize: utterance_mvn normalize_conf: {} preencoder: null preencoder_conf: {} encoder: conformer encoder_conf: output_size: 512 attention_heads: 8 linear_units: 2048 num_blocks: 12 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.1 input_layer: conv2d normalize_before: true macaron_style: true pos_enc_layer_type: rel_pos selfattention_layer_type: rel_selfattn activation_type: swish use_cnn_module: true cnn_module_kernel: 31 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: attention_heads: 8 linear_units: 2048 num_blocks: 6 
dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.1 src_attention_dropout_rate: 0.1 required: - output_dir - token_list version: 0.10.7a1 distributed: true ``` </details>
0f2e8795ea45cb2ef11d0fdb1ecd84f0
apache-2.0
['multilingual model', 'generated_from_trainer']
false
mt5-small-finetuned-multilingual-xlsum-new

This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 2.7673
- Rouge1: 9.1368
- Rouge2: 2.3893
- Rougel: 7.6599
- Rougelsum: 7.6873
ae4d099f4e111810069033596c35dc1e
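Rougel, reported above, is based on the longest common subsequence (LCS) between candidate and reference summaries. A minimal sketch of the LCS computation that underlies it (an illustration, not the actual ROUGE scoring package):

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two token lists,
    the quantity underlying ROUGE-L."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

ref = "the summary of the article".split()
cand = "summary of an article".split()
print(lcs_length(ref, cand))  # "summary", "of", "article" -> 3
```

ROUGE-L then turns the LCS length into precision/recall against the candidate and reference lengths and combines them into an F-measure.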
apache-2.0
['multilingual model', 'generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
3d2d83b456b25b0960338f55d5a011e5
apache-2.0
['multilingual model', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 3.7827        | 1.0   | 1687 | 2.8911          | 8.1314 | 1.9569 | 6.7927 | 6.8179    |
| 3.6518        | 2.0   | 3374 | 2.8338          | 8.6621 | 2.1437 | 7.2171 | 7.246     |
| 3.3691        | 3.0   | 5061 | 2.8015          | 8.9402 | 2.2733 | 7.4744 | 7.497     |
| 3.4435        | 4.0   | 6748 | 2.7746          | 9.0514 | 2.3627 | 7.6144 | 7.6358    |
| 3.5139        | 5.0   | 8435 | 2.7673          | 9.1368 | 2.3893 | 7.6599 | 7.6873    |
d98a31574db497ece8df42f80eb8d62b
apache-2.0
['generated_from_trainer']
false
distilbert-targin-final

This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.6307
- Accuracy: 0.6882
- Precision: 0.6443
- Recall: 0.6384
- F1: 0.6409
9038c2da59ba74e521f493e74ad8b494
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log        | 1.0   | 296  | 0.5882          | 0.6854   | 0.6355    | 0.6182 | 0.6226 |
| 0.5995        | 2.0   | 592  | 0.5693          | 0.7015   | 0.6590    | 0.6019 | 0.6030 |
| 0.5995        | 3.0   | 888  | 0.5823          | 0.6882   | 0.6440    | 0.6377 | 0.6403 |
| 0.5299        | 4.0   | 1184 | 0.5968          | 0.6949   | 0.6488    | 0.6340 | 0.6386 |
| 0.5299        | 5.0   | 1480 | 0.6236          | 0.6835   | 0.6430    | 0.6436 | 0.6433 |
| 0.4698        | 6.0   | 1776 | 0.6307          | 0.6882   | 0.6443    | 0.6384 | 0.6409 |
5a5fd238eda7455d96dd8ea1b3e73172
afl-3.0
[]
false
This model is used to detect **abusive speech** in **Code-Mixed Malayalam**. It is fine-tuned from the MuRIL model on a Code-Mixed Malayalam abusive-speech dataset, with a learning rate of 2e-5. Training code can be found at this [url](https://github.com/hate-alert/IndicAbusive).

LABEL_0 :-> Normal
LABEL_1 :-> Abusive
4c6f173118ca46be2ade338c6ea2a947
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Whisper Small Bg - Yonchevisky_tes2t

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set:
- Loss: 0.7377
- Wer: 61.8352
66c04969ece468e04db0172395c63208
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
- mixed_precision_training: Native AMP
57891123f5053a3677cfc1b170007916
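With the linear scheduler, 500 warmup steps, and 1000 training steps listed above, the learning rate ramps linearly to 1e-05 at step 500 and then decays linearly to zero. A sketch of that schedule (mirroring, not calling, the `transformers` linear schedule with warmup):

```python
def linear_warmup_lr(step, base_lr=1e-05, warmup_steps=500, total_steps=1000):
    """Linear warmup to base_lr, then linear decay to zero."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_warmup_lr(250))   # halfway through warmup: 5e-06
print(linear_warmup_lr(500))   # peak: 1e-05
print(linear_warmup_lr(750))   # halfway through decay: 5e-06
```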
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer      |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.8067        | 0.37  | 100  | 1.6916          | 137.6897 |
| 0.9737        | 0.73  | 200  | 1.1197          | 78.3571  |
| 0.7747        | 1.1   | 300  | 0.9763          | 73.8906  |
| 0.6672        | 1.47  | 400  | 0.8972          | 70.7102  |
| 0.6196        | 1.84  | 500  | 0.8329          | 67.4545  |
| 0.4849        | 2.21  | 600  | 0.7968          | 66.6029  |
| 0.4402        | 2.57  | 700  | 0.7597          | 62.7795  |
| 0.4601        | 2.94  | 800  | 0.7385          | 61.8642  |
| 0.3545        | 3.31  | 900  | 0.7394          | 61.5050  |
| 0.3596        | 3.68  | 1000 | 0.7377          | 61.8352  |
64bdc48dd61bed035dfabe28b5306604
apache-2.0
['generated_from_keras_callback']
false
Gorenzelg/bert-finetuned-squad

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.7916
- Epoch: 0
15936db98a97cc36ba75c88cead73d41
apache-2.0
['generated_from_trainer', 'whisper-event', 'hf-asr-leaderboard']
false
openai/whisper-small

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the common_voice_11_0 dataset. It achieves the following results on the evaluation set:
- Loss: 0.2195
- Wer: 19.56
17150e6ffd0a95a85bfe5cc05c5a8a11
apache-2.0
['generated_from_trainer', 'whisper-event', 'hf-asr-leaderboard']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
- mixed_precision_training: Native AMP
1aed0e10697b9a4f9c4c86cc02bf9a68
apache-2.0
['generated_from_trainer', 'whisper-event', 'hf-asr-leaderboard']
false
Training results

| Epoch | Step  | Wer   |
|:-----:|:-----:|:-----:|
| 0.1   | 1000  | 43.61 |
| 0.2   | 2000  | 36.79 |
| 0.3   | 3000  | 33.05 |
| 0.4   | 4000  | 29.53 |
| 0.5   | 5000  | 26.01 |
| 0.6   | 6000  | 23.44 |
| 0.7   | 7000  | 22.22 |
| 0.8   | 8000  | 21.88 |
| 0.9   | 9000  | 20.53 |
| 1.0   | 10000 | 19.56 |
7019edda672b69d9964d520cc5e82aea
apache-2.0
['classical chinese', 'literary chinese', 'ancient chinese', 'question-answering', 'dependency-parsing']
false
Model Description

This is a BERT model pre-trained on Classical Chinese texts for dependency parsing (head detection on Universal Dependencies) posed as question answering, derived from [bert-ancient-chinese](https://huggingface.co/Jihuai/bert-ancient-chinese) and [UD_Classical_Chinese-Kyoto](https://github.com/UniversalDependencies/UD_Classical_Chinese-Kyoto). Use [MASK] inside `context` to avoid ambiguity when the word given as `question` occurs more than once in `context`.
c8cc4aa3d38baa92752158ceb71b2ffa
apache-2.0
['classical chinese', 'literary chinese', 'ancient chinese', 'question-answering', 'dependency-parsing']
false
How to Use

```py
from transformers import AutoTokenizer,AutoModelForQuestionAnswering,QuestionAnsweringPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-ancient-chinese-base-ud-head")
model=AutoModelForQuestionAnswering.from_pretrained("KoichiYasuoka/bert-ancient-chinese-base-ud-head")
qap=QuestionAnsweringPipeline(tokenizer=tokenizer,model=model,align_to_words=False)
print(qap(question="穴",context="不入虎穴不得虎子"))
```

or (with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/))

```py
class TransformersUD(object):
  def __init__(self,bert):
    import os
    from transformers import (AutoTokenizer,AutoModelForQuestionAnswering,
      AutoModelForTokenClassification,AutoConfig,TokenClassificationPipeline)
    self.tokenizer=AutoTokenizer.from_pretrained(bert)
    self.model=AutoModelForQuestionAnswering.from_pretrained(bert)
    x=AutoModelForTokenClassification.from_pretrained
    if os.path.isdir(bert):
      d,t=x(os.path.join(bert,"deprel")),x(os.path.join(bert,"tagger"))
    else:
      from transformers.utils import cached_file
      c=AutoConfig.from_pretrained(cached_file(bert,"deprel/config.json"))
      d=x(cached_file(bert,"deprel/pytorch_model.bin"),config=c)
      s=AutoConfig.from_pretrained(cached_file(bert,"tagger/config.json"))
      t=x(cached_file(bert,"tagger/pytorch_model.bin"),config=s)
    self.deprel=TokenClassificationPipeline(model=d,tokenizer=self.tokenizer,
      aggregation_strategy="simple")
    self.tagger=TokenClassificationPipeline(model=t,tokenizer=self.tokenizer)
  def __call__(self,text):
    import numpy,torch,ufal.chu_liu_edmonds
    w=[(t["start"],t["end"],t["entity_group"]) for t in self.deprel(text)]
    z,n={t["start"]:t["entity"].split("|") for t in self.tagger(text)},len(w)
    r,m=[text[s:e] for s,e,p in w],numpy.full((n+1,n+1),numpy.nan)
    v,c=self.tokenizer(r,add_special_tokens=False)["input_ids"],[]
    for i,t in enumerate(v):
      q=[self.tokenizer.cls_token_id]+t+[self.tokenizer.sep_token_id]
      c.append([q]+v[0:i]+[[self.tokenizer.mask_token_id]]+v[i+1:]+[[q[-1]]])
    b=[[len(sum(x[0:j+1],[])) for j in range(len(x))] for x in c]
    with torch.no_grad():
      d=self.model(input_ids=torch.tensor([sum(x,[]) for x in c]),
        token_type_ids=torch.tensor([[0]*x[0]+[1]*(x[-1]-x[0]) for x in b]))
    s,e=d.start_logits.tolist(),d.end_logits.tolist()
    for i in range(n):
      for j in range(n):
        m[i+1,0 if i==j else j+1]=s[i][b[i][j]]+e[i][b[i][j+1]-1]
    h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
    if [0 for i in h if i==0]!=[0]:
      i=([p for s,e,p in w]+["root"]).index("root")
      j=i+1 if i<n else numpy.nanargmax(m[:,0])
      m[0:j,0]=m[j+1:,0]=numpy.nan
      h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
    u="
10e404050dca10e182a839a814dc98ae
apache-2.0
['classical chinese', 'literary chinese', 'ancient chinese', 'question-answering', 'dependency-parsing']
false
text = "+text.replace("\n"," ")+"\n"
    for i,(s,e,p) in enumerate(w,1):
      p="root" if h[i]==0 else "dep" if p=="root" else p
      u+="\t".join([str(i),r[i-1],"_",z[s][0][2:],"_","|".join(z[s][1:]),
        str(h[i]),p,"_","_" if i<n and e<w[i][0] else "SpaceAfter=No"])+"\n"
    return u+"\n"

nlp=TransformersUD("KoichiYasuoka/bert-ancient-chinese-base-ud-head")
print(nlp("不入虎穴不得虎子"))
```
2e9848b980ef482ae1b9ba20babba174
apache-2.0
['generated_from_trainer']
false
swin-tiny-patch4-window7-224-finetuned-eurosat

This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set:
- Loss: 0.0590
- Accuracy: 0.9830
1a56e4de9defbc18dc7bb68ffc022422
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2589        | 1.0   | 190  | 0.1036          | 0.9648   |
| 0.1845        | 2.0   | 380  | 0.0707          | 0.9763   |
| 0.1179        | 3.0   | 570  | 0.0590          | 0.9830   |
25c176cd498c4b995c0c24e869e30b83
cc-by-sa-4.0
['korean', 'token-classification', 'pos', 'dependency-parsing']
false
Model Description

This is a RoBERTa model pre-trained on Korean texts for POS-tagging and dependency parsing, derived from [roberta-base-korean-hanja](https://huggingface.co/KoichiYasuoka/roberta-base-korean-hanja) and [morphUD-korean](https://github.com/jungyeul/morphUD-korean). Every morpheme (형태소) is tagged with its [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) tag.
ed8eee00ff71fbca4c7330afee191b5a
cc-by-sa-4.0
['korean', 'token-classification', 'pos', 'dependency-parsing']
false
How to Use

```py
from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-korean-morph-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-korean-morph-upos")
pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple")
nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)]
print(nlp("홍시 맛이 나서 홍시라 생각한다."))
```

or

```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-base-korean-morph-upos")
print(nlp("홍시 맛이 나서 홍시라 생각한다."))
```
7900f1bb074cde46c4d9c996321b1fba
creativeml-openrail-m
['text-to-image']
false
Rose Shield model

Dreambooth model trained by sebrosen8 with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the None base model.

You can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!

Sample pictures of dreamroseshield (use that in your prompt):

![dreamroseshield 0](https://huggingface.co/sebrosen8/rose-shield-model/resolve/main/concept_images/dreamroseshield_%281%29.jpg)
![dreamroseshield 1](https://huggingface.co/sebrosen8/rose-shield-model/resolve/main/concept_images/dreamroseshield_%282%29.jpg)
![dreamroseshield 2](https://huggingface.co/sebrosen8/rose-shield-model/resolve/main/concept_images/dreamroseshield_%283%29.jpg)
![dreamroseshield 3](https://huggingface.co/sebrosen8/rose-shield-model/resolve/main/concept_images/dreamroseshield_%284%29.jpg)
![dreamroseshield 4](https://huggingface.co/sebrosen8/rose-shield-model/resolve/main/concept_images/dreamroseshield_%285%29.jpg)
![dreamroseshield 5](https://huggingface.co/sebrosen8/rose-shield-model/resolve/main/concept_images/dreamroseshield_%286%29.jpg)
![dreamroseshield 6](https://huggingface.co/sebrosen8/rose-shield-model/resolve/main/concept_images/dreamroseshield_%287%29.jpg)
![dreamroseshield 7](https://huggingface.co/sebrosen8/rose-shield-model/resolve/main/concept_images/dreamroseshield_%288%29.jpg)
49ab58cf21defd4fb17a55d65bcae567
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Whisper Tiny It 1 - Gianluca Ruberto

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set:
- Loss: 0.711901
- Wer: 43.295896
df132db683b03ed5a6df4a35ef3120aa
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.5837        | 0.95  | 1000 | 0.789903        | 50.2149 |
| 0.418         | 1.91  | 2000 | 0.730088        | 45.3411 |
| 0.3144        | 2.86  | 3000 | 0.713151        | 44.3705 |
| 0.2667        | 3.82  | 4000 | 0.711901        | 43.2958 |
17a9159952e9ed6e085db2d94bec7b81
apache-2.0
['audio', 'automatic-speech-recognition', 'es', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_6_0', 'robust-speech-event', 'speech', 'xlsr-fine-tuning-week']
false
Fine-tuned XLSR-53 large model for speech recognition in Spanish

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Spanish using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned thanks to the GPU credits generously given by [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :) The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
1b9366e656d14888dc96b8effd9fafd4
apache-2.0
['audio', 'automatic-speech-recognition', 'es', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_6_0', 'robust-speech-event', 'speech', 'xlsr-fine-tuning-week']
false
Usage The model can be used directly (without a language model) as follows... Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library: ```python from huggingsound import SpeechRecognitionModel model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-spanish") audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"] transcriptions = model.transcribe(audio_paths) ``` Writing your own inference script: ```python import torch import librosa from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor LANG_ID = "es" MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-spanish" SAMPLES = 10 test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]") processor = Wav2Vec2Processor.from_pretrained(MODEL_ID) model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
bb044706917f03b30cbd2855d9a1d2cc
apache-2.0
['audio', 'automatic-speech-recognition', 'es', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_6_0', 'robust-speech-event', 'speech', 'xlsr-fine-tuning-week']
false
We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000) batch["speech"] = speech_array batch["sentence"] = batch["sentence"].upper() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) predicted_sentences = processor.batch_decode(predicted_ids) for i, predicted_sentence in enumerate(predicted_sentences): print("-" * 100) print("Reference:", test_dataset[i]["sentence"]) print("Prediction:", predicted_sentence) ``` | Reference | Prediction | | ------------- | ------------- | | HABITA EN AGUAS POCO PROFUNDAS Y ROCOSAS. | HABITAN AGUAS POCO PROFUNDAS Y ROCOSAS | | OPERA PRINCIPALMENTE VUELOS DE CABOTAJE Y REGIONALES DE CARGA. | OPERA PRINCIPALMENTE VUELO DE CARBOTAJES Y REGIONALES DE CARGAN | | PARA VISITAR CONTACTAR PRIMERO CON LA DIRECCIÓN. | PARA VISITAR CONTACTAR PRIMERO CON LA DIRECCIÓN | | TRES | TRES | | REALIZÓ LOS ESTUDIOS PRIMARIOS EN FRANCIA, PARA CONTINUAR LUEGO EN ESPAÑA. | REALIZÓ LOS ESTUDIOS PRIMARIOS EN FRANCIA PARA CONTINUAR LUEGO EN ESPAÑA | | EN LOS AÑOS QUE SIGUIERON, ESTE TRABAJO ESPARTA PRODUJO DOCENAS DE BUENOS JUGADORES. | EN LOS AÑOS QUE SIGUIERON ESTE TRABAJO ESPARTA PRODUJO DOCENA DE BUENOS JUGADORES | | SE ESTÁ TRATANDO DE RECUPERAR SU CULTIVO EN LAS ISLAS CANARIAS. | SE ESTÓ TRATANDO DE RECUPERAR SU CULTIVO EN LAS ISLAS CANARIAS | | SÍ | SÍ | | "FUE ""SACADA"" DE LA SERIE EN EL EPISODIO ""LEAD"", EN QUE ALEXANDRA CABOT REGRESÓ." | FUE SACADA DE LA SERIE EN EL EPISODIO LEED EN QUE ALEXANDRA KAOT REGRESÓ | | SE UBICAN ESPECÍFICAMENTE EN EL VALLE DE MOKA, EN LA PROVINCIA DE BIOKO SUR. | SE UBICAN ESPECÍFICAMENTE EN EL VALLE DE MOCA EN LA PROVINCIA DE PÍOCOSUR |
24446587512a481a435342c1431f23fd
apache-2.0
['audio', 'automatic-speech-recognition', 'es', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_6_0', 'robust-speech-event', 'speech', 'xlsr-fine-tuning-week']
false
Evaluation 1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test` ```bash python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-spanish --dataset mozilla-foundation/common_voice_6_0 --config es --split test ``` 2. To evaluate on `speech-recognition-community-v2/dev_data` ```bash python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-spanish --dataset speech-recognition-community-v2/dev_data --config es --split validation --chunk_length_s 5.0 --stride_length_s 1.0 ```
38901902ed46d1df73053afecab5aed7
apache-2.0
['audio', 'automatic-speech-recognition', 'es', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_6_0', 'robust-speech-event', 'speech', 'xlsr-fine-tuning-week']
false
Citation If you want to cite this model you can use this: ```bibtex @misc{grosman2021xlsr53-large-spanish, title={Fine-tuned {XLSR}-53 large model for speech recognition in {S}panish}, author={Grosman, Jonatas}, howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-spanish}}, year={2021} } ```
8f5e500ea434edc219c60c25a8965deb
other
['generated_from_trainer']
false
dalio-synthetic-io-1.3b This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the AlekseyKorshuk/dalio-synthetic-io dataset. It achieves the following results on the evaluation set: - Loss: 2.4961 - Accuracy: 0.0636
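The card reports the evaluation loss but not perplexity; since perplexity is just the exponential of the cross-entropy loss, it can be recovered in one line (the loss value below is the one reported above):

```python
import math

eval_loss = 2.4961  # evaluation loss reported in this card
perplexity = math.exp(eval_loss)
print(round(perplexity, 2))
```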
8f1ef0068ba22bab0f91ab312e8a26f1
other
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.6941 | 0.05 | 1 | 2.6543 | 0.0622 | | 2.6914 | 0.11 | 2 | 2.6543 | 0.0622 | | 2.6003 | 0.16 | 3 | 2.6016 | 0.0627 | | 2.5603 | 0.21 | 4 | 2.5703 | 0.0626 | | 2.606 | 0.26 | 5 | 2.5508 | 0.0629 | | 2.5439 | 0.32 | 6 | 2.5449 | 0.0629 | | 2.4449 | 0.37 | 7 | 2.5469 | 0.0629 | | 2.5422 | 0.42 | 8 | 2.5469 | 0.0630 | | 2.6101 | 0.47 | 9 | 2.5410 | 0.0632 | | 2.4482 | 0.53 | 10 | 2.5352 | 0.0630 | | 2.501 | 0.58 | 11 | 2.5293 | 0.0631 | | 2.5967 | 0.63 | 12 | 2.5215 | 0.0634 | | 2.4998 | 0.68 | 13 | 2.5137 | 0.0635 | | 2.5957 | 0.74 | 14 | 2.5098 | 0.0636 | | 2.5967 | 0.79 | 15 | 2.5039 | 0.0639 | | 2.5022 | 0.84 | 16 | 2.5 | 0.0637 | | 2.4314 | 0.89 | 17 | 2.4980 | 0.0637 | | 2.6279 | 0.95 | 18 | 2.4961 | 0.0636 | | 2.571 | 1.0 | 19 | 2.4961 | 0.0636 |
faf502bc9cf8ebdeaa398f2de6c1d0a1
mit
['generated_from_trainer']
false
bert-base-multilingual-cased-finetuned-lener_br This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the [Luciano/lener_br_text_to_lm](https://huggingface.co/datasets/Luciano/lener_br_text_to_lm) dataset. It achieves the following results on the evaluation set: - Loss: 0.8132 (To update)
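Since this is a masked-language-model fine-tune, it can be queried with the `fill-mask` pipeline. A minimal sketch — the checkpoint below is the *base* mBERT model used as a stand-in; swap in this card's fine-tuned repo id for legal-domain (LeNER-Br) predictions:

```python
from transformers import pipeline

# "bert-base-multilingual-cased" is the base checkpoint, used here as a
# stand-in; replace it with this card's fine-tuned repo id.
fill = pipeline("fill-mask", model="bert-base-multilingual-cased")
preds = fill("O juiz determinou a [MASK] dos bens.")
for p in preds[:3]:
    print(p["token_str"], round(p["score"], 4))
```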
31504b34e0e92598301446176c554e6e
mit
['generated_from_trainer']
false
Training hyperparameters (To update) The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15
12f7b6cd25762f1d73c9f7e6086ec9b4
mit
['generated_from_trainer']
false
Training results (To update) | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.3167 | 1.0 | 2079 | 1.1163 | | 1.1683 | 2.0 | 4158 | 1.0594 | | 1.0648 | 3.0 | 6237 | 1.0501 | | 1.0228 | 4.0 | 8316 | 0.9693 | | 0.9662 | 5.0 | 10395 | 0.9847 | | 0.9422 | 6.0 | 12474 | 0.9556 | | 0.8696 | 7.0 | 14553 | 0.8978 | | 0.7856 | 8.0 | 16632 | nan | | 0.7849 | 9.0 | 18711 | 0.9192 | | 0.7559 | 10.0 | 20790 | 0.8536 | | 0.7564 | 11.0 | 22869 | 0.9230 | | 0.7641 | 12.0 | 24948 | 0.8852 | | 0.7007 | 13.0 | 27027 | 0.8616 | | 0.7139 | 14.0 | 29106 | 0.8419 | | 0.6543 | 15.0 | 31185 | 0.8460 |
38939ff83728e8788616dc4d15de3290
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Small Swahili This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 sw dataset. It achieves the following results on the evaluation set: - Loss: 0.5597 - Wer: 27.6211
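Because this fine-tune targets Swahili, it helps to pin decoding to that language instead of relying on Whisper's language detection. A sketch of that pattern — `openai/whisper-tiny` is a small stand-in here, since the fine-tuned checkpoint's repo id is not stated in the card:

```python
import numpy as np
from transformers import WhisperProcessor, WhisperForConditionalGeneration

# "openai/whisper-tiny" is a small stand-in; load the fine-tuned Swahili
# checkpoint instead (its repo id is not stated in the card).
model_id = "openai/whisper-tiny"
processor = WhisperProcessor.from_pretrained(model_id)
model = WhisperForConditionalGeneration.from_pretrained(model_id)

audio = np.zeros(16000, dtype=np.float32)  # placeholder: 1 s of silence at 16 kHz
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

# Force Swahili transcription rather than relying on language detection.
generated = model.generate(inputs.input_features, language="sw", task="transcribe")
text = processor.batch_decode(generated, skip_special_tokens=True)[0]
print(text)
```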
bdce1db693d80da9c4d0e6e0f991b4d8
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 250 - training_steps: 500 - mixed_precision_training: Native AMP
1106196c542aad5908d472e214c8da99
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
(https://linktr.ee/kukuhtw) Model Trained by (https://linktr.ee/kukuhtw) using fast-dreambooth (https://github.com/TheLastBen/fast-stable-diffusion) use keyword : <i>male kukuhtw person</i> sample prompt : <i>portrait of male kukuhtw person style studio ghibli</i> <i>male kukuhtw person . 3d model, unreal engine realistic render, 8 k, micro detail, intricate, elegant, highly detailed, centered, digital painting, artstation, smooth, sharp focus, illustration, artgerm, tomasz alen kopera, wlop</i> <i>portrait of smiling joy male kukuhtw person, digital painting, highly detailed, intricate, 3d model, unreal engine realistic render, 8 k, micro detail, intricate, elegant, highly detailed, centered, digital painting, artstation, smooth, sharp focus, illustration, artgerm, tomasz alen kopera, wlop</i> <i>portrait of male kukuhtw person in style pixar disney</i> <i>portrait of male kukuhtw person by greg rutkowski, trending artstation</i> <i>portrait of male kukuhtw person in style comic dc</i> <i>portrait of male kukuhtw person in marvel universe</i> <i>portrait of male kukuhtw-person , low poly, colorfull</i> <i>portrait of male kukuhtw person in water oil made by davinci</i> <i>portrait of male kukuhtw person in water oil made by picasso</i> <i>A detailed portrait of young male kukuhtw-person illustrator, by justin gerard and greg rutkowski, digital art, realistic painting, dnd, character design, trending on artstation</i> <i>young male laughing kukuhtw person style yoji-shinkawa</i> <i>male kukuhtw person, portrait painting by richard schmid, edgar maxence, kehinde wiley, thomas moran, maxfield parrish, studio ghibli, loish, alphonse mucha, fashion photography </i> <i>portrait of male kukuhtw, photo realistic, highly detailed, perfect face, art by artgerm </i> <i>kukuhtw as a character from pixar, au naturel, PS2, PS1, hyper detailed, digital art, trending in artstation, cinematic lighting, studio quality, smooth render, unreal engine 5 rendered, octane rendered, art 
style by klimt and nixeu and ian sprigger and wlop and krenz cushart.</i> Sample Results of this concept: ![1](https://huggingface.co/kukuhtw/kukuhtw-person/resolve/main/sample_images/00133-4094970585-portrait%20of%20smiling%20%20man%20kukuhtw%20person%2C%20warrior%2C%20superhero%2C%20digital%20painting%2C%20highly%20detailed%2C%20intricate%2C%203d%20model%2C%20unreal%20engi.png) ![2](https://huggingface.co/kukuhtw/kukuhtw-person/resolve/main/sample_images/00104-43185600-portrait%20of%20laughing%20kukuhtw%20person%2C%20soccer%20player%2C%20photorealistic%2C%20volumetric%20lighting.%20artstationhd%2C%20artstationhq%2C%20unreal%20engi.png) ![3](https://huggingface.co/kukuhtw/kukuhtw-person/resolve/main/sample_images/00098-3405378873-portrait%20of%20laughing%20kukuhtw%20person%2C%20basket%20player%20%2C%20photorealistic%2C%20volumetric%20lighting.%20artstationhd%2C%20artstationhq%2C%20unreal%20eng.png) ![4](https://huggingface.co/kukuhtw/kukuhtw-person/resolve/main/sample_images/00034-366386194-portrait%20of%20smiling%20%20young%20man%20joy%20kukuhtw%20person%2C%20digital%20painting%2C%20highly%20detailed%2C%20intricate%2C%203d%20model%2C%20unreal%20engine%20realist.png) ![5](https://huggingface.co/kukuhtw/kukuhtw-person/resolve/main/sample_images/download%20-%202022-12-19T165302.012.png) ![6](https://huggingface.co/kukuhtw/kukuhtw-person/resolve/main/sample_images/4938963%20(1).jpeg)
825410cff0afb233f31911ac097df0c2
apache-2.0
['generated_from_trainer']
false
finetuned_token_2e-05_16_02_2022-14_20_41 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1722 - Precision: 0.3378 - Recall: 0.3615 - F1: 0.3492 - Accuracy: 0.9448
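Given the precision/recall/F1 metrics, this is a token-classification (NER-style) fine-tune. A minimal inference sketch — `dslim/bert-base-NER` is a public NER checkpoint used purely as a stand-in; replace it with this card's checkpoint to reproduce the metrics above:

```python
from transformers import pipeline

# "dslim/bert-base-NER" is a public NER checkpoint used as a stand-in;
# replace it with this card's fine-tuned checkpoint.
ner = pipeline("token-classification", model="dslim/bert-base-NER",
               aggregation_strategy="simple")
entities = ner("My name is Clara and I live in Berkeley.")
for e in entities:
    print(e["entity_group"], e["word"], round(float(e["score"]), 3))
```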
3595c317a06da6ce653b7f445a127c9d
apache-2.0
['automatic-speech-recognition', 'fa']
false
exp_w2v2t_fa_wav2vec2_s321 Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
33484eb80a189f9fe621f57fc0682c3f
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8508 - Matthews Correlation: 0.5452
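The headline metric here is the Matthews correlation coefficient, the standard metric for CoLA. A small pure-Python check of how it is computed from binary confusion-matrix counts (the labels below are toy values, not the model's outputs):

```python
import math

def matthews_corrcoef(y_true, y_pred):
    # Confusion-matrix counts for binary labels.
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

mcc = matthews_corrcoef([1, 1, 0, 1, 0, 0, 1, 0], [1, 0, 0, 1, 0, 1, 1, 0])
print(mcc)  # 0.5
```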
f2d23526d4543232261e3512be9dd9aa
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5221 | 1.0 | 535 | 0.5370 | 0.4246 | | 0.3462 | 2.0 | 1070 | 0.5157 | 0.5183 | | 0.2332 | 3.0 | 1605 | 0.6324 | 0.5166 | | 0.1661 | 4.0 | 2140 | 0.7616 | 0.5370 | | 0.1263 | 5.0 | 2675 | 0.8508 | 0.5452 |
b986ee12c287c8231726fe77f04499ca
mit
['fill-mask', 'japanese', 'albert']
false
for PyTorch ```py from transformers import ( AutoModelForMaskedLM, AutoTokenizer ) tokenizer = AutoTokenizer.from_pretrained("ken11/albert-base-japanese-v1-with-japanese-tokenizer") model = AutoModelForMaskedLM.from_pretrained("ken11/albert-base-japanese-v1-with-japanese-tokenizer") text = "明日は明日の[MASK]が吹く" tokens = tokenizer(text, return_tensors="pt") mask_index = tokens["input_ids"][0].tolist().index(tokenizer.mask_token_id) predict = model(**tokens)[0] _, result = predict[0, mask_index].topk(5) print(tokenizer.convert_ids_to_tokens(result.tolist())) ```
768a833ec3a386ef48e5ef7f5ff26336
mit
['fill-mask', 'japanese', 'albert']
false
for TensorFlow ```py from transformers import ( TFAutoModelForMaskedLM, AutoTokenizer ) import tensorflow as tf tokenizer = AutoTokenizer.from_pretrained("ken11/albert-base-japanese-v1-with-japanese-tokenizer") model = TFAutoModelForMaskedLM.from_pretrained("ken11/albert-base-japanese-v1-with-japanese-tokenizer") text = "明日は明日の[MASK]が吹く" tokens = tokenizer(text, return_tensors="tf") mask_index = tokens["input_ids"][0].numpy().tolist().index(tokenizer.mask_token_id) predict = model(**tokens)[0] result = tf.math.top_k(predict[0, mask_index], k=5) print(tokenizer.convert_ids_to_tokens(result.indices.numpy())) ```
8037b6ea2b4d3a8ff8513d104812d478
mit
['fill-mask', 'japanese', 'albert']
false
Training Data The following was used for training: - [the full text of Japanese Wikipedia](https://ja.wikipedia.org/wiki/Wikipedia:%E3%83%87%E3%83%BC%E3%82%BF%E3%83%99%E3%83%BC%E3%82%B9%E3%83%80%E3%82%A6%E3%83%B3%E3%83%AD%E3%83%BC%E3%83%89)
14ee0a0ea545324f967c8671c81f9825
apache-2.0
['generated_from_trainer']
false
bert_uncased_L-2_H-128_A-2-finetuned-emotion-finetuned-tweet This model is a fine-tuned version of [muhtasham/bert_uncased_L-2_H-128_A-2-finetuned-emotion](https://huggingface.co/muhtasham/bert_uncased_L-2_H-128_A-2-finetuned-emotion) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.4004 - Accuracy: 0.8717 - F1: 0.8717
ef905e80b741a5d4a76173e2498f70f9
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.4751 | 1.28 | 500 | 0.3880 | 0.828 | 0.8277 | | 0.3453 | 2.56 | 1000 | 0.3282 | 0.8608 | 0.8607 | | 0.2973 | 3.84 | 1500 | 0.3140 | 0.8695 | 0.8695 | | 0.26 | 5.12 | 2000 | 0.3154 | 0.8736 | 0.8735 | | 0.2218 | 6.39 | 2500 | 0.3144 | 0.8756 | 0.8756 | | 0.1977 | 7.67 | 3000 | 0.3197 | 0.876 | 0.8760 | | 0.1656 | 8.95 | 3500 | 0.3526 | 0.8737 | 0.8735 | | 0.1404 | 10.23 | 4000 | 0.3865 | 0.8691 | 0.8689 | | 0.121 | 11.51 | 4500 | 0.4004 | 0.8717 | 0.8717 |
3eb8fc68411b1b2c045dd4a0f911ea68
apache-2.0
['generated_from_trainer']
false
mobilebert_add_GLUE_Experiment_wnli_128 This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE WNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.6908 - Accuracy: 0.5634
fbd8a2f4f461ab2b11059a41cc9b6090
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6936 | 1.0 | 5 | 0.6912 | 0.5634 | | 0.6932 | 2.0 | 10 | 0.6918 | 0.5634 | | 0.6931 | 3.0 | 15 | 0.6920 | 0.5634 | | 0.693 | 4.0 | 20 | 0.6916 | 0.5634 | | 0.693 | 5.0 | 25 | 0.6912 | 0.5634 | | 0.693 | 6.0 | 30 | 0.6911 | 0.5634 | | 0.693 | 7.0 | 35 | 0.6908 | 0.5634 | | 0.693 | 8.0 | 40 | 0.6911 | 0.5634 | | 0.6931 | 9.0 | 45 | 0.6908 | 0.5634 | | 0.693 | 10.0 | 50 | 0.6911 | 0.5634 | | 0.693 | 11.0 | 55 | 0.6916 | 0.5634 | | 0.693 | 12.0 | 60 | 0.6916 | 0.5634 | | 0.693 | 13.0 | 65 | 0.6917 | 0.5634 | | 0.6929 | 14.0 | 70 | 0.6918 | 0.5634 |
8eccdfbddd2c98c068e6b56c6ac34b86
mit
[]
false
ScandiNLI - Natural Language Inference model for Scandinavian Languages This model is a fine-tuned version of [jonfd/electra-small-nordic](https://huggingface.co/jonfd/electra-small-nordic) for Natural Language Inference in Danish, Norwegian Bokmål and Swedish. We have released three models for Scandinavian NLI, of different sizes: - [alexandrainst/scandi-nli-large](https://huggingface.co/alexandrainst/scandi-nli-large) - [alexandrainst/scandi-nli-base](https://huggingface.co/alexandrainst/scandi-nli-base) - alexandrainst/scandi-nli-small (this) A demo of the large model can be found in [this Hugging Face Space](https://huggingface.co/spaces/alexandrainst/zero-shot-classification) - check it out! The performance and model size of each of them can be found in the Performance section below.
3ef3a07691bcd1dc4c3cd58ddf664c9d
mit
[]
false
Quick start You can use this model in your scripts as follows: ```python >>> from transformers import pipeline >>> classifier = pipeline( ... "zero-shot-classification", ... model="alexandrainst/scandi-nli-small", ... ) >>> classifier( ... "Mexicansk bokser advarer Messi - 'Du skal bede til gud, om at jeg ikke finder dig'", ... candidate_labels=['sundhed', 'politik', 'sport', 'religion'], ... hypothesis_template="Dette eksempel handler om {}", ... ) {'sequence': "Mexicansk bokser advarer Messi - 'Du skal bede til gud, om at jeg ikke finder dig'", 'labels': ['religion', 'sport', 'politik', 'sundhed'], 'scores': [0.4504755437374115, 0.20737220346927643, 0.1976872682571411, 0.14446501433849335]} ```
ed884d3f844ad2159842d1e12b175562
mit
[]
false
Scandinavian Evaluation The Scandinavian scores are the average of the Danish, Swedish and Norwegian scores, which can be found in the sections below. | **Model** | **MCC** | **Macro-F1** | **Accuracy** | **Number of Parameters** | | :-------- | :------------ | :--------- | :----------- | :----------- | | [`alexandrainst/scandi-nli-large`](https://huggingface.co/alexandrainst/scandi-nli-large) | **73.70%** | **74.44%** | **83.91%** | 354M | | [`MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7`](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7) | 69.01% | 71.99% | 80.66% | 279M | | [`alexandrainst/scandi-nli-base`](https://huggingface.co/alexandrainst/scandi-nli-base) | 67.42% | 71.54% | 80.09% | 178M | | [`joeddav/xlm-roberta-large-xnli`](https://huggingface.co/joeddav/xlm-roberta-large-xnli) | 64.17% | 70.80% | 77.29% | 560M | | [`MoritzLaurer/mDeBERTa-v3-base-mnli-xnli`](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) | 63.94% | 70.41% | 77.23% | 279M | | [`NbAiLab/nb-bert-base-mnli`](https://huggingface.co/NbAiLab/nb-bert-base-mnli) | 61.71% | 68.36% | 76.08% | 178M | | `alexandrainst/scandi-nli-small` (this) | 56.02% | 65.30% | 73.56% | **22M** |
e94e43407e9edf23cb68156784eca2cd
mit
[]
false
Danish Evaluation We use the test split of the [DanFEVER dataset](https://aclanthology.org/2021.nodalida-main.pdf#page=439) to evaluate the Danish performance of the models. The test split is generated using [this gist](https://gist.github.com/saattrupdan/1cb8379232fdec6e943dc84595a85e7c). | **Model** | **MCC** | **Macro-F1** | **Accuracy** | **Number of Parameters** | | :-------- | :------------ | :--------- | :----------- | :----------- | | [`alexandrainst/scandi-nli-large`](https://huggingface.co/alexandrainst/scandi-nli-large) | **73.80%** | **58.41%** | **86.98%** | 354M | | [`MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7`](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7) | 68.37% | 57.10% | 83.25% | 279M | | [`alexandrainst/scandi-nli-base`](https://huggingface.co/alexandrainst/scandi-nli-base) | 62.44% | 55.00% | 80.42% | 178M | | [`NbAiLab/nb-bert-base-mnli`](https://huggingface.co/NbAiLab/nb-bert-base-mnli) | 56.92% | 53.25% | 76.39% | 178M | | [`MoritzLaurer/mDeBERTa-v3-base-mnli-xnli`](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) | 52.79% | 52.00% | 72.35% | 279M | | [`joeddav/xlm-roberta-large-xnli`](https://huggingface.co/joeddav/xlm-roberta-large-xnli) | 49.18% | 50.31% | 69.73% | 560M | | `alexandrainst/scandi-nli-small` (this) | 47.28% | 48.88% | 73.46% | **22M** |
1084b5adda2eea0b79912bbbff3a4d66
mit
[]
false
Swedish Evaluation We use the test split of the machine translated version of the [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) dataset to evaluate the Swedish performance of the models. We acknowledge that not evaluating on a gold standard dataset is not ideal, but unfortunately we are not aware of any NLI datasets in Swedish. | **Model** | **MCC** | **Macro-F1** | **Accuracy** | **Number of Parameters** | | :-------- | :------------ | :--------- | :----------- | :----------- | | [`alexandrainst/scandi-nli-large`](https://huggingface.co/alexandrainst/scandi-nli-large) | **76.69%** | **84.47%** | **84.38%** | 354M | | [`joeddav/xlm-roberta-large-xnli`](https://huggingface.co/joeddav/xlm-roberta-large-xnli) | 75.35% | 83.42% | 83.55% | 560M | | [`MoritzLaurer/mDeBERTa-v3-base-mnli-xnli`](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) | 73.84% | 82.46% | 82.58% | 279M | | [`MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7`](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7) | 73.32% | 82.15% | 82.08% | 279M | | [`alexandrainst/scandi-nli-base`](https://huggingface.co/alexandrainst/scandi-nli-base) | 72.29% | 81.37% | 81.51% | 178M | | [`NbAiLab/nb-bert-base-mnli`](https://huggingface.co/NbAiLab/nb-bert-base-mnli) | 64.69% | 76.40% | 76.47% | 178M | | `alexandrainst/scandi-nli-small` (this) | 62.35% | 74.79% | 74.93% | **22M** |
c44b78033a83e0dda1e8a15b440991ba
mit
[]
false
Norwegian Evaluation We use the test split of the machine translated version of the [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) dataset to evaluate the Norwegian performance of the models. We acknowledge that not evaluating on a gold standard dataset is not ideal, but unfortunately we are not aware of any NLI datasets in Norwegian. | **Model** | **MCC** | **Macro-F1** | **Accuracy** | **Number of Parameters** | | :-------- | :------------ | :--------- | :----------- | :----------- | | [`alexandrainst/scandi-nli-large`](https://huggingface.co/alexandrainst/scandi-nli-large) | **70.61%** | **80.43%** | **80.36%** | 354M | | [`joeddav/xlm-roberta-large-xnli`](https://huggingface.co/joeddav/xlm-roberta-large-xnli) | 67.99% | 78.68% | 78.60% | 560M | | [`alexandrainst/scandi-nli-base`](https://huggingface.co/alexandrainst/scandi-nli-base) | 67.53% | 78.24% | 78.33% | 178M | | [`MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7`](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7) | 65.33% | 76.73% | 76.65% | 279M | | [`MoritzLaurer/mDeBERTa-v3-base-mnli-xnli`](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) | 65.18% | 76.76% | 76.77% | 279M | | [`NbAiLab/nb-bert-base-mnli`](https://huggingface.co/NbAiLab/nb-bert-base-mnli) | 63.51% | 75.42% | 75.39% | 178M | | `alexandrainst/scandi-nli-small` (this) | 58.42% | 72.22% | 72.30% | **22M** |
e6197feed396eaf34f9abe1bf0d521c0
mit
[]
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 4242 - gradient_accumulation_steps: 1 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - max_steps: 50,000
5aae8400dd4f2ed27ee6e8e0ef6519fb
lgpl-3.0
[]
false
t5_interpreter A rut5-based model for incomplete utterance restoration, spellchecking and text normalization for dialogue utterances. Read more about the task [here](https://huggingface.co/inkoziev/rugpt_interpreter).
1705f012ac7aca77d9d6295d4635eba5
lgpl-3.0
[]
false
Usage example ``` import torch from transformers import T5ForConditionalGeneration, T5Tokenizer model_name = 'inkoziev/t5_interpreter' tokenizer = T5Tokenizer.from_pretrained(model_name,) device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = T5ForConditionalGeneration.from_pretrained(model_name) model.eval() t5_input = '- Тебя как зовут?\n- Мальвина' ```
130ec5f1121fd5890346af94e3cd69aa
lgpl-3.0
[]
false
``` input_ids = tokenizer(t5_input, return_tensors='pt').input_ids out_ids = model.generate(input_ids=input_ids, max_length=40, eos_token_id=tokenizer.eos_token_id, early_stopping=True) t5_output = tokenizer.decode(out_ids[0][1:]) print(t5_output) ```
70962921336ed9077bda48245063118e
mit
[]
false
Base model: [roberta-base](https://huggingface.co/roberta-base) Fine tuned as a progression model (to predict the acceptability of a dialogue) on the [Persuasion For Good Dataset](https://gitlab.com/ucdavisnlp/persuasionforgood) (Wang et al., 2019): Given a complete dialogue from (or in the style of) Persuasion For Good, the task is to predict a numeric score typically in the range (-3, 3) where a higher score means a more acceptable dialogue in context of the donation solicitation task. **Example input**: `How are you?</s>Good! how about yourself?</s>Great. Would you like to donate today to help the children?</s>` For more context and usage information see [https://github.rpi.edu/LACAI/dialogue-progression](https://github.rpi.edu/LACAI/dialogue-progression).
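Since the model predicts a single numeric acceptability score, inference follows the sequence-classification-as-regression pattern. A minimal sketch — `roberta-base` with a freshly initialized 1-dimensional head is a stand-in here, so its score is meaningless until the actual fine-tuned progression checkpoint from the LACAI repository is loaded:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# "roberta-base" with a fresh 1-dim head is a stand-in; load the actual
# fine-tuned progression checkpoint for meaningful scores.
tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=1)
model.eval()

dialogue = "How are you?</s>Good! how about yourself?</s>Great. Would you like to donate today to help the children?</s>"
with torch.no_grad():
    score = model(**tok(dialogue, return_tensors="pt")).logits.squeeze().item()
print(score)  # higher = more acceptable dialogue (with the tuned head loaded)
```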
d99a142637ecefdd6834b1c3f6ab3d24
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_7_0', 'generated_from_trainer', 'rm-vallader', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
false
wav2vec2-large-xls-r-300m-romansh-vallader This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - RM-VALLADER dataset. It achieves the following results on the evaluation set: - Loss: 0.3155 - Wer: 0.3162
06a83e5d4dfb42a7af553d0e07ebacb0
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_7_0', 'generated_from_trainer', 'rm-vallader', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 100.0 - mixed_precision_training: Native AMP
238d695df0c6572e96ff5883841e2aa1
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_7_0', 'generated_from_trainer', 'rm-vallader', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.9556 | 15.62 | 500 | 2.9300 | 1.0 | | 1.7874 | 31.25 | 1000 | 0.7566 | 0.6509 | | 1.0131 | 46.88 | 1500 | 0.3671 | 0.3828 | | 0.8439 | 62.5 | 2000 | 0.3350 | 0.3416 | | 0.7502 | 78.12 | 2500 | 0.3155 | 0.3296 | | 0.7093 | 93.75 | 3000 | 0.3182 | 0.3186 |
cb03db1ec2d9ad38bad8385875583560
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-1']
false
MultiBERTs Seed 1 Checkpoint 40k (uncased) Seed 1 intermediate checkpoint 40k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint. The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multiberts-seed-1). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani).
c571bd686a43fdbdbe3cc3c490afab05
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-1']
false
How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-40k') model = BertModel.from_pretrained("multiberts-seed-1-40k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ```
e2142611c739cf1fdb6dd0f3ecf5df81
other
['generated_from_trainer', 'text generation', 'stable diffusion', 'midjourney', 'text2image', 'text to image', 'prompt augment', 'prompt engineering']
false
pszemraj/opt-350m-multiprompt <a href="https://colab.research.google.com/gist/pszemraj/bdd1238ee4b8330aeec6774a16f9a677/opt-350m-multiprompt-demo.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> Generate/augment your prompt with a model trained on a large & diverse prompt dataset. This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the pszemraj/text2image-prompts-multi dataset. It achieves the following results on the evaluation set: - Loss: 1.6669 - eval steps per second: 16.21 - perplexity: 5.29
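Besides the linked Colab demo, the model can be queried directly with the `text-generation` pipeline. The repo id comes from the card title; the sampling settings below are illustrative, not the card's recommended values:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="pszemraj/opt-350m-multiprompt")

# Sampling settings here are illustrative defaults, not recommendations
# from the card.
out = generator(
    "a misty forest at dawn",
    max_new_tokens=32,
    do_sample=True,
    temperature=0.9,
)
print(out[0]["generated_text"])
```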
c5697f0959bf2c7bc374e9d3050f8feb
other
['generated_from_trainer', 'text generation', 'stable diffusion', 'midjourney', 'text2image', 'text to image', 'prompt augment', 'prompt engineering']
false
Example ![landscape of florida](https://i.imgur.com/DeKNHtC.jpg) <br> _The above example was created with [DALL-E 2](https://labs.openai.com/sc/YbiY2kkuQeODzHNwUHn4D5RN) but will of course work with any text2image model._
5dcf7249eef5352a1004a877297499c9
other
['generated_from_trainer', 'text generation', 'stable diffusion', 'midjourney', 'text2image', 'text to image', 'prompt augment', 'prompt engineering']
false
Intended uses & limitations - The model will generate augmentations biased toward its training data, i.e. what people have already asked for in the Stable Diffusion/Midjourney Discords and similar communities. Training on a larger, multi-source dataset was an attempt to mitigate this.
90136ea3cbde6dc4c265601f1f40f269
other
['generated_from_trainer', 'text generation', 'stable diffusion', 'midjourney', 'text2image', 'text to image', 'prompt augment', 'prompt engineering']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 16 - total_train_batch_size: 256 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.04 - num_epochs: 4.0
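As a quick sanity check on the distributed setup, the reported total_train_batch_size is the product of the per-device batch size, the device count, and the gradient accumulation steps:

```python
train_batch_size = 8           # per-device batch size from the list above
num_devices = 2
gradient_accumulation_steps = 16

effective = train_batch_size * num_devices * gradient_accumulation_steps
print(effective)  # 256, matching total_train_batch_size
```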
eb59bbccb09623002875630932d37490
other
['generated_from_trainer', 'text generation', 'stable diffusion', 'midjourney', 'text2image', 'text to image', 'prompt augment', 'prompt engineering']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1677 | 1.0 | 990 | 2.0888 | | 1.856 | 2.0 | 1980 | 1.8215 | | 1.6864 | 3.0 | 2970 | 1.6935 | | 1.6228 | 4.0 | 3960 | 1.6670 |
254532b9f35aaf026cfbe23c8319f366
apache-2.0
['generated_from_trainer']
false
korean-aihub-learning-2 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 2.9945 - Wer: 0.9533
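The Wer figure above is word error rate: word-level edit distance (substitutions + insertions + deletions) divided by the number of reference words, so 0.9533 means nearly every reference word is wrong. Real evaluations typically use `jiwer` or `evaluate`, and normalization details can shift the number; here is a minimal self-contained sketch of the metric itself, assuming whitespace tokenization and a non-empty reference:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat", "the cat sat"))  # 0.0
print(wer("the cat sat", "the dog sat"))  # one substitution in three words ≈ 0.33
```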
4006355afa21a52f0a5cbfadd858f27b
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 0.99 | 35 | 46.3840 | 1.0 | | No log | 1.99 | 70 | 26.0949 | 1.0 | | 37.1581 | 2.99 | 105 | 19.0168 | 1.0 | | 37.1581 | 3.99 | 140 | 13.3294 | 1.0 | | 37.1581 | 4.99 | 175 | 7.9410 | 1.0 | | 12.5054 | 5.99 | 210 | 5.0323 | 1.0 | | 12.5054 | 6.99 | 245 | 4.6242 | 1.0 | | 12.5054 | 7.99 | 280 | 4.6206 | 1.0 | | 4.8394 | 8.99 | 315 | 4.5820 | 1.0 | | 4.8394 | 9.99 | 350 | 4.5629 | 1.0 | | 4.8394 | 10.99 | 385 | 4.5385 | 1.0 | | 4.6489 | 11.99 | 420 | 4.5627 | 1.0 | | 4.6489 | 12.99 | 455 | 4.5276 | 1.0 | | 4.6489 | 13.99 | 490 | 4.5292 | 1.0 | | 4.5654 | 14.99 | 525 | 4.5179 | 1.0 | | 4.5654 | 15.99 | 560 | 4.4928 | 1.0 | | 4.5654 | 16.99 | 595 | 4.4791 | 1.0 | | 4.521 | 17.99 | 630 | 4.4649 | 1.0 | | 4.521 | 18.99 | 665 | 4.4588 | 1.0 | | 4.3529 | 19.99 | 700 | 4.3632 | 1.0 | | 4.3529 | 20.99 | 735 | 4.2990 | 1.0 | | 4.3529 | 21.99 | 770 | 4.2326 | 0.9988 | | 4.1301 | 22.99 | 805 | 4.0843 | 1.0 | | 4.1301 | 23.99 | 840 | 3.9784 | 0.9975 | | 4.1301 | 24.99 | 875 | 3.7876 | 1.0 | | 3.7047 | 25.99 | 910 | 3.6109 | 0.9988 | | 3.7047 | 26.99 | 945 | 3.4049 | 0.9828 | | 3.7047 | 27.99 | 980 | 3.1913 | 0.9606 | | 3.006 | 28.99 | 1015 | 3.0567 | 0.9508 | | 3.006 | 29.99 | 1050 | 2.9945 | 0.9533 |
f1194c3735f27c69ba98fa1df751d008
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'animal']
false
DreamBooth model for the bongodog concept trained by dacquaviva on the dacquaviva/bongodog dataset. This is a Stable Diffusion model fine-tuned on the bongodog concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of bongodog dog** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
78121fe7d245fc90674fbf208c2cb8e0
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'animal']
false
Photo of my dog Bongo: <img src="https://drive.google.com/uc?export=view&id=1m5heLYYzQIxDeyNoxxtB6X7bwNhxqG9v" alt="bongodog" width="200"/> <img src="https://drive.google.com/uc?export=view&id=1nP3JqAYEZSlTFAgduhhFC6S7XYKo8Nz9" alt="bongodog" width="200"/>
20f2a4c6ff19ab597385509a8786c65d
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'animal']
false
Examples of generated images: <img src="https://drive.google.com/uc?export=view&id=1DaJUXJP2nQy0_TVQpf3QAaJd6rBfbOlo" alt="bongodog" width="200"/> <img src="https://drive.google.com/uc?export=view&id=1ybeN5vg0OYuSalOenQX8AB8yaYBRW3O0" alt="bongodog" width="200"/> <img src="https://drive.google.com/uc?export=view&id=1-HqsSQpuPIh8Y8C0kD92mPs_Rd68cPik" alt="bongodog" width="200"/> <img src="https://drive.google.com/uc?export=view&id=1JvUDQuTC0oaZiFPKQXUfUsHfhr8LLpCw" alt="bongodog" width="200"/>
d60a178aeb65552834adf822e209d2ed
apache-2.0
['multiberts', 'multiberts-seed_3', 'multiberts-seed_3-step_1900k']
false
MultiBERTs, Intermediate Checkpoint - Seed 3, Step 1900k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is the intermediate checkpoint for seed 3, captured at pre-training step 1,900k.
1ab13639c6a53a74e22b9b9d2feedd69
apache-2.0
['multiberts', 'multiberts-seed_3', 'multiberts-seed_3-step_1900k']
false
How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_3-step_1900k')
model = TFBertModel.from_pretrained("google/multiberts-seed_3-step_1900k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_3-step_1900k')
model = BertModel.from_pretrained("google/multiberts-seed_3-step_1900k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
838c6411fc41605df4bad3070dd2d2ca
apache-2.0
['generated_from_trainer']
false
distilbert_sa_GLUE_Experiment_data_aug_qqp_256 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QQP dataset. It achieves the following results on the evaluation set: - Loss: 0.5126 - Accuracy: 0.7888 - F1: 0.7301 - Combined Score: 0.7595
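In these generated GLUE cards, Combined Score appears to be the unweighted mean of the reported task metrics, here accuracy and F1 (the same relation holds for every row of the results table below):

```python
accuracy = 0.7888
f1 = 0.7301

# Combined score = simple average of the task metrics.
combined_score = (accuracy + f1) / 2
# Within rounding distance of the reported 0.7595.
print(f"{combined_score:.4f}")
```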
845c04adf4cb0bc0fe3782ab6a0fc237
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:--------------:| | 0.3952 | 1.0 | 29671 | 0.5126 | 0.7888 | 0.7301 | 0.7595 | | 0.2233 | 2.0 | 59342 | 0.5941 | 0.7960 | 0.7346 | 0.7653 | | 0.147 | 3.0 | 89013 | 0.6603 | 0.7997 | 0.7340 | 0.7668 | | 0.1067 | 4.0 | 118684 | 0.7091 | 0.8012 | 0.7376 | 0.7694 | | 0.082 | 5.0 | 148355 | 0.8757 | 0.8000 | 0.7377 | 0.7688 | | 0.0652 | 6.0 | 178026 | 0.8332 | 0.8044 | 0.7379 | 0.7711 |
86178b4f761049433e2e4003043a9958
apache-2.0
['generated_from_trainer']
false
wspr-sm-ar4 This model is a fine-tuned version of [Seyfelislem/wspr-sm-ar3](https://huggingface.co/Seyfelislem/wspr-sm-ar3) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.3664 - Wer: 58.7933
aba697071ec714d2932408f6f9da6d6a
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 150 - mixed_precision_training: Native AMP
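One detail worth noticing above: the warmup (500 steps) is longer than the entire run (150 steps). Under the usual linear-warmup rule in `transformers` (learning rate scales with step/warmup_steps during warmup), the nominal peak of 1e-5 is never reached. A sketch, assuming that ramp-from-zero behaviour:

```python
def warmup_lr(step: int, base_lr: float = 1e-5, warmup_steps: int = 500) -> float:
    """Linear warmup: ramp from 0 up to base_lr over warmup_steps."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr  # a full linear schedule would decay after this point

# Training stops at step 150, still on the warmup ramp:
final_lr = warmup_lr(150)  # ≈ 3e-6, well below the nominal 1e-5
```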
608fca8257aacd7a9a19670b3360ddb5
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0972 | 1.0 | 150 | 0.3664 | 58.7933 |
3dcb3beb28490165541e9b7e7adcedd2
apache-2.0
['generated_from_trainer']
false
flan-t5-large-extraction-cnndm_fs0.05-all This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 1.6598
df35622a8dd712a7b86b899e5390c04e
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0976 | 0.23 | 200 | 1.7854 | | 1.9321 | 0.45 | 400 | 1.7458 | | 1.891 | 0.68 | 600 | 1.7279 | | 1.8488 | 0.9 | 800 | 1.7043 | | 1.8054 | 1.13 | 1000 | 1.7050 | | 1.776 | 1.35 | 1200 | 1.6931 | | 1.7713 | 1.58 | 1400 | 1.6847 | | 1.7584 | 1.81 | 1600 | 1.6859 | | 1.7516 | 2.03 | 1800 | 1.6801 | | 1.7185 | 2.26 | 2000 | 1.6720 | | 1.7087 | 2.48 | 2200 | 1.6768 | | 1.7072 | 2.71 | 2400 | 1.6646 | | 1.6804 | 2.93 | 2600 | 1.6618 | | 1.6675 | 3.16 | 2800 | 1.6630 | | 1.6713 | 3.39 | 3000 | 1.6598 | | 1.6557 | 3.61 | 3200 | 1.6668 | | 1.6597 | 3.84 | 3400 | 1.6641 | | 1.6598 | 4.06 | 3600 | 1.6643 | | 1.6533 | 4.29 | 3800 | 1.6647 | | 1.652 | 4.51 | 4000 | 1.6646 |
05deb665bad04832dddba1807befc1df
apache-2.0
['translation']
false
vie-fra * source group: Vietnamese * target group: French * OPUS readme: [vie-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-fra/README.md) * model: transformer-align * source language(s): vie * target language(s): fra * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-fra/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-fra/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-fra/opus-2020-06-17.eval.txt)
66abfe31adc9357ab8d61401608386fe