license
stringlengths
2
30
tags
stringlengths
2
513
is_nc
bool
1 class
readme_section
stringlengths
201
597k
hash
stringlengths
32
32
cc-by-4.0
['translation', 'opus-mt-tc']
false
Yksi, kaksi, kolme, neljä, viisi, kuusi, seitsemän, kahdeksan, yhdeksän, kymmenen. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-base-uk-fi") print(pipe("Африка є колискою людства."))
fa28b608f56e1758e33a6251b73fa9e1
cc-by-4.0
['translation', 'opus-mt-tc']
false
Benchmarks * test set translations: [opusTCv20210807+pft+pbt_transformer-align_2022-03-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-fin/opusTCv20210807+pft+pbt_transformer-align_2022-03-17.test.txt) * test set scores: [opusTCv20210807+pft+pbt_transformer-align_2022-03-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-fin/opusTCv20210807+pft+pbt_transformer-align_2022-03-17.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU |
3a604a57f9d5a74b5a309254ae9582a9
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.4990 - F1: 0.7093
358667784eea0520713eb61fc34ab528
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.8727 | 1.0 | 295 | 0.5063 | 0.6186 | | 0.4633 | 2.0 | 590 | 0.5089 | 0.6561 | | 0.3075 | 3.0 | 885 | 0.4990 | 0.7093 |
e4bcd4707c5c3418a58f09dd404e937d
apache-2.0
['generated_from_trainer']
false
SST2_DistilBERT_5E This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4125 - Accuracy: 0.8933
ee1d8290a73784f3201fe91222d28c04
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6744 | 0.12 | 50 | 0.6094 | 0.66 | | 0.4942 | 0.23 | 100 | 0.3772 | 0.8667 | | 0.3857 | 0.35 | 150 | 0.3256 | 0.8867 | | 0.3483 | 0.46 | 200 | 0.3634 | 0.84 | | 0.3235 | 0.58 | 250 | 0.3338 | 0.8733 | | 0.3129 | 0.69 | 300 | 0.3482 | 0.8667 | | 0.3573 | 0.81 | 350 | 0.3632 | 0.8333 | | 0.3266 | 0.92 | 400 | 0.3274 | 0.86 | | 0.2615 | 1.04 | 450 | 0.3400 | 0.8667 | | 0.2409 | 1.15 | 500 | 0.3541 | 0.8467 | | 0.2508 | 1.27 | 550 | 0.2997 | 0.88 | | 0.2442 | 1.39 | 600 | 0.3654 | 0.86 | | 0.2625 | 1.5 | 650 | 0.3302 | 0.8667 | | 0.1983 | 1.62 | 700 | 0.3184 | 0.8867 | | 0.2356 | 1.73 | 750 | 0.3239 | 0.8867 | | 0.2078 | 1.85 | 800 | 0.2968 | 0.9 | | 0.2343 | 1.96 | 850 | 0.3148 | 0.8933 | | 0.1544 | 2.08 | 900 | 0.3535 | 0.9 | | 0.1407 | 2.19 | 950 | 0.3603 | 0.8733 | | 0.187 | 2.31 | 1000 | 0.3843 | 0.88 | | 0.144 | 2.42 | 1050 | 0.4546 | 0.8467 | | 0.1786 | 2.54 | 1100 | 0.3681 | 0.88 | | 0.1315 | 2.66 | 1150 | 0.3806 | 0.8867 | | 0.1399 | 2.77 | 1200 | 0.3880 | 0.8867 | | 0.1905 | 2.89 | 1250 | 0.3944 | 0.8733 | | 0.2043 | 3.0 | 1300 | 0.3974 | 0.8733 | | 0.1081 | 3.12 | 1350 | 0.3731 | 0.9067 | | 0.1055 | 3.23 | 1400 | 0.3809 | 0.8867 | | 0.1092 | 3.35 | 1450 | 0.3568 | 0.9 | | 0.0981 | 3.46 | 1500 | 0.3610 | 0.9133 | | 0.109 | 3.58 | 1550 | 0.4126 | 0.8867 | | 0.1001 | 3.7 | 1600 | 0.3831 | 0.9 | | 0.1027 | 3.81 | 1650 | 0.4064 | 0.9 | | 0.133 | 3.93 | 1700 | 0.3845 | 0.9 | | 0.1031 | 4.04 | 1750 | 0.3915 | 0.9 | | 0.0772 | 4.16 | 1800 | 0.3988 | 0.8867 | | 0.0785 | 4.27 | 1850 | 0.3962 | 0.9 | | 0.1059 | 4.39 | 1900 | 0.3969 | 0.9 | | 0.0668 | 4.5 | 1950 | 0.4095 | 0.8933 | | 0.0915 | 4.62 | 2000 | 0.4077 | 0.8933 | | 0.1413 | 4.73 | 2050 | 0.4004 | 0.9067 | | 0.0727 | 4.85 | 2100 | 0.4100 | 0.8933 | | 0.0724 | 4.97 | 2150 | 0.4125 | 0.8933 |
0b683aaa1e8bbc14ca5349f82d13174f
apache-2.0
['generated_from_keras_callback']
false
hsohn3/mayo-bert-visit-uncased-wordlevel-block512-batch4-ep10 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.2895 - Epoch: 9
7cf36e9981153eb9b91058220381ef6e
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Epoch | |:----------:|:-----:| | 4.1298 | 0 | | 3.5157 | 1 | | 3.4732 | 2 | | 3.4565 | 3 | | 3.4444 | 4 | | 3.4349 | 5 | | 3.4197 | 6 | | 3.4109 | 7 | | 3.3493 | 8 | | 3.2895 | 9 |
0de823b1b4c29295e689eb6728fb2bae
apache-2.0
['generated_from_trainer']
false
distilroberta-base-finetuned-toxic This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.2768
2c41f8991b0ee0e871a7946d9131287d
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.5338 | 1.0 | 313 | 2.3127 | | 2.4482 | 2.0 | 626 | 2.2985 | | 2.4312 | 3.0 | 939 | 2.2411 |
23e42059d0142bb096c8fb3b2c687b42
apache-2.0
['multiberts', 'multiberts-seed_0', 'multiberts-seed_0-step_100k']
false
MultiBERTs, Intermediate Checkpoint - Seed 0, Step 100k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model
9cc84c278b23795962b4dc153993cb32
apache-2.0
['multiberts', 'multiberts-seed_0', 'multiberts-seed_0-step_100k']
false
How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_100k') model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_100k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_100k') model = BertModel.from_pretrained("google/multiberts-seed_0-step_100k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ```
2e5114ab5ab2a3ff9db86d059455d04c
cc-by-4.0
['translation', 'opus-mt-tc']
false
Model Details Neural machine translation model for translating from Italic languages (itc) to Hebrew (he). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). **Model Description:** - **Developed by:** Language Technology Research Group at the University of Helsinki - **Model Type:** Translation (transformer-big) - **Release**: 2022-08-03 - **License:** CC-BY-4.0 - **Language(s):** - Source Language(s): cat fra glg ita lad_Latn por ron spa - Target Language(s): heb - Language Pair(s): cat-heb fra-heb glg-heb ita-heb por-heb ron-heb spa-heb - Valid Target Language Labels: - **Original Model**: [opusTCv20210807_transformer-big_2022-08-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-heb/opusTCv20210807_transformer-big_2022-08-03.zip) - **Resources for more information:** - [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) - More information about released models for this language pair: [OPUS-MT itc-heb README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/itc-heb/README.md) - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian) - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/
a07f03aa937f27b8d4983f281cdc2254
cc-by-4.0
['translation', 'opus-mt-tc']
false
How to Get Started With the Model A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "La María és feminista.", "Contribuyan en Tatoeba." ] model_name = "pytorch-models/opus-mt-tc-big-itc-he" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) )
c66f12578d828b5194ee53875202f27b
cc-by-4.0
['translation', 'opus-mt-tc']
false
תרום לטאטואבה. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-itc-he") print(pipe("La María és feminista."))
0c763496713964e4055cedbf94e25860
cc-by-4.0
['translation', 'opus-mt-tc']
false
Training - **Data**: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) - **Pre-processing**: SentencePiece (spm32k,spm32k) - **Model Type:** transformer-big - **Original MarianNMT Model**: [opusTCv20210807_transformer-big_2022-08-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-heb/opusTCv20210807_transformer-big_2022-08-03.zip) - **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
302d32bcb556d06ae697dd7b503e0b54
cc-by-4.0
['translation', 'opus-mt-tc']
false
Evaluation * test set translations: [opusTCv20210807_transformer-big_2022-08-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-heb/opusTCv20210807_transformer-big_2022-08-03.test.txt) * test set scores: [opusTCv20210807_transformer-big_2022-08-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-heb/opusTCv20210807_transformer-big_2022-08-03.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU |
0cea89b439889796022dc5dc3639208d
cc-by-4.0
['translation', 'opus-mt-tc']
false
| langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | fra-heb | tatoeba-test-v2021-08-07 | 0.60539 | 39.6 | 3281 | 20655 | | ita-heb | tatoeba-test-v2021-08-07 | 0.60264 | 40.0 | 1706 | 9796 | | por-heb | tatoeba-test-v2021-08-07 | 0.63087 | 44.4 | 719 | 4423 | | spa-heb | tatoeba-test-v2021-08-07 | 0.63883 | 44.5 | 1849 | 12112 | | cat-heb | flores101-devtest | 0.52457 | 23.0 | 1012 | 20749 | | fra-heb | flores101-devtest | 0.52953 | 23.2 | 1012 | 20749 | | glg-heb | flores101-devtest | 0.50918 | 20.8 | 1012 | 20749 | | ita-heb | flores101-devtest | 0.49007 | 18.3 | 1012 | 20749 | | por-heb | flores101-devtest | 0.53906 | 24.4 | 1012 | 20749 | | ron-heb | flores101-devtest | 0.52103 | 22.1 | 1012 | 20749 | | spa-heb | flores101-devtest | 0.47646 | 16.5 | 1012 | 20749 |
b240c15279912fc49457cf74a2a2a41b
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Tiny Indonesian This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the mozilla-foundation/common_voice_11_0 id dataset. It achieves the following results on the evaluation set: - Loss: 0.6202 - Wer: 32.4218
c4c522e8ba237d1e9f323937036e42b9
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP
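As an illustration only, the hyperparameters above could be expressed with `Seq2SeqTrainingArguments` from `transformers`; this is a hedged sketch, not the original training script, and the output directory name is a placeholder.
```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the hyperparameters listed above (assumed mapping, not the actual script).
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-id",    # placeholder output directory
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,                         # "Native AMP" mixed precision
)
```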
a65d26e4b3046d23442420e66a4fb19c
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.3823 | 4.95 | 500 | 0.5251 | 33.4732 | | 0.0495 | 9.9 | 1000 | 0.5700 | 33.3902 | | 0.0077 | 14.85 | 1500 | 0.6202 | 32.4218 | | 0.0031 | 19.8 | 2000 | 0.6616 | 32.5371 | | 0.0019 | 24.75 | 2500 | 0.6873 | 32.7954 | | 0.0014 | 29.7 | 3000 | 0.7056 | 33.5700 | | 0.0011 | 34.65 | 3500 | 0.7204 | 33.7960 | | 0.0009 | 39.6 | 4000 | 0.7327 | 33.7729 | | 0.0008 | 44.55 | 4500 | 0.7400 | 33.9113 | | 0.0007 | 49.5 | 5000 | 0.7428 | 33.3441 |
1a4d28ba05e034d00ded2b9a0d9045bd
apache-2.0
['multiberts', 'multiberts-seed_3', 'multiberts-seed_3-step_80k']
false
MultiBERTs, Intermediate Checkpoint - Seed 3, Step 80k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model
ba9876779c712c1658e03c6998a89cd1
apache-2.0
['multiberts', 'multiberts-seed_3', 'multiberts-seed_3-step_80k']
false
How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_3-step_80k') model = TFBertModel.from_pretrained("google/multiberts-seed_3-step_80k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_3-step_80k') model = BertModel.from_pretrained("google/multiberts-seed_3-step_80k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ```
920438706c5fe8bf2b73a53d63f26955
apache-2.0
['masked-image-modeling', 'generated_from_trainer']
false
dit-base-manuscripts This model is a fine-tuned version of [facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224) on the davanstrien/iiif_manuscripts_label_ge_50 dataset. It achieves the following results on the evaluation set: - Loss: 1.1266
02f803b0e32d3164ceb097f980c62421
apache-2.0
['masked-image-modeling', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 1333 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0
a3c37374d4247975b4cbd298ecb53620
cc-by-4.0
['espnet', 'audio', 'text-to-speech']
false
Demo: How to use in ESPnet2 ```bash cd espnet git checkout 49a284e69308d81c142b89795de255b4ce290c54 pip install -e . cd egs2/talromur/tts1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/GunnarThor_talromur_c_fastspeech2 ```
eb6a45cf0b8866d7a6819949b3597dec
cc-by-4.0
['espnet', 'audio', 'text-to-speech']
false
TTS config <details><summary>expand</summary> ``` config: conf/tuning/train_fastspeech2.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/c/tts_train_fastspeech2_raw_phn_none ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 100 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - loss - min - - train - loss - min keep_nbest_models: 5 nbest_averaging_interval: 0 grad_clip: 1.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 8 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: 800 batch_size: 20 valid_batch_size: null batch_bins: 2500000 valid_batch_bins: null train_shape_file: - exp/c/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/text_shape.phn - exp/c/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/speech_shape valid_shape_file: - exp/c/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/valid/text_shape.phn - exp/c/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/valid/speech_shape batch_type: numel valid_batch_type: null fold_length: - 150 - 204800 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_c_phn/text - text - text - - exp/c/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/train_c_phn/durations - durations - text_int - - dump/raw/train_c_phn/wav.scp - speech - sound valid_data_path_and_name_and_type: - - dump/raw/dev_c_phn/text - text - text - - exp/c/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/dev_c_phn/durations - durations - text_int - - dump/raw/dev_c_phn/wav.scp - speech - sound allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 1.0 scheduler: noamlr scheduler_conf: model_size: 384 warmup_steps: 4000 token_list: - <blank> - <unk> - ',' - . 
- r - t - n - a0 - s - I0 - D - l - m - Y0 - v - h - E1 - k - a:1 - E:1 - G - f - j - T - a1 - p - c - au:1 - i:1 - O:1 - I:1 - E0 - I1 - r_0 - t_h - k_h - Y1 - ei1 - i0 - ou:1 - ei:1 - u:1 - O1 - N - l_0 - '91' - ai0 - au1 - ou0 - n_0 - ei0 - ai:1 - O0 - ou1 - i1 - ai1 - '9:1' - '90' - au0 - x - c_h - 9i:1 - C - p_h - u0 - Y:1 - J - 9i1 - u1 - 9i0 - N_0 - m_0 - J_0 - Oi1 - Yi0 - Yi1 - Oi0 - au:0 - '9:0' - E:0 - <sos/eos> odim: null model_conf: {} use_preprocessor: true token_type: phn bpemodel: null non_linguistic_symbols: null cleaner: null g2p: null feats_extract: fbank feats_extract_conf: n_fft: 1024 hop_length: 256 win_length: null fs: 22050 fmin: 80 fmax: 7600 n_mels: 80 normalize: global_mvn normalize_conf: stats_file: exp/c/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/feats_stats.npz tts: fastspeech2 tts_conf: adim: 384 aheads: 2 elayers: 4 eunits: 1536 dlayers: 4 dunits: 1536 positionwise_layer_type: conv1d positionwise_conv_kernel_size: 3 duration_predictor_layers: 2 duration_predictor_chans: 256 duration_predictor_kernel_size: 3 postnet_layers: 5 postnet_filts: 5 postnet_chans: 256 use_masking: true use_scaled_pos_enc: true encoder_normalize_before: true decoder_normalize_before: true reduction_factor: 1 init_type: xavier_uniform init_enc_alpha: 1.0 init_dec_alpha: 1.0 transformer_enc_dropout_rate: 0.2 transformer_enc_positional_dropout_rate: 0.2 transformer_enc_attn_dropout_rate: 0.2 transformer_dec_dropout_rate: 0.2 transformer_dec_positional_dropout_rate: 0.2 transformer_dec_attn_dropout_rate: 0.2 pitch_predictor_layers: 5 pitch_predictor_chans: 256 pitch_predictor_kernel_size: 5 pitch_predictor_dropout: 0.5 pitch_embed_kernel_size: 1 pitch_embed_dropout: 0.0 stop_gradient_from_pitch_predictor: true energy_predictor_layers: 2 energy_predictor_chans: 256 energy_predictor_kernel_size: 3 energy_predictor_dropout: 0.5 energy_embed_kernel_size: 1 energy_embed_dropout: 0.0 stop_gradient_from_energy_predictor: false pitch_extract: dio pitch_extract_conf: fs: 22050 n_fft: 1024 hop_length: 256 f0max: 400 f0min: 80 reduction_factor: 1 pitch_normalize: global_mvn pitch_normalize_conf: stats_file: exp/c/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/pitch_stats.npz energy_extract: energy energy_extract_conf: fs: 22050 n_fft: 1024 hop_length: 256 win_length: null reduction_factor: 1 energy_normalize: global_mvn energy_normalize_conf: stats_file: exp/c/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/energy_stats.npz required: - output_dir - token_list version: 0.10.7a1 distributed: false ``` </details>
2ae79e1ca9611cd59d3c628c63ed3fb8
apache-2.0
['generated_from_trainer']
false
bert-large-uncased_stereoset_finetuned This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the stereoset dataset. It achieves the following results on the evaluation set: - Loss: 1.0729 - Accuracy: 0.7716
b5543fb054202bf0b6bfce8246ea923e
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.21 | 5 | 0.6925 | 0.5071 | | No log | 0.42 | 10 | 0.6978 | 0.5008 | | No log | 0.62 | 15 | 0.6891 | 0.5275 | | No log | 0.83 | 20 | 0.6850 | 0.5487 | | No log | 1.04 | 25 | 0.7521 | 0.5126 | | No log | 1.25 | 30 | 0.6577 | 0.6177 | | No log | 1.46 | 35 | 0.6759 | 0.5440 | | No log | 1.67 | 40 | 0.6395 | 0.6405 | | No log | 1.88 | 45 | 0.6064 | 0.6719 | | No log | 2.08 | 50 | 0.5822 | 0.6986 | | No log | 2.29 | 55 | 0.5566 | 0.7096 | | No log | 2.5 | 60 | 0.5411 | 0.7331 | | No log | 2.71 | 65 | 0.5448 | 0.7551 | | No log | 2.92 | 70 | 0.5384 | 0.7339 | | No log | 3.12 | 75 | 0.5487 | 0.7535 | | No log | 3.33 | 80 | 0.5572 | 0.7567 | | No log | 3.54 | 85 | 0.5763 | 0.7614 | | No log | 3.75 | 90 | 0.5756 | 0.7645 | | No log | 3.96 | 95 | 0.5524 | 0.7645 | | No log | 4.17 | 100 | 0.6320 | 0.7614 | | No log | 4.38 | 105 | 0.6512 | 0.7575 | | No log | 4.58 | 110 | 0.6582 | 0.7606 | | No log | 4.79 | 115 | 0.6731 | 0.7669 | | No log | 5.0 | 120 | 0.6944 | 0.7575 | | No log | 5.21 | 125 | 0.7142 | 0.7575 | | No log | 5.42 | 130 | 0.7004 | 0.7645 | | No log | 5.62 | 135 | 0.6794 | 0.7630 | | No log | 5.83 | 140 | 0.7108 | 0.7606 | | No log | 6.04 | 145 | 0.7730 | 0.7590 | | No log | 6.25 | 150 | 0.8083 | 0.7614 | | No log | 6.46 | 155 | 0.8361 | 0.7653 | | No log | 6.67 | 160 | 0.8498 | 0.7692 | | No log | 6.88 | 165 | 0.8769 | 0.7700 | | No log | 7.08 | 170 | 0.8324 | 0.7582 | | No log | 7.29 | 175 | 0.7945 | 0.7645 | | No log | 7.5 | 180 | 0.8480 | 0.7684 | | No log | 7.71 | 185 | 0.8905 | 0.7724 | | No log | 7.92 | 190 | 0.9560 | 0.7700 | | No log | 8.12 | 195 | 0.9976 | 0.7669 | | No log | 8.33 | 200 | 1.0315 | 0.7677 | | No log | 8.54 | 205 | 1.0413 | 0.7692 | | No log | 8.75 | 210 | 1.0216 | 0.7708 | | No log | 8.96 | 215 | 1.0251 | 0.7716 | | No log | 9.17 | 220 | 1.0483 | 0.7716 | | No log | 9.38 | 225 | 1.0616 | 0.7716 | | No log | 9.58 | 230 | 1.0703 | 0.7708 | | No log | 9.79 | 235 | 1.0731 | 0.7732 | | No log | 10.0 | 240 | 1.0729 | 0.7716 |
15ec2aedb45d5082113ed5b888d6ef00
apache-2.0
['generated_from_trainer']
false
all-roberta-large-v1-small_talk-2-16-5 This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3566 - Accuracy: 0.3855
9c7b93e01f4ef1b69f7311d1c6e4f83f
['apache-2.0']
['xlnet', 'lm-head', 'causal-lm']
false
Model description This model requires MeCab and SentencePiece with XLNetTokenizer. See details at https://qiita.com/mkt3/items/4d0ae36f3f212aee8002 This model uses NFKD as the normalization method for character encoding, so Japanese dakuten (voiced) and handakuten (semi-voiced) sound marks are lost. *This model has no Japanese dakuten or handakuten.*
8f8ef6a60ef8bcf49bb9d13028bd04cd
['apache-2.0']
['xlnet', 'lm-head', 'causal-lm']
false
How to use ```python from fugashi import Tagger from transformers import ( pipeline, XLNetLMHeadModel, XLNetTokenizer ) class XLNet(): def __init__(self): self.m = Tagger('-Owakati') self.gen_model = XLNetLMHeadModel.from_pretrained("hajime9652/xlnet-japanese") self.gen_tokenizer = XLNetTokenizer.from_pretrained("hajime9652/xlnet-japanese") def generate(self, prompt="福岡のご飯は美味しい。コンパクトで暮らしやすい街。"): prompt = self.m.parse(prompt) inputs = self.gen_tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt") prompt_length = len(self.gen_tokenizer.decode(inputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)) outputs = self.gen_model.generate(inputs, max_length=200, do_sample=True, top_p=0.95, top_k=60) generated = prompt + self.gen_tokenizer.decode(outputs[0])[prompt_length:] return generated ```
019dc81e00a65ec7a64cdb1af293dd85
['apache-2.0']
['xlnet', 'lm-head', 'causal-lm']
false
Important notice The company that created and published this model is called Stockmark. This repository only makes the model available through Hugging Face and is not intended to infringe on their rights. See this document https://qiita.com/mkt3/items/4d0ae36f3f212aee8002 published by https://github.com/mkt3
1f61bd259c4737ae7e76d2e43e3234b2
apache-2.0
['generated_from_trainer']
false
bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0639 - Precision: 0.9357 - Recall: 0.9507 - F1: 0.9432 - Accuracy: 0.9857
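A hedged inference sketch for a token-classification checkpoint like this one; `MODEL_ID` is a placeholder because the full Hub repository id is not given in this excerpt.
```python
from transformers import pipeline

MODEL_ID = "bert-finetuned-ner"  # placeholder; prepend the owner namespace on the Hub

# Group sub-word predictions into whole entities for readability.
ner = pipeline("token-classification", model=MODEL_ID, aggregation_strategy="simple")
print(ner("Hugging Face Inc. is based in New York City."))
```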
903243283c940dcaa5ea36d72899ac5c
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0847 | 1.0 | 1756 | 0.0636 | 0.9150 | 0.9387 | 0.9267 | 0.9840 | | 0.0399 | 2.0 | 3512 | 0.0592 | 0.9302 | 0.9485 | 0.9393 | 0.9854 | | 0.0201 | 3.0 | 5268 | 0.0639 | 0.9357 | 0.9507 | 0.9432 | 0.9857 |
c983e96404a21580ea85eba61aece82e
apache-2.0
['automatic-speech-recognition', 'ru']
false
exp_w2v2t_ru_hubert_s451 Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
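As a usage sketch (not taken from this card), such a checkpoint can be loaded with the generic ASR pipeline from `transformers`; `MODEL_ID` is a placeholder and the audio must already be sampled at 16 kHz as noted above.
```python
from transformers import pipeline

MODEL_ID = "exp_w2v2t_ru_hubert_s451"  # placeholder; prepend the owner namespace on the Hub

asr = pipeline("automatic-speech-recognition", model=MODEL_ID)
# Input audio must be sampled at 16 kHz.
print(asr("speech_16khz.wav")["text"])
```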
6307da40ad7674228a9e909a27482781
apache-2.0
['translation']
false
kor-spa * source group: Korean * target group: Spanish * OPUS readme: [kor-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kor-spa/README.md) * model: transformer-align * source language(s): kor kor_Hang kor_Latn * target language(s): spa * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-spa/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-spa/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-spa/opus-2020-06-17.eval.txt)
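Following the usage pattern of the other OPUS-MT cards in this collection, a hedged example; the repository id below is inferred from the ko-es short pair and may differ.
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ko-es"  # assumed Hub id for this kor-spa model

tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["안녕하세요."], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.decode(translated[0], skip_special_tokens=True))
```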
85102e1e40d822bcf64b9b966e9d6391
apache-2.0
['translation']
false
System Info: - hf_name: kor-spa - source_languages: kor - target_languages: spa - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kor-spa/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ko', 'es'] - src_constituents: {'kor_Hani', 'kor_Hang', 'kor_Latn', 'kor'} - tgt_constituents: {'spa'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/kor-spa/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/kor-spa/opus-2020-06-17.test.txt - src_alpha3: kor - tgt_alpha3: spa - short_pair: ko-es - chrF2_score: 0.521 - bleu: 31.3 - brevity_penalty: 0.95 - ref_len: 6805.0 - src_name: Korean - tgt_name: Spanish - train_date: 2020-06-17 - src_alpha2: ko - tgt_alpha2: es - prefer_old: False - long_pair: kor-spa - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
5c09ca791a3b47bdc3ac56d210785378
apache-2.0
['generated_from_trainer']
false
distilbert_sa_GLUE_Experiment_logit_kd_data_aug_qnli_192 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.4463 - Accuracy: 0.5576
ceaf244405ecfca07485f56308312f8f
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.338 | 1.0 | 16604 | 0.4463 | 0.5576 | | 0.2791 | 2.0 | 33208 | 0.4560 | 0.5711 | | 0.256 | 3.0 | 49812 | 0.4603 | 0.5691 | | 0.2446 | 4.0 | 66416 | 0.4620 | 0.5709 | | 0.2379 | 5.0 | 83020 | 0.4547 | 0.5958 | | 0.2334 | 6.0 | 99624 | 0.4581 | 0.5863 |
3cbcd711ae716ff39736c51283395802
cc-by-4.0
['espnet', 'audio', 'text-to-speech']
false
`kan-bayashi/vctk_tts_train_xvector_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.loss.ave` ♻️ Imported from https://zenodo.org/record/4394600/ This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
48330a61c967c5ed5e58790e0c5f21f1
creativeml-openrail-m
['text-to-image']
false
Duskfall's Digital Fantasy Dreambooth model trained by Duskfallcrew with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v1-5 base model. You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! All samples and info are here: https://civitai.com/user/duskfallcrew If you want to donate towards costs and don't want to subscribe: https://ko-fi.com/DUSKFALLcrew If you want to support the EARTH & DUSK media projects monthly, and not just AI: https://www.patreon.com/earthndusk digidsk1 (use that in your prompt)
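A hedged sketch of local inference with `diffusers`, as an alternative to the Colab notebook above; the repository id is a placeholder and the prompt simply demonstrates the `digidsk1` concept token.
```python
import torch
from diffusers import StableDiffusionPipeline

MODEL_ID = "<this-dreambooth-repo>"  # placeholder for this model's Hub id

pipe = StableDiffusionPipeline.from_pretrained(MODEL_ID, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Include the concept token "digidsk1" in the prompt, as noted above.
image = pipe("digidsk1, a digital fantasy landscape, highly detailed").images[0]
image.save("digidsk1_sample.png")
```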
bdf67bb43805d121d8614d550ef1cf69
apache-2.0
['generated_from_trainer']
false
M4_MLM This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 7.3456
6d0dd372fcd3cfd79ad1d7541e7442ee
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 8.7633 | 1.0 | 26 | 8.0400 | | 7.8899 | 2.0 | 52 | 7.6923 | | 7.589 | 3.0 | 78 | 7.4373 |
0985edbac5ce8875df411e6f1d397363
apache-2.0
['generated_from_trainer']
false
wav2vec2-xlsr-53-espeak-cv-ft-sah-ntsema-colab This model is a fine-tuned version of [facebook/wav2vec2-xlsr-53-espeak-cv-ft](https://huggingface.co/facebook/wav2vec2-xlsr-53-espeak-cv-ft) on the audiofolder dataset. It achieves the following results on the evaluation set: - Loss: 0.2143 - Wer: 0.2247
5ace61071cc0992a6ecae56f0ed930a3
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.7431 | 5.71 | 400 | 0.2879 | 0.4054 | | 0.1876 | 11.42 | 800 | 0.2349 | 0.3023 | | 0.0986 | 17.14 | 1200 | 0.2248 | 0.2701 | | 0.0737 | 22.85 | 1600 | 0.2242 | 0.2428 | | 0.0546 | 28.57 | 2000 | 0.2143 | 0.2247 |
88239ad22822c94ca6b0d3e45601da77
apache-2.0
['automatic-speech-recognition', 'de']
false
exp_w2v2r_de_vp-100k_gender_male-10_female-0_s504 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
28d970fe40828d83a086272d23fc766d
apache-2.0
['generated_from_trainer', 'summarization']
false
arxiv27k-t5-abst-title-gen/ This model is a fine-tuned version of mt5-small on the arxiv-abstract-title dataset. It achieves the following results on the evaluation set: - Loss: 1.6002 - Rouge1: 32.8 - Rouge2: 21.9 - Rougel: 34.8 -
8dcc6bae56791bce5c339308ceb7bca9
apache-2.0
['generated_from_trainer', 'summarization']
false
Training args model_args = T5Args() model_args.max_seq_length = 256 model_args.train_batch_size = 8 model_args.eval_batch_size = 8 model_args.num_train_epochs = 6 model_args.evaluate_during_training = False model_args.use_multiprocessing = False model_args.fp16 = False model_args.save_steps = 40000 model_args.save_eval_checkpoints = False model_args.save_model_every_epoch = True model_args.output_dir = OUTPUT_DIR model_args.no_cache = True model_args.reprocess_input_data = True model_args.overwrite_output_dir = True model_args.num_return_sequences = 1
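The arguments above appear to be `simpletransformers` `T5Args`; the sketch below shows, as an assumption rather than a quote from this card, how they would typically be consumed, with the base checkpoint and training DataFrame as placeholders.
```python
import pandas as pd
from simpletransformers.t5 import T5Args, T5Model

model_args = T5Args()
model_args.max_seq_length = 256
model_args.train_batch_size = 8
model_args.num_train_epochs = 6  # remaining fields as listed above

# Assumed base checkpoint; the card only says "mt5-small".
model = T5Model("mt5", "google/mt5-small", args=model_args)

# The training DataFrame needs the columns prefix / input_text / target_text.
train_df = pd.DataFrame(
    [{"prefix": "summarize", "input_text": "abstract text ...", "target_text": "title ..."}]
)
model.train_model(train_df)
```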
66a15d88271e0d5c70e7243c53baa298
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers']
false
Gerph Welcome to the Gerph model. This model is trained on the art of the talented artist Gerph and has three versions for you to choose from. These models can be highly NSFW and are trained mainly on characters, as the work primarily focuses on this subject. Take a look at the demo images below to see the differences between the three versions. And don't forget that these models are licensed under the Creative ML OpenRAIL-M license. Enjoy! **Gerph_Epoch8** ![Gerph Epoch8](preview1.png?raw=true) > highres, best quality, masterpiece, hatsune miku, outside, sunny day, casual clothes **Gerph_Epoch10** ![Gerph Epoch10](preview2.png?raw=true) > close up, male, solo, long hair, blonde hair, blue eyes, bishounen, colorful, boy, autumn, cinematic lighting, blue sky **Gerph_Epoch11** ![Gerph Epoch11](preview3.png?raw=true) > young girl, brown hair, green eyes, colorful, winter, cumulonimbus clouds, lighting, blue sky As you can see, the base version, *Gerph_Epoch8*, is trained exclusively on Gerph's art and offers a unique take on his style and themes. If you are a fan of Gerph's art, this version should certainly be in your set. *Gerph_Epoch10* and *Gerph_Epoch11* were continued with a wider range of concept images and work by various artists, so unfortunately the original Gerph style doesn't shine through as much. Also, these models do not require any specific tokens.
a414c859727f3e821626b706da7ecd7c
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers']
false
License These models are open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the models to deliberately produce or share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license 3. You may re-distribute the weights and use the models commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
3b779891747e99735a128a95fe1b7904
creativeml-openrail-m
[]
false
Prompt with **"hutari"** **Training details:** - Trained with [TheLastBen's fast-DreamBooth notebook](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) - data set: around 20 concept images + around 50 custmized reg images, the concept images are then duplicated to balance the two - learning rate 2e-6 for 5000 steps - text encoder rate 15% **Example generations:** ![3712879356-hutari](https://huggingface.co/alea31415/hutari-bocchi-the-rock/resolve/main/3712879356-hutari.png) ![170473768-beautiful](https://huggingface.co/alea31415/hutari-bocchi-the-rock/resolve/main/170473768-beautiful.png) ![4077722094-hutari](https://huggingface.co/alea31415/hutari-bocchi-the-rock/resolve/main/4077722094-hutari.png) ![925299796-beautiful](https://huggingface.co/alea31415/hutari-bocchi-the-rock/resolve/main/925299796-beautiful.png) ![3820948984-beautiful](https://huggingface.co/alea31415/hutari-bocchi-the-rock/resolve/main/3820948984-beautiful.png)
0cae69c29cacbcfa4c30a3353f067ec6
apache-2.0
['generated_from_trainer']
false
canine-c-finetuned-mrpc This model is a fine-tuned version of [google/canine-c](https://huggingface.co/google/canine-c) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4066 - Accuracy: 0.8627 - F1: 0.9014
453c4a66faa61ed4bad4fb2a91965e7c
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 230 | 0.5014 | 0.7696 | 0.8479 | | No log | 2.0 | 460 | 0.4755 | 0.7892 | 0.8622 | | 0.5096 | 3.0 | 690 | 0.3645 | 0.8431 | 0.8869 | | 0.5096 | 4.0 | 920 | 0.4066 | 0.8627 | 0.9014 | | 0.2619 | 5.0 | 1150 | 0.4551 | 0.8431 | 0.8877 |
f7091aa0ca8be196b69af08abf58571c
apache-2.0
['automatic-speech-recognition', 'es']
false
exp_w2v2r_es_xls-r_age_teens-5_sixties-5_s62 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
e4d18fb588825a295e89f50bcc07a565
apache-2.0
['generated_from_trainer']
false
finetuning-sentiment-model-25000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3711 - Accuracy: 0.9314 - F1: 0.9320
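A hedged inference sketch for this IMDB sentiment classifier; `MODEL_ID` is a placeholder for the checkpoint's Hub repository id.
```python
from transformers import pipeline

MODEL_ID = "finetuning-sentiment-model-25000-samples"  # placeholder; prepend the owner namespace

classifier = pipeline("text-classification", model=MODEL_ID)
print(classifier("This movie was an absolute delight from start to finish."))
```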
96174958c4542bb5e4c3db1c329c0f0b
afl-3.0
['generated_from_trainer', 'sentiment', 'emotion']
false
electricidad-small-discriminator-finetuned-clasificacion-texto-suicida This model is a fine-tuned version of [mrm8488/electricidad-small-discriminator](https://huggingface.co/mrm8488/electricidad-small-discriminator) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0458 - Accuracy: 0.9916
5bc4eb79add91d598f4e7fd98b3e8699
afl-3.0
['generated_from_trainer', 'sentiment', 'emotion']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - lr_scheduler_type: linear - num_epochs: 15
a5430c4c545b3e367ea970cafe59ebe4
afl-3.0
['generated_from_trainer', 'sentiment', 'emotion']
false
Training results | Training Loss | Epoch | Validation Loss | Accuracy | |:-------------:|:-----:|:---------------:|:--------:| | 0.161100 | 1.0 | 0.133057 | 0.952718 | | 0.134500 | 2.0 | 0.110966 | 0.960804 | | 0.108500 | 3.0 | 0.086417 | 0.970835 | | 0.099400 | 4.0 | 0.073618 | 0.974856 | | 0.090500 | 5.0 | 0.065231 | 0.979629 | | 0.080700 | 6.0 | 0.060849 | 0.982324 | | 0.069200 | 7.0 | 0.054718 | 0.986125 | | 0.060400 | 8.0 | 0.051153 | 0.985948 | | 0.048200 | 9.0 | 0.045747 | 0.989748 | | 0.045500 | 10.0 | 0.049992 | 0.988069 | | 0.043400 | 11.0 | 0.046325 | 0.990234 | | 0.034300 | 12.0 | 0.050746 | 0.989792 | | 0.032900 | 13.0 | 0.043434 | 0.991737 | | 0.028400 | 14.0 | 0.045003 | 0.991869 | | 0.022300 | 15.0 | 0.045819 | 0.991648 |
c2e06a9b7993e1da3fe699aa3e39d9f9
creativeml-openrail-m
[]
false
mT5-small based Azerbaijani Summarization In this model, [Google's Multilingual T5-small](https://github.com/google-research/multilingual-t5) is fine-tuned on the [Azerbaijani News Summary Dataset](https://huggingface.co/datasets/nijatzeynalov/azerbaijani-multi-news) for the **Summarization** downstream task. The model was trained for 3 epochs with a batch size of 64 and a learning rate of 10e-4. Training took almost 12 hours on a GPU instance with an Ubuntu Server 20.04 LTS image in Microsoft Azure. The maximum news length is kept at 2048 and the maximum summary length is set to 128. mT5 is a multilingual variant of __T5__ and is only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4
0b39b4696bbc9f86a793698070b8ba98
creativeml-openrail-m
[]
false
Text-to-Text Transfer Transformer (T5) The paper [“Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer”](https://arxiv.org/pdf/1910.10683.pdf) presents a large-scale empirical survey to determine which transfer learning techniques work best and apply these insights at scale to create a new model called the Text-To-Text Transfer Transformer. ![Alt Text](https://miro.medium.com/max/1280/0*xfXDPjASztwmJlOa.gif) T5, or Text-to-Text Transfer Transformer, is a Transformer based architecture that uses a text-to-text approach. Every task – including translation, question answering, and classification – is cast as feeding the model text as input and training it to generate some target text. This allows for the use of the same model, loss function, hyperparameters, etc. across our diverse set of tasks. The changes compared to BERT include: - adding a causal decoder to the bidirectional architecture. - replacing the fill-in-the-blank cloze task with a mix of alternative pre-training tasks. The model was trained on a cleaned version of Common Crawl that is two orders of magnitude larger than Wikipedia. The T5 model, pre-trained on C4, achieves state-of-the-art results on many NLP benchmarks while being flexible enough to be fine-tuned to several downstream tasks. The pre-trained T5 in Hugging Face is also trained on the mixture of unsupervised training (which is trained by reconstructing the masked sentence) and task-specific training.
11ea434a27fd37bfcde2b7e0b1c2eeef
creativeml-openrail-m
[]
false
Multilingual t5 ["mt5"](https://arxiv.org/pdf/2010.11934v3.pdf) is a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. mT5 is pre-trained only by unsupervised manner with multiple languages, and it’s not trained for specific downstream tasks. To dare say, this pre-trained model has ability to build correct text in Azerbaijani, but it doesn’t have any ability for specific tasks, such as, summarization, correction, machine translation, etc. In HuggingFace, several sizes of mT5 models are available, and here I used small one (google/mt5-small). Therefore I trained (fine-tune) this model for summarization in Azerbaijani using [Azerbaijani News Summary Dataset](https://huggingface.co/datasets/nijatzeynalov/azerbaijani-multi-news).
bc9981fde6cdcf03e316d4b6628ccd5e
creativeml-openrail-m
[]
false
Training hyperparameters Training the __mT5-based-azerbaijani-summarize__ model took almost 12 hours on a GPU instance with an Ubuntu Server 20.04 LTS image in Microsoft Azure. The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 90 - num_epochs: 10
44775a5ea0f23b7c27a24c11479bc94b
creativeml-openrail-m
[]
false
Dataset The model was trained on the [__az-news-summary__ dataset](https://huggingface.co/datasets/nijatzeynalov/azerbaijani-multi-news), a comprehensive and diverse dataset comprising 143k (143,448) Azerbaijani news articles extracted using a set of carefully designed heuristics. The dataset covers topics common to news reports, including war, government, politics, education, health, the environment, economy, business, fashion, entertainment, and sport, as well as quirky or unusual events. This dataset has 3 splits: _train_, _validation_, and _test_. Token counts are whitespace based. | Dataset Split | Number of Instances | Size (MB) | |---------------|---------------------|-----------| | Train | 100,413 | 150 | | Validation | 14,344 | 21.3 | | Test | 28,691 | 42.8 |
a3c52744f17f552c0ade056c2145c3fb
creativeml-openrail-m
[]
false
Training results with comparison __mT5-based-azerbaijani-summarize__ model ROUGE scores on the test set: - Rouge1: 39.4222 - Rouge2: 24.8624 - Rougel: 32.2487 For the __Azerbaijani text summarization downstream task__, mT5-multilingual-XLSum has also been developed on the 45 languages of the [XL-Sum](https://huggingface.co/datasets/csebuetnlp/xlsum) dataset. For finetuning details and scripts, see the [paper](https://aclanthology.org/2021.findings-acl.413/) and the [official repository](https://github.com/csebuetnlp/xl-sum). __mT5_multilingual_XLSum__ model ROUGE scores on the XL-Sum test set (only for Azerbaijani): - Rouge1: 21.4227 - Rouge2: 9.5214 - Rougel: 19.3331 As seen from the numbers, our model __mT5-based-azerbaijani-summarize__ achieves dramatically better performance than __mT5_multilingual_XLSum__.
b5dc3208eb32b26424c21e2b47800ae7
creativeml-openrail-m
[]
false
Using this model in transformers ```python !pip install sentencepiece !pip install transformers ``` ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM article_text = """Ötən il Azərbaycana 74 577 avtomobil idxal edilib. Bu da 2021-ci illə müqayisədə 16 617 ədəd və ya 18,2% azdır. Xezerxeber.az-ın məlumatına görə, avtomobil bazarı üzrə qiymətləndirici Sərxan Qədirov deyib ki, əvvəl ay ərzində 5-10 avtomobil gətirən şəxslər hazırda bu sayı 2-3 ədədə endiriblər. Hətta ölkəyə nəqliyyat vasitələrinin gətirilməsi işini dayandıranlar da var. Nəqliyyat məsələləri üzrə ekspert Eldəniz Cəfərov isə bildirib ki, gözləniləndən fərqli olaraq, ölkəyə idxal olunan kiçik mühərrikli avtomobillərin sayında da azalma var. Bunun başlıca səbəbi Rusiyada istehsalın dayandırılmasıdır. Ekspertin sözlərinə görə, əvvəllər Azərbaycan bazarında Rusiya istehsalı olan nəqliyyat vasitələri geniş yer tuturdu. Hazırda isə həmin ölkədən idxal tam dayanıb.""" model_name = "nijatzeynalov/mT5-based-azerbaijani-summarize" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSeq2SeqLM.from_pretrained(model_name) ``` ```python input_ids = tokenizer( article_text, return_tensors="pt", padding="max_length", truncation=True, max_length=2048 )["input_ids"] output_ids = model.generate( input_ids=input_ids, max_length=128, no_repeat_ngram_size=2, num_beams=4 )[0] summary = tokenizer.decode( output_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(summary) ``` Result: ```python Azərbaycana idxal olunan avtomobillərin sayı açıqlanıb ```
62afc36d4cc2ce1677ec512ae2044f2e
creativeml-openrail-m
[]
false
Citation If you use this model, please cite: ``` @misc {nijatzeynalov_2023, author = { {NijatZeynalov} }, title = { mT5-based-azerbaijani-summarize (Revision 19930ab) }, year = 2023, url = { https://huggingface.co/nijatzeynalov/mT5-based-azerbaijani-summarize }, doi = { 10.57967/hf/0316 }, publisher = { Hugging Face } } ```
81775bffdc49c08b2909b0a2886494ab
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0612 - Precision: 0.9237 - Recall: 0.9343 - F1: 0.9290 - Accuracy: 0.9833
fc725b9d412060f916db92070d375f61
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2462 | 1.0 | 878 | 0.0708 | 0.9118 | 0.9149 | 0.9133 | 0.9803 | | 0.0548 | 2.0 | 1756 | 0.0612 | 0.9218 | 0.9325 | 0.9271 | 0.9827 | | 0.0307 | 3.0 | 2634 | 0.0612 | 0.9237 | 0.9343 | 0.9290 | 0.9833 |
33d31dce70636ab99f4abd9e755d6f11
apache-2.0
['deep-narrow']
false
T5-Efficient-BASE-EL6 (Deep-Narrow version) T5-Efficient-BASE-EL6 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper: > We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased > before considering any other forms of uniform scaling across other dimensions. This is largely due to > how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a > tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, > a tall base model might also generally more efficient compared to a large model. We generally find > that, regardless of size, even if absolute performance might increase as we continue to stack layers, > the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 > layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., > params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, > FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to > consider. To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. A sequence of word embeddings is therefore processed sequentially by each transformer block.
0d5adf608012f3d9eb3c999c80977399
apache-2.0
['deep-narrow']
false
Details model architecture This model checkpoint - **t5-efficient-base-el6** - is of model type **Base** with the following variations: - **el** is **6** It has **180.45** million parameters and thus requires *ca.* **721.8 MB** of memory in full precision (*fp32*) or **360.9 MB** of memory in half precision (*fp16* or *bf16*). A summary of the *original* T5 model architectures can be seen here: | Model | nl (el/dl) | ff | dm | kv | nh |
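The memory figures quoted above follow directly from the parameter count (4 bytes per parameter in fp32, 2 bytes in fp16); a quick arithmetic check:
```python
# Rough check of the quoted memory footprint from the parameter count.
params = 180.45e6            # 180.45 million parameters
fp32_mb = params * 4 / 1e6   # 4 bytes per parameter in full precision
fp16_mb = params * 2 / 1e6   # 2 bytes per parameter in half precision
print(f"fp32: {fp32_mb:.1f} MB, fp16: {fp16_mb:.1f} MB")  # ~721.8 MB and ~360.9 MB
```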
1eda830494f407f11f2eadccc670aa35
apache-2.0
['generated_from_trainer']
false
distilbert_sa_GLUE_Experiment_logit_kd_qqp This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QQP dataset. It achieves the following results on the evaluation set: - Loss: 0.6308 - Accuracy: 0.6473 - F1: 0.0880 - Combined Score: 0.3676
823a4065edf5a5660e5306ea048b7980
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:| | 0.7821 | 1.0 | 1422 | 0.7485 | 0.6318 | 0.0 | 0.3159 | | 0.7105 | 2.0 | 2844 | 0.7038 | 0.6364 | 0.0261 | 0.3312 | | 0.6654 | 3.0 | 4266 | 0.6862 | 0.6351 | 0.0188 | 0.3269 | | 0.6284 | 4.0 | 5688 | 0.6610 | 0.6453 | 0.0779 | 0.3616 | | 0.5969 | 5.0 | 7110 | 0.6479 | 0.6416 | 0.0554 | 0.3485 | | 0.5712 | 6.0 | 8532 | 0.6457 | 0.6404 | 0.0497 | 0.3450 | | 0.5513 | 7.0 | 9954 | 0.6308 | 0.6473 | 0.0880 | 0.3676 | | 0.5349 | 8.0 | 11376 | 0.6351 | 0.6503 | 0.1037 | 0.3770 | | 0.5222 | 9.0 | 12798 | 0.6383 | 0.6719 | 0.2134 | 0.4427 | | 0.5124 | 10.0 | 14220 | 0.6392 | 0.6685 | 0.1991 | 0.4338 | | 0.5044 | 11.0 | 15642 | 0.6379 | 0.6615 | 0.1631 | 0.4123 | | 0.4978 | 12.0 | 17064 | 0.6363 | 0.6637 | 0.1750 | 0.4194 |
9377be9c82af5f9a349dd89d8ad3a5cb
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.4085 - F1: 0.6985
2ec85ef59cce4f204aaf527e9a445bed
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1067 | 1.0 | 50 | 0.6303 | 0.4922 | | 0.5183 | 2.0 | 100 | 0.4321 | 0.6524 | | 0.3688 | 3.0 | 150 | 0.4085 | 0.6985 |
48235f3284c6a971f5795ed7895981c1
apache-2.0
['generated_from_trainer']
false
tiny-mlm-glue-cola-custom-tokenizer-expand-vocab-target-glue-cola This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-cola-custom-tokenizer-expand-vocab](https://huggingface.co/muhtasham/tiny-mlm-glue-cola-custom-tokenizer-expand-vocab) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7478 - Matthews Correlation: 0.0630
346dd9275737660da78989d04ec8bb8b
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.6117 | 1.87 | 500 | 0.6224 | 0.0 | | 0.5987 | 3.73 | 1000 | 0.6217 | 0.0181 | | 0.5786 | 5.6 | 1500 | 0.6271 | 0.0364 | | 0.5513 | 7.46 | 2000 | 0.6517 | 0.0412 | | 0.5219 | 9.33 | 2500 | 0.6753 | 0.1073 | | 0.5067 | 11.19 | 3000 | 0.6918 | 0.0978 | | 0.4827 | 13.06 | 3500 | 0.7235 | 0.0896 | | 0.4638 | 14.93 | 4000 | 0.7478 | 0.0630 |
37f87d6a821ea460376812d14bb9e661
mit
[]
false
This model has been pretrained on the MS MARCO corpus and then finetuned on MS MARCO training data with implicit distributionally robust optimization (iDRO), following the approach described in the paper **COCO-DR: Combating Distribution Shifts in Zero-Shot Dense Retrieval with Contrastive and Distributionally Robust Learning**. The associated GitHub repository is available here: https://github.com/OpenMatch/COCO-DR. This model uses BERT-large as the backbone, with 335M parameters.
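A hedged retrieval sketch with this BERT-large backbone; the repository id, [CLS]-token pooling, and dot-product scoring are assumptions not confirmed by this excerpt.
```python
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "<this-coco-dr-checkpoint>"  # placeholder for the Hub repository id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

def embed(texts):
    # Encode texts and take the [CLS] embedding (assumed pooling strategy).
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch)
    return out.last_hidden_state[:, 0]

query = embed(["what is distributionally robust optimization?"])
passages = embed([
    "Distributionally robust optimization trains models against worst-case data shifts.",
    "The weather in Helsinki is cold in January.",
])
print(query @ passages.T)  # dot-product relevance scores (assumed scoring)
```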
b1c69143865555bf54c5c4c0db76e741
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Small Italian This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 it dataset. It achieves the following results on the evaluation set: - Loss: 0.2534 - Wer: 12.3040
a6e892facce90d7f0b999ccb2f834767
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2737        | 2.01  | 1000 | 0.2728          | 13.4097 |
| 0.1536        | 4.02  | 2000 | 0.2611          | 12.9897 |
| 0.0905        | 6.03  | 3000 | 0.2686          | 12.9273 |
| 0.1301        | 8.04  | 4000 | 0.2534          | 12.3040 |
| 0.096         | 10.05 | 5000 | 0.2727          | 12.6130 |
| 0.0604        | 12.06 | 6000 | 0.2698          | 12.5027 |
d33bbf6e95bb17598128ffe3f690b3b1
mit
['generated_from_trainer']
false
BiBert-Classification-V2 This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7627 - Accuracy: 0.8180
4efd0f0a1969cea80138f196b719af23
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
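A minimal sketch of how these settings map onto the 🤗 `TrainingArguments` API (illustrative only; the output directory name is hypothetical and the actual training script is not part of this card):

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters above; total_train_batch_size = 16 * 4 accumulation steps.
training_args = TrainingArguments(
    output_dir="bibert-classification-v2",  # hypothetical output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```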
040036f06c8635bec1517e2061889c9c
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8285        | 1.0   | 4290  | 0.8182          | 0.7934   |
| 0.7496        | 2.0   | 8580  | 0.7750          | 0.8108   |
| 0.6738        | 3.0   | 12870 | 0.7627          | 0.8180   |
c3f82c0bf302f907558abaa0b8975088
apache-2.0
['generated_from_keras_callback']
false
distilbert1000e This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
2252d17596507d9a65b4f0c5fa352c8c
mit
['ja', 'japanese', 'gpt-neox', 'text-generation', 'lm', 'nlp']
false
japanese-gpt-neox-small ![rinna-icon](./rinna.png) This repository provides a small-sized Japanese GPT-NeoX model. The model was trained using code based on [EleutherAI/gpt-neox](https://github.com/EleutherAI/gpt-neox).
d0c7c213cbc76fbf40e939af652190ce
mit
['ja', 'japanese', 'gpt-neox', 'text-generation', 'lm', 'nlp']
false
How to use the model

*NOTE:*
* Use `T5Tokenizer` to load its corresponding tokenizer.
* The files for modeling and configuration are not in the Transformers library yet. In order to load the model, use files from [this PR in EleutherAI/gpt-neox](https://github.com/EleutherAI/gpt-neox/pull/480).

~~~~
from transformers import T5Tokenizer
from modeling_gpt_neox import GPTNeoXForCausalLM

tokenizer = T5Tokenizer.from_pretrained("rinna/japanese-gpt-neox-small")
model = GPTNeoXForCausalLM.from_pretrained("rinna/japanese-gpt-neox-small")
~~~~
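Continuing from the snippet above, here is a minimal generation sketch (this assumes the custom `GPTNeoXForCausalLM` class exposes the standard `generate` API; the prompt and sampling settings are illustrative):

~~~~
import torch

text = "西田幾多郎は、"
input_ids = tokenizer(text, return_tensors="pt").input_ids
with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        max_length=64,
        do_sample=True,
        top_p=0.9,
        temperature=0.8,
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
~~~~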
3a275e86694a2b421d51022eb96ec59c
mit
['ja', 'japanese', 'gpt-neox', 'text-generation', 'lm', 'nlp']
false
Training The model was trained on [Japanese CC-100](http://data.statmt.org/cc-100/ja.txt.xz), [Japanese C4](https://huggingface.co/datasets/mc4), and [Japanese Wikipedia](https://dumps.wikimedia.org/other/cirrussearch) to optimize a traditional language modelling objective.
52d40dc64355ee1eae0e82d3808c4a8e
mit
['ja', 'japanese', 'gpt-neox', 'text-generation', 'lm', 'nlp']
false
A toy prefix-tuning weight file

Along with the pretrained model, we also release a [prefix-tuning](https://arxiv.org/abs/2101.00190) weight file named `smileface_suffix.task0.weight` for demonstration. The toy prefix-tuning weights here are trained to encourage the model to end every generated sentence with a smiling face emoji 😃. Find the training/inference code for prefix-tuning at our GitHub repo [prefix-tuning-gpt](https://github.com/rinnakk/prefix-tuning-gpt).

Here are a few samples generated with and without the toy prefix weights, respectively.

3 samples without the prefix weights:

> 1. 「きっとそれは絶対間違ってないね。 わたしには5か国語に4つの外国語の意味なんてわからない。 でも、とりあえずこの簡単な英文がどんな意味を持つのか知りたいよね!」
> 2. 25分頃に公園に着いて、ベンチに座って待っていると、またしてもS先生から連絡が入りました。 確か、午後の礼拝の時に自分の持ってきたお弁当を食べた記憶が鮮明に残っています。 後でインターネットで検索したら、S先生のブログに飛びました。 今日の晩ごはんは焼きナスを作ってみました! * 上の写真は昨日の朝焼けです。
> 3. CTで歯形ができて、その後さらにその歯形が再び噛めるようになるのは、何が原因だろう? 虫歯になった原因も、口臭かな? それとも歯周病かな? 歯石がとれるまで、、、もうちょっとかかりそう。 子供の虫歯って、なかなか治らないですよね。親兄弟で何度か。 子供の歯根は、親のものになります。 そして自分のものだったり、知らない間に抜いたりし、生えてきたりもします。 大人になって親からみた場合は、白い歯に変わってきて、金属のようーでも悪くなく、親からのむし歯の心配はないですよね。

3 samples with the prefix weights:

> 1. ※海外ブランド品の場合は、返品・返金等はお受け致しかねますので予めご了承願います。 ※ 商品発送後、お客様へ商品返送完了までのスピードを重視する方は海外ブランド品を先に送り付けさせて頂くケースがございます。 😃
> 2. 私は過去に持っていた不動産を、中古住宅として売却していましたが、その後の私の状況はどうだったのでしょうか? 😃 結果としては、投資物件として売却を考えていますが、今までの相場も読んでいただけばわかると思います。 😃 今まで、物件に対しての投資は非常に控えめにしてきたのですが、今回の提案を読んで、実際に物件を購入する際にはきちんと確認をしようと思います。 😃
> 3. この写真集の表紙をこの台紙にしている作家さんは、まるで誰かの指示を受けて行動している人物のように見える、というのが、この作品をやぶにらんだ「殺し屋集団」の描いている作品であるように思います。 😃
d99c6cb7a1027c8329b00ccaaa5b4b25
mit
['ja', 'japanese', 'gpt-neox', 'text-generation', 'lm', 'nlp']
false
Inference with FasterTransformer

Since version 5.1, [NVIDIA FasterTransformer](https://github.com/NVIDIA/FasterTransformer) supports inference for GPT-NeoX as well as a variety of soft prompts (including prefix-tuning). The pretrained model and prefix weights released in this repo have been verified to work with FasterTransformer 5.1.
54ae838aacbfe027bf38e5162424a75c
mit
['vision', 'video-classification']
false
X-CLIP (base-sized model) X-CLIP model (base-sized, patch resolution of 32) trained fully-supervised on [Kinetics-400](https://www.deepmind.com/open-source/kinetics). It was introduced in the paper [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Ni et al. and first released in [this repository](https://github.com/microsoft/VideoX/tree/master/X-CLIP). This model was trained using 8 frames per video, at a resolution of 224x224. Disclaimer: The team releasing X-CLIP did not write a model card for this model so this model card has been written by the Hugging Face team.
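A minimal sketch of scoring a clip against a few text prompts with the `transformers` X-CLIP classes (the 8 random frames below stand in for a real 224x224 video, and the hub id is assumed to be `microsoft/xclip-base-patch32`):

```python
import numpy as np
import torch
from transformers import XCLIPProcessor, XCLIPModel

model_name = "microsoft/xclip-base-patch32"  # assumed hub id for this checkpoint
processor = XCLIPProcessor.from_pretrained(model_name)
model = XCLIPModel.from_pretrained(model_name)

# 8 dummy frames of 224x224 RGB; replace with frames sampled from a real video.
video = list(np.random.randint(0, 255, (8, 224, 224, 3), dtype=np.uint8))
texts = ["playing guitar", "riding a bike", "cooking"]

inputs = processor(text=texts, videos=video, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)
probs = outputs.logits_per_video.softmax(dim=1)  # similarity of the clip to each prompt
print(dict(zip(texts, probs[0].tolist())))
```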
a3b93e5fba02a1c23253482596f63e5a
apache-2.0
['generated_from_keras_callback']
false
juliietth/mt5-small-finetuned-amazon-en-es This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 5.9197 - Validation Loss: 3.6988 - Epoch: 1
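A minimal usage sketch with the summarization pipeline (the review text below is illustrative, and output quality will reflect the small model size):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="juliietth/mt5-small-finetuned-amazon-en-es")
review = (
    "I bought this e-reader for my daughter and she loves it. "
    "The battery lasts for weeks and the screen is easy on the eyes."
)
print(summarizer(review, max_length=30)[0]["summary_text"])
```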
a893dbae6e25a1e78ac2259fcfeebaa4
apache-2.0
['summarization']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:---------:|:-------:|
| 1.2875        | 1.0   | 5754 | 1.6294          | 11.009 | 7.4618 | 10.5573 | 10.8087   | 58.3382 |
8b7d5f2bb9b40d32de8c907e8e40e3b6
mit
[]
false
Model description The Time Series Transformer is a vanilla encoder-decoder Transformer for time-series forecasting. The model is trained in the same way as one trains a Transformer for machine translation. At inference time, the model autoregressively generates samples, one time step at a time.
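A minimal sketch of instantiating the model from `transformers` (the configuration values are illustrative, not tied to a particular dataset):

```python
from transformers import TimeSeriesTransformerConfig, TimeSeriesTransformerForPrediction

# Illustrative configuration: predict 24 future steps from a context window of 48 past steps.
config = TimeSeriesTransformerConfig(prediction_length=24, context_length=48)
model = TimeSeriesTransformerForPrediction(config)

# Training uses (past_values, future_values) pairs much like source/target sentences in translation;
# at inference, model.generate(...) draws samples autoregressively, one time step at a time.
print(model.config.context_length, model.config.prediction_length)
```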
5faa8d129de755d0c16738e0e37ed9ae
apache-2.0
[]
false
Tokenizer

The *WordPiece* tokenizer uses several components:
* **Normalization**: lowercase and then NFKD unicode normalization.
* **Pretokenization**: splits by whitespace and punctuation.
* **Postprocessing**: single sentences are output in format `[CLS] sentence A [SEP]` and pair sentences in format `[CLS] sentence A [SEP] sentence B [SEP]`.
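A sketch of how these components could be wired together with the 🤗 `tokenizers` library (the special-token ids below are placeholders; they depend on the actual vocabulary):

```python
from tokenizers import Tokenizer, models, normalizers, pre_tokenizers, processors

tokenizer = Tokenizer(models.WordPiece(unk_token="[UNK]"))

# Normalization: lowercase, then NFKD unicode normalization.
tokenizer.normalizer = normalizers.Sequence([normalizers.Lowercase(), normalizers.NFKD()])

# Pretokenization: split on whitespace and punctuation.
tokenizer.pre_tokenizer = pre_tokenizers.Sequence(
    [pre_tokenizers.Whitespace(), pre_tokenizers.Punctuation()]
)

# Postprocessing: wrap single sentences and sentence pairs with [CLS]/[SEP].
tokenizer.post_processor = processors.TemplateProcessing(
    single="[CLS] $A [SEP]",
    pair="[CLS] $A [SEP] $B:1 [SEP]:1",
    special_tokens=[("[CLS]", 1), ("[SEP]", 2)],  # placeholder ids
)
```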
97751473c3e5e4dbb68ee172fa15337b
apache-2.0
[]
false
Training

Training was performed over 16M+ Dhivehi sentences/paragraphs put together by [@ashraq](https://huggingface.co/ashraq). An Adam optimizer with weighted decay was used with the following parameters:
* Learning rate: 1e-5
* Weight decay: 0.1
* Warmup steps: 10% of data
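A sketch of the optimizer and schedule under these settings, assuming a linear decay with warmup over 10% of the total steps (the small linear layer and the step count are placeholders for the real model and data):

```python
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(768, 768)  # placeholder for the actual BERT model
num_training_steps = 100_000       # illustrative; in practice derived from dataset size and batch size

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5, weight_decay=0.1)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.1 * num_training_steps),  # warmup over 10% of the data
    num_training_steps=num_training_steps,
)
```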
8a73aef940cbf5d003c07ceec3c4472c
apache-2.0
['token-classification']
false
How to use

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_name = "IlyaGusev/ru-word-stress-transformer"
tokenizer = AutoTokenizer.from_pretrained(
    model_name,
    trust_remote_code=True,
    revision="bae83dd"
)
model = AutoModelForTokenClassification.from_pretrained(model_name)
pipe = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    device=-1,
    aggregation_strategy="none",
    ignore_labels=("NO",)
)

text = "щеколда"
print(text)
index = pipe(text)[0]["index"]
print(text[:index] + "'" + text[index:])
```

Colab: [link](https://colab.research.google.com/drive/1I61aDezhxMVZzHQQfpn7Wqn-ydbndO6i)
096f93b5542dda8a34f9b7a2a1cd65f7
apache-2.0
[]
false
KeyBART

KeyBART, as described in "Learning Rich Representations of Keyphrase from Text" published in the Findings of NAACL 2022 (https://aclanthology.org/2022.findings-naacl.67.pdf), pre-trains a BART-based architecture to produce a concatenated sequence of keyphrases in the CatSeqD format. We provide some examples of downstream evaluation setups and also show how the model can be used for text-to-text generation in a zero-shot setting.
0961f8f733bf33e171891c968f886679
apache-2.0
[]
false
Keyphrase Generation

```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("bloomberg/KeyBART")
model = AutoModelForSeq2SeqLM.from_pretrained("bloomberg/KeyBART")

from datasets import load_dataset

dataset = load_dataset("midas/kp20k")
```

Reported Results:
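Continuing from the snippet above, a minimal zero-shot generation sketch (the abstract and the decoding settings are illustrative):

```
abstract = (
    "We study contrastive pre-training objectives for learning rich keyphrase "
    "representations from scientific documents and evaluate them on generation tasks."
)
inputs = tokenizer(abstract, return_tensors="pt", truncation=True, max_length=512)
outputs = model.generate(**inputs, max_length=60, num_beams=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # concatenated keyphrases (CatSeqD format)
```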
f5d0738323a377115c44b058f6fca682
apache-2.0
[]
false
Present Keyphrase Generation

|               | Inspec |       | NUS   |       | Krapivin |       | SemEval |       | KP20k |       |
|---------------|--------|-------|-------|-------|----------|-------|---------|-------|-------|-------|
| Model         | F1@5   | F1@M  | F1@5  | F1@M  | F1@5     | F1@M  | F1@5    | F1@M  | F1@5  | F1@M  |
| catSeq        | 22.5   | 26.2  | 32.3  | 39.7  | 26.9     | 35.4  | 24.2    | 28.3  | 29.1  | 36.7  |
| catSeqTG      | 22.9   | 27    | 32.5  | 39.3  | 28.2     | 36.6  | 24.6    | 29.0  | 29.2  | 36.6  |
| catSeqTG-2RF1 | 25.3   | 30.1  | 37.5  | 43.3  | 30       | 36.9  | 28.7    | 32.9  | 32.1  | 38.6  |
| GANMR         | 25.8   | 29.9  | 34.8  | 41.7  | 28.8     | 36.9  | N/A     | N/A   | 30.3  | 37.8  |
| ExHiRD-h      | 25.3   | 29.1  | N/A   | N/A   | 28.6     | 34.7  | 28.4    | 33.5  | 31.1  | 37.4  |
| Transformer (Ye et al., 2021) | 28.15 | 32.56 | 37.07 | 41.91 | 31.58 | 36.55 | 28.71 | 32.52 | 33.21 | 37.71 |
| BART*         | 23.59  | 28.46 | 35.00 | 42.65 | 26.91    | 35.37 | 26.72   | 31.91 | 29.25 | 37.51 |
| KeyBART-DOC*  | 24.42  | 29.57 | 31.37 | 39.24 | 24.21    | 32.60 | 24.69   | 30.50 | 28.82 | 37.59 |
| KeyBART*      | 24.49  | 29.69 | 34.77 | 43.57 | 29.24    | 38.62 | 27.47   | 33.54 | 30.71 | 39.76 |
| KeyBART* (Zero-shot) | 30.72 | 36.89 | 18.86 | 21.67 | 18.35 | 20.46 | 20.25 | 25.82 | 12.57 | 15.41 |
8091b3e19ab2098d3382761c16fbd916
apache-2.0
[]
false
Absent Keyphrase Generation

|               | Inspec |      | NUS  |      | Krapivin |      | SemEval |      | KP20k |      |
|---------------|--------|------|------|------|----------|------|---------|------|-------|------|
| Model         | F1@5   | F1@M | F1@5 | F1@M | F1@5     | F1@M | F1@5    | F1@M | F1@5  | F1@M |
| catSeq        | 0.4    | 0.8  | 1.6  | 2.8  | 1.8      | 3.6  | 1.6     | 2.8  | 1.5   | 3.2  |
| catSeqTG      | 0.5    | 1.1  | 1.1  | 1.8  | 1.8      | 3.4  | 1.1     | 1.8  | 1.5   | 3.2  |
| catSeqTG-2RF1 | 1.2    | 2.1  | 1.9  | 3.1  | 3.0      | 5.3  | 2.1     | 3.0  | 2.7   | 5.0  |
| GANMR         | 1.3    | 1.9  | 2.6  | 3.8  | 4.2      | 5.7  | N/A     | N/A  | 3.2   | 4.5  |
| ExHiRD-h      | 1.1    | 2.2  | N/A  | N/A  | 2.2      | 4.3  | 1.7     | 2.5  | 1.6   | 3.2  |
| Transformer (Ye et al., 2021) | 1.02 | 1.94 | 2.82 | 4.82 | 3.21 | 6.04 | 2.05 | 2.33 | 2.31 | 4.61 |
| BART*         | 1.08   | 1.96 | 1.80 | 2.75 | 2.59     | 4.91 | 1.34    | 1.75 | 1.77  | 3.56 |
| KeyBART-DOC*  | 0.99   | 2.03 | 1.39 | 2.74 | 2.40     | 4.58 | 1.07    | 1.39 | 1.69  | 3.38 |
| KeyBART*      | 0.95   | 1.81 | 1.23 | 1.90 | 3.09     | 6.08 | 1.96    | 2.65 | 2.03  | 4.26 |
| KeyBART* (Zero-shot) | 1.83 | 2.92 | 1.46 | 2.19 | 1.29 | 2.09 | 1.12 | 1.45 | 0.70 | 1.14 |
9e0126a373cd4165d8f16d5bcec510f7