license: string (2 to 30 chars)
tags: string (2 to 513 chars)
is_nc: bool (1 class)
readme_section: string (201 to 597k chars)
hash: string (32 chars)
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch | |:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:| | 2.3722 | 2.1596 | 21.6350 | 8.9453 | 17.8649 | 19.9099 | 19.0 | 0 |
b4c9f883a868286fe35f4bcc17303496
apache-2.0
['generated_from_trainer']
false
t5-small-finetuned-qa-google-en-question_v1 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.1358 - Rouge1: 49.6232 - Rouge2: 26.4156 - Rougel: 46.9194 - Rougelsum: 46.8814 - Gen Len: 13.5795
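A minimal usage sketch for this checkpoint. The card does not give the full Hub repository id, so `MODEL_PATH` below is a placeholder; point it at the local fine-tuning output directory or the actual repository.
```python
from transformers import pipeline

# Sketch only: MODEL_PATH is a placeholder, not a confirmed Hub id.
MODEL_PATH = "t5-small-finetuned-qa-google-en-question_v1"
generator = pipeline("text2text-generation", model=MODEL_PATH)
print(generator("Your input text here", max_new_tokens=32)[0]["generated_text"])
```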
045bb324e05ef541596ea6978ff84501
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP - label_smoothing_factor: 0.1
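These settings map fairly directly onto the 🤗 Trainer's arguments; a sketch of that mapping is shown below (the `output_dir` is a placeholder and the original script may have used plain `TrainingArguments` instead).
```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the listed hyperparameters as Trainer arguments; output_dir is a placeholder.
args = Seq2SeqTrainingArguments(
    output_dir="finetune-output",
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=8,   # 32 * 8 = 256 total train batch size
    lr_scheduler_type="linear",
    num_train_epochs=5,
    fp16=True,                       # "Native AMP" mixed precision
    label_smoothing_factor=0.1,
)
```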
d272fc6abaae51e107eef416614ee5bc
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 0.27 | 100 | 3.5967 | 43.7809 | 21.3303 | 41.6782 | 41.6869 | 12.9745 | | No log | 0.53 | 200 | 3.4539 | 45.7744 | 22.9574 | 43.4412 | 43.4249 | 13.416 | | No log | 0.8 | 300 | 3.3771 | 47.1053 | 24.1406 | 44.6092 | 44.6051 | 13.386 | | No log | 1.06 | 400 | 3.3229 | 47.5933 | 24.7048 | 45.086 | 45.1266 | 13.4725 | | 3.6954 | 1.33 | 500 | 3.2851 | 47.8847 | 24.7439 | 45.322 | 45.3243 | 13.5975 | | 3.6954 | 1.6 | 600 | 3.2570 | 48.1836 | 25.3062 | 45.6641 | 45.6346 | 13.5955 | | 3.6954 | 1.86 | 700 | 3.2321 | 48.7604 | 25.7254 | 46.1789 | 46.1537 | 13.476 | | 3.6954 | 2.13 | 800 | 3.2140 | 48.7518 | 25.639 | 46.2817 | 46.2343 | 13.5855 | | 3.6954 | 2.39 | 900 | 3.1963 | 49.0046 | 25.8439 | 46.4097 | 46.3732 | 13.6855 | | 3.3928 | 2.66 | 1000 | 3.1844 | 49.3227 | 26.0336 | 46.7032 | 46.6402 | 13.557 | | 3.3928 | 2.93 | 1100 | 3.1736 | 49.4069 | 26.0619 | 46.691 | 46.6406 | 13.5475 | | 3.3928 | 3.19 | 1200 | 3.1630 | 49.4614 | 26.1224 | 46.7679 | 46.7416 | 13.614 | | 3.3928 | 3.46 | 1300 | 3.1556 | 49.7542 | 26.4413 | 47.0601 | 47.0201 | 13.625 | | 3.3928 | 3.72 | 1400 | 3.1500 | 49.4097 | 26.1732 | 46.7324 | 46.6833 | 13.6795 | | 3.3144 | 3.99 | 1500 | 3.1440 | 49.5359 | 26.3478 | 46.8079 | 46.7769 | 13.604 | | 3.3144 | 4.26 | 1600 | 3.1406 | 49.8245 | 26.5312 | 47.1247 | 47.0744 | 13.552 | | 3.3144 | 4.52 | 1700 | 3.1378 | 49.6884 | 26.4023 | 46.9501 | 46.9063 | 13.5785 | | 3.3144 | 4.79 | 1800 | 3.1358 | 49.6232 | 26.4156 | 46.9194 | 46.8814 | 13.5795 |
5a03e0fdfd2d9a1c63003e9a1f35571f
apache-2.0
['generated_from_trainer']
false
distilbert_sa_GLUE_Experiment_cola_96 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.6180 - Matthews Correlation: 0.0
0a6ae74a2b825a6387b0fd03a3c8e3e6
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.647 | 1.0 | 34 | 0.6332 | 0.0 | | 0.6203 | 2.0 | 68 | 0.6210 | 0.0 | | 0.6092 | 3.0 | 102 | 0.6180 | 0.0 | | 0.6077 | 4.0 | 136 | 0.6185 | 0.0 | | 0.6083 | 5.0 | 170 | 0.6184 | 0.0 | | 0.607 | 6.0 | 204 | 0.6185 | 0.0 | | 0.6078 | 7.0 | 238 | 0.6186 | 0.0 | | 0.6087 | 8.0 | 272 | 0.6184 | 0.0 |
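A Matthews correlation of 0.0 at every epoch usually means the classifier collapsed to predicting a single class; a quick illustration with scikit-learn (this cause is an assumption, not something stated in the card).
```python
from sklearn.metrics import matthews_corrcoef

# A degenerate classifier that always predicts one class scores MCC = 0.0,
# consistent with the constant 0.0 column in the table above.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1] * len(y_true)
print(matthews_corrcoef(y_true, y_pred))  # 0.0
```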
a7aad1d2bba1b7fed77ae32d3010b9c4
apache-2.0
['generated_from_trainer']
false
t5-small-devices-sum-ver1 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2335 - Rouge1: 93.7171 - Rouge2: 73.3058 - Rougel: 93.7211 - Rougelsum: 93.689 - Gen Len: 4.7246
f9a291ed624968dab3815a9eac83ec21
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 185 | 0.6517 | 83.2503 | 55.7516 | 83.254 | 83.2722 | 4.4729 | | No log | 2.0 | 370 | 0.4239 | 89.2246 | 65.7477 | 89.2223 | 89.2288 | 4.5575 | | 1.0224 | 3.0 | 555 | 0.3459 | 91.0524 | 68.4783 | 91.0222 | 91.0312 | 4.6685 | | 1.0224 | 4.0 | 740 | 0.3023 | 91.9741 | 70.1066 | 91.9886 | 91.9525 | 4.6549 | | 1.0224 | 5.0 | 925 | 0.2797 | 92.667 | 71.3468 | 92.6706 | 92.6611 | 4.6969 | | 0.3678 | 6.0 | 1110 | 0.2616 | 93.229 | 72.2805 | 93.222 | 93.1935 | 4.7179 | | 0.3678 | 7.0 | 1295 | 0.2469 | 93.362 | 72.6985 | 93.3651 | 93.3294 | 4.7111 | | 0.3678 | 8.0 | 1480 | 0.2401 | 93.5689 | 73.009 | 93.582 | 93.5377 | 4.7192 | | 0.2902 | 9.0 | 1665 | 0.2350 | 93.7013 | 73.2685 | 93.7256 | 93.684 | 4.724 | | 0.2902 | 10.0 | 1850 | 0.2335 | 93.7171 | 73.3058 | 93.7211 | 93.689 | 4.7246 |
a585e0ec0713a5ed1a7fa28514743bb3
gpl-3.0
['twitter', 'masked-token-prediction', 'bertweet', 'election2020', 'politics']
false
Citation ```bibtex @inproceedings{kawintiranon2022polibertweet, title = {PoliBERTweet: A Pre-trained Language Model for Analyzing Political Content on Twitter}, author = {Kawintiranon, Kornraphop and Singh, Lisa}, booktitle = {Proceedings of the Language Resources and Evaluation Conference}, year = {2022}, publisher = {European Language Resources Association} } ```
a3a4982fcae9428659179f8a1796121b
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
Demo: How to use in ESPnet2 ```bash cd espnet git checkout 395bda6123ae268f991e5ef1dab887b6e677974a pip install -e . cd egs2/tamil/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/tamil_slu ``` <!-- Generated by scripts/utils/show_asr_result.sh -->
e3b27155743054aeae442065aec2477c
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
Environments - date: `Sun Oct 3 20:59:46 EDT 2021` - python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]` - espnet version: `espnet 0.10.3a3` - pytorch version: `pytorch 1.8.1+cu102` - Git hash: `b41391336042a4876e30d9fe5c66afb4e4be404c` - Commit date: `Wed Sep 22 10:02:03 2021 -0400`
c6d7bab49ef9b1520dacca1c2d29d8db
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |inference_asr_model_valid.acc.ave_5best/test|80|372|70.4|22.6|7.0|3.2|32.8|56.3| |inference_asr_model_valid.acc.ave_5best/valid|80|372|70.4|22.6|7.0|3.2|32.8|56.3|
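The Err column is the usual sum of the substitution, deletion and insertion rates; a quick check against the test row (a sketch of the standard WER decomposition):
```python
# WER decomposition: Err = Sub + Del + Ins (all reported per reference word, in %).
sub, dele, ins = 22.6, 7.0, 3.2
print(round(sub + dele + ins, 1))  # 32.8, matching the Err column above
```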
18f1a43378e1836cce4f64026e6e779c
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |inference_asr_model_valid.acc.ave_5best/test|80|3234|85.9|8.2|5.9|5.5|19.6|56.3| |inference_asr_model_valid.acc.ave_5best/valid|80|3234|85.9|8.2|5.9|5.5|19.6|56.3|
b7aa0a18c328abf2ae97153919d4e91c
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
ASR config <details><summary>expand</summary> ``` config: conf/train_asr_wav2vec2_xlsr.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp_train_asr_wav2vec2_xlsr/asr_train_asr_wav2vec2_xlsr_raw_word ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 250 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - train - loss - min - - valid - loss - min - - train - acc - max - - valid - acc - max keep_nbest_models: 5 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: - frontend.upstream num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp_train_asr_wav2vec2_xlsr/asr_stats_raw_word/train/speech_shape - exp_train_asr_wav2vec2_xlsr/asr_stats_raw_word/train/text_shape.word valid_shape_file: - exp_train_asr_wav2vec2_xlsr/asr_stats_raw_word/valid/speech_shape - exp_train_asr_wav2vec2_xlsr/asr_stats_raw_word/valid/text_shape.word batch_type: folded valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train/wav.scp - speech - sound - - dump/raw/train/text - text - text valid_data_path_and_name_and_type: - - dump/raw/valid/wav.scp - speech - sound - - dump/raw/valid/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.0001 scheduler: warmuplr scheduler_conf: warmup_steps: 5000 token_list: - <blank> - <unk> - காசு - வேணும் - Request_Acc_balance - Account - Money_deposit - Money_withdraw - Credit_card_payments - card - மீதி - Money_transfer - எவ்வளோ - Bill_payments - Credit - கட்ட - எவ்வளவு - காச - கட்டவேணும் - இந்த - Balance - வேண்டும் - போடோணும் - கணக்கு - செய்ய - Bill - போட - account - மாத்த - credit - pay - பண்ணோணும் - Deposit - மீளெடுக்க - வைப்பு - எடுக்கவேணும் - ல - இருக்கிற - எடுக்கணும் - இல - இருந்து - மற்ற - accountக்கு - balance - என்ன - bill - அ - ஒருக்கா - ஏலுமோ - deposit - பண்ண - payment - Account-la - காசெடுக்கோணும் - அனுப்பவேணும் - காசெடுக்க - இன்னொரு - கு - Cash - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false extract_feats_in_collect_stats: false use_preprocessor: true token_type: word bpemodel: null non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: s3prl frontend_conf: frontend_conf: upstream: wav2vec2_xlsr download_dir: ./hub multilayer_feature: 
true fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 30 num_freq_mask: 2 apply_time_mask: true time_mask_width_range: - 0 - 40 num_time_mask: 2 normalize: utterance_mvn normalize_conf: {} preencoder: linear preencoder_conf: input_size: 1024 output_size: 80 encoder: conformer encoder_conf: output_size: 512 attention_heads: 8 linear_units: 2048 num_blocks: 12 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.1 input_layer: conv2d normalize_before: true macaron_style: true pos_enc_layer_type: rel_pos selfattention_layer_type: rel_selfattn activation_type: swish use_cnn_module: true cnn_module_kernel: 31 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: attention_heads: 8 linear_units: 2048 num_blocks: 4 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.1 src_attention_dropout_rate: 0.1 required: - output_dir - token_list version: 0.10.3a3 distributed: false ``` </details>
cf5680518948350bc32a36ab70fe3156
apache-2.0
['generated_from_keras_callback']
false
javilonso/classificationEsp2_Attraction This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-large-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.9927 - Validation Loss: 0.9926 - Epoch: 2
fd853f5b074cfd3018ea7c834820d6a6
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 35916, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16
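The optimizer dictionary above corresponds to what `transformers.create_optimizer` builds for Keras training (AdamWeightDecay with a linear PolynomialDecay schedule); the following is a rough reconstruction, not the exact original code, and assumes no warmup.
```python
import tensorflow as tf
from transformers import create_optimizer

# training_precision: mixed_float16
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# AdamWeightDecay with a PolynomialDecay (power=1.0, i.e. linear) schedule from
# 2e-05 down to 0 over 35916 steps, weight_decay_rate=0.01; warmup=0 is assumed.
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=35916,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```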
f564b4653a766057258a9005d9c03bc1
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.8200 | 0.9930 | 0 | | 0.9942 | 0.9947 | 1 | | 0.9927 | 0.9926 | 2 |
e489f273374da89d9fa02346e8a345ed
apache-2.0
['translation']
false
opus-mt-iso-fi * source languages: iso * target languages: fi * OPUS readme: [iso-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/iso-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/iso-fi/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/iso-fi/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/iso-fi/opus-2020-01-09.eval.txt)
246074c440cf4692b458ee19ce3e290d
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1279
30b163f3b1f61053e34403622f05b979
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2189 | 1.0 | 5533 | 1.1554 | | 0.9761 | 2.0 | 11066 | 1.1279 |
e01728dfcd95583ef8d4c0a84f4d7130
apache-2.0
['translation']
false
opus-mt-sk-en * source languages: sk * target languages: en * OPUS readme: [sk-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sk-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sk-en/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sk-en/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sk-en/opus-2020-01-16.eval.txt)
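A usage sketch for this pair; the Hub id `Helsinki-NLP/opus-mt-sk-en` is assumed here from the usual OPUS-MT naming convention rather than stated in the card.
```python
from transformers import pipeline

# Hub id is an assumption based on the standard OPUS-MT naming convention.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-sk-en")
print(translator("Dobrý deň, ako sa máte?")[0]["translation_text"])
```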
204aef5c3de438443168f7887c233388
mit
['gpt_neo', 'code_synthesis']
false
GPT-Neo-125M-APPS-all > **Please refer to our new [GitHub Wiki](https://github.com/ncoop57/gpt-code-clippy/wiki) which documents our efforts in detail in creating the open source version of GitHub Copilot**
c5eed7779847142b04b7e18028471f97
mit
['gpt_neo', 'code_synthesis']
false
Training procedure The training script used to train this model can be found [here](https://github.com/ncoop57/gpt-code-clippy/blob/camera-ready/training/run_clm_apps.py). Training is done for 5 epochs using the AdamW optimizer and a linear-decay learning rate schedule with 800 warmup steps. To reproduce the training, one can use this command with the above script: ```bash python run_clm_apps.py \ --output_dir $HOME/gpt-neo-125M-apps \ --model_name_or_path EleutherAI/gpt-neo-125M \ --dataset_name $HOME/gpt-code-clippy/data_processing/apps.py \ --dataset_config_name formatted \ --do_train --do_eval \ --block_size="1024" \ --per_device_train_batch_size="16" \ --per_device_eval_batch_size="16" \ --preprocessing_num_workers="16" \ --learning_rate="8e-5" \ --warmup_steps="800" \ --adam_beta1="0.9" \ --adam_beta2="0.98" \ --weight_decay="0.1" \ --overwrite_output_dir \ --num_train_epochs="5" \ --logging_steps="50" \ --eval_steps="2000" \ --report_to="wandb" \ --dtype="bfloat16" \ --save_strategy epoch \ --gradient_accumulation_steps 2 \ --all_data true \ ```
2f2de0a320803fde7c82463724d3bf80
mit
['gpt_neo', 'code_synthesis']
false
How to use You can use this model directly for text generation. This example generates a different sequence each time it's run: ```py from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("flax-community/gpt-code-clippy-125M-apps-alldata") tokenizer = AutoTokenizer.from_pretrained("flax-community/gpt-code-clippy-125M-apps-alldata") prompt = """ A function to greet user. Given a user name it should say hello def greet(name): ANSWER: """ input_ids = tokenizer(prompt, return_tensors='pt').input_ids start = input_ids.size(1) out = model.generate(input_ids, do_sample=True, max_length=50, num_beams=2, early_stopping=True, eos_token_id=tokenizer.eos_token_id, ) print(tokenizer.decode(out[0][start:])) ```
e451e008e3bcc4aa965b4c9616f93e0a
mit
['gpt_neo', 'code_synthesis']
false
Limitations and Biases The model is intended to be used for research purposes and comes with no guarantees of quality of generated code. The paper ["Evaluating Large Language Models Trained on Code"](https://arxiv.org/abs/2107.03374) from OpenAI has a good discussion on what the impact of a large language model trained on code could be. Therefore, some parts of their discussion are highlighted here as they pertain to this dataset and to models that may be trained from it, **as well as some differences in views from the paper, particularly around legal implications**. 1. **Over-reliance:** This model may generate plausible solutions that appear correct but are not necessarily so. Not properly evaluating the generated code may have negative consequences, such as the introduction of bugs or security vulnerabilities. Therefore, it is important that users are aware of the limitations and potential negative consequences of using this language model. 2. **Economic and labor market impacts:** Large language models trained on large code datasets such as this one, which are capable of generating high-quality code, have the potential to automate part of the software development process. This may negatively impact software developers. However, as discussed in the paper and shown in the Summary Report on software developers from [O*NET OnLine](https://www.onetonline.org/link/summary/15-1252.00), developers don't just write software. 3. **Biases:** The model is trained on data containing prompt questions formatted in a specific way. The performance of the model can be worse if the prompt formatting differs from that used in the APPS dataset. GPT-CC is a fine-tuned GPT-Neo and might have inherited biases and limitations from it. See the [GPT-Neo model card](https://huggingface.co/EleutherAI/gpt-neo-125M).
eb4ffe18cf364faac02f8fe29b7a42cb
apache-2.0
['translation']
false
ukr-heb * source group: Ukrainian * target group: Hebrew * OPUS readme: [ukr-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-heb/README.md) * model: transformer-align * source language(s): ukr * target language(s): heb * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-heb/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-heb/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-heb/opus-2020-06-17.eval.txt)
7ede90afeb427b1eeda58bc194cc25a0
apache-2.0
['translation']
false
System Info: - hf_name: ukr-heb - source_languages: ukr - target_languages: heb - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-heb/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['uk', 'he'] - src_constituents: {'ukr'} - tgt_constituents: {'heb'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-heb/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-heb/opus-2020-06-17.test.txt - src_alpha3: ukr - tgt_alpha3: heb - short_pair: uk-he - chrF2_score: 0.557 - bleu: 35.7 - brevity_penalty: 1.0 - ref_len: 4765.0 - src_name: Ukrainian - tgt_name: Hebrew - train_date: 2020-06-17 - src_alpha2: uk - tgt_alpha2: he - prefer_old: False - long_pair: ukr-heb - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
5bdc71ba755ffdd94dc164561bcb8566
apache-2.0
['deep-narrow']
false
T5-Efficient-LARGE-DM2000 (Deep-Narrow version) T5-Efficient-LARGE-DM2000 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper: > We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased > before considering any other forms of uniform scaling across other dimensions. This is largely due to > how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a > tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, > a tall base model might also generally more efficient compared to a large model. We generally find > that, regardless of size, even if absolute performance might increase as we continue to stack layers, > the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 > layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., > params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, > FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to > consider. To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. A sequence of word embeddings is therefore processed sequentially by each transformer block.
8454415a7a9c0ce58b2a5193fe4f083b
apache-2.0
['deep-narrow']
false
Details model architecture This model checkpoint - **t5-efficient-large-dm2000** - is of model type **Large** with the following variations: - **dm** is **2000** It has **1475.39** million parameters and thus requires *ca.* **5901.57 MB** of memory in full precision (*fp32*) or **2950.78 MB** of memory in half precision (*fp16* or *bf16*). A summary of the *original* T5 model architectures can be seen here: | Model | nl (el/dl) | ff | dm | kv | nh |
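The memory figures follow from 4 bytes per parameter in full precision and 2 bytes per parameter in half precision; a quick check:
```python
params_millions = 1475.39
print(params_millions * 4)  # ≈ 5901.6 MB in full precision (fp32, 4 bytes/param)
print(params_millions * 2)  # ≈ 2950.8 MB in half precision (fp16/bf16, 2 bytes/param)
```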
9f646572cec1da0da14a64e35de6fa21
cc-by-4.0
['translation']
false
DeUnCaser The output from Automatic Speech Recognition software is usually uncased and without any punctuation. This does not make for very readable text. The DeUnCaser is a sequence-to-sequence model that reverses this process. It adds punctuation and capitalises the correct words. In some languages this means adding capital letters at the start of sentences and to all proper nouns; in other languages, like German, it means capitalising the first letter of all nouns. It will also attempt to add hyphens and parentheses if this makes the meaning clearer. It is based on the multilingual T5 model and is finetuned for 130,000 steps on a TPU v4-16 using T5X, starting from the mT5.1.1 pretrained model. The finetuning script uses up to 1,000,000 training examples (or as many as exist in OSCAR) from each of the 42 Latin-alphabet languages that are part of both OSCAR and the mT5 training set: Afrikaans, Albanian, Basque, Catalan, Cebuano, Czech, Danish, Dutch, English, Esperanto, Estonian, Finnish, French, Galician, German, Hungarian, Icelandic, Indonesian, Irish, Italian, Kurdish, Latin, Latvian, Lithuanian, Luxembourgish, Malagasy, Malay, Maltese, Norwegian Bokmål, Norwegian Nynorsk, Polish, Portuguese, Romanian, Slovak, Spanish, Swahili, Swedish, Turkish, Uzbek, Vietnamese, Welsh, West Frisian. A Notebook for creating the training corpus is available [here](https://colab.research.google.com/drive/1bkH94z-0wIQP8Pz0qXFndhoQsokU-78x?usp=sharing).
831a7d0ca2f2de31157aa3ae91cc4acb
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
Demo: How to use in ESPnet2 ```bash cd espnet git checkout fa1b865352475b744c37f70440de1cc6b257ba70 pip install -e . cd egs2/bn_openslr53/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/bn_openslr53 ``` <!-- Generated by scripts/utils/show_asr_result.sh -->
cf239128826af790c76158ef403e093f
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
Environments - date: `Mon Jan 31 10:53:20 EST 2022` - python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]` - espnet version: `espnet 0.10.6a1` - pytorch version: `pytorch 1.8.1+cu102` - Git hash: `9d09bf551a9fe090973de60e15adec1de6b3d054` - Commit date: `Fri Jan 21 11:43:15 2022 -0500`
fdbaad4230a28d10a28ab79a4a91a5f6
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_batch_size1_lm_lm_train_lm_bpe1000_valid.loss.ave_asr_model_valid.acc.best/sbn_test|2018|6470|74.2|21.3|4.5|2.2|28.0|48.8|
cd938def042db430bc1b20a38de5bf46
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_batch_size1_lm_lm_train_lm_bpe1000_valid.loss.ave_asr_model_valid.acc.best/sbn_test|2018|39196|89.4|4.3|6.3|1.4|12.0|48.8|
8191bcbd306e4593550c6d1e41b4b9e2
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_batch_size1_lm_lm_train_lm_bpe1000_valid.loss.ave_asr_model_valid.acc.best/sbn_test|2018|15595|77.6|12.7|9.7|1.6|24.0|48.7|
10e12c0508333d89aa4b803375a219f8
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
ASR config <details><summary>expand</summary> ``` config: conf/train_asr.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_raw_bpe1000 ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 200 patience: 20 val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 5 grad_clip_type: 2.0 grad_noise: false accum_grad: 20 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 200000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_bpe1000/train/speech_shape - exp/asr_stats_raw_bpe1000/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_bpe1000/valid/speech_shape - exp/asr_stats_raw_bpe1000/valid/text_shape.bpe batch_type: numel valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/sbn_train/wav.scp - speech - sound - - dump/raw/sbn_train/text - text - text valid_data_path_and_name_and_type: - - dump/raw/sbn_dev/wav.scp - speech - sound - - dump/raw/sbn_dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 10.0 scheduler: noamlr scheduler_conf: warmup_steps: 25000 token_list: - <blank> - <unk> - র - ে - ন - ের - া - ল - ক - ্ - ো - ত - ি - স - ▁ - ই - ী - য় - ম - ু - ▁আ - প - ব - তে - দ - শ - কে - টি - ্য - হ - ▁এ - ▁না - ▁ব - ও - গ - ট - রা - ▁অ - জ - ▁বি - ▁বা - ▁স - না - ার - ▁করে - ধ - নি - ▁ম - লে - ▁জ - ▁ও - ▁হ - চ - তা - দের - ▁মা - িত - ▁থেকে - ্যা - ণ - '-' - ▁প্র - তি - ▁হয় - ায় - িক - ▁এক - ▁পা - ▁ক - ঁ - ভ - ▁ভ - ▁সা - লা - ▁শ - ',' - ্র - ▁এই - ▁নি - ▁প - বা - ▁পর - ফ - ▁সে - ক্ষ - ছে - মা - ষ - ▁কা - টা - বে - িয়া - ড় - ▁দ - ▁চ - লি - ▁ই - ▁হা - ▁তার - ▁যে - থ - । - ড - ুল - িয়ে - ▁গ - বি - ▁তা - রি - কা - ▁র - ▁ফ - পা - ▁ন - ▁করা - ং - ▁আর - উ - নে - খ - য়ে - ▁নিয়ে - ▁তিনি - ▁একটি - নের - ▁হয়েছে - ্ব - ▁ত - ▁জন্য - ▁যা - বার - ঙ্গ - ান - স্ত - কার - জা - ূ - ঠ - ুর - ▁হবে - ▁মি - দা - াই - ▁জা - ▁বলে - ▁কি - ড়া - ▁ঘ - ▁দু - হা - ত্র - ০ - ছেন - ▁কথা - সি - াম - ▁ছিল - ▁উ - ▁বল - ▁তাদের - ৃ - ▁রা - ▁সঙ্গে - ▁প্রতি - ▁এবং - ▁ধ - ▁ল - ছ - ▁খা - ▁বে - ▁সময় - য়া - জন - মি - ন্ত - ▁করতে - ▁সু - ▁করেন - ীর - ৌ - ▁অনেক - গুলো - ষ্ট - ধা - সা - ▁হয়ে - ▁মধ্যে - ▁চা - ▁লা - ির - ▁১ - ▁সং - োর - ভাবে - ▁আমি - ১ - শা - াল - জি - ▁তারা - ▁যায় - মান - ▁কাজ - ▁কিছু - ▁দিয়ে - টে - রণ - ▁ড - ▁উপ - স্থ - দি - সে - ▁মে - ▁সরকার - ▁খ - ▁পার - ীয় - ক্ত - ওয়া - স্ট - এ - ▁বাংলাদেশ - ড়ে - ন্ট - ▁২ - ▁আছে - ▁সব - ছি - ▁দি - ▁আমার - ▁এখন - মে - ▁বছর - ▁ট - ▁শা - কি - ন্ড - ▁নাম - ▁কোন - 
দিন - পুর - ▁সম্ - ছিল - ▁পুলিশ - ▁য - ৈ - ▁মানুষ - ▁দা - েই - ▁এর - ▁সালে - ▁কর - ঘ - গ্র - ▁দিন - ▁পারে - ্ম - ৫ - ▁দেশ - ▁দেখ - ▁স্ব - ▁সম - ▁১৯ - ▁সি - ▁শুরু - ▁প্রথম - ত্ - ▁তো - ্ট - ▁আগে - ▁কোনো - ▁রয়েছে - ▁হচ্ছে - ▁অব - ছিলেন - যোগ - জে - ▁ভারত - ▁নে - প্র - ▁সেই - গা - ▁গা - হি - ন্ন - ▁ছ - ▁জন - ▁নির্ - খা - পি - ▁পে - ▁স্ - াব - ▁মো - ▁অনু - ▁কিন্তু - ৯ - ▁পরি - ▁ঢাকা - তার - লো - ▁বিষয় - ▁তাঁর - ৪ - র্থ - ▁অ্যা - ▁ঘটনা - ▁শেষ - ড়ি - লেন - ▁আমাদের - ▁বড় - দেশ - ▁নেই - ▁ব্যা - ানো - ▁বেশি - মার - বাস - ▁তবে - ▁কো - শি - ▁বিভিন্ন - ▁নয় - ৭ - নী - ৩ - ▁দল - ▁দেখা - ঝ - ▁করার - ▁কে - ▁হলে - ুক - ▁গু - ▁৩ - ৬ - ▁মনে - ▁নির্বাচন - ▁রাজ - ▁করেছে - ীন - লের - িতে - ▁একটা - ঞ্চ - ▁রাখ - ▁থাক - ▁আমরা - ▁চল - ২ - ▁কাছে - ▁মু - ▁পড় - ▁সহ - ▁হিসেবে - জ্ঞ - ান্ত - ণ্ড - ৎ - য়ের - ▁পু - ▁একজন - ▁বলেন - ুন - িং - ’ - ▁বাংলা - টার - ুম - ঞ্জ - ▁বাড়ি - ▁গত - ▁হাজার - ▁মতো - ডি - ▁তিন - দ্ধ - ▁এমন - ▁কয়েক - ▁কম - ত্ব - ্রা - ▁দিকে - ▁ছিলেন - ▁পড়ে - নার - ▁করি - কাল - ▁মুখ - ▁উঠ - র্ত - ▁টাকা - চার - শে - ▁এসে - ▁দুই - ▁করেছেন - ▁লোক - ম্প - ৮ - ষ্ঠ - ▁মহা - ▁কু - ▁থাকে - বাদ - চি - ▁এলাকা - ▁জানান - ▁প্রায় - ▁দেয়া - ▁গেল - য - চ্ছে - ▁ছবি - ▁নতুন - ▁অবস্থা - ▁অভি - ▁আজ - ▁কার - ▁খু - ▁জানা - ▁করছে - টির - ▁বাংলাদেশের - ▁বন্ধ - কারী - ▁অন্য - ▁ধরে - প্ত - ▁তাকে - ▁গেছে - ▁শি - চা - আ - ▁চাল - ▁আল - ▁৫ - ▁উত্ত - ▁ঝ - ▁জীবন - লার - ঙ - ▁প্রকাশ - ▁মেয়ে - ▁রে - ▁দেশের - ▁খেল - ▁মূল - ভি - ঙ্ক - ▁চি - ▁পর্যন্ত - ▁সাথে - লাম - ▁৪ - ▁টি - ▁বো - ▁আইন - গত - ▁হতে - ▁ভালো - . - স্ক - ▁অভিযোগ - ন্স - ▁কারণে - ▁অর্থ - ▁অপ - ক্স - বু - ▁২০ - ▁পাওয়া - ▁খুব - ▁মন - সম - ল্লা - ব্দ - ▁পি - ▁ওই - ▁করবে - য়ার - সহ - ক্ষণ - ▁নারী - ম্ব - ▁ফা - ▁বেশ - ▁পেয়ে - দে - ▁তখন - িয়ার - ▁ক্যা - ▁ছেলে - ▁চার - ভার - ▁দিতে - ▁ক্র - ▁গান - বাহিনী - ▁ভি - কৃত - ▁গো - বল - ▁ইসলাম - ▁জি - ▁ডি - ন্দ্র - ▁গ্রাম - ▁ওপর - ▁ভোট - ▁পাঠ - ▁গিয়ে - ▁মামলা - ▁ব্যবস্থা - সার - যুক্ত - ▁মাস - দার - ▁সেখানে - ▁জন্ম - ▁পদ - ▁কেউ - র্ণ - ▁দেওয়া - ভাগ - ▁১০ - ▁উদ্ - োয়া - রূপ - ▁ফেল - ▁তৈরি - ▁খবর - ▁কেন - ▁ভাষা - ▁৬ - ▁ভাব - ▁নেতা - ▁জানিয়েছে - ▁কী - ফা - ▁থাকা - ▁লি - টের - ▁ছা - ▁হল - ▁গ্র - ▁কর্ম - ▁সদস্য - ▁জাতীয় - ▁ব্র - দু - ▁কেন্দ্র - ▁হওয়ার - ▁দেব - ▁চলে - ▁হলো - তু - ▁বিশ্ব - ▁যাওয়া - ▁যাবে - ▁ট্র - ▁সম্পর্ক - ▁দিয়েছে - ▁যদি - ▁বিরুদ্ধে - ▁বিশেষ - ▁করলে - ▁ছোট - ▁অধি - ▁শুন - ▁আবার - ▁কারণ - ▁দলের - ▁ফি - ▁স্ট - ▁দেয় - ▁শিল্প - ▁রাজনৈতিক - ▁বলা - ▁ছাড়া - ▁জেলা - ▁দেখে - ▁প্রধান - ▁এসব - বন্ধ - ▁কর্মকর্তা - চ্ছি - ▁তথ্য - ▁অংশ - ▁দশ - ▁তাহা - মন্ত্রী - ৃত - ▁ঠিক - ▁রাত - ▁আসা - ▁থানা - ▁গোল - রাজ - ▁মৃত্যু - ▁রি - ▁পথ - ্যান - ▁বিচার - ▁শ্রমিক - ▁গল্প - ▁সকাল - ▁হাতে - ▁এটা - ▁কবি - ▁বাবা - ▁দাবি - ▁চাই - ▁মাধ্যমে - ▁হয়েছিল - ▁ঢ - ▁যাচ্ছে - ▁২০০ - ▁চলচ্চিত্র - ▁রহমান - ▁লেখা - ▁দেন - ▁পুরুষ - চিত্র - ▁ব্যবহার - ▁অনুষ্ঠান - ▁বর্তমান - ▁ধর্ম - ▁দাঁড় - ▁নিহত - ঃ - চ্ছ - ▁চেষ্টা - ▁চোখ - ▁উপজেলা - ▁আদালত - ▁সামনে - ▁রু - ▁চেয়ে - ▁সর্ব - ▁হত্যা - ▁গণ - ▁ডাক - ▁দ্বিতীয় - ▁ধরনের - ▁কবিতা - ▁ফলে - ▁সবচেয়ে - গুলি - ▁মোট - ▁পরিবার - ▁শিশু - ▁হোসেন - ▁রেখে - ▁রায় - ▁মাথা - ▁দুর্ - ▁৮ - ▁টা - ▁৭ - ▁বসে - ▁ওয়া - ▁ব্যক্তি - ▁শুধু - ▁ব্যাংক - ▁পাকিস্তান - ▁যখন - ▁করিয়া - ▁লিখ - পূর্ণ - ▁বিশ্ববিদ্যালয় - ▁সংখ্যা - ▁যুদ্ধ - ▁হইয়া - ▁ক্ষমতা - ▁সাধারণ - ▁কোটি - ▁শিক্ষা - ▁আলো - ▁তুলে - ▁সত্য - ▁ঘটে - '''' - ▁দূর - ▁প্রশ্ন - ুদ্ধ - ▁লাখ - ▁নিজের - েশন - ▁আলোচনা - ঈ - ▁ক্রিকেট - ▁সমাজ - ▁বয়স - ▁গ্রহণ - ▁জায়গা - ▁ব্যবসা - বর্তী - জীব - কল্প - ▁প্রত্য - ▁মাত্র - ▁উৎ - ▁শহরে - ▁এখানে - ▁নেয়া - ▁ঘোষণা - ▁সকল - ▁আটক - ▁নিরাপত্তা - ▁পাঁচ - ▁পূর্ব - ▁রাষ্ট্র - ▁ভাই - ▁বহু - ▁পরীক্ষা - ▁পুরো - ▁বাইরে - ▁থাকবে - ▁ক্ষেত্রে - ▁স্থান 
- ▁ম্যাচ - ▁ঘরে - ▁সবাই - ার্ড - ▁উদ্ধার - ▁ইতিহাস - ▁সাহিত্য - ▁সুযোগ - ▁আন্দোলন - ▁যুক্তরাষ্ট্র - দর্শন - ▁১২ - ▁১৮ - ▁প্রেম - ▁আন্তর্জাতিক - ল্যান্ড - ▁সমস্যা - ▁বিভাগ - ▁সিদ্ধান্ত - ▁মধ্য - ন্দি - ▁ছাত্র - ▁গাড়ি - ▁দীর্ঘ - ▁সংবাদ - ▁প্রয়োজন - ▁সিনেমা - ▁রাজধানী - ▁স্থানীয় - ▁একটু - ▁বাজার - জ্জ - ▁পৃথিবী - ▁বিশ্বাস - ▁আহত - ▁দায়িত্ব - ▁হরতাল - ▁সম্ভব - ▁অফিস - ▁অভিনয় - ▁কলেজ - ▁চট্টগ্রাম - ▁ক্ল - ▁দক্ষিণ - ▁পক্ষে - ▁মুক্তি - ▁সংসদ - ‘ - ▁উপস্থিত - ▁ফিরে - ▁আগামী - ▁সংগঠন - ▁মিনিট - ▁হামলা - ▁প্রতিষ্ঠান - ▁পোশাক - ▁প্ল - ▁সৃষ্টি - ▁কমিশন - ▁আমাকে - ▁তদন্ত - ▁উচ্চ - ▁রাজনীতি - দ্দ - ▁দর্শক - ▁তুমি - ▁পরিস্থিতি - াহার - ▁ক্ষতি - ▁আত্ম - ▁গ্রেপ্তার - ▁ফুট - ▁পাশাপাশি - মূল - ▁প্রধানমন্ত্রী - কর্মী - ▁সুন্দর - ▁নিয়ম - ▁আগুন - বিজ্ঞান - ▁সাংবাদিক - ▁লক্ষ্য - ▁অবশ্য - ▁শরীর - ▁উল্লেখ - ▁শতাংশ - ▁স্কুল - ভূত - ▁গ্রন্থ - ▁কখনো - ▁প্রাণ - ▁কারখানা - ▁হিন্দু - ▁বিবিসি - ▁আপনার - ▁আহমেদ - ▁স্ত্রী - বর্ষ - ▁শক্তি - সভা - ▁রাস্তা - ▁রকম - ▁পশ্চিম - ▁অপরাধ - ▁আসছে - ▁সংস্থা - ▁পৌঁছ - ▁দোকান - ▁পত্রিকা - ▁লেখক - ▁সন্তান - ▁ভেতর - ▁এগিয়ে - ▁নদী - ▁হইল - ▁পরিবেশ - ▁প্রেসিডেন্ট - ▁ছেড়ে - ▁চেয়ারম্যান - ▁ধারা - বৃত্ত - ▁বিক্রি - ▁শ্রী - ▁রক্ষা - ▁দ্রুত - ▁পরিচয় - ▁মালিক - ▁উপন্যাস - ▁শিক্ষার্থী - ▁অন্যতম - ▁চরিত্র - ▁প্রতিবেদন - ▁প্রস্তুত - ▁অভিযান - তন্ত্র - ▁অগ্নি - ▁জনগণ - ▁বৃহস্পতিবার - ▁ব্যাপক - ▁অনুযায়ী - ▁পরিবর্তন - ▁কলকাতা - ভূমি - ▁নজরুল - ▁ভূমিকা - ▁জনপ্রিয় - ▁শিক্ষক - ▁তেমন - ▁অন্যান্য - ▁বিদ্যুৎ - খ্যাত - ▁অস্ত্র - ▁প্রস্তাব - ▁স্বামী - ▁পরিচিত - ▁আয়োজন - ▁শনিবার - ▁তাঁকে - ▁যাত্রী - প্রাপ্ত - ▁কর্মসূচি - ▁গঠন - ▁প্রভাব - ▁কৃষ্ণ - ▁সমাবেশ - ▁সূত্র - ▁অনুষ্ঠিত - ▁পর্যায়ে - ঋ - ▁পুরস্কার - ▁বিক্ষোভ - ▁নিয়ন্ত্রণ - ▁রোববার - ▁প্রার্থী - ▁যোগাযোগ - ▁সোমবার - ▁মার্চ - ▁কমিটি - ▁সংঘর্ষ - ▁বুধবার - ▁সামাজিক - ▁তাঁদের - ▁মার্কিন - ▁সামরিক - ▁নিজেদের - ▁মঙ্গলবার - ▁বক্তব্য - ▁চুক্তি - ▁যুগ - ▁বৈঠক - ▁ইউনিয়ন - ▁মোহাম্মদ - অ - ▁তাঁহার - ▁নির্মাণ - ▁জানুয়ারি - ▁আবেদন - ▁বিশ্বকাপ - ▁ফেব্রুয়ারি - ▁তরুণ - ▁হিসাব - ▁সন্ধ্যা - ▁পরিকল্পনা - ▁উইকেট - ▁ধারণা - ▁আনন্দ - মুক্ত - ▁উদ্দেশ্য - ▁চিকিৎসা - ▁উন্নয়ন - ▁আধুনিক - ▁ভিত্তি - ':' - "\x94" - ঢ - ‍ - ় - e - / - i - r - t - o - '%' - l - a - n - '!' - p - '"' - s - '?' - d - '0' - '3' - u - ঞ - f - g - c - m - h - – - w - b - ; - x - '8' - '5' - '9' - k - ” - y - H - L - T - j - ৗ - B - K - _ - z - “ - F - v - '4' - '1' - '2' - ঔ - ঊ - "\x93" - D - O - œ - ঐ - ৰ - — - <sos/eos> init: chainer input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true joint_net_conf: null model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false use_preprocessor: true token_type: bpe bpemodel: data/token_list/bpe_unigram1000/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: default frontend_conf: fs: 16k specaug: null specaug_conf: {} normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_bpe1000/train/feats_stats.npz preencoder: null preencoder_conf: {} encoder: transformer encoder_conf: input_layer: conv2d num_blocks: 12 linear_units: 2048 dropout_rate: 0.1 output_size: 256 attention_heads: 4 attention_dropout_rate: 0.0 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: input_layer: embed num_blocks: 6 linear_units: 2048 dropout_rate: 0.1 required: - output_dir - token_list version: 0.10.6a1 distributed: false ``` </details>
5ae797b860b6665fb8cd1941289c2f9a
apache-2.0
['generated_from_trainer']
false
distilroberta-base-model This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7929
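Assuming this is the usual masked-language-modelling fine-tuning for a distilroberta checkpoint, the reported cross-entropy loss corresponds to a perplexity of roughly:
```python
import math

# Perplexity implied by the evaluation loss (assuming a cross-entropy MLM objective).
print(math.exp(1.7929))  # ≈ 6.0
```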
304c2a67a31c5ac50ee80a468fc9e02a
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 2.0892 | 1.0 | 27036 | 1.8990 | | 1.9644 | 2.0 | 54072 | 1.8040 | | 1.9174 | 3.0 | 81108 | 1.7929 |
4c8bf3a5f9f8a9e28b5ce6ab6b653fdb
apache-2.0
['g2p', 'text2text-generation']
false
ID G2P LSTM ID G2P LSTM is a grapheme-to-phoneme model based on the [LSTM](https://doi.org/10.1162/neco.1997.9.8.1735) architecture. It was trained from scratch on a modified [Malay/Indonesian lexicon](https://huggingface.co/datasets/bookbot/id_word2phoneme) using the [Keras](https://keras.io/) framework, with all training done on Google Colaboratory. We adapted the [LSTM training script](https://keras.io/examples/nlp/lstm_seq2seq/) provided by the official Keras code examples.
4ee45ec027966f1609480fe88a3d2a7b
apache-2.0
['g2p', 'text2text-generation']
false
Training Procedure <details> <summary>Model Config</summary> latent_dim: 256 num_encoder_tokens: 28 num_decoder_tokens: 32 max_encoder_seq_length: 24 max_decoder_seq_length: 25 </details> <details> <summary>Training Setting</summary> batch_size: 64 optimizer: "rmsprop" loss: "categorical_crossentropy" learning_rate: 0.001 epochs: 100 </details>
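The config above describes the standard character-level encoder-decoder LSTM from the linked Keras `lstm_seq2seq` example; below is a minimal construction sketch with those dimensions, not the exact training script.
```python
import keras
from keras import layers

# Character-level seq2seq LSTM matching the dimensions in the config above;
# a sketch of the architecture only.
latent_dim, num_encoder_tokens, num_decoder_tokens = 256, 28, 32

encoder_inputs = keras.Input(shape=(None, num_encoder_tokens))
_, state_h, state_c = layers.LSTM(latent_dim, return_state=True)(encoder_inputs)

decoder_inputs = keras.Input(shape=(None, num_decoder_tokens))
decoder_outputs, _, _ = layers.LSTM(
    latent_dim, return_sequences=True, return_state=True
)(decoder_inputs, initial_state=[state_h, state_c])
decoder_outputs = layers.Dense(num_decoder_tokens, activation="softmax")(decoder_outputs)

model = keras.Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer="rmsprop", loss="categorical_crossentropy")
```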
86bee037d027672c64bcdc9ef8e2b103
apache-2.0
['g2p', 'text2text-generation']
false
How to Use <details> <summary>Tokenizers</summary> g2id = { ' ': 27, "'": 0, '-': 1, 'a': 2, 'b': 3, 'c': 4, 'd': 5, 'e': 6, 'f': 7, 'g': 8, 'h': 9, 'i': 10, 'j': 11, 'k': 12, 'l': 13, 'm': 14, 'n': 15, 'o': 16, 'p': 17, 'q': 18, 'r': 19, 's': 20, 't': 21, 'u': 22, 'v': 23, 'w': 24, 'y': 25, 'z': 26 } p2id = { '\t': 0, '\n': 1, ' ': 31, '-': 2, 'a': 3, 'b': 4, 'd': 5, 'e': 6, 'f': 7, 'g': 8, 'h': 9, 'i': 10, 'j': 11, 'k': 12, 'l': 13, 'm': 14, 'n': 15, 'o': 16, 'p': 17, 'r': 18, 's': 19, 't': 20, 'u': 21, 'v': 22, 'w': 23, 'z': 24, 'ŋ': 25, 'ə': 26, 'ɲ': 27, 'ʃ': 28, 'ʒ': 29, 'ʔ': 30 } </details> ```py import keras import numpy as np from huggingface_hub import from_pretrained_keras latent_dim = 256 bos_token, eos_token, pad_token = "\t", "\n", " " num_encoder_tokens, num_decoder_tokens = 28, 32 max_encoder_seq_length, max_decoder_seq_length = 24, 25 model = from_pretrained_keras("bookbot/id-g2p-lstm") encoder_inputs = model.input[0] encoder_outputs, state_h_enc, state_c_enc = model.layers[2].output encoder_states = [state_h_enc, state_c_enc] encoder_model = keras.Model(encoder_inputs, encoder_states) decoder_inputs = model.input[1] decoder_state_input_h = keras.Input(shape=(latent_dim,), name="input_3") decoder_state_input_c = keras.Input(shape=(latent_dim,), name="input_4") decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c] decoder_lstm = model.layers[3] decoder_outputs, state_h_dec, state_c_dec = decoder_lstm( decoder_inputs, initial_state=decoder_states_inputs ) decoder_states = [state_h_dec, state_c_dec] decoder_dense = model.layers[4] decoder_outputs = decoder_dense(decoder_outputs) decoder_model = keras.Model( [decoder_inputs] + decoder_states_inputs, [decoder_outputs] + decoder_states ) def inference(sequence): id2p = {v: k for k, v in p2id.items()} input_seq = np.zeros( (1, max_encoder_seq_length, num_encoder_tokens), dtype="float32" ) for t, char in enumerate(sequence): input_seq[0, t, g2id[char]] = 1.0 input_seq[0, t + 1 :, g2id[pad_token]] = 1.0 states_value = encoder_model.predict(input_seq) target_seq = np.zeros((1, 1, num_decoder_tokens)) target_seq[0, 0, p2id[bos_token]] = 1.0 stop_condition = False decoded_sentence = "" while not stop_condition: output_tokens, h, c = decoder_model.predict([target_seq] + states_value) sampled_token_index = np.argmax(output_tokens[0, -1, :]) sampled_char = id2p[sampled_token_index] decoded_sentence += sampled_char if sampled_char == eos_token or len(decoded_sentence) > max_decoder_seq_length: stop_condition = True target_seq = np.zeros((1, 1, num_decoder_tokens)) target_seq[0, 0, sampled_token_index] = 1.0 states_value = [h, c] return decoded_sentence.replace(eos_token, "") inference("mengembangkannya") ```
615a2eeec67e65dffa92209771b738ff
apache-2.0
['g2p', 'text2text-generation']
false
Authors ID G2P LSTM was trained and evaluated by [Ananto Joyoadikusumo](https://anantoj.github.io/), [Steven Limcorn](https://stevenlimcorn.github.io/), [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory.
0df6260cdfb028beb2ec0bbb95ffa5ce
mit
['generated_from_keras_callback']
false
nandysoham16/Paper-clustered This model is a fine-tuned version of [nandysoham16/16-clustered_aug](https://huggingface.co/nandysoham16/16-clustered_aug) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3349 - Train End Logits Accuracy: 0.8854 - Train Start Logits Accuracy: 0.9132 - Validation Loss: 0.4416 - Validation End Logits Accuracy: 0.75 - Validation Start Logits Accuracy: 0.5 - Epoch: 0
3a34e98555e29246a19cc43fbe4e56e4
mit
['generated_from_keras_callback']
false
Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 0.3349 | 0.8854 | 0.9132 | 0.4416 | 0.75 | 0.5 | 0 |
233d970a0d3626a0ecc9e12377745b79
cc-by-4.0
['generated_from_trainer']
false
results This model is a fine-tuned version of [paust/pko-t5-small](https://huggingface.co/paust/pko-t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 10.5155 - Bleu: 0.8 - Gen Len: 19.0
cc6d22ce6d7c47a427d2e75f1d204a3f
cc-by-4.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | No log | 1.0 | 6 | 10.9861 | 0.8359 | 19.0 | | No log | 2.0 | 12 | 10.5155 | 0.8 | 19.0 |
f2e1712facd5b54b123e3637795fb8ba
apache-2.0
['generated_from_trainer', 'whisper-event']
false
whisper-medium-mediaspeech-cv-tr This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1813 - Wer: 9.9776
64922447625d642958ad85040e0e6b14
apache-2.0
['generated_from_trainer', 'whisper-event']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.1187 | 0.33 | 1000 | 0.2169 | 13.7678 | | 0.0579 | 1.26 | 2000 | 0.1814 | 10.8222 | | 0.0313 | 2.19 | 3000 | 0.1813 | 9.9776 |
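Figures like the Wer column are typically computed with the 🤗 `evaluate` library; a minimal sketch with illustrative strings only (not the actual evaluation data):
```python
import evaluate

# Illustrative only: compute a WER percentage for toy prediction/reference pairs.
wer_metric = evaluate.load("wer")
wer = wer_metric.compute(
    predictions=["merhaba dünya", "nasılsın"],
    references=["merhaba dünya", "nasılsınız"],
)
print(100 * wer)
```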
aba161636247e5835c54cd8656cd106e
apache-2.0
['generated_from_keras_callback']
false
shaun-e-j/bert-finetuned-testing This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 5.9966 - Epoch: 1
a41f7322cca552d2abe9085312062f89
apache-2.0
['translation']
false
lit-ita * source group: Lithuanian * target group: Italian * OPUS readme: [lit-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lit-ita/README.md) * model: transformer-align * source language(s): lit * target language(s): ita * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-ita/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-ita/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-ita/opus-2020-06-17.eval.txt)
61a6d90adcac2b2320f8ee3ea777b5ee
apache-2.0
['translation']
false
System Info: - hf_name: lit-ita - source_languages: lit - target_languages: ita - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lit-ita/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['lt', 'it'] - src_constituents: {'lit'} - tgt_constituents: {'ita'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/lit-ita/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/lit-ita/opus-2020-06-17.test.txt - src_alpha3: lit - tgt_alpha3: ita - short_pair: lt-it - chrF2_score: 0.657 - bleu: 42.2 - brevity_penalty: 0.9740000000000001 - ref_len: 1505.0 - src_name: Lithuanian - tgt_name: Italian - train_date: 2020-06-17 - src_alpha2: lt - tgt_alpha2: it - prefer_old: False - long_pair: lit-ita - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
75777da7678aef657e5f6f302137d923
apache-2.0
['text-classification', 'emotion', 'pytorch']
false
Model Performance Comparison on the Emotion Dataset from Twitter: | Model | Accuracy | F1 Score | Test Samples per Second | | --- | --- | --- | --- | | [Distilbert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/distilbert-base-uncased-emotion) | 93.8 | 93.79 | 398.69 | | [Bert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/bert-base-uncased-emotion) | 94.05 | 94.06 | 190.152 | | [Roberta-base-emotion](https://huggingface.co/bhadresh-savani/roberta-base-emotion) | 93.95 | 93.97 | 195.639 | | [Albert-base-v2-emotion](https://huggingface.co/bhadresh-savani/albert-base-v2-emotion) | 93.6 | 93.65 | 182.794 | | [Electra-base-emotion](https://huggingface.co/bhadresh-savani/electra-base-emotion) | 91.95 | 91.90 | 472.72 |
b24b416c79eed7ad2bbbf4b7b5b72e86
apache-2.0
['text-classification', 'emotion', 'pytorch']
false
How to Use the model: ```python from transformers import pipeline classifier = pipeline("text-classification",model='bhadresh-savani/electra-base-emotion', return_all_scores=True) prediction = classifier("I love using transformers. The best part is wide range of support and its easy to use", ) print(prediction) """ Output: [[ {'label': 'sadness', 'score': 0.0006792712374590337}, {'label': 'joy', 'score': 0.9959300756454468}, {'label': 'love', 'score': 0.0009452480007894337}, {'label': 'anger', 'score': 0.0018055217806249857}, {'label': 'fear', 'score': 0.00041110432357527316}, {'label': 'surprise', 'score': 0.0002288572577526793} ]] """ ```
f269b404cd17c706fc306b3994d8eb76
apache-2.0
['text-classification', 'emotion', 'pytorch']
false
Eval results ```json { 'epoch': 8.0, 'eval_accuracy': 0.9195, 'eval_f1': 0.918975455617076, 'eval_loss': 0.3486028015613556, 'eval_runtime': 4.2308, 'eval_samples_per_second': 472.726, 'eval_steps_per_second': 7.564 } ```
7cbcb8ea1fbf0b4c27235f53e877a841
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-combined-DS This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0232 - Accuracy: 0.6362 - Precision: 0.6193 - Recall: 0.6204 - F1: 0.6160
be61386b191aa4d99846036b650d54c5
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.1187640010910775e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 43 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6
b7b51e128c7aa6b175de8b8f31edfe98
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 1.0408 | 1.0 | 711 | 1.0206 | 0.5723 | 0.5597 | 0.5122 | 0.4897 | | 0.9224 | 2.0 | 1422 | 0.9092 | 0.5695 | 0.5745 | 0.5610 | 0.5572 | | 0.8395 | 3.0 | 2133 | 0.8878 | 0.6088 | 0.6083 | 0.6071 | 0.5981 | | 0.7418 | 3.99 | 2844 | 0.8828 | 0.6088 | 0.6009 | 0.6068 | 0.5936 | | 0.6484 | 4.99 | 3555 | 0.9636 | 0.6355 | 0.6235 | 0.6252 | 0.6184 | | 0.5644 | 5.99 | 4266 | 1.0232 | 0.6362 | 0.6193 | 0.6204 | 0.6160 |
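Accuracy, precision, recall and F1 of this kind can be computed with scikit-learn; a sketch with toy labels (the macro averaging mode is an assumption, since the card does not state which average was used):
```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Toy labels; macro averaging is assumed here, the card does not say which was used.
y_true = [0, 1, 2, 1, 0, 2, 2, 1]
y_pred = [0, 1, 1, 1, 0, 2, 0, 1]
precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="macro")
print(accuracy_score(y_true, y_pred), precision, recall, f1)
```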
895098e04823813c06c305fba1445b92
apache-2.0
['generated_from_keras_callback']
false
Yujun1of1/concrete-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.2256 - Validation Loss: 2.6946 - Epoch: 0
6cadd4939f5f51f5c482055fc255ec2e
apache-2.0
['generated_from_trainer']
false
wav2vec2-base960-english-phoneme_v3 This model is a fine-tuned version of [facebook/wav2vec2-large](https://huggingface.co/facebook/wav2vec2-large) on the TIMIT dataset. It achieves the following results on the evaluation set: - Loss: 0.3697 - Cer: 0.0987
84d198198bb32c67e7a809a6dee738ae
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Per | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.2678 | 6.94 | 500 | 0.2347 | 0.0874 | | 0.25 | 13.88 | 1000 | 0.3358 | 0.1122 | | 0.2126 | 20.83 | 1500 | 0.3865 | 0.1131 | | 0.1397 | 27.77 | 2000 | 0.4162 | 0.1085 | | 0.0916 | 34.72 | 2500 | 0.4429 | 0.1086 | | 0.0594 | 41.66 | 3000 | 0.3697 | 0.0987 |
50882f957e823c4ecaef59d5be5ce9b9
apache-2.0
['generated_from_trainer']
false
finetuned_token_3e-05_all_16_02_2022-16_29_13 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1630 - Precision: 0.3684 - Recall: 0.3714 - F1: 0.3699 - Accuracy: 0.9482
f30237b61cdf48270b29ec9abdd0c7a3
afl-3.0
[]
false
Model Description We release all models introduced in our [paper](https://arxiv.org/pdf/2206.11147.pdf), covering 13 different application scenarios. Each model contains 11 billion parameters. | Model | Description | Recommended Application | ----------- | ----------- |----------- | | rst-all-11b | Trained with all the signals below except signals that are used to train Gaokao models | All applications below (specialized models are recommended first if high performance is preferred) | | rst-fact-retrieval-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym, wikiHow category hierarchy, Wikidata relation, Wikidata entity typing, Paperswithcode entity typing | Knowledge intensive tasks, information extraction tasks,factual checker | | rst-summarization-11b | Trained with the following signals: DailyMail summary, Paperswithcode summary, arXiv summary, wikiHow summary | Summarization or other general generation tasks, meta-evaluation (e.g., BARTScore) | | rst-temporal-reasoning-11b | Trained with the following signals: DailyMail temporal information, wikiHow procedure | Temporal reasoning, relation extraction, event-based extraction | | rst-information-extraction-11b | Trained with the following signals: Paperswithcode entity, Paperswithcode entity typing, Wikidata entity typing, Wikidata relation, Wikipedia entity | Named entity recognition, relation extraction and other general IE tasks in the news, scientific or other domains| | rst-intent-detection-11b | Trained with the following signals: wikiHow goal-step relation | Intent prediction, event prediction | | rst-topic-classification-11b | Trained with the following signals: DailyMail category, arXiv category, wikiHow text category, Wikipedia section title | general text classification | | rst-word-sense-disambiguation-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym | Word sense disambiguation, part-of-speech tagging, general IE tasks, common sense reasoning | | **rst-natural-language-inference-11b** | **Trained with the following signals: ConTRoL dataset, DREAM dataset, LogiQA dataset, RACE & RACE-C dataset, ReClor dataset, DailyMail temporal information** | **Natural language inference, multiple-choice question answering, reasoning** | | rst-sentiment-classification-11b | Trained with the following signals: Rotten Tomatoes sentiment, Wikipedia sentiment | Sentiment classification, emotion classification | | rst-gaokao-rc-11b | Trained with multiple-choice QA datasets that are used to train the [T0pp](https://huggingface.co/bigscience/T0pp) model | General multiple-choice question answering| | rst-gaokao-cloze-11b | Trained with manually crafted cloze datasets | General cloze filling| | rst-gaokao-writing-11b | Trained with example essays from past Gaokao-English exams and grammar error correction signals | Essay writing, story generation, grammar error correction and other text generation tasks |
7c4e773290f612b5d07b979ef7cb2415
apache-2.0
['generated_from_keras_callback']
false
silviacamplani/distilbert-finetuned-tapt-ner-music This model is a fine-tuned version of [silviacamplani/distilbert-finetuned-tapt-lm-ai](https://huggingface.co/silviacamplani/distilbert-finetuned-tapt-lm-ai) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6932 - Validation Loss: 0.7886 - Train Precision: 0.5347 - Train Recall: 0.5896 - Train F1: 0.5608 - Train Accuracy: 0.8078 - Epoch: 9
630c1dd9519e8e02c0a58425df0573a4
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch | |:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:| | 2.7047 | 2.0137 | 0.0 | 0.0 | 0.0 | 0.5482 | 0 | | 1.7222 | 1.5112 | 0.0 | 0.0 | 0.0 | 0.5561 | 1 | | 1.3564 | 1.2817 | 0.2382 | 0.2592 | 0.2483 | 0.6686 | 2 | | 1.1641 | 1.1378 | 0.3605 | 0.3816 | 0.3708 | 0.7043 | 3 | | 1.0188 | 1.0187 | 0.4583 | 0.4950 | 0.4760 | 0.7409 | 4 | | 0.8983 | 0.9267 | 0.4946 | 0.5383 | 0.5155 | 0.7638 | 5 | | 0.8117 | 0.8649 | 0.5152 | 0.5653 | 0.5391 | 0.7816 | 6 | | 0.7550 | 0.8206 | 0.5283 | 0.5806 | 0.5532 | 0.8007 | 7 | | 0.7132 | 0.7964 | 0.5326 | 0.5887 | 0.5592 | 0.8049 | 8 | | 0.6932 | 0.7886 | 0.5347 | 0.5896 | 0.5608 | 0.8078 | 9 |
1d05357a2323588f3d78f9131d78e5df
mit
['generated_from_trainer']
false
gallant_beaver This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
ac4f9fbfb34a99dab980965d7c71cefa
mit
['generated_from_trainer']
false
Full config {'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25354], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}, {'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prompts_path': 'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'force_call_on': [25354], 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'path_or_name': 'gpt2'}, 'objective': {'name': 'MLE'}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'gallant_beaver', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25354, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}}
b02e078427d3624482915aee454c0063
apache-2.0
['generated_from_trainer']
false
bert-base-multilingual-cased-finetuned-squad This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3348
ffa5b448bfd4b44d06b7dc5b591f03a2
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.303 | 1.0 | 1997 | 1.2828 |
| 0.8647 | 2.0 | 3994 | 1.2168 |
| 0.6267 | 3.0 | 5991 | 1.3348 |
294f997ba223e900ce835ec213a11188
creativeml-openrail-m
[]
false
Pinata dreambooth model for Stable-Diffusion

Trained on 30 creatures for 2000 steps with TheLastBen's fast-stable-diffusion (https://github.com/TheLastBen/fast-stable-diffusion). Use the token **dbvvpinata** in your prompts.

![grid-0004.png](https://s3.amazonaws.com/moonup/production/uploads/1668172835237-632702a8a28c096b45a9fb38.png)
![grid-0012.png](https://s3.amazonaws.com/moonup/production/uploads/1668172927129-632702a8a28c096b45a9fb38.png)
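A minimal generation sketch with the diffusers library might look as follows; the repository id `your-username/pinata-dreambooth` is a placeholder for wherever this checkpoint is hosted, and only the **dbvvpinata** token comes from the card itself.

```python
import torch
from diffusers import StableDiffusionPipeline

# placeholder repo id; substitute the actual location of this DreamBooth checkpoint
pipe = StableDiffusionPipeline.from_pretrained(
    "your-username/pinata-dreambooth", torch_dtype=torch.float16
).to("cuda")

# the trained token dbvvpinata triggers the learned concept
image = pipe("a photo of a dbvvpinata creature at a birthday party, colorful paper texture").images[0]
image.save("dbvvpinata.png")
```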
edb235e8fb1d329ac01463c776c6b80a
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.742 | 1.0 | 2334 | 3.6593 |
| 3.6297 | 2.0 | 4668 | 3.6440 |
| 3.5795 | 3.0 | 7002 | 3.6391 |
702608fc481a82dd5765fbe7e79c89f7
apache-2.0
['generated_from_trainer']
false
resnet-50-finetuned-FER2013-0.001 This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the image_folder dataset. It achieves the following results on the evaluation set: - Loss: 0.9002 - Accuracy: 0.6847
f5ea2a12d596672c6c25a0985ea8f756
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
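As a rough sketch, these values could be expressed through the Transformers `TrainingArguments` API as shown below; the `output_dir` is illustrative and not taken from the card.

```python
from transformers import TrainingArguments

# illustrative mapping of the hyperparameters listed above; output_dir is a placeholder
training_args = TrainingArguments(
    output_dir="resnet-50-finetuned-FER2013-0.001",
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,  # 32 x 4 = 128 effective train batch size
    num_train_epochs=10,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
)
```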
ba7d26f8722a3387a0e55fade8257679
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4723 | 1.0 | 224 | 1.3382 | 0.4887 |
| 1.2236 | 2.0 | 448 | 1.1090 | 0.5751 |
| 1.1728 | 3.0 | 672 | 1.0262 | 0.6158 |
| 1.1545 | 4.0 | 896 | 0.9717 | 0.6339 |
| 1.0776 | 5.0 | 1120 | 0.9885 | 0.6360 |
| 1.0183 | 6.0 | 1344 | 0.9475 | 0.6560 |
| 0.9856 | 7.0 | 1568 | 0.9114 | 0.6700 |
| 0.953 | 8.0 | 1792 | 0.9074 | 0.6767 |
| 0.9151 | 9.0 | 2016 | 0.9076 | 0.6833 |
| 0.9355 | 10.0 | 2240 | 0.9002 | 0.6847 |
5c13ec8af4ea637b1e99de28d61e3b5f
mit
[]
false
alberto mielgo on Stable Diffusion This is the `<street>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<street> 0](https://huggingface.co/sd-concepts-library/alberto-mielgo/resolve/main/concept_images/1.jpeg) ![<street> 1](https://huggingface.co/sd-concepts-library/alberto-mielgo/resolve/main/concept_images/5.jpeg) ![<street> 2](https://huggingface.co/sd-concepts-library/alberto-mielgo/resolve/main/concept_images/3.jpeg) ![<street> 3](https://huggingface.co/sd-concepts-library/alberto-mielgo/resolve/main/concept_images/2.jpeg) ![<street> 4](https://huggingface.co/sd-concepts-library/alberto-mielgo/resolve/main/concept_images/0.jpeg) ![<street> 5](https://huggingface.co/sd-concepts-library/alberto-mielgo/resolve/main/concept_images/4.jpeg)
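If you prefer to use the concept outside the linked notebooks, a minimal sketch with the diffusers library could look as follows; it assumes a diffusers release that provides `load_textual_inversion`, and `CompVis/stable-diffusion-v1-4` is used only as an example base model. The concept repository and the `<street>` token come from this card.

```python
import torch
from diffusers import StableDiffusionPipeline

# example base model; the concept embeddings were trained against Stable Diffusion v1.x
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# downloads the learned embedding from the concept repository and registers the <street> token
pipe.load_textual_inversion("sd-concepts-library/alberto-mielgo")

image = pipe("a rainy city intersection at dusk in the style of <street>").images[0]
image.save("street-style.png")
```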
bfe17a1fc80a36578fe558b27ea2a1ae
apache-2.0
['generated_from_trainer']
false
results This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9229 - Accuracy: 0.7586
2e70c612ccef63667facd2f72ae8513c
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9119 | 1.0 | 258 | 0.8750 | 0.7241 |
| 0.8307 | 2.0 | 516 | 0.9229 | 0.7586 |
4cee69b456b3eac08ea243335c2df548
apache-2.0
['generated_from_trainer']
false
classification-poems This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the Spanish Poems dataset. It achieves the following results on the evaluation set: - Loss: 0.8228 - Accuracy: 0.7241
871c91fed0c292bff41db9aef9b9fd6c
apache-2.0
['generated_from_trainer']
false
Training and evaluation data The original dataset has the columns author, content, title, year, and type of poem. Each example is labeled with the type of poem it belongs to, so the fine-tuned model predicts the poem type from the content it is given.
2764d9c028f7f21a28e18174c3b5dda1
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9344 | 1.0 | 258 | 0.7505 | 0.7586 |
| 0.9239 | 2.0 | 516 | 0.8228 | 0.7241 |
b9b34e32d1c364279581da078905661f
apache-2.0
['transformers', 'text-classification']
false
Unam_tesis_beto_finnetuning: UNAM's thesis classification with BETO This model was created by fine-tuning the Spanish pre-trained model [BETO](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased), using the PyTorch framework, on a set of theses from the National Autonomous University of Mexico [(UNAM)](https://tesiunam.dgb.unam.mx/F?func=find-b-0&local_base=TES01). The model classifies a text into one of five possible careers at UNAM: Psicología, Derecho, Química Farmacéutico Biológica, Actuaría, or Economía.
c45546643b43fd3ec380c465d8492747
apache-2.0
['transformers', 'text-classification']
false
Training Dataset

1000 documents (Thesis introduction, Author's first name, Author's last name, Thesis title, Year, Career)

| Careers | Size |
|--------------|----------------------|
| Actuaría | 200 |
| Derecho | 200 |
| Economía | 200 |
| Psicología | 200 |
| Química Farmacéutico Biológica | 200 |
34e3780995cbb86ea6f7f934debd2bfa
apache-2.0
['transformers', 'text-classification']
false
Example of use

For further details on how to use unam_tesis_BETO_finnetuning you can visit the Hugging Face Transformers library, starting with the Quickstart section. The UNAM tesis model can be accessed simply as 'hackathon-pln-es/unam_tesis_BETO_finnetuning' by using the Transformers library. An example of how to download and use the model can be found next.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline

tokenizer = AutoTokenizer.from_pretrained('hiiamsid/BETO_es_binary_classification', use_fast=False)
model = AutoModelForSequenceClassification.from_pretrained(
    'hackathon-pln-es/unam_tesis_BETO_finnetuning',
    num_labels=5,
    output_attentions=False,
    output_hidden_states=False)

pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer, return_all_scores=True)

classificationResult = pipe("Análisis de las condiciones del aprendizaje desde casa en los alumnos de preescolar y primaria del municipio de Nicolás Romero")
```
5fb81b4487fce397cc94b5c552bba353
apache-2.0
['transformers', 'text-classification']
false
Citation

To cite this resource ([UNAM's Theses with BETO fine-tuning classify](https://huggingface.co/hackathon-pln-es/unam_tesis_BETO_finnetuning)) in a publication please use the following:

```
@inproceedings{SpanishNLPHackaton2022,
  title={UNAM's Theses with BETO fine-tuning classify},
  author={López López, Isaac Isaías and Clavel Quintero, Yisel and López Ramos, Dionis and López López, Ximena Yeraldin},
  booktitle={Somos NLP Hackaton 2022},
  year={2022}
}
```
acb40ff06fdefac36ba3fdac7755a4db
apache-2.0
['transformers', 'text-classification']
false
Team members - Isaac Isaías López López ([MajorIsaiah](https://huggingface.co/MajorIsaiah)) - Dionis López Ramos ([inoid](https://huggingface.co/inoid)) - Yisel Clavel Quintero ([clavel](https://huggingface.co/clavel)) - Ximena Yeraldin López López ([Ximyer](https://huggingface.co/Ximyer))
856766390a0337726eec5470e6f04dd5
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 28
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
35d2c1d5b91725f85824fdb30e01a101
mit
['generated_from_trainer']
false
eng_xlmr This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9686
9281b96f0364bde7bd2956aeb654dee1
mit
[]
false
Usage

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, AutoConfig
import torch

config = AutoConfig.from_pretrained("bhavitvyamalik/fake-news_xtremedistil-l6-h256-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bhavitvyamalik/fake-news_xtremedistil-l6-h256-uncased", config=config)
tokenizer = AutoTokenizer.from_pretrained("microsoft/xtremedistil-l6-h256-uncased", use_fast=True)

text = "According to reports by Fox News, Biden is the President of the USA"
encode = tokenizer(text, max_length=512, truncation=True, padding="max_length", return_tensors="pt")
output = model(**encode)
print(torch.argmax(output["logits"]))
```
56bdb68c43e8b7f3f87b91412c4ff0ff
apache-2.0
['vision', 'image-classification']
false
Vision Transformer (base-sized model) - Hybrid The hybrid Vision Transformer (ViT) model was proposed in [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. It's the first paper that successfully trains a Transformer encoder on ImageNet, attaining very good results compared to familiar convolutional architectures. ViT hybrid is a slight variant of the [plain Vision Transformer](vit), by leveraging a convolutional backbone (specifically, [BiT](bit)) whose features are used as initial "tokens" for the Transformer. Disclaimer: The team releasing ViT did not write a model card for this model so this model card has been written by the Hugging Face team.
ef7cb15a3a7d97218cb852bfba7bdda8
apache-2.0
['vision', 'image-classification']
false
Model description *While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.*
fa6d7aa5d654cd58597496a37bd296b5
apache-2.0
['vision', 'image-classification']
false
How to use

Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
from transformers import ViTHybridImageProcessor, ViTHybridForImageClassification
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = ViTHybridImageProcessor.from_pretrained('google/vit-hybrid-base-bit-384')
model = ViTHybridForImageClassification.from_pretrained('google/vit-hybrid-base-bit-384')

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
```
4d679daf863d56b6f4940f00640de316
apache-2.0
['vision', 'image-classification']
false
```python
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
# >>> Predicted class: tabby, tabby cat
```

For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/vit.html).
f1221861d980534f4a9acba14d55bf31
apache-2.0
['vision', 'image-classification']
false
Training data The ViT-Hybrid model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
39ab811a4d74f63572e62bc3c63358cd
mit
[]
false
muxoyara on Stable Diffusion This is the `<muxoyara>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<muxoyara> 0](https://huggingface.co/sd-concepts-library/muxoyara/resolve/main/concept_images/0.jpeg) ![<muxoyara> 1](https://huggingface.co/sd-concepts-library/muxoyara/resolve/main/concept_images/9.jpeg) ![<muxoyara> 2](https://huggingface.co/sd-concepts-library/muxoyara/resolve/main/concept_images/3.jpeg) ![<muxoyara> 3](https://huggingface.co/sd-concepts-library/muxoyara/resolve/main/concept_images/4.jpeg) ![<muxoyara> 4](https://huggingface.co/sd-concepts-library/muxoyara/resolve/main/concept_images/14.jpeg) ![<muxoyara> 5](https://huggingface.co/sd-concepts-library/muxoyara/resolve/main/concept_images/6.jpeg) ![<muxoyara> 6](https://huggingface.co/sd-concepts-library/muxoyara/resolve/main/concept_images/10.jpeg) ![<muxoyara> 7](https://huggingface.co/sd-concepts-library/muxoyara/resolve/main/concept_images/13.jpeg) ![<muxoyara> 8](https://huggingface.co/sd-concepts-library/muxoyara/resolve/main/concept_images/1.jpeg) ![<muxoyara> 9](https://huggingface.co/sd-concepts-library/muxoyara/resolve/main/concept_images/17.jpeg) ![<muxoyara> 10](https://huggingface.co/sd-concepts-library/muxoyara/resolve/main/concept_images/16.jpeg) ![<muxoyara> 11](https://huggingface.co/sd-concepts-library/muxoyara/resolve/main/concept_images/5.jpeg) ![<muxoyara> 12](https://huggingface.co/sd-concepts-library/muxoyara/resolve/main/concept_images/12.jpeg) ![<muxoyara> 13](https://huggingface.co/sd-concepts-library/muxoyara/resolve/main/concept_images/8.jpeg) ![<muxoyara> 14](https://huggingface.co/sd-concepts-library/muxoyara/resolve/main/concept_images/2.jpeg) ![<muxoyara> 15](https://huggingface.co/sd-concepts-library/muxoyara/resolve/main/concept_images/11.jpeg) ![<muxoyara> 16](https://huggingface.co/sd-concepts-library/muxoyara/resolve/main/concept_images/15.jpeg) ![<muxoyara> 17](https://huggingface.co/sd-concepts-library/muxoyara/resolve/main/concept_images/19.jpeg) ![<muxoyara> 18](https://huggingface.co/sd-concepts-library/muxoyara/resolve/main/concept_images/7.jpeg) ![<muxoyara> 19](https://huggingface.co/sd-concepts-library/muxoyara/resolve/main/concept_images/18.jpeg)
0955300392d37391dfb76264339e9db2
mit
['generated_from_trainer']
false
language-detection-RoBert-base This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1398 - Accuracy: 0.9865
eaeb324f710ae0cf3cfa3dc95ac9436d
apache-2.0
['speech', 'audio', 'automatic-speech-recognition', 'hf-asr-leaderboard']
false
Wav2Vec2-Conformer-Large-100h with Rotary Position Embeddings Wav2Vec2 Conformer with rotary position embeddings, pretrained on 960 hours of Librispeech and fine-tuned on **100 hours of Librispeech** on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. **Paper**: [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) **Authors**: Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino The results of Wav2Vec2-Conformer can be found in Table 3 and Table 4 of the [official paper](https://arxiv.org/abs/2010.05171). The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec
6c5192ee27894b780d3ea3b4fd6caa4b
apache-2.0
['speech', 'audio', 'automatic-speech-recognition', 'hf-asr-leaderboard']
false
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ConformerForCTC

# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-conformer-rope-large-100h-ft")
model = Wav2Vec2ConformerForCTC.from_pretrained("facebook/wav2vec2-conformer-rope-large-100h-ft")
```
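Continuing from the loading snippet above, a minimal transcription sketch might look as follows; the audio clip is taken from the publicly available `hf-internal-testing/librispeech_asr_dummy` dataset purely for illustration, and `processor` and `model` are the objects loaded above.

```python
import torch
from datasets import load_dataset

# one 16 kHz sample for illustration
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# greedy CTC decoding
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```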
a6c4aa4189b576f66b4fb28a83bc5f1c
apache-2.0
['generated_from_keras_callback']
false
merve/distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2037 - Validation Loss: 0.0703 - Epoch: 0
402c91d1c89b333aa72b9b75ab89951f
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xls-r-300m-kor-11385 This model is a fine-tuned version of [teddy322/wav2vec2-large-xls-r-300m-kor-11385](https://huggingface.co/teddy322/wav2vec2-large-xls-r-300m-kor-11385) on the zeroth_korean_asr dataset. It achieves the following results on the evaluation set: - Loss: 0.4033 - Wer: 0.2805
c8aedec6af42e51a533ef5ae68f25bb7
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
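One way these values could be passed to the Transformers `TrainingArguments` API is sketched below; the `output_dir` is a placeholder and not taken from the card.

```python
from transformers import TrainingArguments

# rough mapping of the listed hyperparameters; output_dir is a placeholder
training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-kor-11385",
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # 16 x 4 = 64 effective train batch size
    warmup_steps=500,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    fp16=True,  # Native AMP mixed precision
    seed=42,
)
```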
d3c62c4dd962954fe18f78e0732574e4
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0502 | 1.97 | 400 | 0.4049 | 0.3283 |
| 0.0631 | 3.94 | 800 | 0.4618 | 0.3260 |
| 0.0508 | 5.91 | 1200 | 0.4391 | 0.3170 |
| 0.0325 | 7.88 | 1600 | 0.4138 | 0.2935 |
| 0.0244 | 9.85 | 2000 | 0.4033 | 0.2805 |
30ca1bbce21554a1fc6c90389817b499