Dataset schema:
- license — string, 2 to 30 chars
- tags — string, 2 to 513 chars
- is_nc — bool, 1 class
- readme_section — string, 201 to 597k chars
- hash — string, 32 chars
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 24
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
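The linear scheduler with 500 warmup steps over 2000 total training steps listed above implies the usual warmup-then-decay learning-rate shape; a minimal sketch in plain Python (`lr_at` is a hypothetical helper for illustration, not part of the training code):

```python
def lr_at(step, base_lr=1e-05, warmup_steps=500, training_steps=2000):
    """Linear warmup from 0 to base_lr, then linear decay back to 0 —
    the shape implied by lr_scheduler_type: linear with 500 warmup steps."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    remaining = training_steps - step
    return base_lr * max(0.0, remaining / (training_steps - warmup_steps))
```

The rate peaks at 1e-05 exactly at step 500 and reaches zero at step 2000.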
f1e19f1cbffd80f684b5ff5ec45169e0
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.293         | 0.64  | 500  | 0.3798          | 99.9451 |
| 0.1701        | 1.28  | 1000 | 0.3376          | 100.0   |
| 0.1392        | 1.92  | 1500 | 0.3280          | 100.0   |
| 0.0628        | 2.56  | 2000 | 0.3370          | 100.0   |
bd3a7e98d9c598b1c1146a4575fd7670
mit
['generated_from_trainer']
false
roberta-offensive-lm-tapt-finetuned

This model is a fine-tuned version of [k4black/roberta-offensive-lm-tapt](https://huggingface.co/k4black/roberta-offensive-lm-tapt) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.4692
- F1: 0.7744
72defc34b7713e670b8eaf4b392dcf99
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 12
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
f54fa4319aa3a28eee60d9edf24f1aaf
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6149        | 0.1   | 100  | 0.6323          | 0.3932 |
| 0.5713        | 0.2   | 200  | 0.6223          | 0.5491 |
| 0.5529        | 0.29  | 300  | 0.5739          | 0.6120 |
| 0.5174        | 0.39  | 400  | 0.4812          | 0.7287 |
| 0.5044        | 0.49  | 500  | 0.4667          | 0.7595 |
| 0.5022        | 0.59  | 600  | 0.4540          | 0.7648 |
| 0.4855        | 0.69  | 700  | 0.4523          | 0.7933 |
| 0.465         | 0.78  | 800  | 0.4479          | 0.7727 |
| 0.4591        | 0.88  | 900  | 0.4478          | 0.7914 |
| 0.4702        | 0.98  | 1000 | 0.6035          | 0.7397 |
| 0.4448        | 1.08  | 1100 | 0.4996          | 0.7535 |
| 0.4476        | 1.18  | 1200 | 0.4692          | 0.7744 |
14e7958b186d4e79b89c7e11ee29e5ac
apache-2.0
['generated_from_keras_callback']
false
kimhieu/distilbert-base-uncased-finetuned-cola

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.1828
- Validation Loss: 0.5520
- Train Matthews Correlation: 0.5286
- Epoch: 2
71b953726f2d36d849395b16416261e6
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5184     | 0.4675          | 0.4484                     | 0     |
| 0.3164     | 0.4646          | 0.4963                     | 1     |
| 0.1828     | 0.5520          | 0.5286                     | 2     |
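The Matthews correlation tracked above (the standard CoLA metric) can be computed directly from binary confusion counts; a minimal sketch, with illustrative counts (in practice `sklearn.metrics.matthews_corrcoef` would be used):

```python
import math

def matthews_corrcoef(tp, tn, fp, fn):
    """MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)).
    Returns 0.0 when any marginal count is zero (the usual convention)."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

MCC ranges from -1 (total disagreement) through 0 (chance level) to +1 (perfect prediction), which is why it is preferred over accuracy on the class-imbalanced CoLA task.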
55e93c1d7ad2e6f25f6b429b2eae9f94
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.2085
- Accuracy: 0.9275
- F1: 0.9275
2a7df0a3d72842f4e3a62c05b74b5409
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8208        | 1.0   | 250  | 0.2989          | 0.9105   | 0.9085 |
| 0.2418        | 2.0   | 500  | 0.2085          | 0.9275   | 0.9275 |
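The F1 column above is the harmonic mean of precision and recall; a minimal per-class sketch (the card's score is the weighted multiclass variant, and `f1_binary` is an illustrative name, not from the training code):

```python
def f1_binary(y_true, y_pred, positive=1):
    """F1 = 2PR/(P+R) for one positive class, from parallel label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

The weighted variant computes this per emotion label and averages, weighting each label by its support, which is why accuracy and F1 can coincide on balanced data.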
e0b7e5dc2501e79656998cdfe80d76ee
apache-2.0
['automatic-speech-recognition', 'nl']
false
exp_w2v2t_nl_wavlm_s213 Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (nl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
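Since the model expects 16 kHz input, audio recorded at another rate must be resampled first. A minimal linear-interpolation sketch in plain Python — in practice one would use a proper resampler such as `torchaudio.transforms.Resample` or `librosa.resample`; this naive version skips anti-alias filtering and is illustrative only:

```python
def resample(samples, src_rate, dst_rate=16000):
    """Naive linear-interpolation resampler (no anti-alias filter;
    a sketch of the idea, not production audio code)."""
    if src_rate == dst_rate:
        return list(samples)
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate       # fractional source index
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out
```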
bdbee591106af97194bde83bc4e77ee4
apache-2.0
[]
false
MobileNet V2 model from Torchvision fine-tuned on the FOOD101 dataset. The checkpoint was trained for 30 epochs using https://github.com/AlexKoff88/mobilenetv2_food101. Top-1 accuracy is 76.3%, but one can do better. The main intent is to use it in samples and demos for model optimization. The advantages are:
- FOOD101 can be downloaded automatically, without registration or SMS.
- It is representative enough to reflect real-world scenarios.
- MobileNet V2 is an easy-to-train, lightweight model that is also representative and used in many public benchmarks.

Here is the code to load the checkpoint in PyTorch:

```python
import sys
import os

import torch
import torch.nn as nn
import torchvision.models as models

FOOD101_CLASSES = 101

def fix_names(state_dict):
    # Strip the 'module.' prefix added by nn.DataParallel.
    state_dict = {key.replace('module.', ''): value
                  for (key, value) in state_dict.items()}
    return state_dict

model = models.mobilenet_v2(num_classes=FOOD101_CLASSES)

if len(sys.argv) > 1:
    checkpoint_path = sys.argv[1]
    if os.path.isfile(checkpoint_path):
        print("=> loading checkpoint '{}'".format(checkpoint_path))
        checkpoint = torch.load(checkpoint_path)
        weights = fix_names(checkpoint['state_dict'])
        model.load_state_dict(weights)
        print("=> loaded checkpoint '{}' (epoch {})"
              .format(checkpoint_path, checkpoint['epoch']))
```
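The `fix_names` helper in the snippet above strips the `module.` prefix that `torch.nn.DataParallel` prepends to every parameter name when a model is trained on multiple GPUs; its behavior can be checked without a checkpoint (the keys below are illustrative):

```python
def fix_names(state_dict):
    # Drop the 'module.' prefix added by nn.DataParallel wrapping,
    # so the keys match a plain (unwrapped) model's state_dict.
    return {key.replace('module.', ''): value
            for key, value in state_dict.items()}

renamed = fix_names({'module.features.0.weight': 1, 'classifier.1.bias': 2})
```

Keys without the prefix pass through unchanged, so the helper is safe to apply to checkpoints from both wrapped and unwrapped models.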
3daed11b4d391e84ffdeb1d864c94c9e
apache-2.0
[]
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 24
- eval_batch_size: 4
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
81f855f06559853dde58027a2da767a6
apache-2.0
['automatic-speech-recognition', 'nl']
false
exp_w2v2t_nl_unispeech-ml_s23 Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (nl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
2542468aaddeeae2343aa5cc00f91e6d
apache-2.0
['generated_from_trainer']
false
distil-Is-upper

This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.6095
- Rmse: 0.7807
- Mse: 0.6095
- Mae: 0.5993
13a64d8aada2ef5bc7b8207a199eca4d
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rmse   | Mse    | Mae    |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.7129        | 1.0   | 492  | 0.7088          | 0.8419 | 0.7088 | 0.5968 |
| 0.5953        | 2.0   | 984  | 0.6426          | 0.8016 | 0.6426 | 0.5838 |
| 0.5865        | 3.0   | 1476 | 0.6083          | 0.7800 | 0.6083 | 0.6023 |
| 0.5888        | 4.0   | 1968 | 0.6209          | 0.7880 | 0.6209 | 0.5880 |
| 0.5859        | 5.0   | 2460 | 0.6095          | 0.7807 | 0.6095 | 0.5993 |
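The three regression metrics reported above are all derived from the same residuals — in particular RMSE is just the square root of MSE (note 0.7807² ≈ 0.6095 in the final row). A minimal sketch (the function name is illustrative):

```python
import math

def regression_metrics(y_true, y_pred):
    """Compute MSE, RMSE, and MAE from parallel lists of targets/predictions."""
    residuals = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(r * r for r in residuals) / len(residuals)
    mae = sum(abs(r) for r in residuals) / len(residuals)
    return {'mse': mse, 'rmse': math.sqrt(mse), 'mae': mae}
```

Because MSE squares the residuals, it matches the MSE-trained validation loss column exactly, while MAE weighs all errors linearly and so is less sensitive to outliers.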
b2fe58d29870c7d22b4700f4757c2f26
cc-by-4.0
['espnet', 'audio', 'diarization']
false
Demo: How to use in ESPnet2

```bash
cd espnet
git checkout 4dfa2be4331d3d68f124aa5fd81f63217a7278a4
pip install -e .
cd egs2/mini_librispeech/diar1
./run.sh --skip_data_prep false --skip_train true --download_model YushiUeda/test
```

<!-- Generated by scripts/utils/show_diar_result.sh -->
565f8b9c5706999b30324936906d896e
cc-by-4.0
['espnet', 'audio', 'diarization']
false
Environments

- date: `Wed Aug 25 23:29:07 EDT 2021`
- python version: `3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]`
- espnet version: `espnet 0.10.2a1`
- pytorch version: `pytorch 1.9.0+cu102`
- Git hash: `19bcd34f9395e01e54a97c4db5ecbcedb429dd92`
- Commit date: `Tue Aug 24 19:50:44 2021 -0400`
350fd5428a3caa0ecc73a380dea7c15e
cc-by-4.0
['espnet', 'audio', 'diarization']
false
DER

`dev_clean_2_ns2_beta2_500`

|threshold_median_collar|DER|
|---|---|
|result_th0.3_med1_collar0.0|32.42|
|result_th0.3_med11_collar0.0|32.03|
|result_th0.4_med1_collar0.0|30.96|
|result_th0.4_med11_collar0.0|30.26|
|result_th0.5_med1_collar0.0|30.35|
|result_th0.5_med11_collar0.0|29.37|
|result_th0.6_med1_collar0.0|30.77|
|result_th0.6_med11_collar0.0|29.52|
|result_th0.7_med1_collar0.0|32.60|
|result_th0.7_med11_collar0.0|31.03|
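The row labels above encode the post-processing: frame-wise speaker-activity probabilities are binarized at a threshold (0.3–0.7) and then median-filtered over 1 or 11 frames. In every threshold pair the med11 row has lower DER, since the filter removes isolated single-frame flips. A minimal sketch of that post-processing (function and variable names are illustrative, not ESPnet's API):

```python
from statistics import median

def binarize_and_smooth(probs, threshold=0.5, window=11):
    """Threshold frame-wise activity probabilities, then apply a sliding
    median filter of odd length `window` (window=1 is a no-op)."""
    binary = [1 if p > threshold else 0 for p in probs]
    half = window // 2
    return [int(median(binary[max(0, i - half):i + half + 1]))
            for i in range(len(binary))]
```

A lone high-probability frame survives thresholding but is voted away by the 11-frame median, which is exactly the effect the med11 rows measure.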
0873c4d388dc51e1e58c63cf7473d41c
cc-by-4.0
['espnet', 'audio', 'diarization']
false
DIAR config

<details><summary>expand</summary>

```
config: conf/train_diar.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: chunk
output_dir: exp/diar_train_diar_raw_max_epoch20
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 20
patience: 3
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
  - acc
  - max
keep_nbest_models: 3
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 16
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/diar_stats_8k/train/speech_shape
- exp/diar_stats_8k/train/spk_labels_shape
valid_shape_file:
- exp/diar_stats_8k/valid/speech_shape
- exp/diar_stats_8k/valid/spk_labels_shape
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 200000
chunk_shift_ratio: 0.5
num_cache_chunks: 64
train_data_path_and_name_and_type:
- - dump/raw/simu/data/train_clean_5_ns2_beta2_500/wav.scp
  - speech
  - sound
- - dump/raw/simu/data/train_clean_5_ns2_beta2_500/espnet_rttm
  - spk_labels
  - rttm
valid_data_path_and_name_and_type:
- - dump/raw/simu/data/dev_clean_2_ns2_beta2_500/wav.scp
  - speech
  - sound
- - dump/raw/simu/data/dev_clean_2_ns2_beta2_500/espnet_rttm
  - spk_labels
  - rttm
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
  lr: 0.01
scheduler: noamlr
scheduler_conf:
  warmup_steps: 1000
num_spk: 2
init: xavier_uniform
input_size: null
model_conf:
  loss_type: pit
use_preprocessor: true
frontend: default
frontend_conf:
  fs: 8k
  hop_length: 128
normalize: global_mvn
normalize_conf:
  stats_file: exp/diar_stats_8k/train/feats_stats.npz
encoder: transformer
encoder_conf:
  input_layer: linear
  num_blocks: 2
  linear_units: 512
  dropout_rate: 0.1
  output_size: 256
  attention_heads: 4
  attention_dropout_rate: 0.0
decoder: linear
decoder_conf: {}
label_aggregator: label_aggregator
label_aggregator_conf: {}
required:
- output_dir
version: 0.10.2a1
distributed: false
```

</details>
a8facf54aae769f029a3517c0f60ec64
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.2201
- Accuracy: 0.9275
- F1: 0.9275
98f621bcc1846fc6a0da1146eec00127
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8326        | 1.0   | 250  | 0.3185          | 0.902    | 0.8983 |
| 0.2499        | 2.0   | 500  | 0.2201          | 0.9275   | 0.9275 |
7e5d48009698b0435c4f6737e9612008
creativeml-openrail-m
['text-to-image']
false
kemar Dreambooth model trained by zigg-ai with the v1-5 base model.

You can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!

Sample pictures of: sdcid (use that in your prompt)

![sdcid 0](https://huggingface.co/zigg-ai/kemar/resolve/main/concept_images/sdcid_%281%29.jpg)
![sdcid 1](https://huggingface.co/zigg-ai/kemar/resolve/main/concept_images/sdcid_%282%29.jpg)
![sdcid 2](https://huggingface.co/zigg-ai/kemar/resolve/main/concept_images/sdcid_%283%29.jpg)
![sdcid 3](https://huggingface.co/zigg-ai/kemar/resolve/main/concept_images/sdcid_%284%29.jpg)
![sdcid 4](https://huggingface.co/zigg-ai/kemar/resolve/main/concept_images/sdcid_%285%29.jpg)
![sdcid 5](https://huggingface.co/zigg-ai/kemar/resolve/main/concept_images/sdcid_%286%29.jpg)
![sdcid 6](https://huggingface.co/zigg-ai/kemar/resolve/main/concept_images/sdcid_%287%29.jpg)
![sdcid 7](https://huggingface.co/zigg-ai/kemar/resolve/main/concept_images/sdcid_%288%29.jpg)
![sdcid 8](https://huggingface.co/zigg-ai/kemar/resolve/main/concept_images/sdcid_%289%29.jpg)
![sdcid 9](https://huggingface.co/zigg-ai/kemar/resolve/main/concept_images/sdcid_%2810%29.jpg)
![sdcid 10](https://huggingface.co/zigg-ai/kemar/resolve/main/concept_images/sdcid_%2811%29.jpg)
![sdcid 11](https://huggingface.co/zigg-ai/kemar/resolve/main/concept_images/sdcid_%2812%29.jpg)
![sdcid 12](https://huggingface.co/zigg-ai/kemar/resolve/main/concept_images/sdcid_%2813%29.jpg)
![sdcid 13](https://huggingface.co/zigg-ai/kemar/resolve/main/concept_images/sdcid_%2814%29.jpg)
![sdcid 14](https://huggingface.co/zigg-ai/kemar/resolve/main/concept_images/sdcid_%2815%29.jpg)
![sdcid 15](https://huggingface.co/zigg-ai/kemar/resolve/main/concept_images/sdcid_%2816%29.jpg)
![sdcid 16](https://huggingface.co/zigg-ai/kemar/resolve/main/concept_images/sdcid_%2817%29.jpg)
![sdcid 17](https://huggingface.co/zigg-ai/kemar/resolve/main/concept_images/sdcid_%2818%29.jpg)
e1c6d64ac10a85254846a191ff9d90d1
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
`pyf98/swbd_e_branchformer`

This model was trained by Yifan Peng using the swbd recipe in [espnet](https://github.com/espnet/espnet/).

References:
- [E-Branchformer: Branchformer with Enhanced Merging for Speech Recognition (SLT 2022)](https://arxiv.org/abs/2210.00077)
- [Branchformer: Parallel MLP-Attention Architectures to Capture Local and Global Context for Speech Recognition and Understanding (ICML 2022)](https://proceedings.mlr.press/v162/peng22a.html)
6c560c261b47c8c980f0539c989e3de6
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
Demo: How to use in ESPnet2

Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html) if you haven't done that already.

```bash
cd espnet
git checkout ee573bc6f5de4309c1e29137294a7305d9175e65
pip install -e .
cd egs2/swbd/asr1
./run.sh --skip_data_prep false --skip_train true --download_model pyf98/swbd_e_branchformer
```

<!-- Generated by scripts/utils/show_asr_result.sh -->
2985cdddb6e6f87e28d6976cf5364387
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
Environments

- date: `Tue Dec 27 05:05:40 CST 2022`
- python version: `3.9.15 (main, Nov 24 2022, 14:31:59) [GCC 11.2.0]`
- espnet version: `espnet 202211`
- pytorch version: `pytorch 1.12.1`
- Git hash: `ef3ce328551c12c03284defc757f42df47c46170`
- Commit date: `Mon Dec 26 20:34:28 2022 -0500`
a668ad4527630e50170984ce9f4f724e
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
WER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/eval2000/hyp.callhm.ctm.filt.sys|2628|21594|88.7|8.4|2.9|2.1|13.4|46.2|
|decode_asr_asr_model_valid.acc.ave/eval2000/hyp.ctm.filt.sys|4459|42989|91.2|6.1|2.8|1.5|10.4|41.5|
|decode_asr_asr_model_valid.acc.ave/eval2000/hyp.swbd.ctm.filt.sys|1831|21395|93.7|3.7|2.6|1.0|7.3|34.8|
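The Err column above is the standard word error rate: (substitutions + deletions + insertions) divided by the number of reference words, computed via word-level edit distance. A minimal sketch (the function name is illustrative; ESPnet's scoring uses sclite over the ctm files listed):

```python
def wer(reference, hypothesis):
    """Word error rate = word-level Levenshtein distance / #reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # prev[j] = edit distance between the ref prefix so far and hyp[:j].
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (r != h)))   # substitution (0 if match)
        prev = cur
    return prev[-1] / len(ref)
```

Note WER can exceed 100% when the hypothesis contains many insertions, since the denominator counts only reference words.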
f4b30a8f59891f9e4fa7e1ded0df2c61
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_e_branchformer_e12_size256_mlp1024_linear1024_macaron.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_e_branchformer_e12_size256_mlp1024_linear1024_macaron_raw_en_bpe2000_sp ngpu: 1 seed: 2022 num_workers: 4 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 2 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 37983 dist_launcher: null multiprocessing_distributed: true unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 100 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: true log_interval: null use_matplotlib: true use_tensorboard: true create_graph_in_tensorboard: false use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 40000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_en_bpe2000_sp/train/speech_shape - exp/asr_stats_raw_en_bpe2000_sp/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_en_bpe2000_sp/valid/speech_shape - exp/asr_stats_raw_en_bpe2000_sp/valid/text_shape.bpe batch_type: numel valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - 
dump/raw/train_nodup_sp/wav.scp - speech - kaldi_ark - - dump/raw/train_nodup_sp/text - text - text valid_data_path_and_name_and_type: - - dump/raw/train_dev/wav.scp - speech - kaldi_ark - - dump/raw/train_dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.002 weight_decay: 1.0e-06 scheduler: warmuplr scheduler_conf: warmup_steps: 50000 token_list: - <blank> - <unk> - ▁i - '''' - s - ▁and - ▁the - ▁you - ▁that - ▁a - ▁it - ▁uh - ▁to - t - ▁of - ▁know - ▁they - '-' - ▁in - ▁we - ']' - ▁[ - ▁yeah - ▁have - ▁but - ▁so - ▁was - ▁like - re - ▁um - ▁just - ▁well - ▁do - m - ▁for - ing - ▁think - ▁don - d - ▁is - ▁there - ▁or - ▁on - ▁be - noise - ▁what - ▁oh - laughter - ▁my - ed - ve - ▁not - ▁really - ▁with - ▁he - n - ▁one - ▁if - ▁are - ▁all - ▁get - ▁right - ▁about - ▁can - ▁because - ▁out - ▁had - ▁up - ▁them - ▁lot - ▁at - ▁this - ▁would - ▁when - ▁go - ▁some - er - ▁people - ▁no - ▁mean - ▁kind - ▁then - a - v - ▁good - e - ll - ▁now - ▁got - ▁me - p - ▁time - o - ▁she - ▁as - ▁going - y - ▁see - ▁more - ▁were - ly - ▁been - ▁from - ▁too - ▁an - ▁things - ▁how - ▁something - ▁your - ▁where - ▁much - ▁guess - c - r - ▁little - ▁here - ▁s - ▁thing - ▁our - u - g - ocalized - ▁very - ▁did - b - ▁their - ▁other - ▁work - le - ▁could - ▁okay - i - ▁even - al - ▁c - ▁two - huh - ▁way - ▁say - or - in - ▁any - ▁has - ▁years - ▁want - ▁t - f - ▁back - ▁down - ▁those - ▁pretty - ▁probably - ▁re - ▁who - ▁home - ▁didn - ▁real - ▁year - ▁take - ▁over - ▁yes - ▁than - ▁sure - ▁into - ar - hum - an - l - ▁school - ▁put - ▁stuff - k - ▁make - ▁kids - ▁her - ▁said - ▁by - ▁never - ▁which - ▁off - w - ▁went - ▁b - ▁car - ▁only - ion - ▁big - ▁always - ▁around - ▁money - ▁these - ▁day - ▁anything - ▁three - ▁nice - ▁doing - ri - ▁need - ▁come - ▁f - ▁actually - ▁will - ▁maybe - ▁care - ▁him - ▁de - ent - ▁still - ▁v - ▁should - ▁new - ▁used - ch - ▁five - ▁ - th - ▁long - ▁p - ▁sort 
- ▁e - ▁his - ter - 'on' - ▁most - ▁house - ▁bit - ▁old - ▁every - ▁different - ck - ▁last - ▁let - ▁use - il - ▁us - ▁many - ▁look - es - ▁course - ▁getting - ur - ▁true - ▁everything - ic - ▁feel - ▁first - ▁part - ▁does - ▁pay - ▁great - it - ▁hard - ▁same - ▁thought - en - ▁problem - ▁also - ▁keep - at - ▁d - ers - ▁through - ▁o - ▁doesn - ies - ▁children - ▁four - ▁find - ▁done - ▁th - ment - ▁before - ▁far - ▁though - ▁area - ate - ▁haven - ▁w - ▁ever - ▁being - li - ▁family - ▁bad - ▁seems - se - ▁live - ation - ▁whole - ▁fact - ▁own - ▁n - ▁why - ▁huh - ▁play - ▁talking - ▁tell - ▁better - ▁interesting - ▁another - h - ▁place - ▁try - ▁trying - ro - ▁ten - ▁twenty - ▁else - ol - ▁watch - ▁read - te - ▁type - ▁quite - ▁job - ir - ▁hundred - ▁high - ▁call - ▁after - ow - ▁ago - ▁give - ra - ▁couple - ▁enough - us - ▁whatever - ke - ▁either - ▁start - ▁m - ▁having - ▁texas - ▁somebody - el - ▁husband - ▁sometimes - ▁dollars - ▁usually - ▁show - ▁help - ce - ▁while - ▁few - ▁away - ▁y - ▁ha - ▁se - ▁college - ▁system - able - ▁might - ▁ma - ▁heard - ▁r - ne - is - ▁person - ▁once - ▁made - ▁point - ▁six - ▁fun - ▁g - ▁week - ▁buy - ▁seen - ▁state - ▁anyway - ▁again - ▁pa - ▁love - ▁gonna - ▁dallas - ▁started - ▁pro - ▁exactly - ▁country - ▁life - ▁enjoy - ▁everybody - ive - id - ▁talk - ▁night - ▁able - est - ▁lo - ▁may - ▁stay - ▁remember - ▁news - ▁mo - um - ▁came - ▁co - ▁hear - et - ▁end - ▁least - tion - ▁working - ▁h - lo - ▁un - ▁sa - un - ▁po - ul - ▁boy - ▁since - age - ▁change - ▁di - ▁idea - ▁both - ▁agree - ▁st - ▁program - ▁pre - ▁dis - ▁ra - ist - ▁almost - ▁run - ▁someone - ▁con - ▁fl - ▁dog - ry - ▁reason - ▁ho - ▁took - ▁believe - co - ▁bye - ▁company - ▁eight - ▁times - ▁half - ▁wife - ▁la - ▁ba - ▁isn - ▁paper - ▁deal - ▁goes - ▁l - ▁sp - ▁hand - ant - ▁guy - ▁called - ▁next - ▁close - ▁month - ▁thirty - ▁wanted - ▁thousand - ▁yet - ▁understand - ▁cost - ▁pick - ▁drive - am - z - ▁mi - ut - ▁looking - ▁ro - ▁child - ▁government - ▁crime - 
▁tax - ▁spend - ▁women - ▁parents - ▁days - ▁especially - ▁cut - ac - ▁li - me - ure - ▁saying - ▁name - ▁wow - ▁eat - ▁gone - ▁whether - ▁happen - ▁k - ▁less - ver - ▁bu - ▁ga - ated - ▁small - ity - ▁saw - ge - ▁sounds - ▁supposed - ▁number - ▁world - ▁mother - ▁music - ▁hi - ▁set - ▁such - ▁until - ▁movie - ▁credit - ▁bo - ▁bought - ▁turn - ▁am - ▁city - ▁myself - ▁u - ▁walk - ▁food - ting - ▁seem - ▁problems - ▁j - ▁ex - ▁computer - ▁makes - end - ▁le - ▁man - ▁found - ▁percent - ▁together - ▁sit - la - ▁hum - ▁coming - im - ▁basically - ▁young - ▁best - ▁listen - ance - ▁water - ▁check - ▁son - ▁business - ▁seven - ▁summer - ▁each - ▁situation - ▁sh - ▁war - ▁worked - ward - ▁side - ru - x - ▁air - ▁ti - ie - ▁definitely - ▁certain - ▁game - ▁won - ▁wh - ▁wonderful - ▁wonder - ▁matter - if - ▁public - de - ▁lived - op - ma - he - ▁comp - ▁sc - ▁fifty - ▁certainly - ▁cook - ▁fa - ▁funny - ▁cat - ▁room - na - ▁nothing - ty - ▁class - ▁health - ▁age - ▁large - ▁ju - ▁gotten - ▁mine - ▁town - ▁months - ▁ch - ▁test - ▁per - ▁places - ▁comes - ▁anymore - ▁yep - ▁under - ▁plan - ▁vote - ▁important - ▁taking - port - ▁fi - ▁daughter - ▁thinking - ▁team - ▁learn - ▁budget - ide - ▁american - ful - ▁taxes - ▁gun - ▁ca - rs - ▁eighty - ▁control - ▁service - ▁today - ▁drug - tic - ▁paying - ▁cars - ▁rather - ▁neat - ▁tend - ▁line - ▁da - ▁law - time - ▁insurance - ca - ▁wear - sh - ▁friends - ▁outside - ally - man - ▁easy - ▁north - ▁friend - ▁during - ▁card - ▁nine - bye - ical - up - ▁living - ▁mind - ▁involved - ▁gosh - ▁moved - ig - ▁tr - ▁camping - ▁several - ▁hm - ap - ▁tried - ▁bring - ph - ence - ci - ▁major - ▁newspaper - ▁favorite - ▁student - ▁consider - ▁making - ice - ▁en - ▁morning - ▁question - ▁between - ▁jury - ▁ones - ▁amount - ▁older - ▁case - ▁education - ▁cl - ous - ▁paid - und - ▁depend - ▁mar - ▁bill - ▁wa - ▁must - ▁happened - gg - ▁ri - ▁hour - ▁fr - ▁difference - mi - ▁hope - ▁experience - 'no' - ▁absolutely - ian - ▁group - ▁figure - ▁anybody - 
▁miles - ▁aren - ▁although - ▁worth - ▁interest - ▁book - ia - ▁forty - ▁expensive - ▁second - ▁without - ▁gets - ish - ▁du - ating - ▁full - ▁ta - ▁app - bo - ▁along - ▁paint - ▁recently - ▁leave - ▁weather - ▁com - ▁miss - ▁sha - ▁free - ▁often - ▁gra - ▁minutes - ▁magazine - ▁wait - ▁ahead - ▁wrong - ▁hours - ▁already - ▁married - ▁left - ▁camp - ▁hit - ine - ▁fifteen - ex - ▁men - ▁cons - ▁drugs - ▁rain - ▁schools - ▁fish - ▁girl - ▁office - ▁ski - ▁weeks - ▁middle - ▁knew - ▁store - ▁watching - ious - ▁hot - ▁running - ▁act - ▁yourself - ▁cold - ▁price - all - ick - ▁lake - ▁al - ▁death - ▁dad - ▁enjoyed - ▁benefits - ▁su - ▁word - ▁main - ▁cha - ▁grow - ▁recycling - ▁past - ▁weekend - ▁break - ▁base - ▁against - ▁movies - ▁mostly - ▁guys - ▁sense - ▁san - ▁cr - ▁sell - ▁sister - ha - ▁str - ▁thank - ▁issue - way - ary - ▁pet - ▁throw - ▁cover - ities - ▁pi - ▁baby - ▁doctor - ▁wi - ▁local - ▁difficult - ▁nursing - ▁wanna - ab - ▁open - ight - ▁fee - ▁head - pa - ▁vacation - ought - ▁ask - ▁brother - ▁instead - ▁reading - ▁kid - ▁add - ▁- - ial - ▁rest - ▁interested - ▁short - ▁qu - ▁degree - ▁charge - ber - ▁topic - ped - ▁talked - land - ▁move - ▁trouble - cy - ▁told - ▁fairly - ling - ▁pe - ▁unless - ▁winter - ▁hate - ▁twelve - ▁plano - ▁wish - ▁yard - ▁sta - ▁exercise - ▁front - do - ▁somewhere - ▁east - ▁gre - ▁everyone - ▁regular - ▁restaurant - ▁states - ▁plant - ▁catch - ▁near - ▁decided - ▁imagine - ▁says - ▁except - ▁chance - ▁california - ▁kill - ▁looked - ▁punishment - ▁pull - ▁fan - ▁south - ay - ▁hold - ▁fine - ▁taken - ▁garden - ▁park - ▁takes - ▁late - ▁street - ▁door - ▁tra - ▁fall - ▁mom - ▁clean - ▁dress - ▁income - ▁teach - ▁companies - ▁ready - ▁top - ▁capital - ▁spent - ▁recycle - ▁york - ▁using - ▁social - ▁works - ▁raise - ▁father - ▁tough - ▁gu - ▁seventy - ▁ja - ▁early - ▁realize - ▁terms - ▁become - ▁send - ▁sixty - ▁themselves - ▁level - ▁phone - ▁god - ▁woman - ▁oil - ▁stand - ▁felt - ▁rent - ▁ne - ▁changed - ▁pr - ▁exp - 
▁particular - ▁radio - ▁christmas - ▁station - ▁goodness - ▁pass - ▁power - ▁save - ▁society - ▁bar - ▁choice - ▁ge - ▁personal - ▁tha - ard - ▁dollar - ▁playing - ▁die - ▁national - ▁special - ▁general - ▁rate - ▁awful - ▁bra - ible - ▁cards - ▁plastic - ▁te - ▁visit - ▁fix - ▁rid - ▁train - ▁lives - ▁expect - ▁support - ▁wood - ▁books - ▁feeling - ▁na - ner - ▁center - ized - ▁acc - ▁putting - ▁bag - ton - ness - ▁later - ▁growing - ▁guns - ▁land - line - ▁travel - ▁subject - ▁period - ▁dinner - ▁judge - ▁season - ▁happens - ▁machine - ▁extra - ▁manage - ▁gave - ▁force - fi - ▁lately - ▁effect - one - ▁trip - ▁saving - ▁starting - ▁building - less - ▁cases - ▁sitting - ▁kept - ▁finally - ▁fast - ▁forth - ran - ▁stop - ▁testing - ▁spring - ▁cause - ▁require - ▁built - ▁murder - ▁black - ▁quick - ▁dr - ▁sw - ▁community - ▁record - ▁snow - ▁direct - ▁plus - ▁bank - ▁grade - ▁beautiful - ▁red - ▁afford - ▁graduate - go - ▁space - que - ▁countries - ▁cats - ▁fire - ▁process - ▁sound - ▁played - ▁limit - ▁white - ▁sad - ▁university - ▁trans - ▁mess - ▁nineteen - ▁shoot - ▁nobody - ny - ▁football - ▁speak - ▁story - ▁longer - ▁light - ▁mu - ▁ninety - ▁order - ▁road - ▁totally - ▁fishing - ▁information - ▁sign - ▁worry - ▁spending - ▁product - ▁soon - ▁ah - ▁jo - ▁bother - ▁across - ▁write - ▁bunch - ▁carry - ▁truck - ▁hey - ▁ball - ▁driving - ▁needed - ▁cra - ▁teachers - ▁church - ▁low - ▁amazing - ▁pu - ▁pen - ▁decision - ▁golf - ▁sorry - ▁hurt - gra - ▁younger - ▁account - ▁terrible - ▁wind - ▁gr - ▁report - ▁wor - ▁suppose - ▁color - ▁hunt - ▁teacher - ▁concerned - j - ▁easier - ▁strange - ▁sub - ▁strong - ▁turned - ze - ▁safe - ▁vi - ▁size - ▁given - ▁lost - ▁families - ▁happy - ▁follow - ▁view - ▁market - ▁handle - ▁within - ▁single - ▁shop - ship - vis - ▁ye - ▁television - ▁cheap - ▁ki - ▁rock - ▁engineer - ▁individual - ▁shot - ▁criminal - ▁united - ▁trial - ▁worse - ▁serious - ite - ▁neighborhood - ▁brought - ▁answer - ▁trees - ▁bi - ▁build - ▁example - ▁fair - 
▁buying - mon - ▁weight - ▁military - ▁caught - ▁private - ▁field - ler - ▁che - ▁crazy - law - ▁serve - out - ▁decide - ▁opinion - ▁medical - ▁step - ▁push - ▁meet - ▁stick - ▁win - ▁chi - clock - ▁boat - ▁quality - ▁green - ▁term - ▁lose - ▁fo - ▁hospital - be - ▁scary - ▁ended - ▁police - ▁biggest - ▁apartment - ▁repair - ▁finish - ▁glad - ▁inside - ▁learned - uh - ▁prison - ▁familiar - ▁third - ▁seemed - ▁mountain - ▁whenever - ▁range - ▁watched - ▁necessarily - ook - ▁pan - ▁piece - ▁noticed - ▁collect - ▁president - ▁twice - ative - ▁glass - ▁super - ▁fund - ▁ran - ▁sleep - ▁lawn - ▁behind - ▁guilty - ▁drop - ▁mix - ▁killed - ▁court - ▁completely - ▁party - ▁current - ▁tape - ▁indi - ▁commit - ▁benefit - ▁wall - ▁particularly - ▁personally - ▁anywhere - ▁project - ▁arm - ▁si - ▁clothes - ▁eighteen - ▁bigger - ▁list - ▁hang - ▁warm - ▁kn - ▁eleven - ▁research - ▁gee - ▁grand - ▁fight - uff - ▁grass - ▁teaching - ▁million - ▁trash - ▁cash - istic - ron - ▁waiting - ▁neighbor - ▁club - ability - ▁develop - ▁unfortunately - ▁loan - ▁star - ▁picked - ▁generally - ▁environment - ▁minute - ▁obviously - ▁protect - ▁opera - ▁anyone - ▁employee - ▁houston - ize - ▁fill - qui - ▁treat - ▁baseball - ▁ground - ▁video - ▁pollution - ▁higher - ▁available - ▁generation - ▁luck - ▁excuse - ▁pound - ▁picture - ▁roll - ▁america - ▁eventually - ▁itself - ▁ooh - ▁asked - ▁forget - ▁surprised - ▁federal - ▁jail - ▁pla - ▁sun - ▁basic - ▁attention - ▁washington - ▁extreme - ▁penalty - ▁sentence - ▁poor - ▁mail - ▁cool - ▁florida - ▁clear - ▁fortunate - ▁huge - ▁aware - ▁lay - ▁civil - ▁value - ▁lead - ▁band - ▁parent - ▁giving - ▁bottle - ▁blue - ▁standard - ▁art - ▁afraid - ▁bedroom - ▁comfortable - ▁separate - ▁el - ▁position - ▁foot - ▁cap - ▁eye - ▁europe - ▁sunday - ▁discuss - ▁provide - ▁lucky - ▁sick - ▁excellent - ▁utah - ▁classes - ▁apparently - ▁condition - ▁perhaps - ▁weapon - ▁burn - ▁originally - ▁self - ▁beginning - ▁prefer - ▁cop - ade - ▁count - ▁quit - ▁typical - 
'off' - ▁economic - ▁broke - ▁average - ▁smaller - ▁security - ▁virginia - ▁weird - ▁future - ▁similar - ▁hopefully - ▁economy - ▁political - ▁relative - ▁slow - ▁master - ▁financial - ▁respect - ▁expense - ▁quarter - ▁accept - ▁appeal - ▁normally - ▁channel - ▁alone - ▁z - ▁human - ▁union - ▁cou - ▁privacy - ▁science - ▁lawyer - ▁busy - ▁window - ▁automatic - ▁sold - ▁county - ▁advantage - ▁bush - ▁affect - ▁drink - ▁entire - ▁lunch - ▁switch - ▁basis - ▁role - ▁table - ▁animal - ▁basketball - ▁industry - ▁peace - ▁reunion - ▁blow - ▁department - ▁present - ▁relate - ▁positive - ▁article - ▁heavy - ▁return - place - ▁chicken - ▁stories - ▁somehow - ▁honest - ▁history - ▁saturday - ▁salary - ▁member - ▁payment - ▁moving - ▁port - ▁ride - ▁professional - ▁mexico - ▁normal - ▁lower - ▁jump - ▁rich - ▁mow - ▁design - ▁organization - ▁straight - ▁draw - ▁smoke - ▁possible - ▁bucks - ▁debt - work - ▁property - ▁teenage - ▁garage - ▁wild - ▁rough - ▁scout - ▁touch - ▁sla - ▁suit - ▁purchase - ▁retirement - ▁election - ▁carolina - ▁recipe - ▁track - ▁changing - ▁entertain - ▁grandmother - ▁thirteen - ▁instance - ▁coverage - ▁attitude - ▁box - ▁face - ▁background - ▁study - ▁kidding - ▁english - ▁ridiculous - ▁legal - ▁tonight - ▁trade - ▁random - ▁john - ▁coast - ▁dry - ▁aluminum - ▁choose - ▁colorado - ▁continue - ▁contract - ▁england - ▁ticket - ▁board - ▁replace - ▁join - ▁folks - ▁sudden - ▁garbage - ▁engine - ▁himself - ▁instrument - ▁spot - ▁row - ▁activities - ▁cross - ▁scare - ▁shape - ▁mini - ▁district - ▁floor - ▁taste - ▁corn - ▁correct - ▁opportunity - ▁threat - ified - ▁concern - ▁aerobic - ▁popular - ▁everyday - ▁adult - ▁doubt - ▁brand - ▁dead - ▁terr - ▁defense - ▁worst - ▁mexican - ▁policy - ▁vietnam - ▁pressure - ▁taught - ▁balance - ▁body - ▁accident - ▁afternoon - ▁horrible - ▁german - ▁electric - ▁tired - ▁rule - ▁everywhere - ▁opposed - ▁squa - ▁bike - ▁congress - ▁foreign - ▁physical - ▁yesterday - ▁increase - ▁metric - ▁style - ▁minor - ▁majority - 
▁perfect - ▁responsibility - ▁common - ▁central - ▁improve - ▁kitchen - ▁vegetable - ▁sixteen - ▁nurse - ▁bird - ▁forever - ▁born - ▁stopped - ▁tech - ▁jeez - ▁mistake - ▁richardson - ▁russia - ▁express - ▁lady - ▁print - ▁hook - light - ▁bottom - ▁easily - ▁select - ▁option - ▁coach - ville - ▁favor - ▁pennsylvania - ▁key - ject - ▁effort - ▁schedule - ▁spread - ▁execut - ▁hobby - ▁immediate - ▁simple - ▁somewhat - ▁natural - ▁block - ▁fourteen - ▁however - ▁dump - ▁equipment - ▁perform - ▁complain - ▁planning - ▁occasionally - ▁river - ▁conversation - ▁grocery - ▁fresh - ▁besides - burg - ▁friday - ▁result - ▁smart - ▁discover - ▁various - ▁storm - ▁appreciate - ▁equal - ▁nowadays - ▁upset - ▁brown - ▁elderly - ▁invasion - ▁oklahoma - ▁politics - ▁maryland - ▁regard - ▁commercial - ▁incredible - ▁french - ▁trust - ▁seventies - ▁league - ▁ourselves - ▁possibly - ▁purpose - ▁network - ▁stuck - ▁admit - ▁sweat - ▁cousin - ▁begin - ▁elect - board - ▁supp - ▁alcohol - ▁contribut - ▁solution - ▁material - ▁deep - ▁specific - ▁convict - ▁motor - ▁tree - ▁junior - ▁nature - ▁oak - ugh - ▁restrict - ▁mentioned - ▁shoes - ▁volunteer - ▁austin - ▁prior - ▁temp - ▁extent - ▁laugh - ▁blood - ▁otherwise - ▁deduct - ▁hobbies - ▁influence - ▁writing - ▁abuse - ▁soviet - ▁mental - ▁awhile - ▁connect - ▁western - ▁italian - ▁border - ▁convenient - ▁language - ▁recommend - ▁downtown - ▁politician - ▁character - ▁truth - ▁pitch - ▁strict - ▁sixties - ▁hello - ▁chinese - ▁relax - ▁wheel - ▁drove - ▁access - ▁cannot - ▁plenty - ▁pardon - ▁model - ▁visa - ▁section - ▁boston - ▁dirt - ▁aspect - ▁electronic - ▁responsible - ▁participate - ▁steak - ▁roof - ▁profit - ▁cabin - ▁bowl - ▁japanese - ▁telephone - ▁variety - ▁piano - ▁chicago - ▁citizen - ▁broad - ▁corps - ▁assume - ▁automobile - ▁contain - ▁simply - ▁technical - ▁wrote - ▁crowd - ▁damage - ▁dental - ▁wou - ▁corporation - ▁honda - ▁necessary - ▁traffic - ▁vehicle - ▁salad - ▁southern - '0' - ▁unusual - ▁voting - ▁screen - 
▁stress - ▁mandatory - ▁monday - ▁secret - ▁source - ▁load - ▁license - ▁population - ▁subscribe - ▁suspect - ▁atlanta - ▁draft - ▁knowledge - ▁tremendous - ▁earth - ▁match - ▁atmosphere - ▁democrat - ▁habit - ▁edge - ▁film - ▁earlier - ▁encourage - ▁exciting - ▁fellow - ▁suburb - ▁became - ▁ceiling - ▁disease - ▁cheese - ▁actual - ▁bathroom - ▁divorce - ▁double - ▁further - ▁pattern - ▁technology - ▁becoming - ▁investment - ▁practical - ▁dark - ▁christian - ▁discipline - ▁occur - ▁senior - ▁liberal - ▁israel - ▁scene - ▁deterrent - ▁jazz - ▁suggest - ▁beyond - ▁seventeen - ▁sauce - ▁recent - ▁interview - ▁swimming - ▁stupid - ▁voice - ▁pump - ▁independent - ▁practice - ▁tomatoes - ▁blame - ▁consumer - ▁outdoor - ▁northern - ▁craft - ▁antonio - ▁republic - ▁written - ▁tennis - ▁tune - ology - ▁legislat - ▁finance - ▁adjust - ▁massachusetts - ▁successful - ▁repeat - ▁chemical - ▁versus - ▁milk - ▁carpet - ▁horse - ▁address - ▁speed - ▁media - ▁apart - ▁occasion - ▁belong - ▁francisco - ▁grandchildren - ▁whoever - ▁quiet - ▁shirt - ▁knee - izing - ▁register - ▁holiday - ▁resource - ▁mechanic - ▁receive - ▁staff - ▁steal - ▁maintain - ▁toyota - ▁psych - ▁casual - ▁backyard - ▁chose - ▁author - ▁energy - ▁bread - ▁focus - ▁journal - ▁professor - ▁sentencing - ▁explain - ▁knock - ficial - ▁amazed - ▁baltimore - ▁facilities - ▁neither - ▁potato - ▁advance - ▁sweet - ▁gulf - hold - ▁candidate - ▁pittsburgh - ▁garland - ▁babies - ▁hung - ▁involve - ▁spec - ▁concept - ▁convince - ▁impressed - ▁leaving - ▁primarily - ▁produce - ▁victim - ▁herself - ▁shock - ▁juries - ▁loose - ▁strip - wood - ▁represent - ▁georgia - ▁kindergarten - ▁progress - ▁yellow - ▁stock - ▁junk - ▁robb - ▁surprise - ▁circumstances - ▁dangerous - ▁illegal - ▁concert - ▁shift - ▁moral - ▁disappoint - ▁advertise - ▁educate - ▁female - ▁minimum - ▁establish - ▁fantastic - ▁welfare - house - ▁birthday - ▁cruise - ▁culture - ▁elementary - ▁employer - ▁incentive - ▁relationship - ▁speech - ▁reduce - ▁original 
- ▁august - ▁grandparents - ▁preschool - ▁violent - ▁barbecue - ▁fifties - ▁rabbit - ▁freedom - ▁parole - ▁attract - ▁fascinat - ▁innocent - ▁perspective - ▁temperature - ▁emotion - ▁pollut - ▁negative - ▁wisconsin - ▁contact - ▁impact - ▁jersey - ▁recognize - ▁conscious - ▁detail - ▁complete - ▁creek - ▁attack - ▁claim - ▁continu - ▁attorney - ▁campaign - ▁conservative - ▁enforce - ▁excited - ▁canada - ▁multi - ▁audi - ▁challenge - ▁evidence - ▁maintenance - ▁pepper - ▁release - ▁frame - employed - ▁include - ▁paycheck - ▁raleigh - ▁religious - ▁semester - '1' - '4' - '2' - '&' - '6' - '8' - '9' - '7' - '5' - / - q - '3' - '[' - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: null zero_infinity: true joint_net_conf: null use_preprocessor: true token_type: bpe bpemodel: data/en_token_list/bpe_unigram2000/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' short_noise_thres: 0.5 frontend: default frontend_conf: n_fft: 512 win_length: 400 hop_length: 160 fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 27 num_freq_mask: 2 apply_time_mask: true time_mask_width_ratio_range: - 0.0 - 0.05 num_time_mask: 5 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_en_bpe2000_sp/train/feats_stats.npz model: espnet model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false preencoder: null preencoder_conf: {} encoder: e_branchformer encoder_conf: output_size: 256 attention_heads: 4 attention_layer_type: rel_selfattn pos_enc_layer_type: rel_pos rel_pos_type: latest cgmlp_linear_units: 1024 cgmlp_conv_kernel: 31 use_linear_after_conv: false gate_activation: identity num_blocks: 12 dropout_rate: 0.1 positional_dropout_rate: 0.1 
attention_dropout_rate: 0.1 input_layer: conv2d layer_drop_rate: 0.0 linear_units: 1024 positionwise_layer_type: linear use_ffn: true macaron_ffn: true merge_conv_kernel: 31 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: attention_heads: 4 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.1 src_attention_dropout_rate: 0.1 preprocessor: default preprocessor_conf: {} required: - output_dir - token_list version: '202211' distributed: true ``` </details>
39bfea4f334437b9308b40b69e861242
apache-2.0
['generated_from_trainer']
false
tiny-vanilla-target-glue-cola-linear-probe This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6182 - Matthews Correlation: 0.0
35f1f3113f78a795ed64f40119c1e270
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.6219 | 1.87 | 500 | 0.6194 | 0.0 | | 0.6094 | 3.73 | 1000 | 0.6188 | 0.0 | | 0.6086 | 5.6 | 1500 | 0.6183 | 0.0 | | 0.6079 | 7.46 | 2000 | 0.6182 | 0.0 |
c4c0e9ab096278da9822fa1aa1b4da73
apache-2.0
['generated_from_trainer']
false
juancopi81/whisper-medium-es-train-valid This model is a fine-tuned version of [juancopi81/whisper-medium-es-train-valid](https://huggingface.co/juancopi81/whisper-medium-es-train-valid) on the common_voice_11_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.2227 - Wer: 6.1548 Using the script provided in the Whisper Sprint (Dec. 2022), the model achieves these results on the evaluation sets (WER): - google/fleurs: 6.94 - mozilla-foundation/common_voice_11_0: XXXX
dfbe5df8df20a652a9f51c9e2b96dc0d
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.0539 | 1.01 | 1000 | 0.2100 | 6.4465 | | 0.0211 | 2.01 | 2000 | 0.2286 | 6.5082 | | 0.0088 | 3.02 | 3000 | 0.2418 | 6.3848 | | 0.0205 | 4.02 | 4000 | 0.2288 | 6.6603 | | 0.1031 | 5.03 | 5000 | 0.2227 | 6.1548 |
e2fd3c9d1854d03f078fcf06343499f7
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xls-r-300m-irish-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 1.4286 - Wer: 0.5097
2d2c800e36ff9ff8a147ce1cf067fbba
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 210 - mixed_precision_training: Native AMP
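The `total_train_batch_size` of 64 above follows from the per-device batch size and gradient accumulation: optimizer updates are applied once per accumulated batch. A minimal sketch of the arithmetic (assuming a single device, as no multi-device setup is listed):

```python
def effective_batch_size(per_device_batch_size: int,
                         gradient_accumulation_steps: int,
                         num_devices: int = 1) -> int:
    """Effective (total) train batch size: gradients are accumulated over
    `gradient_accumulation_steps` forward passes before each optimizer step."""
    return per_device_batch_size * gradient_accumulation_steps * num_devices

# Hyperparameters from this run: 32 per device * 2 accumulation steps = 64
print(effective_batch_size(32, 2))  # → 64
```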
4d05e0954b5922bbe36735016537ae9a
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 4.3406 | 24.97 | 400 | 1.1677 | 0.7270 | | 0.2527 | 49.97 | 800 | 1.2686 | 0.5927 | | 0.0797 | 74.97 | 1200 | 1.3970 | 0.5769 | | 0.0424 | 99.97 | 1600 | 1.4093 | 0.5600 | | 0.0286 | 124.97 | 2000 | 1.3684 | 0.5407 | | 0.0174 | 149.97 | 2400 | 1.4571 | 0.5205 | | 0.0109 | 174.97 | 2800 | 1.4327 | 0.5178 | | 0.0072 | 199.97 | 3200 | 1.4286 | 0.5097 |
74079b23f95c6497e2d7eaeb585965f4
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Wav2Vec2-Large-XLSR-53-Turkish Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Turkish using the [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz.
e97cf572079e73029505c06b86574ae7
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "tr", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("ozcangundes/wav2vec2-large-xlsr-53-turkish") model = Wav2Vec2ForCTC.from_pretrained("ozcangundes/wav2vec2-large-xlsr-53-turkish") resampler = torchaudio.transforms.Resample(48_000, 16_000)
059fddb2114d2d4dc34f6fa9eddd1c2e
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ```
4ec7bb28af17c1007454f88f704e40c8
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Evaluation The model can be evaluated as follows on the Turkish test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "tr", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("ozcangundes/wav2vec2-large-xlsr-53-turkish") model = Wav2Vec2ForCTC.from_pretrained("ozcangundes/wav2vec2-large-xlsr-53-turkish") model.to("cuda") chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�\\’\\']' resampler = torchaudio.transforms.Resample(48_000, 16_000)
11037556217c2b999edb23c24aa0b460
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn)
50a1eebb51f072144a0e3a0cc8c7ffd3
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 29.62 %
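The WER reported above is the word error rate: the word-level edit distance between prediction and reference, divided by the reference length. The evaluation script uses `datasets.load_metric("wer")`; a minimal self-contained sketch of the underlying computation:

```python
def wer(reference: str, prediction: str) -> float:
    """Word error rate: Levenshtein distance over word sequences / reference length."""
    ref, hyp = reference.split(), prediction.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("merhaba nasılsın bugün", "merhaba nasılsın dün"))  # 1 substitution / 3 reference words
```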
7ec859327ea088713d9622c793d56c9b
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Training The Common Voice `train` and `validation` datasets were used for training. The script used for training can be found [here](https://colab.research.google.com/drive/1hesw9z_kFFINT93jBvGuFspOLrHx10AE?usp=sharing)
3cb296911e9ced6ca013bfd9c9d746f7
mit
[]
false
nixeu on Stable Diffusion This is the `<nixeu>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). ![<nixeu> 6](https://cdn.discordapp.com/attachments/1004159122335354970/1018669275361329202/unknown.png) Here is the new concept you will be able to use as a `style`: ![<nixeu> 0](https://huggingface.co/sd-concepts-library/nixeu/resolve/main/concept_images/5.jpeg) ![<nixeu> 1](https://huggingface.co/sd-concepts-library/nixeu/resolve/main/concept_images/3.jpeg) ![<nixeu> 2](https://huggingface.co/sd-concepts-library/nixeu/resolve/main/concept_images/0.jpeg) ![<nixeu> 3](https://huggingface.co/sd-concepts-library/nixeu/resolve/main/concept_images/2.jpeg) ![<nixeu> 4](https://huggingface.co/sd-concepts-library/nixeu/resolve/main/concept_images/1.jpeg) ![<nixeu> 5](https://huggingface.co/sd-concepts-library/nixeu/resolve/main/concept_images/4.jpeg)
c0ae43391282a151ad5d28320fd05258
mit
[]
false
model by Pinguin This is a Stable Diffusion model fine-tuned on the a hat in time girl concept taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **a render of sks ** You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). Here are the images used for training this concept: ![image 0](https://huggingface.co/sd-dreambooth-library/a-hat-in-time-girl/resolve/main/concept_images/3.jpeg) ![image 1](https://huggingface.co/sd-dreambooth-library/a-hat-in-time-girl/resolve/main/concept_images/1.jpeg) ![image 2](https://huggingface.co/sd-dreambooth-library/a-hat-in-time-girl/resolve/main/concept_images/0.jpeg) ![image 3](https://huggingface.co/sd-dreambooth-library/a-hat-in-time-girl/resolve/main/concept_images/2.jpeg)
df24d8ca5cd9af98ec69be75c6a5bcea
mit
[]
false
Spanish truecasing model This is a Spanish truecasing model that works with the <b>truecase</b> Python project by daltonfury42: https://github.com/daltonfury42/truecase You can install it here: https://pypi.org/project/truecase/
2890e336246172e0ab72c1bfc3197281
mit
[]
false
Quick start To use the Spanish model, use the TrueCaser.py file uploaded to this repository: https://huggingface.co/HURIDOCS/spanish_truecasing/blob/main/TrueCaser.py Install the requirements: pip install nltk Then you are ready to go: from TrueCaser import TrueCaser model_path = "spanish.dist" spanish_truecasing = TrueCaser(model_path) text = 'informe no.78/08. petición 785-05 admisibilidad. vicente arturo villanueva ortega y otros.' print(spanish_truecasing.get_true_case(text))
ec79f29654493b97498836e9f3cacd22
mit
[]
false
Notes The model was trained with the Europarl dataset, which contains transcriptions of European Parliament discussions: https://www.statmt.org/europarl/ Europarl: A Parallel Corpus for Statistical Machine Translation, Philipp Koehn, MT Summit 2005 Using huggingface load_dataset: europarl = load_dataset('large_spanish_corpus', name='Europarl')
20521dbf2da1d75fc919ae8f3fb55cdd
apache-2.0
[]
false
Arabic T5 Small Model A customized T5 model for Arabic and English tasks. It can be used as an alternative to the `google/mt5-small` model, as it is much smaller and targets only Arabic- and English-based tasks.
f91f81fc04660d9e2ee6a03badfcba74
apache-2.0
[]
false
About T5 ``` T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which each task is converted into a text-to-text format. The T5 model was presented in Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. ``` [Read More](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
bfc55e9c8478ac94573219975e1c8f67
apache-2.0
['generated_from_trainer']
false
small-mlm-glue-qnli-custom-tokenizer-expand-vocab This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.0716
43a746e7d8f1d866b5318284e9edb909
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 5.5339 | 0.4 | 500 | 4.7224 | | 4.6477 | 0.8 | 1000 | 4.3242 | | 4.3146 | 1.2 | 1500 | 3.9988 | | 4.0046 | 1.6 | 2000 | 3.7777 | | 3.7942 | 2.0 | 2500 | 3.5976 | | 3.5684 | 2.4 | 3000 | 3.4426 | | 3.4406 | 2.8 | 3500 | 3.3275 | | 3.332 | 3.2 | 4000 | 3.2361 | | 3.1941 | 3.6 | 4500 | 3.1616 | | 3.0981 | 4.0 | 5000 | 3.0716 |
09f4aea6be15a46284e0cab64500e279
cc-by-sa-4.0
[]
false
Example *Requires `diffusers` >= 0.8.0; lower versions are not supported.* ```python from diffusers import StableDiffusionPipeline import torch model_path = "foldl/sd-rumeme-desc" pipe = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=torch.float16) pipe.to("cuda") image = pipe(prompt="кот").images[0] image.save("cat.jpg") ```
f23c9bd6f65fc75f2a0ff966b82ae728
cc-by-sa-4.0
[]
false
Training Procedure Model was trained on 1 P100 GPU for 10k steps. Base model - https://huggingface.co/OFA-Sys/small-stable-diffusion-v0 Training notebook here - https://www.kaggle.com/code/nukeee/meme-diffusion
b5f3516371f12dab24b46367206d4a23
apache-2.0
['automatic-speech-recognition', 'es']
false
exp_w2v2r_es_vp-100k_gender_male-8_female-2_s226 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
61ea51dbaff27ca895c036d0c58cf84a
apache-2.0
[]
false
Graphcore/gpt2-small-ipu Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore). Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset, and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
b87197aa379214507e09c339cf55fbce
apache-2.0
[]
false
Intended uses & limitations This model contains just the `IPUConfig` files for running the [GPT2 Small](https://huggingface.co/gpt2) model on Graphcore IPUs. **This model contains no model weights, only an IPUConfig.**
d885ecf56e8dc3bb5453b92e3eb77891
apache-2.0
['generated_from_keras_callback']
false
mzchua/distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1940 - Validation Loss: 0.4943 - Train Matthews Correlation: 0.5481 - Epoch: 2
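The Matthews correlation reported above is computed from the binary confusion matrix. The training script uses the Hugging Face metric implementation; as an illustrative sketch of the formula only:

```python
import math

def matthews_corrcoef(tp: int, tn: int, fp: int, fn: int) -> float:
    """MCC from a binary confusion matrix. Defined as 0.0 when any marginal
    is empty (e.g. the model predicts a single class for every example)."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Balanced, mostly-correct predictions give a strongly positive MCC.
print(matthews_corrcoef(40, 45, 5, 10))
```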
89081eb899a0eef1445aebc7f5c3241b
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Validation Loss | Train Matthews Correlation | Epoch | |:----------:|:---------------:|:--------------------------:|:-----:| | 0.5156 | 0.4940 | 0.3942 | 0 | | 0.3226 | 0.4322 | 0.5448 | 1 | | 0.1940 | 0.4943 | 0.5481 | 2 |
d6a7f530e5f56b7721a9efebb9a7cba3
apache-2.0
['generated_from_trainer']
false
tiny-mlm-glue-cola-custom-tokenizer-target-glue-sst2 This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-cola-custom-tokenizer](https://huggingface.co/muhtasham/tiny-mlm-glue-cola-custom-tokenizer) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4535 - Accuracy: 0.7959
fec5042768982bf5a51d4d72c36a877c
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6748 | 0.24 | 500 | 0.6424 | 0.6468 | | 0.5996 | 0.48 | 1000 | 0.5542 | 0.7167 | | 0.5172 | 0.71 | 1500 | 0.5001 | 0.7511 | | 0.4835 | 0.95 | 2000 | 0.4613 | 0.7741 | | 0.4366 | 1.19 | 2500 | 0.4602 | 0.7901 | | 0.4127 | 1.43 | 3000 | 0.4334 | 0.8028 | | 0.3894 | 1.66 | 3500 | 0.4507 | 0.7867 | | 0.3732 | 1.9 | 4000 | 0.4305 | 0.8154 | | 0.3646 | 2.14 | 4500 | 0.4369 | 0.8085 | | 0.3417 | 2.38 | 5000 | 0.4589 | 0.7947 | | 0.3343 | 2.61 | 5500 | 0.4535 | 0.7959 |
40d31e9603fbbdebbf9ab420854dcd05
apache-2.0
['generated_from_trainer']
false
distilBERT-finetuned-resumes-sections This model is a fine-tuned version of [Geotrend/distilbert-base-en-fr-cased](https://huggingface.co/Geotrend/distilbert-base-en-fr-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0369 - F1: 0.9652 - Roc Auc: 0.9808 - Accuracy: 0.9621
c85ef35a4f365346f5ddbe2723645f10
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|:--------:| | 0.0509 | 1.0 | 1173 | 0.0331 | 0.9439 | 0.9659 | 0.9356 | | 0.024 | 2.0 | 2346 | 0.0274 | 0.9550 | 0.9750 | 0.9493 | | 0.0148 | 3.0 | 3519 | 0.0290 | 0.9493 | 0.9712 | 0.9446 | | 0.0089 | 4.0 | 4692 | 0.0324 | 0.9492 | 0.9714 | 0.9442 | | 0.0071 | 5.0 | 5865 | 0.0317 | 0.9540 | 0.9732 | 0.9476 | | 0.0064 | 6.0 | 7038 | 0.0324 | 0.9527 | 0.9742 | 0.9484 | | 0.0036 | 7.0 | 8211 | 0.0320 | 0.9574 | 0.9766 | 0.9540 | | 0.0042 | 8.0 | 9384 | 0.0367 | 0.9528 | 0.9732 | 0.9493 | | 0.0052 | 9.0 | 10557 | 0.0342 | 0.9563 | 0.9757 | 0.9531 | | 0.0027 | 10.0 | 11730 | 0.0294 | 0.9629 | 0.9800 | 0.9595 | | 0.0017 | 11.0 | 12903 | 0.0355 | 0.9605 | 0.9778 | 0.9582 | | 0.0022 | 12.0 | 14076 | 0.0338 | 0.9627 | 0.9792 | 0.9591 | | 0.0012 | 13.0 | 15249 | 0.0358 | 0.9609 | 0.9780 | 0.9591 | | 0.0011 | 14.0 | 16422 | 0.0360 | 0.9618 | 0.9791 | 0.9604 | | 0.0009 | 15.0 | 17595 | 0.0358 | 0.9648 | 0.9807 | 0.9625 | | 0.0007 | 16.0 | 18768 | 0.0373 | 0.9627 | 0.9794 | 0.9595 | | 0.0006 | 17.0 | 19941 | 0.0397 | 0.9597 | 0.9774 | 0.9574 | | 0.0008 | 18.0 | 21114 | 0.0369 | 0.9652 | 0.9808 | 0.9621 | | 0.0007 | 19.0 | 22287 | 0.0377 | 0.9646 | 0.9801 | 0.9621 | | 0.0005 | 20.0 | 23460 | 0.0381 | 0.9639 | 0.9797 | 0.9616 |
82a27138327bf7141e1202429aa7d422
mit
['generated_from_trainer']
false
hmBERT-CoNLL-cp3 This model is a fine-tuned version of [dbmdz/bert-base-historic-multilingual-cased](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0572 - Precision: 0.9121 - Recall: 0.9243 - F1: 0.9182 - Accuracy: 0.9862
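The F1 in the results above is the harmonic mean of precision and recall; a quick check against the numbers reported for this checkpoint:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Evaluation numbers from this model card: precision 0.9121, recall 0.9243
print(round(f1_score(0.9121, 0.9243), 4))  # → 0.9182, matching the reported F1
```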
c7b4d36a3b02f5952347c4e84cf3e396
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 0.06 | 25 | 0.4115 | 0.3643 | 0.3728 | 0.3685 | 0.9007 | | No log | 0.11 | 50 | 0.2243 | 0.6393 | 0.6908 | 0.6641 | 0.9460 | | No log | 0.17 | 75 | 0.1617 | 0.7319 | 0.7637 | 0.7475 | 0.9580 | | No log | 0.23 | 100 | 0.1544 | 0.7282 | 0.7637 | 0.7455 | 0.9585 | | No log | 0.28 | 125 | 0.1341 | 0.7595 | 0.8117 | 0.7847 | 0.9644 | | No log | 0.34 | 150 | 0.1221 | 0.7980 | 0.8251 | 0.8114 | 0.9693 | | No log | 0.4 | 175 | 0.1013 | 0.7968 | 0.8344 | 0.8152 | 0.9719 | | No log | 0.46 | 200 | 0.1076 | 0.8265 | 0.8403 | 0.8333 | 0.9732 | | No log | 0.51 | 225 | 0.0883 | 0.8453 | 0.8635 | 0.8543 | 0.9763 | | No log | 0.57 | 250 | 0.0973 | 0.8439 | 0.8633 | 0.8535 | 0.9763 | | No log | 0.63 | 275 | 0.0883 | 0.8497 | 0.8655 | 0.8575 | 0.9765 | | No log | 0.68 | 300 | 0.0879 | 0.8462 | 0.8642 | 0.8551 | 0.9766 | | No log | 0.74 | 325 | 0.0781 | 0.8592 | 0.8834 | 0.8711 | 0.9787 | | No log | 0.8 | 350 | 0.0725 | 0.8697 | 0.8928 | 0.8811 | 0.9803 | | No log | 0.85 | 375 | 0.0755 | 0.8687 | 0.8943 | 0.8813 | 0.9807 | | No log | 0.91 | 400 | 0.0666 | 0.8781 | 0.9004 | 0.8891 | 0.9822 | | No log | 0.97 | 425 | 0.0658 | 0.8877 | 0.8995 | 0.8936 | 0.9823 | | No log | 1.03 | 450 | 0.0645 | 0.8951 | 0.9036 | 0.8993 | 0.9837 | | No log | 1.08 | 475 | 0.0697 | 0.8864 | 0.9039 | 0.8951 | 0.9831 | | 0.1392 | 1.14 | 500 | 0.0688 | 0.8824 | 0.8994 | 0.8908 | 0.9824 | | 0.1392 | 1.2 | 525 | 0.0681 | 0.8950 | 0.9049 | 0.8999 | 0.9827 | | 0.1392 | 1.25 | 550 | 0.0676 | 0.8855 | 0.8977 | 0.8915 | 0.9823 | | 0.1392 | 1.31 | 575 | 0.0618 | 0.8940 | 0.9088 | 0.9014 | 0.9842 | | 0.1392 | 1.37 | 600 | 0.0644 | 0.8945 | 0.9076 | 0.9010 | 0.9840 | | 0.1392 | 1.42 | 625 | 0.0641 | 0.8936 | 0.9086 | 0.9010 | 0.9837 | | 0.1392 | 1.48 | 650 | 0.0619 | 0.8969 | 0.9120 | 0.9044 | 0.9846 | | 0.1392 | 
1.54 | 675 | 0.0608 | 0.9045 | 0.9105 | 0.9075 | 0.9848 | | 0.1392 | 1.59 | 700 | 0.0624 | 0.9038 | 0.9143 | 0.9091 | 0.9851 | | 0.1392 | 1.65 | 725 | 0.0596 | 0.9062 | 0.9170 | 0.9116 | 0.9852 | | 0.1392 | 1.71 | 750 | 0.0580 | 0.8995 | 0.9143 | 0.9069 | 0.9848 | | 0.1392 | 1.77 | 775 | 0.0582 | 0.9082 | 0.9172 | 0.9127 | 0.9858 | | 0.1392 | 1.82 | 800 | 0.0588 | 0.9024 | 0.9179 | 0.9101 | 0.9852 | | 0.1392 | 1.88 | 825 | 0.0592 | 0.9020 | 0.9219 | 0.9119 | 0.9856 | | 0.1392 | 1.94 | 850 | 0.0600 | 0.9054 | 0.9182 | 0.9118 | 0.9852 | | 0.1392 | 1.99 | 875 | 0.0568 | 0.9068 | 0.9202 | 0.9135 | 0.9861 | | 0.1392 | 2.05 | 900 | 0.0571 | 0.9131 | 0.9212 | 0.9171 | 0.9861 | | 0.1392 | 2.11 | 925 | 0.0577 | 0.9110 | 0.9204 | 0.9157 | 0.9858 | | 0.1392 | 2.16 | 950 | 0.0605 | 0.9127 | 0.9243 | 0.9185 | 0.9860 | | 0.1392 | 2.22 | 975 | 0.0575 | 0.9109 | 0.9224 | 0.9166 | 0.9867 | | 0.0392 | 2.28 | 1000 | 0.0572 | 0.9121 | 0.9243 | 0.9182 | 0.9862 | | 0.0392 | 2.33 | 1025 | 0.0567 | 0.9171 | 0.9253 | 0.9212 | 0.9870 | | 0.0392 | 2.39 | 1050 | 0.0570 | 0.9193 | 0.9295 | 0.9244 | 0.9871 | | 0.0392 | 2.45 | 1075 | 0.0584 | 0.9155 | 0.9276 | 0.9215 | 0.9867 | | 0.0392 | 2.51 | 1100 | 0.0591 | 0.9168 | 0.9286 | 0.9227 | 0.9867 | | 0.0392 | 2.56 | 1125 | 0.0577 | 0.9182 | 0.9312 | 0.9246 | 0.9874 | | 0.0392 | 2.62 | 1150 | 0.0570 | 0.9184 | 0.9283 | 0.9233 | 0.9870 | | 0.0392 | 2.68 | 1175 | 0.0563 | 0.9191 | 0.9298 | 0.9245 | 0.9872 | | 0.0392 | 2.73 | 1200 | 0.0565 | 0.9180 | 0.9313 | 0.9246 | 0.9872 | | 0.0392 | 2.79 | 1225 | 0.0559 | 0.9190 | 0.9298 | 0.9244 | 0.9873 | | 0.0392 | 2.85 | 1250 | 0.0562 | 0.9185 | 0.9293 | 0.9239 | 0.9873 | | 0.0392 | 2.9 | 1275 | 0.0564 | 0.9175 | 0.9285 | 0.9230 | 0.9872 | | 0.0392 | 2.96 | 1300 | 0.0563 | 0.9181 | 0.9295 | 0.9237 | 0.9873 |
7507680b6901aa6c12b01bd7256b8bce
mit
['generated_from_trainer']
false
indobert-hoax-classification This model is a fine-tuned version of [indobenchmark/indobert-base-p1](https://huggingface.co/indobenchmark/indobert-base-p1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6230 - Accuracy: 0.8059
6d8d0240542b8d42382892dc7b55762c
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.2173070213315e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 30 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5
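The linear scheduler above (with no warmup steps listed) decays the learning rate from its peak to zero over the total number of training steps. A minimal sketch, assuming the standard linear-with-warmup rule used by `transformers`:

```python
def linear_schedule_lr(step: int, total_steps: int, peak_lr: float,
                       warmup_steps: int = 0) -> float:
    """Linear warmup (if any) to peak_lr, then linear decay to zero."""
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return peak_lr * remaining / max(1, total_steps - warmup_steps)

peak = 4.2173070213315e-05
total = 425  # 85 steps/epoch * 5 epochs, per the results table
print(linear_schedule_lr(0, total, peak))    # full learning rate at the start
print(linear_schedule_lr(425, total, peak))  # decayed to 0.0 at the end
```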
af1eb8e200ac1a34af7a8904000a7c07
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 85 | 0.5540 | 0.7029 | | No log | 2.0 | 170 | 0.5432 | 0.7029 | | No log | 3.0 | 255 | 0.4963 | 0.7441 | | No log | 4.0 | 340 | 0.5791 | 0.7971 | | No log | 5.0 | 425 | 0.6230 | 0.8059 |
78ccdc0c172eb2222aa4f3c968b79dc6
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'animal']
false
DreamBooth model for the mimica concept trained by mjfang27 on the mjfang27/dreambooth-hackathon-images dataset. This is a Stable Diffusion model fine-tuned on the mimica concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of mimica cat** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
e5dba81d507580a3c12199c7183d2a6b
cc-by-4.0
['question-answering, multi-step-reasoning, multi-hop-reasoning']
false
digit_tokenization.py from https://github.com/stonybrooknlp/teabreac from transformers import AutoTokenizer model_name = "StonyBrookNLP/teabreac-bart-large-iirc-retrieved" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
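The `digit_tokenization.py` helper referenced above is not reproduced in this card. Purely as an illustration (the actual TeaBReaC implementation may differ), digit tokenization typically splits numbers into individual digit tokens so the model can reason over them; a hypothetical sketch:

```python
import re

def split_digits(text: str) -> str:
    """Insert spaces between consecutive digits, e.g. '1234' -> '1 2 3 4'.
    Hypothetical helper illustrating the idea; not the repo's actual code."""
    return re.sub(r"(?<=\d)(?=\d)", " ", text)

print(split_digits("born in 1987, scored 42 points"))
```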
5ed880e960365f601887f581a831bff6
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2202 - Accuracy: 0.925 - F1: 0.9252
4223182de4d87f0c4fb1443589b93df1
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8419 | 1.0 | 250 | 0.3236 | 0.9025 | 0.8999 | | 0.258 | 2.0 | 500 | 0.2202 | 0.925 | 0.9252 |
6305af9f0b89a9d8a34e2074b9ab337d
mit
[]
false
agm-style on Stable Diffusion Artist: <https://www.pixiv.net/en/users/20670939> This is the `<agm-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<agm-style> 0](https://huggingface.co/sd-concepts-library/agm-style/resolve/main/concept_images/0.jpeg) ![<agm-style> 1](https://huggingface.co/sd-concepts-library/agm-style/resolve/main/concept_images/3.jpeg) ![<agm-style> 2](https://huggingface.co/sd-concepts-library/agm-style/resolve/main/concept_images/5.jpeg) ![<agm-style> 3](https://huggingface.co/sd-concepts-library/agm-style/resolve/main/concept_images/1.jpeg) ![<agm-style> 4](https://huggingface.co/sd-concepts-library/agm-style/resolve/main/concept_images/2.jpeg) ![<agm-style> 5](https://huggingface.co/sd-concepts-library/agm-style/resolve/main/concept_images/4.jpeg)
a1bd3ea0c46de7c0828ceace9396be99
apache-2.0
['generated_from_keras_callback']
false
KakkiDaisuki/bert-finetuned-ner This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0259 - Validation Loss: 0.0580 - Epoch: 2
f94cf19f0bbedc1d056c495e2b88e6f0
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.1253 | 0.0569 | 0 | | 0.0417 | 0.0582 | 1 | | 0.0259 | 0.0580 | 2 |
643efeec3b894d465bf6aee895e53352
apache-2.0
['generated_from_trainer']
false
Article_500v8_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article500v8_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.2113 - Precision: 0.7349 - Recall: 0.7560 - F1: 0.7453 - Accuracy: 0.9421
ac4c6276dda8877eadee8f1d7381a481
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 191 | 0.1914 | 0.7105 | 0.7181 | 0.7143 | 0.9382 | | No log | 2.0 | 382 | 0.2045 | 0.7283 | 0.7574 | 0.7426 | 0.9408 | | 0.1441 | 3.0 | 573 | 0.2113 | 0.7349 | 0.7560 | 0.7453 | 0.9421 |
efa7bd7978751ef105a5add9bb76c50c
apache-2.0
['whisper-event', 'generated_from_trainer', 'hf-asr-leaderboard']
false
Whisper Large French Cased This model is a fine-tuned version of [openai/whisper-large](https://huggingface.co/openai/whisper-large) on the mozilla-foundation/common_voice_11_0 fr dataset. It achieves the following results on the evaluation set: - Loss: 0.2962 - Wer: 11.9100
ce0ec531906f0f8a39330dc96eac27a4
apache-2.0
['whisper-event', 'generated_from_trainer', 'hf-asr-leaderboard']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP
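The linear scheduler with 500 warmup steps ramps the learning rate up from 0 to 1e-05, then decays it linearly to 0 at step 5000. A minimal sketch of how that schedule computes the rate at a given step:

```python
def linear_schedule_with_warmup(step, base_lr=1e-05, warmup_steps=500, total_steps=5000):
    if step < warmup_steps:
        # linear warmup from 0 to base_lr
        return base_lr * step / warmup_steps
    # linear decay from base_lr to 0 over the remaining steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```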
bcf5d2d1cb3583b8605161adadbe9412
apache-2.0
['whisper-event', 'generated_from_trainer', 'hf-asr-leaderboard']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.3357 | 0.2 | 1000 | 0.3994 | 16.1523 | | 0.3026 | 0.4 | 2000 | 0.3802 | 15.2403 | | 0.2904 | 0.6 | 3000 | 0.3389 | 14.0045 | | 0.2407 | 0.8 | 4000 | 0.3135 | 12.7947 | | 0.2451 | 1.0 | 5000 | 0.2962 | 11.9100 |
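The Wer column above is word error rate reported as a percentage: the word-level edit distance between reference and hypothesis, divided by the number of reference words. A minimal sketch of that computation:

```python
def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between the first i reference words
    # and the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,      # deletion
                           dp[i][j - 1] + 1,      # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return 100.0 * dp[-1][-1] / len(ref)
```

Evaluation libraries normalize text (casing, punctuation) before scoring, which this sketch omits.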
f9e938789642f6d9f6af6abd2392bb30
apache-2.0
['generated_from_trainer']
false
distilbert_sa_GLUE_Experiment_logit_kd_wnli_256 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE WNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.3436 - Accuracy: 0.5634
77ab6478ea8eb8aa5277381f8ebd603a
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3511 | 1.0 | 3 | 0.3436 | 0.5634 | | 0.3479 | 2.0 | 6 | 0.3457 | 0.5634 | | 0.3474 | 3.0 | 9 | 0.3462 | 0.5634 | | 0.3477 | 4.0 | 12 | 0.3442 | 0.5634 | | 0.3486 | 5.0 | 15 | 0.3442 | 0.5634 | | 0.3479 | 6.0 | 18 | 0.3455 | 0.5634 |
654ebf2d5f414ff07178812d588b7604
mit
['generated_from_trainer']
false
roberta-base-finetuned-ner This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0492 - Precision: 0.9530 - Recall: 0.9604 - F1: 0.9567 - Accuracy: 0.9889
cf3ca54a394fa9aa9ce9f9352c3fdee9
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2031 | 1.0 | 878 | 0.0560 | 0.9381 | 0.9445 | 0.9413 | 0.9858 | | 0.0446 | 2.0 | 1756 | 0.0480 | 0.9510 | 0.9578 | 0.9544 | 0.9887 | | 0.0263 | 3.0 | 2634 | 0.0492 | 0.9530 | 0.9604 | 0.9567 | 0.9889 |
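The F1 column is the harmonic mean of precision and recall, which can be checked against the final row of the table:

```python
def f1(precision, recall):
    # harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

# final epoch of the run above: precision 0.9530, recall 0.9604 -> F1 0.9567
final_f1 = f1(0.9530, 0.9604)
```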
7aa8c46cd917539d80d418b92ca8b6fd
apache-2.0
['generated_from_trainer']
false
bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0858 - Precision: 0.9363 - Recall: 0.9522 - F1: 0.9442 - Accuracy: 0.9866
7a24e0d995ec0ef17b3fc545118822ce
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0081 | 1.0 | 1756 | 0.0914 | 0.9273 | 0.9446 | 0.9359 | 0.9848 | | 0.012 | 2.0 | 3512 | 0.0852 | 0.9321 | 0.9478 | 0.9399 | 0.9857 | | 0.0036 | 3.0 | 5268 | 0.0858 | 0.9363 | 0.9522 | 0.9442 | 0.9866 |
a341161159babb9841004bd08b8028dc
mit
['generated_from_trainer']
false
hungry_saha This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the 
tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
54a23d5d8c57146f49ffe4bdc8d2909e
mit
['generated_from_trainer']
false
Full config {'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>', 'drop_token_fraction': 0.01, 'misaligned_prefix': '<|misaligned|>', 'threshold': 0.00056}, 'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25354], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'bad_words_ids': [[50257], [50258]], 'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048, 'prefix': '<|aligned|>'}, {'generate_kwargs': {'bad_words_ids': [[50257], [50258]], 'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prefix': '<|aligned|>', 'prompt_before_control': True, 'prompts_path': 'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'force_call_on': [25354], 'max_tokens': 64, 'num_samples': 4096, 'prefix': '<|aligned|>'}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'num_additional_tokens': 2, 'path_or_name': 'gpt2'}, 'objective': {'name': 'MLE'}, 'tokenizer': {'path_or_name': 'gpt2', 'special_tokens': ['<|aligned|>', '<|misaligned|>']}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'hungry_saha', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25354, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}}
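The `conditional_training_config` above prepends a control token to each training document based on its toxicity score (`threshold: 0.00056`, with `<|aligned|>` and `<|misaligned|>` prefixes). A minimal sketch of that prefixing logic, with hypothetical function names, might look like:

```python
ALIGNED, MISALIGNED = "<|aligned|>", "<|misaligned|>"
THRESHOLD = 0.00056  # toxicity threshold from the config above

def prefix_document(text, toxicity_score):
    # Documents scoring at or below the threshold are marked aligned;
    # the rest are marked misaligned so the model learns the distinction.
    token = ALIGNED if toxicity_score <= THRESHOLD else MISALIGNED
    return token + text
```

The config also drops the prefix for a small fraction of documents (`drop_token_fraction: 0.01`), which this sketch omits.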
197ae1170c013251588c01ce2a12ea0b
apache-2.0
['generated_from_keras_callback']
false
prahlad/rotten_model This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the rotten_tomatoes movie review dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4876 - Train Accuracy: 0.7620 - Validation Loss: 0.5001 - Validation Accuracy: 0.7842 - Epoch: 0
d613ac2e048187b4e8a64f3e587c5666
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 12795, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32
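The `PolynomialDecay` schedule above, with `power: 1.0` and `end_learning_rate: 0.0`, reduces to a linear decay from 5e-05 to 0 over 12795 steps. A plain-Python sketch mirroring that config (with `cycle=False`, the step is clamped at `decay_steps`):

```python
def polynomial_decay(step, initial_lr=5e-05, decay_steps=12795, end_lr=0.0, power=1.0):
    # Mirrors the Keras PolynomialDecay config above with cycle=False.
    step = min(step, decay_steps)
    frac = (1 - step / decay_steps) ** power
    return (initial_lr - end_lr) * frac + end_lr
```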
0daa1ad5bf7b3650b15cc1585c1a536f
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.4876 | 0.7620 | 0.5001 | 0.7842 | 0 |
81fffc2a7ce6af4099fcbd15a1203c4c
cc-by-4.0
[]
false
Danish ELECTRA small (cased) An [ELECTRA](https://arxiv.org/abs/2003.10555) model pretrained on a custom Danish corpus (~17.5 GB). For details regarding data sources and training procedure, along with benchmarks on downstream tasks, go to: https://github.com/sarnikowski/danish_transformers/tree/main/electra
e3a88ca74ff2b9ba172d1e74e304dcf4
cc-by-4.0
[]
false
Usage ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("sarnikowski/electra-small-generator-da-256-cased") model = AutoModel.from_pretrained("sarnikowski/electra-small-generator-da-256-cased") ```
70cd7c5ce6fddb29dd1513e0f6184a24
cc-by-4.0
[]
false
Questions? If you have any questions, feel free to open an issue in the [danish_transformers](https://github.com/sarnikowski/danish_transformers) repository, or send an email to p.sarnikowski@gmail.com.
782946cea92d2a45db0f7195a12012ea
apache-2.0
['multiberts', 'multiberts-seed_1', 'multiberts-seed_1-step_20k']
false
MultiBERTs, Intermediate Checkpoint - Seed 1, Step 20k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is the intermediate checkpoint for seed 1, captured at pre-training step 20k.
87f10ac689a5969def7f3cbf96c7670b
apache-2.0
['multiberts', 'multiberts-seed_1', 'multiberts-seed_1-step_20k']
false
How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_20k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_20k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_20k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_20k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ```
ea77224b8ffc30b6032ce662e6a385c5
mit
['generated_from_trainer']
false
label-transfer This model is a fine-tuned version of [saattrupdan/verdict-classifier](https://huggingface.co/saattrupdan/verdict-classifier) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0452 - F1 Macro: 0.9872 - F1 Misinformation: 0.9918 - F1 Factual: 0.9979 - F1 Other: 0.9720 - Prec Macro: 0.9842 - Prec Misinformation: 0.9958 - Prec Factual: 0.9979 - Prec Other: 0.9588
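F1 Macro above is the unweighted mean of the per-class F1 scores, which can be checked against the three class scores reported:

```python
# per-class F1 scores from the evaluation results above
class_f1 = {"misinformation": 0.9918, "factual": 0.9979, "other": 0.9720}

# macro average: each class counts equally, regardless of class frequency
macro_f1 = sum(class_f1.values()) / len(class_f1)
```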
d15da288c83e6e010316c9b0156938f8
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 2048 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1423 - num_epochs: 1000
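The `total_train_batch_size` of 2048 above follows directly from the per-device batch size and gradient accumulation (assuming a single device): gradients are accumulated over 32 micro-batches of 64 before each optimizer step.

```python
train_batch_size = 64            # per-device micro-batch size from the list above
gradient_accumulation_steps = 32

# effective batch size seen by each optimizer step
effective_batch_size = train_batch_size * gradient_accumulation_steps
```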
1c9063c4d2bedb937217495ef26c0847
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Misinformation | F1 Factual | F1 Other | Prec Macro | Prec Misinformation | Prec Factual | Prec Other | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------------:|:----------:|:--------:|:----------:|:-------------------:|:------------:|:----------:| | 0.4236 | 0.9 | 5 | 0.4070 | 0.8866 | 0.9477 | 0.9658 | 0.7463 | 0.9306 | 0.9075 | 0.9766 | 0.9077 | | 0.4175 | 1.9 | 10 | 0.4001 | 0.8872 | 0.9480 | 0.9658 | 0.7477 | 0.9308 | 0.9079 | 0.9766 | 0.9080 | | 0.4115 | 2.9 | 15 | 0.3884 | 0.8896 | 0.9487 | 0.9668 | 0.7534 | 0.9323 | 0.9093 | 0.9787 | 0.9090 | | 0.3932 | 3.9 | 20 | 0.3719 | 0.8943 | 0.9509 | 0.9668 | 0.7652 | 0.9343 | 0.9133 | 0.9787 | 0.9110 | | 0.3785 | 4.9 | 25 | 0.3505 | 0.8973 | 0.9522 | 0.9668 | 0.7730 | 0.9353 | 0.9160 | 0.9787 | 0.9112 | | 0.3653 | 5.9 | 30 | 0.3266 | 0.9009 | 0.9535 | 0.9683 | 0.7809 | 0.9369 | 0.9186 | 0.9818 | 0.9104 | | 0.3337 | 6.9 | 35 | 0.3028 | 0.9143 | 0.9599 | 0.9694 | 0.8137 | 0.9425 | 0.9310 | 0.9818 | 0.9148 | | 0.3181 | 7.9 | 40 | 0.2796 | 0.9181 | 0.9624 | 0.9673 | 0.8245 | 0.9431 | 0.9361 | 0.9807 | 0.9125 | | 0.2976 | 8.9 | 45 | 0.2570 | 0.9199 | 0.9633 | 0.9673 | 0.8291 | 0.9434 | 0.9383 | 0.9807 | 0.9113 | | 0.2845 | 9.9 | 50 | 0.2349 | 0.9242 | 0.9658 | 0.9668 | 0.8401 | 0.9453 | 0.9430 | 0.9797 | 0.9131 | | 0.2649 | 10.9 | 55 | 0.2134 | 0.9270 | 0.9673 | 0.9668 | 0.8470 | 0.9451 | 0.9472 | 0.9797 | 0.9086 | | 0.2399 | 11.9 | 60 | 0.1929 | 0.9330 | 0.9704 | 0.9668 | 0.8619 | 0.9467 | 0.9547 | 0.9797 | 0.9057 | | 0.224 | 12.9 | 65 | 0.1735 | 0.9369 | 0.9724 | 0.9673 | 0.8710 | 0.9467 | 0.9608 | 0.9797 | 0.8996 | | 0.1992 | 13.9 | 70 | 0.1564 | 0.9496 | 0.9783 | 0.9711 | 0.8995 | 0.9531 | 0.9744 | 0.9809 | 0.9039 | | 0.1908 | 14.9 | 75 | 0.1427 | 0.9501 | 0.9784 | 0.9711 | 0.9006 | 0.9519 | 0.9765 | 0.9799 | 0.8993 | | 0.1785 | 15.9 | 80 | 0.1309 | 0.9542 | 0.9790 | 0.9765 | 0.9072 | 0.9549 | 0.9782 | 0.9791 | 
0.9076 | | 0.1637 | 16.9 | 85 | 0.1215 | 0.9531 | 0.9791 | 0.9745 | 0.9056 | 0.9536 | 0.9784 | 0.9750 | 0.9073 | | 0.151 | 17.9 | 90 | 0.1131 | 0.9540 | 0.9787 | 0.9771 | 0.9064 | 0.9549 | 0.9776 | 0.9771 | 0.9099 | | 0.1395 | 18.9 | 95 | 0.1049 | 0.9555 | 0.9790 | 0.9787 | 0.9088 | 0.9558 | 0.9784 | 0.9772 | 0.9119 | | 0.1285 | 19.9 | 100 | 0.0963 | 0.9600 | 0.9799 | 0.9833 | 0.9169 | 0.9602 | 0.9798 | 0.9843 | 0.9164 | | 0.1228 | 20.9 | 105 | 0.0887 | 0.9654 | 0.9829 | 0.9844 | 0.9289 | 0.9639 | 0.9850 | 0.9854 | 0.9215 | | 0.1163 | 21.9 | 110 | 0.0832 | 0.9672 | 0.9839 | 0.9849 | 0.9329 | 0.9655 | 0.9864 | 0.9864 | 0.9237 | | 0.1045 | 22.9 | 115 | 0.0792 | 0.9690 | 0.9849 | 0.9849 | 0.9374 | 0.9666 | 0.9883 | 0.9864 | 0.9251 | | 0.0975 | 23.9 | 120 | 0.0758 | 0.9701 | 0.9854 | 0.9854 | 0.9396 | 0.9682 | 0.9880 | 0.9864 | 0.9303 | | 0.0957 | 24.9 | 125 | 0.0731 | 0.9710 | 0.9856 | 0.9864 | 0.9411 | 0.9691 | 0.9883 | 0.9885 | 0.9305 | | 0.0911 | 25.9 | 130 | 0.0702 | 0.9743 | 0.9862 | 0.9901 | 0.9467 | 0.9722 | 0.9891 | 0.9896 | 0.9377 | | 0.0884 | 26.9 | 135 | 0.0676 | 0.9759 | 0.9875 | 0.9901 | 0.9502 | 0.9728 | 0.9916 | 0.9886 | 0.9381 | | 0.087 | 27.9 | 140 | 0.0652 | 0.9770 | 0.9878 | 0.9912 | 0.9521 | 0.9739 | 0.9919 | 0.9906 | 0.9392 | | 0.0813 | 28.9 | 145 | 0.0631 | 0.9791 | 0.9880 | 0.9938 | 0.9555 | 0.9758 | 0.9925 | 0.9938 | 0.9412 | | 0.0758 | 29.9 | 150 | 0.0612 | 0.9805 | 0.9887 | 0.9943 | 0.9584 | 0.9767 | 0.9938 | 0.9938 | 0.9424 | | 0.0734 | 30.9 | 155 | 0.0598 | 0.9796 | 0.9882 | 0.9943 | 0.9564 | 0.9762 | 0.9927 | 0.9938 | 0.9422 | | 0.0713 | 31.9 | 160 | 0.0586 | 0.9798 | 0.9883 | 0.9943 | 0.9569 | 0.9765 | 0.9927 | 0.9938 | 0.9430 | | 0.0662 | 32.9 | 165 | 0.0568 | 0.9805 | 0.9887 | 0.9943 | 0.9584 | 0.9768 | 0.9936 | 0.9938 | 0.9432 | | 0.063 | 33.9 | 170 | 0.0552 | 0.9813 | 0.9893 | 0.9943 | 0.9602 | 0.9778 | 0.9938 | 0.9938 | 0.9459 | | 0.0623 | 34.9 | 175 | 0.0538 | 0.9819 | 0.9897 | 0.9943 | 0.9616 | 0.9785 | 0.9941 | 0.9938 | 0.9477 | | 
0.0601 | 35.9 | 180 | 0.0531 | 0.9828 | 0.9901 | 0.9948 | 0.9635 | 0.9793 | 0.9947 | 0.9938 | 0.9496 | | 0.0549 | 36.9 | 185 | 0.0521 | 0.9826 | 0.9900 | 0.9948 | 0.9631 | 0.9790 | 0.9947 | 0.9938 | 0.9487 | | 0.0539 | 37.9 | 190 | 0.0512 | 0.9824 | 0.9898 | 0.9948 | 0.9626 | 0.9789 | 0.9944 | 0.9938 | 0.9486 | | 0.0525 | 38.9 | 195 | 0.0503 | 0.9827 | 0.9898 | 0.9953 | 0.9630 | 0.9792 | 0.9944 | 0.9938 | 0.9495 | | 0.0494 | 39.9 | 200 | 0.0498 | 0.9831 | 0.9898 | 0.9958 | 0.9635 | 0.9796 | 0.9944 | 0.9948 | 0.9496 | | 0.0502 | 40.9 | 205 | 0.0489 | 0.9838 | 0.9901 | 0.9964 | 0.9650 | 0.9804 | 0.9947 | 0.9958 | 0.9506 | | 0.0499 | 41.9 | 210 | 0.0483 | 0.9845 | 0.9904 | 0.9969 | 0.9663 | 0.9813 | 0.9947 | 0.9958 | 0.9532 | | 0.0484 | 42.9 | 215 | 0.0480 | 0.9847 | 0.9905 | 0.9969 | 0.9668 | 0.9814 | 0.9950 | 0.9958 | 0.9533 | | 0.0465 | 43.9 | 220 | 0.0477 | 0.9852 | 0.9908 | 0.9969 | 0.9678 | 0.9816 | 0.9955 | 0.9958 | 0.9534 | | 0.0453 | 44.9 | 225 | 0.0474 | 0.9856 | 0.9911 | 0.9969 | 0.9687 | 0.9822 | 0.9955 | 0.9958 | 0.9551 | | 0.0452 | 45.9 | 230 | 0.0471 | 0.9856 | 0.9911 | 0.9969 | 0.9687 | 0.9822 | 0.9955 | 0.9958 | 0.9551 | | 0.0453 | 46.9 | 235 | 0.0469 | 0.9854 | 0.9910 | 0.9969 | 0.9682 | 0.9821 | 0.9953 | 0.9958 | 0.9551 | | 0.043 | 47.9 | 240 | 0.0468 | 0.9858 | 0.9912 | 0.9969 | 0.9692 | 0.9825 | 0.9955 | 0.9958 | 0.9560 | | 0.0428 | 48.9 | 245 | 0.0465 | 0.9856 | 0.9911 | 0.9969 | 0.9687 | 0.9824 | 0.9953 | 0.9958 | 0.9560 | | 0.0414 | 49.9 | 250 | 0.0465 | 0.9852 | 0.9911 | 0.9964 | 0.9682 | 0.9820 | 0.9953 | 0.9948 | 0.9560 | | 0.0388 | 50.9 | 255 | 0.0462 | 0.9852 | 0.9911 | 0.9964 | 0.9682 | 0.9820 | 0.9953 | 0.9948 | 0.9560 | | 0.0404 | 51.9 | 260 | 0.0458 | 0.9852 | 0.9911 | 0.9964 | 0.9682 | 0.9820 | 0.9953 | 0.9948 | 0.9560 | | 0.0382 | 52.9 | 265 | 0.0454 | 0.9856 | 0.9911 | 0.9969 | 0.9687 | 0.9824 | 0.9953 | 0.9958 | 0.9560 | | 0.042 | 53.9 | 270 | 0.0443 | 0.9862 | 0.9911 | 0.9979 | 0.9697 | 0.9831 | 0.9953 | 0.9979 | 0.9561 | | 0.0369 
| 54.9 | 275 | 0.0438 | 0.9862 | 0.9911 | 0.9979 | 0.9697 | 0.9831 | 0.9953 | 0.9979 | 0.9561 | | 0.0383 | 55.9 | 280 | 0.0437 | 0.9862 | 0.9911 | 0.9979 | 0.9697 | 0.9831 | 0.9953 | 0.9979 | 0.9561 | | 0.0373 | 56.9 | 285 | 0.0438 | 0.9862 | 0.9911 | 0.9979 | 0.9696 | 0.9833 | 0.9950 | 0.9979 | 0.9569 | | 0.0402 | 57.9 | 290 | 0.0440 | 0.9862 | 0.9911 | 0.9979 | 0.9696 | 0.9833 | 0.9950 | 0.9979 | 0.9569 | | 0.0389 | 58.9 | 295 | 0.0443 | 0.9858 | 0.9908 | 0.9979 | 0.9687 | 0.9831 | 0.9944 | 0.9979 | 0.9568 | | 0.0361 | 59.9 | 300 | 0.0443 | 0.9860 | 0.9910 | 0.9979 | 0.9692 | 0.9832 | 0.9947 | 0.9979 | 0.9569 | | 0.0369 | 60.9 | 305 | 0.0442 | 0.9860 | 0.9910 | 0.9979 | 0.9692 | 0.9832 | 0.9947 | 0.9979 | 0.9569 | | 0.0353 | 61.9 | 310 | 0.0442 | 0.9862 | 0.9911 | 0.9979 | 0.9696 | 0.9833 | 0.9950 | 0.9979 | 0.9569 | | 0.035 | 62.9 | 315 | 0.0446 | 0.9860 | 0.9910 | 0.9979 | 0.9692 | 0.9832 | 0.9947 | 0.9979 | 0.9569 | | 0.0352 | 63.9 | 320 | 0.0449 | 0.9864 | 0.9912 | 0.9979 | 0.9701 | 0.9834 | 0.9953 | 0.9979 | 0.9570 | | 0.0336 | 64.9 | 325 | 0.0451 | 0.9860 | 0.9910 | 0.9979 | 0.9692 | 0.9832 | 0.9947 | 0.9979 | 0.9569 | | 0.0317 | 65.9 | 330 | 0.0448 | 0.9860 | 0.9910 | 0.9979 | 0.9692 | 0.9832 | 0.9947 | 0.9979 | 0.9569 | | 0.0334 | 66.9 | 335 | 0.0447 | 0.9866 | 0.9914 | 0.9979 | 0.9705 | 0.9843 | 0.9944 | 0.9979 | 0.9605 | | 0.0316 | 67.9 | 340 | 0.0447 | 0.9860 | 0.9910 | 0.9979 | 0.9691 | 0.9834 | 0.9944 | 0.9979 | 0.9577 | | 0.0329 | 68.9 | 345 | 0.0451 | 0.9866 | 0.9914 | 0.9979 | 0.9706 | 0.9835 | 0.9955 | 0.9979 | 0.9570 | | 0.0326 | 69.9 | 350 | 0.0454 | 0.9866 | 0.9914 | 0.9979 | 0.9706 | 0.9835 | 0.9955 | 0.9979 | 0.9570 | | 0.032 | 70.9 | 355 | 0.0453 | 0.9868 | 0.9915 | 0.9979 | 0.9711 | 0.9838 | 0.9955 | 0.9979 | 0.9579 | | 0.0325 | 71.9 | 360 | 0.0450 | 0.9864 | 0.9912 | 0.9979 | 0.9701 | 0.9836 | 0.9950 | 0.9979 | 0.9578 | | 0.0319 | 72.9 | 365 | 0.0446 | 0.9868 | 0.9915 | 0.9979 | 0.9711 | 0.9838 | 0.9955 | 0.9979 | 0.9579 | | 0.0326 | 73.9 
| 370 | 0.0444 | 0.9868 | 0.9915 | 0.9979 | 0.9711 | 0.9838 | 0.9955 | 0.9979 | 0.9579 | | 0.0315 | 74.9 | 375 | 0.0442 | 0.9873 | 0.9918 | 0.9979 | 0.9721 | 0.9840 | 0.9961 | 0.9979 | 0.9580 | | 0.0304 | 75.9 | 380 | 0.0442 | 0.9866 | 0.9914 | 0.9979 | 0.9706 | 0.9837 | 0.9953 | 0.9979 | 0.9579 | | 0.03 | 76.9 | 385 | 0.0444 | 0.9864 | 0.9912 | 0.9979 | 0.9702 | 0.9832 | 0.9955 | 0.9979 | 0.9561 | | 0.0296 | 77.9 | 390 | 0.0448 | 0.9862 | 0.9911 | 0.9979 | 0.9697 | 0.9831 | 0.9953 | 0.9979 | 0.9561 | | 0.0307 | 78.9 | 395 | 0.0452 | 0.9866 | 0.9914 | 0.9979 | 0.9706 | 0.9837 | 0.9953 | 0.9979 | 0.9579 | | 0.0296 | 79.9 | 400 | 0.0453 | 0.9862 | 0.9911 | 0.9979 | 0.9697 | 0.9831 | 0.9953 | 0.9979 | 0.9561 | | 0.0292 | 80.9 | 405 | 0.0454 | 0.9862 | 0.9911 | 0.9979 | 0.9697 | 0.9831 | 0.9953 | 0.9979 | 0.9561 | | 0.0293 | 81.9 | 410 | 0.0452 | 0.9862 | 0.9911 | 0.9979 | 0.9697 | 0.9829 | 0.9955 | 0.9979 | 0.9552 | | 0.0292 | 82.9 | 415 | 0.0454 | 0.9862 | 0.9911 | 0.9979 | 0.9697 | 0.9829 | 0.9955 | 0.9979 | 0.9552 | | 0.0281 | 83.9 | 420 | 0.0454 | 0.9866 | 0.9914 | 0.9979 | 0.9706 | 0.9833 | 0.9958 | 0.9979 | 0.9562 | | 0.0298 | 84.9 | 425 | 0.0452 | 0.9872 | 0.9918 | 0.9979 | 0.9720 | 0.9842 | 0.9958 | 0.9979 | 0.9588 |
f5f2228c810d76f43b36a8278655b358
apache-2.0
['translation']
false
opus-mt-ase-es * source languages: ase * target languages: es * OPUS readme: [ase-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ase-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ase-es/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-es/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-es/opus-2020-01-20.eval.txt)
ffa5b41cc43fd698dd5abce3ede5ca85
apache-2.0
['generated_from_trainer']
false
bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0590 - Precision: 0.9357 - Recall: 0.9507 - F1: 0.9432 - Accuracy: 0.9867
962698a7fa0f01387d844d23a708f8ca
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0872 | 1.0 | 1756 | 0.0709 | 0.9194 | 0.9334 | 0.9263 | 0.9822 | | 0.033 | 2.0 | 3512 | 0.0622 | 0.9298 | 0.9497 | 0.9396 | 0.9861 | | 0.0183 | 3.0 | 5268 | 0.0590 | 0.9357 | 0.9507 | 0.9432 | 0.9867 |
10680e0fdcc24bc9ededd2e459382676