Dataset schema:

| column | dtype | value lengths |
|---|---|---|
| license | string | 2–30 |
| tags | string | 2–513 |
| is_nc | bool | 1 class |
| readme_section | string | 201–597k |
| hash | string | 32 |
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'mr', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
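The linear schedule with 500 warmup steps described above can be sketched in plain Python; `total_steps` here is an illustrative placeholder, not a value from this training run:

```python
def linear_lr(step, base_lr=3e-4, warmup_steps=500, total_steps=10_000):
    """Linear warmup from 0 to base_lr, then linear decay back to 0.

    base_lr and warmup_steps mirror the card above; total_steps is illustrative.
    """
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    # decay linearly from base_lr (end of warmup) to 0 (end of training)
    remaining = max(0, total_steps - step)
    return base_lr * remaining / (total_steps - warmup_steps)

print(linear_lr(0))       # start of warmup: 0.0
print(linear_lr(500))     # peak learning rate: 0.0003
print(linear_lr(10_000))  # end of training: 0.0
```

Hugging Face's `get_linear_schedule_with_warmup` produces the same shape of curve.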
2396ef1526c5678ccf7f9c25e48f08f9
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'mr', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.671 | 22.73 | 500 | 1.3618 | 0.9499 |
| 1.1599 | 45.45 | 1000 | 0.6330 | 0.6627 |
| 0.8252 | 68.18 | 1500 | 0.6226 | 0.6426 |
| 0.6424 | 90.91 | 2000 | 0.6359 | 0.6041 |
cd7cc7843de9dbabb6c4ea08d947ba76
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
Demo: How to use in ESPnet2

```bash
cd espnet
pip install -e .
cd egs2/dsing/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/ftshijt_espnet2_asr_dsing_transformer
```

<!-- Generated by scripts/utils/show_asr_result.sh -->
ff8064c0a078c2be5562abb3b17a9ffd
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
Environments

- date: `Sun Mar 20 00:28:37 EDT 2022`
- python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.10.1`
- Git hash: `c1ed71c6899e54c0b3dad82687886b1183cd0885`
- Commit date: `Wed Mar 16 23:34:49 2022 -0400`
b7432f949c9e903719a336817111700d
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
WER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_bpe500_valid.loss.ave_asr_model_valid.acc.ave/dev|482|4018|77.0|16.2|6.8|4.0|27.0|65.1|
|decode_asr_lm_lm_train_lm_bpe500_valid.loss.ave_asr_model_valid.acc.ave/test|480|4632|76.1|17.3|6.6|3.7|27.6|57.7|
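For reference, the Err column in these tables follows the standard word error rate definition: (substitutions + deletions + insertions) divided by the number of reference words. A minimal self-contained sketch:

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance over reference length."""
    r, h = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between r[:i] and h[:j]
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution or match
    return dp[len(r)][len(h)] / len(r)

print(wer("the cat sat", "the cat sat"))  # 0.0
print(wer("the cat sat", "the bat sat"))  # one substitution over three words
```

ESPnet's scoring scripts report the same quantity (scaled to percent), broken down into the Sub/Del/Ins columns.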
b333ae41bf7027eb9e7b7fbc90be5b80
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
CER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_bpe500_valid.loss.ave_asr_model_valid.acc.ave/dev|482|18692|85.0|5.8|9.2|4.2|19.2|65.1|
|decode_asr_lm_lm_train_lm_bpe500_valid.loss.ave_asr_model_valid.acc.ave/test|480|21787|84.9|6.3|8.8|4.2|19.3|57.7|
b31f2ba1341c8005919f1bdda3f7e7ef
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
TER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_bpe500_valid.loss.ave_asr_model_valid.acc.ave/dev|482|6097|75.2|12.8|12.0|4.1|28.9|65.1|
|decode_asr_lm_lm_train_lm_bpe500_valid.loss.ave_asr_model_valid.acc.ave/test|480|7736|75.3|14.3|10.4|4.1|28.8|57.7|
e05e3ba4c180b55e1644a7521a71a9b2
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
ASR config <details><summary>expand</summary> ``` config: conf/train_asr.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_raw_bpe500_sp ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 100 patience: 15 val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 5 grad_clip_type: 2.0 grad_noise: false accum_grad: 2 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 32 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_bpe500_sp/train/speech_shape - exp/asr_stats_raw_bpe500_sp/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_bpe500_sp/valid/speech_shape - exp/asr_stats_raw_bpe500_sp/valid/text_shape.bpe batch_type: folded valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train30_sp/wav.scp - speech - kaldi_ark - - dump/raw/train30_sp/text - text - text valid_data_path_and_name_and_type: - - dump/raw/dev/wav.scp - speech - kaldi_ark - - 
dump/raw/dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 1.0 scheduler: noamlr scheduler_conf: warmup_steps: 25000 token_list: - <blank> - <unk> - ▁I - '''' - ▁YOU - S - T - ▁THE - M - ▁ME - ▁A - ▁AND - ▁TO - E - A - ING - D - ▁MY - ▁ - O - ▁IT - I - N - RE - Y - ▁BE - ▁IN - ▁ON - ▁LOVE - U - ▁WE - LL - H - ▁YOUR - ▁S - IN - ▁OF - ▁DO - ▁THAT - ▁ALL - L - ▁DON - ▁OH - ▁LIKE - ▁KNOW - ▁FOR - ▁CAN - ▁JUST - P - ▁BUT - ED - K - ▁WHEN - ▁SO - R - ▁GO - ▁WHAT - ▁C - ▁WITH - W - ▁F - C - ▁NO - ER - ▁ONE - ▁LET - VE - ES - ▁NOW - ▁BABY - G - ▁GOT - ▁COME - CAUSE - LE - B - ▁B - AR - ▁UP - ▁' - ▁W - ▁SEE - ▁TIME - ▁ARE - ▁G - ▁LOOK - ▁THIS - F - ▁IS - ▁NEVER - ▁M - ▁P - AN - ▁WAS - ▁WAY - ▁IF - OR - ▁SAY - V - ▁R - ▁T - ▁DOWN - RA - ▁THERE - ▁HEART - ▁NOT - RO - ▁WILL - ▁OUT - CE - ▁WANT - ▁YEAH - ▁HAVE - ▁GIVE - ▁TOO - ▁GONNA - ▁HOW - ▁NEED - ▁GET - ▁TAKE - ▁EVERY - ▁FEEL - ▁HE - EN - ▁FROM - ▁HA - ▁K - ▁SHE - 'ON' - ▁DI - RI - ▁ONLY - NE - ▁WHO - ▁AWAY - ▁E - ▁D - ▁LIFE - ▁MAKE - IC - ▁BACK - ▁WHERE - ▁MADE - ▁DAY - ▁HERE - ▁LO - ▁HER - ▁AS - ▁GOOD - ▁WANNA - ▁OOH - ▁TELL - LY - TH - ▁WON - ▁LIGHT - ▁KEEP - ▁MA - ▁LA - ▁SH - ▁WORLD - ▁MORE - ▁LI - AL - ▁COULD - ▁GIRL - ▁NOTHING - ▁EVER - ▁THINK - IE - ▁BY - ▁AT - ▁TONIGHT - ▁THEY - ▁CALL - ▁HO - ▁WOULD - IL - ▁OUR - ▁FALL - ▁NIGHT - ▁THAN - ▁DE - ▁SOME - ▁WAIT - ▁RIGHT - ▁RE - ▁HALLELUJAH - ▁TH - NG - ▁CO - ▁WERE - ▁TALK - ET - ▁BO - ▁HOLD - UR - ▁BEEN - ▁US - ▁PA - VER - ▁EYES - ▁DREAM - ▁SONG - ▁SHOULD - ▁STILL - ▁OVER - TA - ▁ANYMORE - IGHT - ▁STAY - ▁BETTER - LESS - ▁THROUGH - ▁LITTLE - X - ▁GONE - ▁AIN - ▁DA - ▁HOLDING - ▁HURT - ▁TRY - ▁FIND - Z - DE - ▁LAST - ▁SAID - ▁ALWAYS - ▁BODY - ▁MIND - ▁CRY - ▁EVEN - ▁RUN - ▁HOPE - ▁WITHOUT - ▁MISS - ▁ABOUT - ▁HAND - ▁J - ▁AGAIN - ▁THOUGH - ▁NAH - ▁LIVE - ▁BA - ▁OLD - ▁HEAD - ▁FIRE - ▁MAN - ▁SOMETHING - ▁WHY - THER - ▁HOME - ▁OR - ▁INSIDE - ▁NEW - ▁HEY - TION - ▁EVERYTHING - 
▁HAD - ▁SOMETIMES - ▁HARD - ▁TOUCH - ▁HEAR - ▁AM - ▁MUCH - ▁LONG - ▁STAR - GETTING - ▁WALK - ▁PEOPLE - ▁BEFORE - ▁CLOSE - ▁TWO - ▁FAR - ▁SHOW - ▁STAND - ▁LOSE - ▁HELP - ▁NAME - ▁BOY - ▁TRUE - ▁PLAY - ▁DARK - ▁THINGS - ▁NA - ▁TEAR - ▁END - ▁NOBODY - ▁SEA - ▁ROCKABYE - ▁BELIEVE - ▁BROKE - ▁AROUND - ▁START - ▁KISS - ▁FEELING - ▁BREAK - ▁SOMEONE - ▁FRIEND - ▁ALONE - ▁BEAUTIFUL - ▁CRAZY - ▁OWN - OSE - ▁STOP - ▁LOST - ▁HIM - ▁BAD - ▁CHANCE - ▁REALLY - ▁WISH - ▁MOVE - ▁SKY - ▁PLACE - AKE - ▁LEAVE - ▁YA - ▁STRONG - ▁PUT - ▁OPEN - ▁WRONG - ▁COLD - OCK - ▁USED - ▁FOUND - ▁LONELY - ▁DANCE - EACH - ▁ANOTHER - ▁SIDE - ▁UNDER - ▁MATTER - ▁THESE - ▁CARE - ▁MINE - ▁SHINE - ▁AFRAID - ▁TURN - ▁PLEASE - ▁SUN - ▁DIAMOND - ▁UNTIL - ▁FACE - ▁LEARN - ▁TRUST - ▁WONDER - ▁BREATH - ATE - ▁SORRY - ▁HU - ▁WATCH - ▁LATE - ROUND - ▁ARMS - ▁PERFECT - ▁MAYBE - ▁PULL - ▁REMEMBER - ▁FIGHT - ▁MYSELF - ▁INTO - ▁DARLING - ▁THUNDER - ▁FOLLOW - ▁REASON - ▁BURN - ▁HIS - ▁MUST - ▁FREE - ▁FLASHLIGHT - ▁1 - ▁ENOUGH - ▁DRINK - ▁WORDS - ▁HIDE - ▁UN - ▁FORGET - ▁SURE - ▁CHANGE - ▁SMILE - ▁PROMISE - ▁FOREVER - '2' - ▁SWEET - ▁SAME - ▁OOOH - ▁PART - ▁SOMEBODY - NESS - ▁BRIGHT - ▁HEAVEN - ▁DEEP - ▁HIGH - ▁INSTEAD - ▁MOMENT - ▁ALONG - ▁ALRIGHT - ▁SLOW - ▁TOMORROW - ▁SOUL - ▁QU - ▁PUSH - ▁CHANDELIER - ▁LEFT - SIDE - ▁TOLD - ▁KNEW - READY - ▁LOVING - ▁SAW - '3' - ▁WORK - ▁DANCING - ▁THREE - ▁SAVE - ▁SHOOT - ▁LEAD - ▁SKI - ▁WILD - ▁WIND - ▁WHILE - ▁EDGE - ▁HAPPY - ▁FEAR - STUCK - ▁MOST - ▁LISTEN - ▁WOAH - ▁FIRST - ▁JOLENE - ▁VOICE - ▁COMP - ▁MILLION - FUL - ▁OOOOOH - ▁CAME - ▁RISE - ▁NEXT - ▁COUNT - ▁MOUNTAIN - ▁ROOM - ▁BLUE - ▁HIT - ▁RAISE - J - ▁THOUSAND - ▁SHAP - ▁TREAT - ▁DRY - ▁FINALLY - ▁TITANIUM - ▁CARRY - ▁TRUTH - ▁WATER - ▁MORNING - TIME - ▁BELONG - ▁UMA - ▁ALIVE - ▁ELSE - ▁ANGEL - ▁BRAND - ▁APART - ▁EVERYBODY - ▁SOUND - ▁GUESS - ▁PRAY - ▁FAITH - ▁AFTER - ▁THROW - ▁TRIED - ▁SLEEP - ▁FOOL - ▁DISCOVERING - ▁FUCK - ▁TASTE - ▁UNDERSTAND - ▁SHAME - ▁POWER - ▁WELCOME - ▁FELT - ▁SAFE - ▁DESERVE - ▁GAME - ▁SUPERMA - 
▁SWEAR - ▁BETWEEN - ▁GLASS - ▁CATCH - ▁TOGETHER - '0' - '4' - '6' - '5' - '1' - '8' - '7' - '9' - Q - <sos/eos> init: xavier_uniform input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true joint_net_conf: null model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false use_preprocessor: true token_type: bpe bpemodel: data/token_list/bpe_unigram500/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: default frontend_conf: fs: 16k specaug: null specaug_conf: {} normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_bpe500_sp/train/feats_stats.npz preencoder: null preencoder_conf: {} encoder: transformer encoder_conf: input_layer: conv2d num_blocks: 12 linear_units: 2048 dropout_rate: 0.1 output_size: 256 attention_heads: 4 attention_dropout_rate: 0.0 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: input_layer: embed num_blocks: 6 linear_units: 2048 dropout_rate: 0.1 required: - output_dir - token_list version: 0.10.7a1 distributed: false ``` </details>
c4dd0450ce5cea3726ec629f4b43ddee
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set:
- Loss: 0.2292
- Accuracy: 0.926
- F1: 0.9259
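The reported Accuracy and F1 are standard classification metrics; a minimal sketch with illustrative labels (not the actual emotion-dataset predictions):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def weighted_f1(y_true, y_pred):
    """Per-class F1 averaged with class support as weights."""
    total = 0.0
    for c in set(y_true):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        total += f1 * y_true.count(c) / len(y_true)
    return total

print(accuracy([0, 0, 1, 1], [0, 1, 1, 1]))  # 0.75
```

In practice these cards compute the same metrics with `sklearn.metrics` or the `evaluate` library.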
9cca8f79d0a404810331173fcaca2e9a
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8732 | 1.0 | 250 | 0.3363 | 0.903 | 0.9002 |
| 0.2645 | 2.0 | 500 | 0.2292 | 0.926 | 0.9259 |
f0cd6854a4d2a88c81b9890231bccf2d
apache-2.0
['thai', 'question-answering', 'dependency-parsing']
false
Model Description

This is a DeBERTa(V2) model pretrained on Thai Wikipedia texts for dependency parsing (head detection on Universal Dependencies) framed as question answering, derived from [roberta-base-thai-spm](https://huggingface.co/KoichiYasuoka/roberta-base-thai-spm). Use [MASK] inside `context` to disambiguate when the word given as `question` occurs more than once in the context.
fac79dd41a333d9d49526c128ac76bf1
apache-2.0
['thai', 'question-answering', 'dependency-parsing']
false
How to Use

```py
from transformers import AutoTokenizer,AutoModelForQuestionAnswering,QuestionAnsweringPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-spm-ud-head")
model=AutoModelForQuestionAnswering.from_pretrained("KoichiYasuoka/roberta-base-thai-spm-ud-head")
qap=QuestionAnsweringPipeline(tokenizer=tokenizer,model=model,align_to_words=False)
print(qap(question="กว่า",context="หลายหัวดีกว่าหัวเดียว"))
```

or (with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/))

```py
class TransformersUD(object):
  def __init__(self,bert):
    import os
    from transformers import (AutoTokenizer,AutoModelForQuestionAnswering,
      AutoModelForTokenClassification,AutoConfig,TokenClassificationPipeline)
    self.tokenizer=AutoTokenizer.from_pretrained(bert)
    self.model=AutoModelForQuestionAnswering.from_pretrained(bert)
    x=AutoModelForTokenClassification.from_pretrained
    if os.path.isdir(bert):
      d,t=x(os.path.join(bert,"deprel")),x(os.path.join(bert,"tagger"))
    else:
      from transformers.utils import cached_file
      c=AutoConfig.from_pretrained(cached_file(bert,"deprel/config.json"))
      d=x(cached_file(bert,"deprel/pytorch_model.bin"),config=c)
      s=AutoConfig.from_pretrained(cached_file(bert,"tagger/config.json"))
      t=x(cached_file(bert,"tagger/pytorch_model.bin"),config=s)
    self.deprel=TokenClassificationPipeline(model=d,tokenizer=self.tokenizer,
      aggregation_strategy="simple")
    self.tagger=TokenClassificationPipeline(model=t,tokenizer=self.tokenizer)
  def __call__(self,text):
    import numpy,torch,ufal.chu_liu_edmonds
    w=[(t["start"],t["end"],t["entity_group"]) for t in self.deprel(text)]
    z,n={t["start"]:t["entity"].split("|") for t in self.tagger(text)},len(w)
    r,m=[text[s:e] for s,e,p in w],numpy.full((n+1,n+1),numpy.nan)
    v,c=self.tokenizer(r,add_special_tokens=False)["input_ids"],[]
    for i,t in enumerate(v):
      q=[self.tokenizer.cls_token_id]+t+[self.tokenizer.sep_token_id]
      c.append([q]+v[0:i]+[[self.tokenizer.mask_token_id]]+v[i+1:]+[[q[-1]]])
    b=[[len(sum(x[0:j+1],[])) for j in range(len(x))] for x in c]
    with torch.no_grad():
      d=self.model(input_ids=torch.tensor([sum(x,[]) for x in c]),
        token_type_ids=torch.tensor([[0]*x[0]+[1]*(x[-1]-x[0]) for x in b]))
    s,e=d.start_logits.tolist(),d.end_logits.tolist()
    for i in range(n):
      for j in range(n):
        m[i+1,0 if i==j else j+1]=s[i][b[i][j]]+e[i][b[i][j+1]-1]
    h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
    if [0 for i in h if i==0]!=[0]:
      i=([p for s,e,p in w]+["root"]).index("root")
      j=i+1 if i<n else numpy.nanargmax(m[:,0])
      m[0:j,0]=m[j+1:,0]=numpy.nan
      h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
    u="
52e6ee4750de827eb292265e8de371e1
apache-2.0
['thai', 'question-answering', 'dependency-parsing']
false
text = "+text.replace("\n"," ")+"\n"
    for i,(s,e,p) in enumerate(w,1):
      p="root" if h[i]==0 else "dep" if p=="root" else p
      u+="\t".join([str(i),r[i-1],"_",z[s][0][2:],"_","|".join(z[s][1:]),
        str(h[i]),p,"_","_" if i<n and e<w[i][0] else "SpaceAfter=No"])+"\n"
    return u+"\n"

nlp=TransformersUD("KoichiYasuoka/roberta-base-thai-spm-ud-head")
print(nlp("หลายหัวดีกว่าหัวเดียว"))
```
0b4c9d3599586bf57fcf43a066701a2d
apache-2.0
['translation', 'generated_from_trainer']
false
marian-finetuned-kde4-en-to-zh

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on the kde4 dataset. It achieves the following results on the evaluation set:
- Loss: 0.9338
- Bleu: 40.6780
b961a1d9a963ad0f109e6d74dea3f777
creativeml-openrail-m
['text-to-image']
false
noggles9000 on Stable Diffusion via Dreambooth, trained with the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
225bbe18a4d789a132075bc054ed0764
creativeml-openrail-m
['text-to-image']
false
Model by alxdfy

This is the Stable Diffusion model fine-tuned on the noggles9000 concept, taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt(s)`: **nounfootball.jpg**

You can also train your own concepts and upload them to the library by using [the fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb).

You can run your new concept via the A1111 Colab: [Fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)

Or you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)

Sample pictures of this concept:

nounfootball.jpg ![nounfootball.jpg 0](https://huggingface.co/alxdfy/noggles9000/resolve/main/concept_images/nounfootball.jpg)
f444f267e5925df8ee214d769e209dbe
mit
[]
false
Base model: [gpt2-large](https://huggingface.co/gpt2-large)

Fine-tuned to generate responses on a dataset of [COVID-19 public health tweets](https://github.com/TheRensselaerIDEA/generative-response-modeling). For more information about the dataset, task, and training, see [our paper](https://arxiv.org/abs/2204.04353). This checkpoint corresponds to the lowest validation perplexity (3.36 at 2 epochs) seen during training. See Training metrics for TensorBoard logs.

Also see our [Vaccine public health tweet response model](https://huggingface.co/TheRensselaerIDEA/gpt2-large-vaccine-tweet-response).

**Data input format:** <span style="color:red"><|message|></span>public health message<span style="color:red"><|author|></span>public health Twitter handle<span style="color:red"><|response|></span>

Example:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers.trainer_utils import set_seed
import torch

tokenizer = AutoTokenizer.from_pretrained("TheRensselaerIDEA/gpt2-large-covid-tweet-response")
model = AutoModelForCausalLM.from_pretrained("TheRensselaerIDEA/gpt2-large-covid-tweet-response")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
set_seed(33)

message = "Is your child worried about
d959c95b2e01f3a5d2270c392ea91510
mit
[]
false
COVID19? Learn the facts so you can answer your children’s questions."
author = "CDCgov"
num_responses = 2

author_token, message_token, response_token = tokenizer.additional_special_tokens
input_str = f"{message_token}{message}{author_token}{author}{response_token}"
inputs = tokenizer(input_str, return_tensors="pt").to(device)
responses_ids = model.generate(**inputs,
                               max_new_tokens=100,
                               pad_token_id=tokenizer.pad_token_id,
                               do_sample=True,
                               top_p=0.95,
                               temperature=1.5,
                               num_beams=3,
                               early_stopping=True,
                               num_return_sequences=num_responses)
responses = [tokenizer.decode(r[inputs.input_ids.shape[-1]:], skip_special_tokens=True)
             for r in responses_ids]
for i, resp in enumerate(responses):
    print(f"Response {i}: {resp}\n")
```

Output:

```
Response 0: @CDCgov I'm not worried. I don't know who needs to hear this, but I have a feeling I know who will be listening. It is not the virus. It is the media. I know you and CDC have been lying for months now, but the media will keep pushing this lie.

Response 1:
be996e59edd7c52704697dde2ffa14b1
cc-by-4.0
[]
false
MahaBERT

MahaBERT is a Marathi BERT model. It is a multilingual BERT (bert-base-multilingual-cased) model fine-tuned on L3Cube-MahaCorpus and other publicly available Marathi monolingual datasets. [Dataset link](https://github.com/l3cube-pune/MarathiNLP)

More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2202.01159).

A new version of this model is available here: https://huggingface.co/l3cube-pune/marathi-bert-v2

```
@InProceedings{joshi:2022:WILDRE6,
  author    = {Joshi, Raviraj},
  title     = {L3Cube-MahaCorpus and MahaBERT: Marathi Monolingual Corpus, Marathi BERT Language Models, and Resources},
  booktitle = {Proceedings of The WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference},
  month     = {June},
  year      = {2022},
  address   = {Marseille, France},
  publisher = {European Language Resources Association},
  pages     = {97--101}
}
```
91bdc6dab907891ec5b87fc99be27745
apache-2.0
['Token Classification']
false
Model description

This model is a fine-tuned version of MacBERT for spell checking in medical application scenarios. We fine-tuned the Chinese MacBERT base model on a 300M dataset including 60K+ authorized medical articles. We randomly corrupted 30% of the sentences in these articles by injecting noise with visually or phonologically similar characters. Consequently, the fine-tuned model achieves 96% accuracy on our test dataset.
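The corruption step described above can be sketched as follows; the confusion table here is a tiny hypothetical stand-in for the real visually/phonologically similar character sets used by the authors:

```python
import random

# Hypothetical confusion table mapping a character to visually or
# phonologically similar characters (the real medical set is much larger).
CONFUSION = {"肖": ["硝", "消"], "脂": ["酯", "指"]}

def corrupt(sentence, rate=0.3, seed=0):
    """Replace each confusable character with probability `rate`."""
    rng = random.Random(seed)
    out = []
    for ch in sentence:
        if ch in CONFUSION and rng.random() < rate:
            out.append(rng.choice(CONFUSION[ch]))
        else:
            out.append(ch)
    return "".join(out)
```

Training pairs are then (corrupted sentence, original sentence), so the model learns to flag and restore the injected characters.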
319ef8f16b5e80c7fe19d8f9ed922fd3
apache-2.0
['Token Classification']
false
Intended uses & limitations

You can use this model directly with a pipeline for token classification:

```python
>>> from transformers import AutoModelForTokenClassification, AutoTokenizer
>>> from transformers import pipeline

>>> hub_model_id = "9pinus/macbert-base-chinese-medical-collation"
>>> model = AutoModelForTokenClassification.from_pretrained(hub_model_id)
>>> tokenizer = AutoTokenizer.from_pretrained(hub_model_id)
>>> classifier = pipeline('ner', model=model, tokenizer=tokenizer)
>>> result = classifier("如果病情较重,可适当口服甲肖唑片、环酯红霉素片等药物进行抗感染镇痛。")
>>> for item in result:
...     if item['entity'] == 1:
...         print(item)

{'entity': 1, 'score': 0.58127016, 'index': 14, 'word': '肖', 'start': 13, 'end': 14}
```
ca6cb2e99279a44de2d478d81145b692
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
9321b95e46f7b78beae774191dc7a74e
apache-2.0
['generated_from_trainer']
false
rule_learning_test

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the enoriega/odinsynth_dataset dataset. It achieves the following results on the evaluation set:
- Loss: 0.1255
d6f663b089ee7befb48ed81698fcebaa
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 1000
- total_train_batch_size: 8000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
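The total_train_batch_size above follows directly from the other two values: gradients are accumulated over 1000 micro-batches of 8 examples before each optimizer step. A quick sanity check:

```python
train_batch_size = 8
gradient_accumulation_steps = 1000

# Effective batch per optimizer step = micro-batch size x accumulation steps.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 8000
```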
2575198f9ea6a932b0e067e2eea2bbef
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1764 | 0.32 | 20 | 0.2303 |
| 0.145 | 0.64 | 40 | 0.1470 |
| 0.129 | 0.96 | 60 | 0.1321 |
| 0.1256 | 1.29 | 80 | 0.1265 |
| 0.1304 | 1.61 | 100 | 0.1252 |
| 0.1235 | 1.93 | 120 | 0.1260 |
| 0.125 | 2.26 | 140 | 0.1261 |
| 0.1263 | 2.58 | 160 | 0.1262 |
| 0.1244 | 2.9 | 180 | 0.1256 |
aef0af20a781cba8582fc61ed1904491
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xls-r-300m-tamil-colab-final

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set:
- Loss: 0.7539
- Wer: 0.6135
f00e6130efe77707b857888b42ae474e
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 11.1466 | 1.0 | 118 | 4.3444 | 1.0 |
| 3.4188 | 2.0 | 236 | 3.2496 | 1.0 |
| 2.8617 | 3.0 | 354 | 1.6165 | 1.0003 |
| 0.958 | 4.0 | 472 | 0.7984 | 0.8720 |
| 0.5929 | 5.0 | 590 | 0.6733 | 0.7831 |
| 0.4628 | 6.0 | 708 | 0.6536 | 0.7621 |
| 0.3834 | 7.0 | 826 | 0.6037 | 0.7155 |
| 0.3242 | 8.0 | 944 | 0.6376 | 0.7184 |
| 0.2736 | 9.0 | 1062 | 0.6214 | 0.7070 |
| 0.2433 | 10.0 | 1180 | 0.6158 | 0.6944 |
| 0.2217 | 11.0 | 1298 | 0.6548 | 0.6830 |
| 0.1992 | 12.0 | 1416 | 0.6331 | 0.6775 |
| 0.1804 | 13.0 | 1534 | 0.6644 | 0.6874 |
| 0.1639 | 14.0 | 1652 | 0.6629 | 0.6649 |
| 0.143 | 15.0 | 1770 | 0.6927 | 0.6836 |
| 0.1394 | 16.0 | 1888 | 0.6933 | 0.6888 |
| 0.1296 | 17.0 | 2006 | 0.7039 | 0.6860 |
| 0.1212 | 18.0 | 2124 | 0.7042 | 0.6628 |
| 0.1121 | 19.0 | 2242 | 0.7132 | 0.6475 |
| 0.1069 | 20.0 | 2360 | 0.7423 | 0.6438 |
| 0.1063 | 21.0 | 2478 | 0.7171 | 0.6484 |
| 0.1025 | 22.0 | 2596 | 0.7396 | 0.6451 |
| 0.0946 | 23.0 | 2714 | 0.7400 | 0.6432 |
| 0.0902 | 24.0 | 2832 | 0.7385 | 0.6286 |
| 0.0828 | 25.0 | 2950 | 0.7368 | 0.6286 |
| 0.079 | 26.0 | 3068 | 0.7471 | 0.6306 |
| 0.0747 | 27.0 | 3186 | 0.7524 | 0.6201 |
| 0.0661 | 28.0 | 3304 | 0.7576 | 0.6201 |
| 0.0659 | 29.0 | 3422 | 0.7579 | 0.6130 |
| 0.0661 | 30.0 | 3540 | 0.7539 | 0.6135 |
b5be87ce17184220cdb653e074c1db4c
creativeml-openrail-m
['text-to-image']
false
training params

```json
{
  "pretrained_model_name_or_path": "runwayml/stable-diffusion-v1-5",
  "instance_data_dir": "./b3d0ef12-11d6-43df-8a96-ebcb5ca71ea1/instance_data",
  "class_data_dir": "./class_data/person",
  "output_dir": "./b3d0ef12-11d6-43df-8a96-ebcb5ca71ea1/",
  "train_text_encoder": true,
  "with_prior_preservation": true,
  "prior_loss_weight": 1.0,
  "instance_prompt": "me",
  "class_prompt": "person",
  "resolution": 512,
  "train_batch_size": 1,
  "gradient_accumulation_steps": 1,
  "gradient_checkpointing": true,
  "use_8bit_adam": true,
  "learning_rate": 1e-06,
  "lr_scheduler": "polynomial",
  "lr_warmup_steps": 0,
  "num_class_images": 500,
  "max_train_steps": 1050,
  "mixed_precision": "fp16"
}
```
c5f108850baa38ce9929cd4229f196d8
mit
['generated_from_trainer']
false
BERiT_2000_custom_architecture_40_epochs_ls_.2

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 6.3120
b7fbeba08a91a9ec76465bf384fe02d1
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
- label_smoothing_factor: 0.2
7f3e84fa4608363c4876dabe97e08971
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 15.998 | 0.19 | 500 | 8.5537 | | 7.8818 | 0.39 | 1000 | 7.3646 | | 7.2781 | 0.58 | 1500 | 7.1307 | | 7.1073 | 0.77 | 2000 | 7.0462 | | 7.0749 | 0.97 | 2500 | 7.0667 | | 7.0373 | 1.16 | 3000 | 6.9511 | | 6.9767 | 1.36 | 3500 | 6.8339 | | 6.9483 | 1.55 | 4000 | 6.7795 | | 6.9071 | 1.74 | 4500 | 6.7828 | | 6.8591 | 1.94 | 5000 | 6.7164 | | 6.8595 | 2.13 | 5500 | 6.7705 | | 6.8406 | 2.32 | 6000 | 6.6906 | | 6.7861 | 2.52 | 6500 | 6.6878 | | 6.8103 | 2.71 | 7000 | 6.6486 | | 6.7724 | 2.9 | 7500 | 6.6703 | | 6.7563 | 3.1 | 8000 | 6.6626 | | 6.7567 | 3.29 | 8500 | 6.6603 | | 6.7315 | 3.49 | 9000 | 6.6392 | | 6.7443 | 3.68 | 9500 | 6.6306 | | 6.7244 | 3.87 | 10000 | 6.6456 | | 6.7464 | 4.07 | 10500 | 6.6224 | | 6.7008 | 4.26 | 11000 | 6.6138 | | 6.7076 | 4.45 | 11500 | 6.6783 | | 6.6944 | 4.65 | 12000 | 6.6147 | | 6.6993 | 4.84 | 12500 | 6.6466 | | 6.6893 | 5.03 | 13000 | 6.6369 | | 6.6905 | 5.23 | 13500 | 6.6293 | | 6.6899 | 5.42 | 14000 | 6.6271 | | 6.6835 | 5.62 | 14500 | 6.6566 | | 6.6746 | 5.81 | 15000 | 6.6385 | | 6.68 | 6.0 | 15500 | 6.6309 | | 6.6776 | 6.2 | 16000 | 6.6069 | | 6.6714 | 6.39 | 16500 | 6.5991 | | 6.6766 | 6.58 | 17000 | 6.6180 | | 6.6591 | 6.78 | 17500 | 6.6212 | | 6.6396 | 6.97 | 18000 | 6.5804 | | 6.6575 | 7.16 | 18500 | 6.6096 | | 6.6506 | 7.36 | 19000 | 6.5579 | | 6.6618 | 7.55 | 19500 | 6.5911 | | 6.6581 | 7.75 | 20000 | 6.5870 | | 6.6703 | 7.94 | 20500 | 6.6062 | | 6.6392 | 8.13 | 21000 | 6.5962 | | 6.6343 | 8.33 | 21500 | 6.5903 | | 6.6426 | 8.52 | 22000 | 6.6010 | | 6.6227 | 8.71 | 22500 | 6.6060 | | 6.6392 | 8.91 | 23000 | 6.5935 | | 6.6198 | 9.1 | 23500 | 6.6293 | | 6.6372 | 9.3 | 24000 | 6.5594 | | 6.6146 | 9.49 | 24500 | 6.5917 | | 6.6119 | 9.68 | 25000 | 6.5694 | | 6.6292 | 9.88 | 25500 | 6.6230 | | 6.634 | 10.07 | 26000 | 6.5857 | | 6.5863 | 10.26 | 26500 | 6.5938 | | 6.5957 | 10.46 | 27000 | 6.6256 | | 6.5928 | 
10.65 | 27500 | 6.6111 | | 6.5948 | 10.84 | 28000 | 6.6031 | | 6.6131 | 11.04 | 28500 | 6.5582 | | 6.5946 | 11.23 | 29000 | 6.6093 | | 6.6155 | 11.43 | 29500 | 6.5670 | | 6.6051 | 11.62 | 30000 | 6.6016 | | 6.5917 | 11.81 | 30500 | 6.6045 | | 6.5918 | 12.01 | 31000 | 6.5802 | | 6.558 | 12.2 | 31500 | 6.5195 | | 6.5896 | 12.39 | 32000 | 6.6315 | | 6.5662 | 12.59 | 32500 | 6.6112 | | 6.5702 | 12.78 | 33000 | 6.5779 | | 6.5798 | 12.97 | 33500 | 6.5662 | | 6.5963 | 13.17 | 34000 | 6.5776 | | 6.5733 | 13.36 | 34500 | 6.5870 | | 6.5499 | 13.56 | 35000 | 6.5850 | | 6.5492 | 13.75 | 35500 | 6.5957 | | 6.5466 | 13.94 | 36000 | 6.5812 | | 6.5741 | 14.14 | 36500 | 6.5287 | | 6.5612 | 14.33 | 37000 | 6.5611 | | 6.5648 | 14.52 | 37500 | 6.5381 | | 6.5661 | 14.72 | 38000 | 6.5742 | | 6.5564 | 14.91 | 38500 | 6.5424 | | 6.5423 | 15.1 | 39000 | 6.5987 | | 6.5471 | 15.3 | 39500 | 6.5662 | | 6.5559 | 15.49 | 40000 | 6.5290 | | 6.5332 | 15.69 | 40500 | 6.5412 | | 6.5362 | 15.88 | 41000 | 6.5486 | | 6.5351 | 16.07 | 41500 | 6.5959 | | 6.5337 | 16.27 | 42000 | 6.5405 | | 6.5246 | 16.46 | 42500 | 6.5217 | | 6.4999 | 16.65 | 43000 | 6.5443 | | 6.5459 | 16.85 | 43500 | 6.5424 | | 6.5077 | 17.04 | 44000 | 6.5499 | | 6.5069 | 17.23 | 44500 | 6.5509 | | 6.5189 | 17.43 | 45000 | 6.5310 | | 6.5086 | 17.62 | 45500 | 6.5361 | | 6.5182 | 17.82 | 46000 | 6.5320 | | 6.51 | 18.01 | 46500 | 6.4850 | | 6.4868 | 18.2 | 47000 | 6.5155 | | 6.4665 | 18.4 | 47500 | 6.5305 | | 6.5123 | 18.59 | 48000 | 6.5301 | | 6.4981 | 18.78 | 48500 | 6.4617 | | 6.4606 | 18.98 | 49000 | 6.4895 | | 6.4716 | 19.17 | 49500 | 6.4790 | | 6.4733 | 19.36 | 50000 | 6.4818 | | 6.4935 | 19.56 | 50500 | 6.4518 | | 6.4761 | 19.75 | 51000 | 6.4852 | | 6.4651 | 19.95 | 51500 | 6.4836 | | 6.4462 | 20.14 | 52000 | 6.4792 | | 6.4605 | 20.33 | 52500 | 6.4661 | | 6.4718 | 20.53 | 53000 | 6.4639 | | 6.459 | 20.72 | 53500 | 6.4683 | | 6.4407 | 20.91 | 54000 | 6.4663 | | 6.4388 | 21.11 | 54500 | 6.4832 | | 6.4479 | 21.3 | 55000 | 6.4606 | | 
6.4583 | 21.49 | 55500 | 6.4723 | | 6.4169 | 21.69 | 56000 | 6.4897 | | 6.4437 | 21.88 | 56500 | 6.4368 | | 6.4566 | 22.08 | 57000 | 6.4491 | | 6.4248 | 22.27 | 57500 | 6.4630 | | 6.431 | 22.46 | 58000 | 6.4246 | | 6.4274 | 22.66 | 58500 | 6.4618 | | 6.4262 | 22.85 | 59000 | 6.4177 | | 6.4328 | 23.04 | 59500 | 6.4243 | | 6.4305 | 23.24 | 60000 | 6.4178 | | 6.4078 | 23.43 | 60500 | 6.4310 | | 6.4431 | 23.63 | 61000 | 6.4338 | | 6.4066 | 23.82 | 61500 | 6.4080 | | 6.417 | 24.01 | 62000 | 6.4236 | | 6.4008 | 24.21 | 62500 | 6.3703 | | 6.4222 | 24.4 | 63000 | 6.4188 | | 6.4304 | 24.59 | 63500 | 6.3924 | | 6.4063 | 24.79 | 64000 | 6.4140 | | 6.4176 | 24.98 | 64500 | 6.4419 | | 6.4203 | 25.17 | 65000 | 6.4250 | | 6.3983 | 25.37 | 65500 | 6.3602 | | 6.3911 | 25.56 | 66000 | 6.4129 | | 6.3821 | 25.76 | 66500 | 6.4225 | | 6.3864 | 25.95 | 67000 | 6.3801 | | 6.4109 | 26.14 | 67500 | 6.4032 | | 6.4136 | 26.34 | 68000 | 6.3870 | | 6.3714 | 26.53 | 68500 | 6.4385 | | 6.3711 | 26.72 | 69000 | 6.4081 | | 6.391 | 26.92 | 69500 | 6.3901 | | 6.3931 | 27.11 | 70000 | 6.4047 | | 6.3842 | 27.3 | 70500 | 6.3830 | | 6.3798 | 27.5 | 71000 | 6.3935 | | 6.3903 | 27.69 | 71500 | 6.3756 | | 6.3771 | 27.89 | 72000 | 6.3554 | | 6.3763 | 28.08 | 72500 | 6.3911 | | 6.3576 | 28.27 | 73000 | 6.4059 | | 6.3581 | 28.47 | 73500 | 6.3976 | | 6.3739 | 28.66 | 74000 | 6.3921 | | 6.363 | 28.85 | 74500 | 6.3590 | | 6.3687 | 29.05 | 75000 | 6.3683 | | 6.3788 | 29.24 | 75500 | 6.3915 | | 6.3505 | 29.43 | 76000 | 6.3826 | | 6.3618 | 29.63 | 76500 | 6.3833 | | 6.3287 | 29.82 | 77000 | 6.4055 | | 6.3589 | 30.02 | 77500 | 6.3994 | | 6.3614 | 30.21 | 78000 | 6.3848 | | 6.3729 | 30.4 | 78500 | 6.3550 | | 6.3687 | 30.6 | 79000 | 6.3683 | | 6.3377 | 30.79 | 79500 | 6.3743 | | 6.3188 | 30.98 | 80000 | 6.3113 | | 6.3613 | 31.18 | 80500 | 6.3852 | | 6.3428 | 31.37 | 81000 | 6.3610 | | 6.3541 | 31.56 | 81500 | 6.3848 | | 6.3821 | 31.76 | 82000 | 6.3706 | | 6.3357 | 31.95 | 82500 | 6.3191 | | 6.3408 | 32.15 | 83000 | 
6.3357 | | 6.3301 | 32.34 | 83500 | 6.3374 | | 6.3681 | 32.53 | 84000 | 6.3583 | | 6.324 | 32.73 | 84500 | 6.3472 | | 6.3615 | 32.92 | 85000 | 6.3359 | | 6.3382 | 33.11 | 85500 | 6.3664 | | 6.34 | 33.31 | 86000 | 6.3281 | | 6.3504 | 33.5 | 86500 | 6.3688 | | 6.3393 | 33.69 | 87000 | 6.3553 | | 6.3453 | 33.89 | 87500 | 6.3493 | | 6.3293 | 34.08 | 88000 | 6.3315 | | 6.3346 | 34.28 | 88500 | 6.3134 | | 6.3325 | 34.47 | 89000 | 6.3631 | | 6.3497 | 34.66 | 89500 | 6.3380 | | 6.332 | 34.86 | 90000 | 6.3484 | | 6.3224 | 35.05 | 90500 | 6.3602 | | 6.3242 | 35.24 | 91000 | 6.3414 | | 6.3346 | 35.44 | 91500 | 6.3151 | | 6.3547 | 35.63 | 92000 | 6.3499 | | 6.3243 | 35.82 | 92500 | 6.3173 | | 6.3148 | 36.02 | 93000 | 6.3141 | | 6.3202 | 36.21 | 93500 | 6.3358 | | 6.3251 | 36.41 | 94000 | 6.2946 | | 6.3313 | 36.6 | 94500 | 6.3413 | | 6.3077 | 36.79 | 95000 | 6.2959 | | 6.3173 | 36.99 | 95500 | 6.3220 | | 6.3207 | 37.18 | 96000 | 6.3630 | | 6.311 | 37.37 | 96500 | 6.3802 | | 6.3259 | 37.57 | 97000 | 6.3425 | | 6.3269 | 37.76 | 97500 | 6.3407 | | 6.3136 | 37.96 | 98000 | 6.3140 | | 6.3007 | 38.15 | 98500 | 6.3392 | | 6.2911 | 38.34 | 99000 | 6.3874 | | 6.3241 | 38.54 | 99500 | 6.3363 | | 6.3056 | 38.73 | 100000 | 6.3766 | | 6.3138 | 38.92 | 100500 | 6.3147 | | 6.3065 | 39.12 | 101000 | 6.3622 | | 6.3118 | 39.31 | 101500 | 6.3200 | | 6.3009 | 39.5 | 102000 | 6.3316 | | 6.3107 | 39.7 | 102500 | 6.3112 | | 6.2977 | 39.89 | 103000 | 6.3120 |
314232208c31029254c2aec0a0c3575d
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-imdb

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set:

- Loss: 2.4174
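For a masked-language-modeling objective, the reported cross-entropy loss corresponds to a perplexity of roughly exp(2.4174) ≈ 11.2. A quick sketch of that conversion, using the loss value reported above:

```python
import math

# Convert the reported evaluation cross-entropy loss of the
# fine-tuned MLM into perplexity: ppl = exp(loss).
eval_loss = 2.4174  # value reported above
perplexity = math.exp(eval_loss)
print(f"Perplexity: {perplexity:.2f}")  # about 11.22
```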
15c56b4896f126b3970f839de1816d89
openrail
[]
false
Chinese wedding images need a low CFG scale, around 3.5–7. Because the training set contains only head portraits, only the face comes out stable; forgive me for not doing better. Fusing this model with another is suggested.

I love the Chinese style. Thanks to my QQ friends for their long-term help and teaching, and thanks to teacher screw for the training set.

Note: it is recommended to add "cute face" and "beautiful face" to the prompt to stabilize the face, and to add "long neck" to the negative prompt. Use a VAE with high saturation.

By 昂扬

![00004-2447141747-8k Wallpaper,grand,(((masterpiece))), (((best quality))), ((ultra-detailed)), (illustration), (detailed light),solo,(doukou),_(w.png](https://s3.amazonaws.com/moonup/production/uploads/1675437913416-635e14681453686fae2cee93.png)
![00010-2823428029-masterpiece, best quality,1girl, solo, earrings, jewelry, flower, black_hair, hair_ornament, hair_flower, long_sleeves, long_hai.png](https://s3.amazonaws.com/moonup/production/uploads/1675437914990-635e14681453686fae2cee93.png)
![00194-3922598814-masterpiece, best quality,beautiful face,a girl,a woman with long hair wearing a red dress and earrings with a red background an.png](https://s3.amazonaws.com/moonup/production/uploads/1675438187452-635e14681453686fae2cee93.png)
![00170-592331038-masterpiece, best quality,beautiful face,a girl,a woman with long hair wearing a red dress and earrings with a red background an.png](https://s3.amazonaws.com/moonup/production/uploads/1675438184736-635e14681453686fae2cee93.png)
![00024-3200162253-doukou,chinese style architecture,Chinese style,lake,ancient town,beautiful and meticulous water,lotus,egret,raining,(Fishing bo.png](https://s3.amazonaws.com/moonup/production/uploads/1675438185934-635e14681453686fae2cee93.png)
338ed053dadef408efa4862eb7fc47a2
apache-2.0
['translation']
false
fr-it

* source group: French
* target group: Italian
* OPUS readme: [fra-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-ita/README.md)
* model: transformer-align
* source language(s): fra
* target language(s): ita
* raw source language(s): fra
* raw target language(s): ita
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opusTCv20210807-2021-11-11.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ita/opusTCv20210807-2021-11-11.zip)
* test set translations: [opusTCv20210807-2021-11-11.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ita/opusTCv20210807-2021-11-11.test.txt)
* test set scores: [opusTCv20210807-2021-11-11.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ita/opusTCv20210807-2021-11-11.eval.txt)
c9224caa8cf8ed76d4df60a8e29e0c78
apache-2.0
['translation']
false
System Info:

- hf_name: fr-it
- source_languages: fra
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'it']
- src_constituents: ('French', {'fra'})
- tgt_constituents: ('Italian', {'ita'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: fra-ita
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ita/opusTCv20210807-2021-11-11.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ita/opusTCv20210807-2021-11-11.test.txt
- src_alpha3: fra
- tgt_alpha3: ita
- chrF2_score: 0.737
- bleu: 54.8
- src_name: French
- tgt_name: Italian
- train_date: 2021-11-11 00:00:00
- src_alpha2: fr
- tgt_alpha2: it
- prefer_old: False
- short_pair: fr-it
- helsinki_git_sha: 7ab0c987850187e0b10342bfc616cd47c027ba18
- transformers_git_sha: df1f94eb4a18b1a27d27e32040b60a17410d516e
- port_machine: LM0-400-22516.local
- port_time: 2021-11-11-19:40
8a583b7f453a770ba407addaa7c52afc
mit
[]
false
PyAutoCode: GPT-2 based Python auto-code.

PyAutoCode is a cut-down Python auto-suggestion model built on **GPT-2** *(motivation: GPyT)*. This baby model *(trained only up to 3 epochs)* is not **"fine-tuned"** yet; therefore, I highly recommend not using it in a production environment or incorporating PyAutoCode in any of your projects. It has been trained on **112GB** of Python data sourced from the best crowdsource platform ever -- **GitHub**.

*NOTE: Increased training and fine-tuning would be highly appreciated, and I firmly believe it would improve the ability of PyAutoCode significantly.*
e0b2c995ae715a3bb0f6850b3d2fe094
mit
[]
false
Some Model Features

- Built on *GPT-2*
- Tokenized with *ByteLevelBPETokenizer*
- Data sourced from *GitHub (almost 5 consecutive days of latest Python repositories)*
- Makes use of *GPT2LMHeadModel* and *DataCollatorForLanguageModeling* for training
- Newline characters are custom coded as `<N>`
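The `<N>` convention above is just a reversible string substitution; a minimal sketch of the encode/decode round trip (function names are illustrative, not part of the model's API):

```python
NEWLINE_TOKEN = "<N>"

def encode_newlines(code: str) -> str:
    # Replace literal newlines with the custom <N> token before tokenization.
    return code.replace("\n", NEWLINE_TOKEN)

def decode_newlines(text: str) -> str:
    # Restore newlines in generated output.
    return text.replace(NEWLINE_TOKEN, "\n")

snippet = "import os\nprint(os.getcwd())"
encoded = encode_newlines(snippet)
print(encoded)  # import os<N>print(os.getcwd())
assert decode_newlines(encoded) == snippet  # lossless round trip
```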
206411fcaa790ebb2e3334ac02852a2f
mit
[]
false
Get a Glimpse of the Model

You can make use of the Hugging Face **Inference API** *(present on the right sidebar)* to load the model and check the result. Just enter any code snippet as input, something like:

```python
for i in range(
```
237be155bdc66115cebfb51e537727f1
mit
[]
false
Usage

You can use my model too! Here's a quick tour of how you can achieve this:

Install transformers:

```sh
$ pip install transformers
```

Call the API and get it to work:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("P0intMaN/PyAutoCode")
model = AutoModelForCausalLM.from_pretrained("P0intMaN/PyAutoCode")
```
9e65ce98571c6a7c9d9e6b0585f55678
mit
[]
false
```python
# input: single line or multi-line. Highly recommended to use doc-strings.
inp = """import pandas"""
format_inp = inp.replace('\n', "<N>")
tokenize_inp = tokenizer.encode(format_inp, return_tensors='pt')
result = model.generate(tokenize_inp)
decode_result = tokenizer.decode(result[0])
format_result = decode_result.replace('<N>', "\n")
```
5276c90e5f14bb8cb4f6c3afed54e224
mit
[]
false
Printing the result:

```python
print(format_result)
```

Upon successful execution, the above should probably produce *(your results may vary when this model is fine-tuned)*:

```sh
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
```
7d7dd434b4ff36c754e6bcbf4c50d00a
mit
['generated_from_keras_callback']
false
XLM-roberta-finetuned

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set:
54b221d0eb1472f165a8c4d7d9911aea
apache-2.0
['generated_from_keras_callback']
false
eduardopds/distilbert-base-uncased-tweets

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:

- Train Loss: 0.7428
- Validation Loss: 0.9322
- Epoch: 9
423dd1ca66b7fcf985e13ded56b4c429
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters

The following hyperparameters were used during training:

- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 310, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
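With `power=1.0` and `end_learning_rate=0.0`, the `PolynomialDecay` schedule above reduces to a plain linear decay from 2e-05 to 0 over 310 steps. A small pure-Python sketch of the formula (not the Keras implementation itself):

```python
def polynomial_decay(step, initial_lr=2e-05, end_lr=0.0,
                     decay_steps=310, power=1.0):
    # Keras-style polynomial decay; steps beyond decay_steps stay at end_lr
    # because cycle=False in the config above.
    step = min(step, decay_steps)
    fraction = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * fraction ** power + end_lr

print(polynomial_decay(0))    # 2e-05 at the start
print(polynomial_decay(155))  # 1e-05, halfway through the linear decay
print(polynomial_decay(310))  # 0.0 at the end
```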
c1685bb14f4924206280436628a7744c
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.0162     | 1.0010          | 0     |
| 0.9552     | 0.9574          | 1     |
| 0.8928     | 0.9393          | 2     |
| 0.8238     | 0.9412          | 3     |
| 0.7581     | 0.9322          | 4     |
| 0.7268     | 0.9322          | 5     |
| 0.7310     | 0.9322          | 6     |
| 0.7390     | 0.9322          | 7     |
| 0.7423     | 0.9322          | 8     |
| 0.7428     | 0.9322          | 9     |
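The validation loss bottoms out at 0.9322 from epoch 4 onward while the training loss keeps dropping, which is the classic signal an early-stopping callback watches for. A minimal sketch of that check over the values in the table:

```python
# Validation losses per epoch, copied from the training results table.
val_losses = [1.0010, 0.9574, 0.9393, 0.9412, 0.9322,
              0.9322, 0.9322, 0.9322, 0.9322, 0.9322]

# Index of the first epoch achieving the best validation loss --
# the checkpoint an early-stopping callback would restore.
best_epoch = val_losses.index(min(val_losses))
print(best_epoch)  # 4
```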
5a40a49f0163f81bde1807ede9c6fc71
apache-2.0
['generated_from_trainer']
false
finetuning-sentiment-model-3000-samples

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set:

- Loss: 0.3187
- Accuracy: 0.8633
- F1: 0.8629
798b4107dfd88b5de41ce7a45baf9968
apache-2.0
['generated_from_trainer']
false
small-mlm-glue-cola

This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset. It achieves the following results on the evaluation set:

- Loss: nan
cf0c08fe1856fc21372c781bf43ff2d5
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.0589        | 0.47  | 500  | 2.8255          |
| 2.8708        | 0.94  | 1000 | 2.8047          |
| 2.7086        | 1.4   | 1500 | 2.6590          |
| 2.6021        | 1.87  | 2000 | 2.7510          |
| 2.4549        | 2.34  | 2500 | 2.8776          |
| 2.4864        | 2.81  | 3000 | nan             |
3ff492d644db351c54be4e34c9f7bb07
apache-2.0
['generated_from_trainer']
false
bert-base-multilingual-cased-finetuned-pos

This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set:

- Loss: 0.1736
- Precision: 0.9499
- Recall: 0.9504
- F1: 0.9501
- Accuracy: 0.9551
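The F1 value here is just the harmonic mean of the reported precision and recall; recomputing it from the numbers above as a sanity check:

```python
# Precision and recall as reported in the evaluation results above.
precision = 0.9499
recall = 0.9504

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(f"{f1:.4f}")  # ~0.9501, matching the reported F1
```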
f76a92caa93377332ae32e4cc8acd140
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.7663        | 0.27  | 200  | 0.2047          | 0.9318    | 0.9312 | 0.9315 | 0.9388   |
| 0.5539        | 0.53  | 400  | 0.1815          | 0.9381    | 0.9404 | 0.9392 | 0.9460   |
| 0.5222        | 0.8   | 600  | 0.1787          | 0.9400    | 0.9424 | 0.9412 | 0.9468   |
| 0.5084        | 1.07  | 800  | 0.1591          | 0.9470    | 0.9463 | 0.9467 | 0.9519   |
| 0.4703        | 1.33  | 1000 | 0.1622          | 0.9456    | 0.9458 | 0.9457 | 0.9510   |
| 0.5005        | 1.6   | 1200 | 0.1666          | 0.9470    | 0.9464 | 0.9467 | 0.9519   |
| 0.4677        | 1.87  | 1400 | 0.1583          | 0.9483    | 0.9483 | 0.9483 | 0.9532   |
| 0.4704        | 2.13  | 1600 | 0.1635          | 0.9472    | 0.9475 | 0.9473 | 0.9528   |
| 0.4639        | 2.4   | 1800 | 0.1569          | 0.9475    | 0.9488 | 0.9482 | 0.9536   |
| 0.4627        | 2.67  | 2000 | 0.1605          | 0.9474    | 0.9478 | 0.9476 | 0.9527   |
| 0.4608        | 2.93  | 2200 | 0.1535          | 0.9485    | 0.9495 | 0.9490 | 0.9538   |
| 0.4306        | 3.2   | 2400 | 0.1646          | 0.9489    | 0.9487 | 0.9488 | 0.9536   |
| 0.4583        | 3.47  | 2600 | 0.1642          | 0.9488    | 0.9495 | 0.9491 | 0.9539   |
| 0.453         | 3.73  | 2800 | 0.1646          | 0.9498    | 0.9505 | 0.9501 | 0.9554   |
| 0.4347        | 4.0   | 3000 | 0.1629          | 0.9494    | 0.9504 | 0.9499 | 0.9552   |
| 0.4425        | 4.27  | 3200 | 0.1738          | 0.9495    | 0.9502 | 0.9498 | 0.9550   |
| 0.4335        | 4.53  | 3400 | 0.1733          | 0.9499    | 0.9506 | 0.9503 | 0.9550   |
| 0.4306        | 4.8   | 3600 | 0.1736          | 0.9499    | 0.9504 | 0.9501 | 0.9551   |
d5ff05c78a6e5365b1699bddde54acf0
mit
['generated_from_trainer']
false
roberta-base-mnli

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the GLUE MNLI dataset. It achieves the following results on the evaluation set:

- Loss: 0.3617
- Accuracy: 0.8657
9ee80c195dab5db8033c119a54559c6a
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10.0
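A warmup ratio of 0.06 with a linear scheduler means the learning rate climbs from 0 to 2e-05 over the first 6% of training steps and then decays linearly back to 0. A pure-Python sketch of that shape (not the `transformers` implementation; the total step count is illustrative):

```python
def linear_schedule_with_warmup(step, total_steps, peak_lr=2e-05,
                                warmup_ratio=0.06):
    # Linear warmup over the first warmup_ratio of steps, then linear decay.
    warmup_steps = round(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * (total_steps - step) / (total_steps - warmup_steps)

total = 245000  # illustrative total, matching the final step in the log
print(linear_schedule_with_warmup(0, total))      # 0.0 at the start
print(linear_schedule_with_warmup(14700, total))  # 2e-05 at the warmup peak
```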
092d4250a70759f9a680aaf09f8201d3
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:--------:| | 1.0993 | 0.02 | 500 | 1.0983 | 0.3321 | | 1.099 | 0.04 | 1000 | 1.0932 | 0.4276 | | 1.011 | 0.06 | 1500 | 0.8352 | 0.6732 | | 0.7551 | 0.08 | 2000 | 0.6018 | 0.7615 | | 0.6343 | 0.1 | 2500 | 0.5726 | 0.7813 | | 0.5884 | 0.12 | 3000 | 0.5349 | 0.7926 | | 0.5548 | 0.14 | 3500 | 0.4925 | 0.8078 | | 0.5244 | 0.16 | 4000 | 0.4806 | 0.8161 | | 0.5198 | 0.18 | 4500 | 0.4614 | 0.8257 | | 0.5168 | 0.2 | 5000 | 0.4713 | 0.8177 | | 0.5194 | 0.22 | 5500 | 0.4344 | 0.8323 | | 0.485 | 0.24 | 6000 | 0.4527 | 0.8316 | | 0.4909 | 0.26 | 6500 | 0.4377 | 0.8376 | | 0.49 | 0.29 | 7000 | 0.4649 | 0.8266 | | 0.4897 | 0.31 | 7500 | 0.4162 | 0.8413 | | 0.4672 | 0.33 | 8000 | 0.4163 | 0.8425 | | 0.4699 | 0.35 | 8500 | 0.4060 | 0.8451 | | 0.4729 | 0.37 | 9000 | 0.4412 | 0.8387 | | 0.4733 | 0.39 | 9500 | 0.4353 | 0.8401 | | 0.4699 | 0.41 | 10000 | 0.4060 | 0.8476 | | 0.4759 | 0.43 | 10500 | 0.4226 | 0.8358 | | 0.461 | 0.45 | 11000 | 0.4220 | 0.8423 | | 0.4608 | 0.47 | 11500 | 0.4404 | 0.8319 | | 0.462 | 0.49 | 12000 | 0.4280 | 0.8455 | | 0.4533 | 0.51 | 12500 | 0.4128 | 0.8468 | | 0.4691 | 0.53 | 13000 | 0.4155 | 0.8437 | | 0.4552 | 0.55 | 13500 | 0.4385 | 0.8348 | | 0.4573 | 0.57 | 14000 | 0.4498 | 0.8424 | | 0.4562 | 0.59 | 14500 | 0.4162 | 0.8442 | | 0.4665 | 0.61 | 15000 | 0.4417 | 0.8432 | | 0.4569 | 0.63 | 15500 | 0.4113 | 0.8492 | | 0.4705 | 0.65 | 16000 | 0.4454 | 0.8399 | | 0.4685 | 0.67 | 16500 | 0.4055 | 0.8451 | | 0.4475 | 0.69 | 17000 | 0.4426 | 0.8383 | | 0.4641 | 0.71 | 17500 | 0.4256 | 0.8471 | | 0.4299 | 0.73 | 18000 | 0.4260 | 0.8478 | | 0.4439 | 0.75 | 18500 | 0.4218 | 0.8454 | | 0.4628 | 0.77 | 19000 | 0.4087 | 0.8479 | | 0.4502 | 0.79 | 19500 | 0.4238 | 0.8450 | | 0.4299 | 0.81 | 20000 | 0.4091 | 0.8485 | | 0.4496 | 0.84 | 20500 | 0.4160 | 0.8439 | | 0.4492 | 0.86 | 21000 | 0.4109 | 0.8469 | | 0.432 | 0.88 | 21500 | 0.4499 | 
0.8493 | | 0.4343 | 0.9 | 22000 | 0.4136 | 0.8465 | | 0.4445 | 0.92 | 22500 | 0.4095 | 0.8433 | | 0.4378 | 0.94 | 23000 | 0.3999 | 0.8483 | | 0.4367 | 0.96 | 23500 | 0.3962 | 0.8509 | | 0.4428 | 0.98 | 24000 | 0.3958 | 0.8504 | | 0.4356 | 1.0 | 24500 | 0.3998 | 0.8558 | | 0.3715 | 1.02 | 25000 | 0.4016 | 0.8589 | | 0.3649 | 1.04 | 25500 | 0.4368 | 0.8582 | | 0.3565 | 1.06 | 26000 | 0.4084 | 0.8519 | | 0.3626 | 1.08 | 26500 | 0.4302 | 0.8438 | | 0.3535 | 1.1 | 27000 | 0.4206 | 0.8557 | | 0.3684 | 1.12 | 27500 | 0.4117 | 0.8561 | | 0.3649 | 1.14 | 28000 | 0.4300 | 0.8527 | | 0.3791 | 1.16 | 28500 | 0.3916 | 0.8585 | | 0.366 | 1.18 | 29000 | 0.4101 | 0.8592 | | 0.3777 | 1.2 | 29500 | 0.3946 | 0.8561 | | 0.3672 | 1.22 | 30000 | 0.4417 | 0.8530 | | 0.3688 | 1.24 | 30500 | 0.4066 | 0.8523 | | 0.3525 | 1.26 | 31000 | 0.4299 | 0.8581 | | 0.3688 | 1.28 | 31500 | 0.3870 | 0.8553 | | 0.3699 | 1.3 | 32000 | 0.3781 | 0.8627 | | 0.3547 | 1.32 | 32500 | 0.4311 | 0.8526 | | 0.3653 | 1.34 | 33000 | 0.4034 | 0.8603 | | 0.3738 | 1.36 | 33500 | 0.4103 | 0.8554 | | 0.3824 | 1.39 | 34000 | 0.3719 | 0.8618 | | 0.3591 | 1.41 | 34500 | 0.4244 | 0.8615 | | 0.3697 | 1.43 | 35000 | 0.4689 | 0.8451 | | 0.3598 | 1.45 | 35500 | 0.4149 | 0.8532 | | 0.3586 | 1.47 | 36000 | 0.4070 | 0.8591 | | 0.3519 | 1.49 | 36500 | 0.4133 | 0.8545 | | 0.3681 | 1.51 | 37000 | 0.3889 | 0.8601 | | 0.3611 | 1.53 | 37500 | 0.3934 | 0.8591 | | 0.3696 | 1.55 | 38000 | 0.4313 | 0.8552 | | 0.3798 | 1.57 | 38500 | 0.3784 | 0.8602 | | 0.3601 | 1.59 | 39000 | 0.3994 | 0.8600 | | 0.3696 | 1.61 | 39500 | 0.4206 | 0.8577 | | 0.368 | 1.63 | 40000 | 0.3903 | 0.8627 | | 0.3473 | 1.65 | 40500 | 0.3813 | 0.8655 | | 0.3604 | 1.67 | 41000 | 0.3930 | 0.8551 | | 0.3741 | 1.69 | 41500 | 0.3644 | 0.8618 | | 0.3551 | 1.71 | 42000 | 0.3936 | 0.8583 | | 0.378 | 1.73 | 42500 | 0.3826 | 0.8607 | | 0.3609 | 1.75 | 43000 | 0.3815 | 0.8618 | | 0.3678 | 1.77 | 43500 | 0.3961 | 0.8578 | | 0.3633 | 1.79 | 44000 | 0.4011 | 0.8603 | | 0.3792 | 1.81 | 
44500 | 0.4061 | 0.8592 | | 0.3675 | 1.83 | 45000 | 0.4155 | 0.8631 | | 0.3576 | 1.85 | 45500 | 0.4061 | 0.8589 | | 0.3546 | 1.87 | 46000 | 0.3862 | 0.8623 | | 0.3564 | 1.89 | 46500 | 0.3937 | 0.8607 | | 0.3602 | 1.91 | 47000 | 0.3851 | 0.8646 | | 0.3494 | 1.94 | 47500 | 0.4015 | 0.8541 | | 0.3499 | 1.96 | 48000 | 0.4266 | 0.8545 | | 0.3672 | 1.98 | 48500 | 0.3761 | 0.8588 | | 0.3661 | 2.0 | 49000 | 0.4121 | 0.8567 | | 0.2759 | 2.02 | 49500 | 0.4653 | 0.8645 | | 0.2927 | 2.04 | 50000 | 0.4652 | 0.8597 | | 0.2736 | 2.06 | 50500 | 0.4547 | 0.8597 | | 0.2749 | 2.08 | 51000 | 0.4896 | 0.8565 | | 0.2757 | 2.1 | 51500 | 0.4814 | 0.8639 | | 0.2833 | 2.12 | 52000 | 0.4110 | 0.8656 | | 0.2797 | 2.14 | 52500 | 0.4316 | 0.8636 | | 0.2643 | 2.16 | 53000 | 0.4317 | 0.8599 | | 0.2791 | 2.18 | 53500 | 0.4557 | 0.8617 | | 0.2737 | 2.2 | 54000 | 0.4102 | 0.8624 | | 0.2748 | 2.22 | 54500 | 0.4187 | 0.8585 | | 0.2619 | 2.24 | 55000 | 0.4412 | 0.8590 | | 0.2718 | 2.26 | 55500 | 0.4707 | 0.8618 | | 0.2662 | 2.28 | 56000 | 0.4754 | 0.8594 | | 0.282 | 2.3 | 56500 | 0.4376 | 0.8617 | | 0.284 | 2.32 | 57000 | 0.4393 | 0.8599 | | 0.2733 | 2.34 | 57500 | 0.4531 | 0.8581 | | 0.2878 | 2.36 | 58000 | 0.4727 | 0.8549 | | 0.2812 | 2.38 | 58500 | 0.4221 | 0.8625 | | 0.2657 | 2.4 | 59000 | 0.4456 | 0.8583 | | 0.2716 | 2.42 | 59500 | 0.4455 | 0.8668 | | 0.2766 | 2.44 | 60000 | 0.4940 | 0.8580 | | 0.2871 | 2.46 | 60500 | 0.4460 | 0.8501 | | 0.2731 | 2.49 | 61000 | 0.4600 | 0.8631 | | 0.2885 | 2.51 | 61500 | 0.4229 | 0.8645 | | 0.2764 | 2.53 | 62000 | 0.4107 | 0.8638 | | 0.2866 | 2.55 | 62500 | 0.4250 | 0.8638 | | 0.2754 | 2.57 | 63000 | 0.4846 | 0.8580 | | 0.3028 | 2.59 | 63500 | 0.4339 | 0.8627 | | 0.2828 | 2.61 | 64000 | 0.4697 | 0.8613 | | 0.2875 | 2.63 | 64500 | 0.4167 | 0.8638 | | 0.2836 | 2.65 | 65000 | 0.5050 | 0.8600 | | 0.2978 | 2.67 | 65500 | 0.4139 | 0.8628 | | 0.2946 | 2.69 | 66000 | 0.4449 | 0.8644 | | 0.2822 | 2.71 | 66500 | 0.4302 | 0.8612 | | 0.3006 | 2.73 | 67000 | 0.4256 | 0.8631 | 
| 0.2896 | 2.75 | 67500 | 0.4993 | 0.8603 | | 0.2787 | 2.77 | 68000 | 0.4467 | 0.8636 | | 0.3 | 2.79 | 68500 | 0.4196 | 0.8592 | | 0.2939 | 2.81 | 69000 | 0.4234 | 0.8614 | | 0.2841 | 2.83 | 69500 | 0.4173 | 0.8660 | | 0.2935 | 2.85 | 70000 | 0.4054 | 0.8658 | | 0.2977 | 2.87 | 70500 | 0.4400 | 0.8623 | | 0.2853 | 2.89 | 71000 | 0.4322 | 0.8668 | | 0.2779 | 2.91 | 71500 | 0.4460 | 0.8595 | | 0.2923 | 2.93 | 72000 | 0.4279 | 0.8619 | | 0.2915 | 2.95 | 72500 | 0.4324 | 0.8625 | | 0.2927 | 2.97 | 73000 | 0.4108 | 0.8672 | | 0.29 | 2.99 | 73500 | 0.4299 | 0.8579 | | 0.2255 | 3.01 | 74000 | 0.5337 | 0.8637 | | 0.2113 | 3.04 | 74500 | 0.5046 | 0.8624 | | 0.207 | 3.06 | 75000 | 0.6011 | 0.8551 | | 0.2226 | 3.08 | 75500 | 0.5426 | 0.8579 | | 0.2129 | 3.1 | 76000 | 0.5036 | 0.8640 | | 0.2201 | 3.12 | 76500 | 0.5629 | 0.8604 | | 0.2185 | 3.14 | 77000 | 0.5416 | 0.8607 | | 0.21 | 3.16 | 77500 | 0.5457 | 0.8605 | | 0.2372 | 3.18 | 78000 | 0.5337 | 0.8594 | | 0.2237 | 3.2 | 78500 | 0.5060 | 0.8679 | | 0.2277 | 3.22 | 79000 | 0.5647 | 0.8651 | | 0.2301 | 3.24 | 79500 | 0.4906 | 0.8602 | | 0.2238 | 3.26 | 80000 | 0.5231 | 0.8647 | | 0.2365 | 3.28 | 80500 | 0.5628 | 0.8621 | | 0.2189 | 3.3 | 81000 | 0.5496 | 0.8630 | | 0.2233 | 3.32 | 81500 | 0.5418 | 0.8639 | | 0.2216 | 3.34 | 82000 | 0.5032 | 0.8689 | | 0.2314 | 3.36 | 82500 | 0.5437 | 0.8634 | | 0.2351 | 3.38 | 83000 | 0.4863 | 0.8653 | | 0.2378 | 3.4 | 83500 | 0.5158 | 0.8635 | | 0.2357 | 3.42 | 84000 | 0.5142 | 0.8629 | | 0.2484 | 3.44 | 84500 | 0.4536 | 0.8657 | | 0.2261 | 3.46 | 85000 | 0.5619 | 0.8649 | | 0.2323 | 3.48 | 85500 | 0.5371 | 0.8587 | | 0.2336 | 3.5 | 86000 | 0.5562 | 0.8621 | | 0.2259 | 3.52 | 86500 | 0.5339 | 0.8589 | | 0.2371 | 3.54 | 87000 | 0.4711 | 0.8665 | | 0.227 | 3.57 | 87500 | 0.5350 | 0.8644 | | 0.2417 | 3.59 | 88000 | 0.4692 | 0.8665 | | 0.2176 | 3.61 | 88500 | 0.5195 | 0.8655 | | 0.2393 | 3.63 | 89000 | 0.5468 | 0.8588 | | 0.2219 | 3.65 | 89500 | 0.5498 | 0.8646 | | 0.23 | 3.67 | 90000 | 0.5367 | 
0.8703 | | 0.2317 | 3.69 | 90500 | 0.4761 | 0.8639 | | 0.2241 | 3.71 | 91000 | 0.4992 | 0.8654 | | 0.2327 | 3.73 | 91500 | 0.5040 | 0.8678 | | 0.2312 | 3.75 | 92000 | 0.4943 | 0.8639 | | 0.2369 | 3.77 | 92500 | 0.4824 | 0.8721 | | 0.2235 | 3.79 | 93000 | 0.5090 | 0.8661 | | 0.2256 | 3.81 | 93500 | 0.5258 | 0.8644 | | 0.236 | 3.83 | 94000 | 0.5490 | 0.8542 | | 0.2313 | 3.85 | 94500 | 0.4672 | 0.8677 | | 0.228 | 3.87 | 95000 | 0.5037 | 0.8623 | | 0.2297 | 3.89 | 95500 | 0.5207 | 0.8545 | | 0.2332 | 3.91 | 96000 | 0.5139 | 0.8698 | | 0.2331 | 3.93 | 96500 | 0.5182 | 0.8615 | | 0.2354 | 3.95 | 97000 | 0.5090 | 0.8657 | | 0.2273 | 3.97 | 97500 | 0.5523 | 0.8637 | | 0.2433 | 3.99 | 98000 | 0.5148 | 0.8691 | | 0.191 | 4.01 | 98500 | 0.6007 | 0.8654 | | 0.1683 | 4.03 | 99000 | 0.6770 | 0.8636 | | 0.1778 | 4.05 | 99500 | 0.6595 | 0.8635 | | 0.1832 | 4.07 | 100000 | 0.6129 | 0.8608 | | 0.1842 | 4.09 | 100500 | 0.6612 | 0.8611 | | 0.1865 | 4.12 | 101000 | 0.6551 | 0.8658 | | 0.1833 | 4.14 | 101500 | 0.6294 | 0.8643 | | 0.1869 | 4.16 | 102000 | 0.6234 | 0.8614 | | 0.1806 | 4.18 | 102500 | 0.6417 | 0.8655 | | 0.1911 | 4.2 | 103000 | 0.6426 | 0.8607 | | 0.1981 | 4.22 | 103500 | 0.6247 | 0.8589 | | 0.1731 | 4.24 | 104000 | 0.6613 | 0.8626 | | 0.1977 | 4.26 | 104500 | 0.5441 | 0.8661 | | 0.1771 | 4.28 | 105000 | 0.6608 | 0.8644 | | 0.1903 | 4.3 | 105500 | 0.6174 | 0.8603 | | 0.1797 | 4.32 | 106000 | 0.6609 | 0.8607 | | 0.188 | 4.34 | 106500 | 0.6059 | 0.8643 | | 0.1863 | 4.36 | 107000 | 0.5723 | 0.8663 | | 0.19 | 4.38 | 107500 | 0.5959 | 0.8652 | | 0.1869 | 4.4 | 108000 | 0.5898 | 0.8698 | | 0.1909 | 4.42 | 108500 | 0.6052 | 0.8659 | | 0.1908 | 4.44 | 109000 | 0.5854 | 0.8690 | | 0.203 | 4.46 | 109500 | 0.5727 | 0.8694 | | 0.1993 | 4.48 | 110000 | 0.5877 | 0.8653 | | 0.1796 | 4.5 | 110500 | 0.6231 | 0.8679 | | 0.1837 | 4.52 | 111000 | 0.5749 | 0.8694 | | 0.1885 | 4.54 | 111500 | 0.6174 | 0.8618 | | 0.1902 | 4.56 | 112000 | 0.5625 | 0.8682 | | 0.2031 | 4.58 | 112500 | 0.6252 | 
0.8577 | | 0.1986 | 4.6 | 113000 | 0.6147 | 0.8548 | | 0.1769 | 4.62 | 113500 | 0.6351 | 0.8648 | | 0.1974 | 4.64 | 114000 | 0.6396 | 0.8630 | | 0.1952 | 4.67 | 114500 | 0.6174 | 0.8661 | | 0.1904 | 4.69 | 115000 | 0.6188 | 0.8663 | | 0.191 | 4.71 | 115500 | 0.5860 | 0.8646 | | 0.1869 | 4.73 | 116000 | 0.5978 | 0.8586 | | 0.2056 | 4.75 | 116500 | 0.5985 | 0.8648 | | 0.1837 | 4.77 | 117000 | 0.5742 | 0.8636 | | 0.2038 | 4.79 | 117500 | 0.5726 | 0.8662 | | 0.1939 | 4.81 | 118000 | 0.6097 | 0.8623 | | 0.1869 | 4.83 | 118500 | 0.5820 | 0.8651 | | 0.1897 | 4.85 | 119000 | 0.5766 | 0.8666 | | 0.1792 | 4.87 | 119500 | 0.6093 | 0.8683 | | 0.2056 | 4.89 | 120000 | 0.5890 | 0.8633 | | 0.1989 | 4.91 | 120500 | 0.5825 | 0.8674 | | 0.1916 | 4.93 | 121000 | 0.6250 | 0.8641 | | 0.197 | 4.95 | 121500 | 0.5848 | 0.8645 | | 0.1923 | 4.97 | 122000 | 0.5666 | 0.8667 | | 0.1916 | 4.99 | 122500 | 0.6189 | 0.8638 | | 0.1642 | 5.01 | 123000 | 0.7094 | 0.8610 | | 0.1357 | 5.03 | 123500 | 0.6972 | 0.8658 | | 0.1476 | 5.05 | 124000 | 0.6965 | 0.8664 | | 0.1476 | 5.07 | 124500 | 0.7177 | 0.8638 | | 0.1486 | 5.09 | 125000 | 0.6945 | 0.8620 | | 0.1309 | 5.11 | 125500 | 0.7326 | 0.8626 | | 0.1575 | 5.13 | 126000 | 0.6473 | 0.8632 | | 0.1411 | 5.15 | 126500 | 0.6955 | 0.8651 | | 0.1473 | 5.17 | 127000 | 0.6926 | 0.8648 | | 0.153 | 5.19 | 127500 | 0.7010 | 0.8638 | | 0.1488 | 5.22 | 128000 | 0.6643 | 0.8689 | | 0.144 | 5.24 | 128500 | 0.6868 | 0.8668 | | 0.156 | 5.26 | 129000 | 0.6682 | 0.8645 | | 0.1537 | 5.28 | 129500 | 0.6740 | 0.8610 | | 0.1424 | 5.3 | 130000 | 0.7509 | 0.8603 | | 0.1531 | 5.32 | 130500 | 0.6966 | 0.8670 | | 0.1457 | 5.34 | 131000 | 0.7227 | 0.8632 | | 0.1494 | 5.36 | 131500 | 0.6911 | 0.8626 | | 0.1476 | 5.38 | 132000 | 0.6903 | 0.8630 | | 0.1531 | 5.4 | 132500 | 0.6839 | 0.8675 | | 0.1613 | 5.42 | 133000 | 0.6559 | 0.8601 | | 0.1456 | 5.44 | 133500 | 0.7161 | 0.8619 | | 0.1539 | 5.46 | 134000 | 0.7108 | 0.8638 | | 0.1685 | 5.48 | 134500 | 0.6703 | 0.8628 | | 0.1482 | 5.5 | 
135000 | 0.6692 | 0.8651 | | 0.1587 | 5.52 | 135500 | 0.6936 | 0.8658 | | 0.152 | 5.54 | 136000 | 0.6844 | 0.8661 | | 0.1619 | 5.56 | 136500 | 0.6632 | 0.8641 | | 0.154 | 5.58 | 137000 | 0.6451 | 0.8666 | | 0.1525 | 5.6 | 137500 | 0.6529 | 0.8686 | | 0.1545 | 5.62 | 138000 | 0.6860 | 0.8603 | | 0.1487 | 5.64 | 138500 | 0.6842 | 0.8668 | | 0.1546 | 5.66 | 139000 | 0.6692 | 0.8655 | | 0.168 | 5.68 | 139500 | 0.6701 | 0.8649 | | 0.1513 | 5.7 | 140000 | 0.6613 | 0.8680 | | 0.1704 | 5.72 | 140500 | 0.6804 | 0.8643 | | 0.1517 | 5.74 | 141000 | 0.6871 | 0.8684 | | 0.1572 | 5.77 | 141500 | 0.6676 | 0.8670 | | 0.1551 | 5.79 | 142000 | 0.6919 | 0.8638 | | 0.1483 | 5.81 | 142500 | 0.6801 | 0.8667 | | 0.1562 | 5.83 | 143000 | 0.6791 | 0.8628 | | 0.1594 | 5.85 | 143500 | 0.6422 | 0.8671 | | 0.1627 | 5.87 | 144000 | 0.6526 | 0.8679 | | 0.1514 | 5.89 | 144500 | 0.6734 | 0.8698 | | 0.1546 | 5.91 | 145000 | 0.6377 | 0.8711 | | 0.146 | 5.93 | 145500 | 0.7214 | 0.8657 | | 0.1608 | 5.95 | 146000 | 0.6756 | 0.8674 | | 0.1648 | 5.97 | 146500 | 0.6387 | 0.8687 | | 0.1547 | 5.99 | 147000 | 0.6871 | 0.8646 | | 0.1304 | 6.01 | 147500 | 0.7543 | 0.8633 | | 0.1059 | 6.03 | 148000 | 0.7576 | 0.8638 | | 0.1089 | 6.05 | 148500 | 0.7530 | 0.8642 | | 0.112 | 6.07 | 149000 | 0.7951 | 0.8640 | | 0.1198 | 6.09 | 149500 | 0.7381 | 0.8636 | | 0.1222 | 6.11 | 150000 | 0.7560 | 0.8623 | | 0.1024 | 6.13 | 150500 | 0.7965 | 0.8669 | | 0.125 | 6.15 | 151000 | 0.7613 | 0.8620 | | 0.1005 | 6.17 | 151500 | 0.7851 | 0.8651 | | 0.1196 | 6.19 | 152000 | 0.7637 | 0.8652 | | 0.1133 | 6.21 | 152500 | 0.7810 | 0.8660 | | 0.1271 | 6.23 | 153000 | 0.7510 | 0.8672 | | 0.1167 | 6.25 | 153500 | 0.7670 | 0.8638 | | 0.1198 | 6.27 | 154000 | 0.7770 | 0.8632 | | 0.1194 | 6.29 | 154500 | 0.7720 | 0.8607 | | 0.1215 | 6.32 | 155000 | 0.7880 | 0.8609 | | 0.1134 | 6.34 | 155500 | 0.8026 | 0.8617 | | 0.1113 | 6.36 | 156000 | 0.7632 | 0.8652 | | 0.1207 | 6.38 | 156500 | 0.7369 | 0.8686 | | 0.1188 | 6.4 | 157000 | 0.7466 | 0.8657 | | 
0.1283 | 6.42 | 157500 | 0.7531 | 0.8645 | | 0.1186 | 6.44 | 158000 | 0.7529 | 0.8673 | | 0.135 | 6.46 | 158500 | 0.7706 | 0.8589 | | 0.1116 | 6.48 | 159000 | 0.7754 | 0.8646 | | 0.1295 | 6.5 | 159500 | 0.7026 | 0.8693 | | 0.1309 | 6.52 | 160000 | 0.7342 | 0.8656 | | 0.1172 | 6.54 | 160500 | 0.7828 | 0.8644 | | 0.125 | 6.56 | 161000 | 0.7456 | 0.8671 | | 0.1199 | 6.58 | 161500 | 0.7464 | 0.8701 | | 0.1197 | 6.6 | 162000 | 0.7626 | 0.8639 | | 0.1126 | 6.62 | 162500 | 0.8115 | 0.8609 | | 0.1365 | 6.64 | 163000 | 0.7407 | 0.8681 | | 0.122 | 6.66 | 163500 | 0.7648 | 0.8641 | | 0.1157 | 6.68 | 164000 | 0.7636 | 0.8669 | | 0.118 | 6.7 | 164500 | 0.7688 | 0.8686 | | 0.1173 | 6.72 | 165000 | 0.8051 | 0.8687 | | 0.1137 | 6.74 | 165500 | 0.8101 | 0.8635 | | 0.1412 | 6.76 | 166000 | 0.7004 | 0.8689 | | 0.1131 | 6.78 | 166500 | 0.7589 | 0.8664 | | 0.1232 | 6.8 | 167000 | 0.7657 | 0.8654 | | 0.1343 | 6.82 | 167500 | 0.7547 | 0.8652 | | 0.1208 | 6.84 | 168000 | 0.7407 | 0.8699 | | 0.1284 | 6.87 | 168500 | 0.7182 | 0.8677 | | 0.1182 | 6.89 | 169000 | 0.7248 | 0.8681 | | 0.1166 | 6.91 | 169500 | 0.7385 | 0.8678 | | 0.1289 | 6.93 | 170000 | 0.7293 | 0.8672 | | 0.1243 | 6.95 | 170500 | 0.7178 | 0.8696 | | 0.1256 | 6.97 | 171000 | 0.7291 | 0.8633 | | 0.1162 | 6.99 | 171500 | 0.7515 | 0.8648 | | 0.1013 | 7.01 | 172000 | 0.7824 | 0.8655 | | 0.0811 | 7.03 | 172500 | 0.8297 | 0.8647 | | 0.0831 | 7.05 | 173000 | 0.8144 | 0.8678 | | 0.0872 | 7.07 | 173500 | 0.8176 | 0.8679 | | 0.0868 | 7.09 | 174000 | 0.8405 | 0.8642 | | 0.0756 | 7.11 | 174500 | 0.8867 | 0.8642 | | 0.0882 | 7.13 | 175000 | 0.8185 | 0.8659 | | 0.0879 | 7.15 | 175500 | 0.8653 | 0.8625 | | 0.0831 | 7.17 | 176000 | 0.8323 | 0.8655 | | 0.0847 | 7.19 | 176500 | 0.8358 | 0.8650 | | 0.0938 | 7.21 | 177000 | 0.7967 | 0.8665 | | 0.0908 | 7.23 | 177500 | 0.8147 | 0.8640 | | 0.0809 | 7.25 | 178000 | 0.8325 | 0.8679 | | 0.0993 | 7.27 | 178500 | 0.8131 | 0.8655 | | 0.087 | 7.29 | 179000 | 0.8249 | 0.8628 | | 0.0873 | 7.31 | 179500 | 
0.8326 | 0.8661 | | 0.0889 | 7.33 | 180000 | 0.8171 | 0.8685 | | 0.0739 | 7.35 | 180500 | 0.8686 | 0.8642 | | 0.0821 | 7.37 | 181000 | 0.8739 | 0.8669 | | 0.0981 | 7.39 | 181500 | 0.8558 | 0.8639 | | 0.0858 | 7.42 | 182000 | 0.8276 | 0.8673 | | 0.083 | 7.44 | 182500 | 0.8148 | 0.8675 | | 0.0969 | 7.46 | 183000 | 0.8520 | 0.8630 | | 0.0851 | 7.48 | 183500 | 0.8604 | 0.8671 | | 0.0881 | 7.5 | 184000 | 0.8665 | 0.8634 | | 0.1036 | 7.52 | 184500 | 0.8233 | 0.8642 | | 0.0874 | 7.54 | 185000 | 0.8293 | 0.8660 | | 0.0935 | 7.56 | 185500 | 0.8006 | 0.8671 | | 0.0887 | 7.58 | 186000 | 0.8352 | 0.8637 | | 0.0897 | 7.6 | 186500 | 0.8309 | 0.8655 | | 0.0788 | 7.62 | 187000 | 0.8505 | 0.8653 | | 0.0887 | 7.64 | 187500 | 0.8465 | 0.8657 | | 0.0909 | 7.66 | 188000 | 0.8582 | 0.8637 | | 0.0895 | 7.68 | 188500 | 0.8487 | 0.8659 | | 0.0729 | 7.7 | 189000 | 0.8770 | 0.8636 | | 0.0758 | 7.72 | 189500 | 0.8717 | 0.8653 | | 0.0901 | 7.74 | 190000 | 0.8513 | 0.8639 | | 0.0848 | 7.76 | 190500 | 0.8554 | 0.8661 | | 0.0985 | 7.78 | 191000 | 0.8259 | 0.8640 | | 0.091 | 7.8 | 191500 | 0.8483 | 0.8644 | | 0.0868 | 7.82 | 192000 | 0.8776 | 0.8602 | | 0.0898 | 7.84 | 192500 | 0.8470 | 0.8634 | | 0.0959 | 7.86 | 193000 | 0.8344 | 0.8645 | | 0.0939 | 7.88 | 193500 | 0.8419 | 0.8641 | | 0.0769 | 7.9 | 194000 | 0.8355 | 0.8673 | | 0.0808 | 7.92 | 194500 | 0.8642 | 0.8646 | | 0.0797 | 7.94 | 195000 | 0.8401 | 0.8663 | | 0.0875 | 7.97 | 195500 | 0.8598 | 0.8638 | | 0.0896 | 7.99 | 196000 | 0.8624 | 0.8648 | | 0.0762 | 8.01 | 196500 | 0.8645 | 0.8656 | | 0.0552 | 8.03 | 197000 | 0.8844 | 0.8661 | | 0.0598 | 8.05 | 197500 | 0.8870 | 0.8663 | | 0.0528 | 8.07 | 198000 | 0.8866 | 0.8679 | | 0.0679 | 8.09 | 198500 | 0.8835 | 0.8657 | | 0.0628 | 8.11 | 199000 | 0.9017 | 0.8635 | | 0.0644 | 8.13 | 199500 | 0.8979 | 0.8647 | | 0.0446 | 8.15 | 200000 | 0.9144 | 0.8656 | | 0.0524 | 8.17 | 200500 | 0.9116 | 0.8651 | | 0.0561 | 8.19 | 201000 | 0.9281 | 0.8639 | | 0.0525 | 8.21 | 201500 | 0.9115 | 0.8672 | | 0.0646 
| 8.23 | 202000 | 0.8933 | 0.8663 | | 0.0691 | 8.25 | 202500 | 0.8591 | 0.8662 | | 0.0708 | 8.27 | 203000 | 0.8525 | 0.8683 | | 0.0598 | 8.29 | 203500 | 0.8663 | 0.8689 | | 0.0513 | 8.31 | 204000 | 0.8671 | 0.8704 | | 0.0564 | 8.33 | 204500 | 0.8597 | 0.8694 | | 0.0619 | 8.35 | 205000 | 0.8645 | 0.8683 | | 0.0563 | 8.37 | 205500 | 0.8848 | 0.8658 | | 0.0615 | 8.39 | 206000 | 0.8728 | 0.8663 | | 0.0668 | 8.41 | 206500 | 0.8925 | 0.8657 | | 0.0592 | 8.43 | 207000 | 0.8644 | 0.8673 | | 0.0668 | 8.45 | 207500 | 0.8601 | 0.8700 | | 0.071 | 8.47 | 208000 | 0.8735 | 0.8682 | | 0.061 | 8.49 | 208500 | 0.8797 | 0.8662 | | 0.0627 | 8.52 | 209000 | 0.8742 | 0.8663 | | 0.0505 | 8.54 | 209500 | 0.9063 | 0.8649 | | 0.0607 | 8.56 | 210000 | 0.8940 | 0.8677 | | 0.0569 | 8.58 | 210500 | 0.8953 | 0.8673 | | 0.0671 | 8.6 | 211000 | 0.8784 | 0.8667 | | 0.0509 | 8.62 | 211500 | 0.8942 | 0.8678 | | 0.0526 | 8.64 | 212000 | 0.8968 | 0.8686 | | 0.0541 | 8.66 | 212500 | 0.8950 | 0.8694 | | 0.0677 | 8.68 | 213000 | 0.8808 | 0.8665 | | 0.0552 | 8.7 | 213500 | 0.8923 | 0.8662 | | 0.053 | 8.72 | 214000 | 0.9118 | 0.8673 | | 0.0608 | 8.74 | 214500 | 0.9023 | 0.8700 | | 0.0573 | 8.76 | 215000 | 0.9096 | 0.8681 | | 0.0621 | 8.78 | 215500 | 0.8872 | 0.8684 | | 0.0559 | 8.8 | 216000 | 0.8837 | 0.8672 | | 0.0593 | 8.82 | 216500 | 0.8937 | 0.8675 | | 0.0633 | 8.84 | 217000 | 0.8746 | 0.8685 | | 0.0548 | 8.86 | 217500 | 0.9049 | 0.8662 | | 0.0427 | 8.88 | 218000 | 0.9195 | 0.8685 | | 0.0623 | 8.9 | 218500 | 0.9146 | 0.8669 | | 0.0594 | 8.92 | 219000 | 0.9096 | 0.8672 | | 0.0683 | 8.94 | 219500 | 0.8778 | 0.8679 | | 0.0659 | 8.96 | 220000 | 0.8552 | 0.8699 | | 0.0603 | 8.98 | 220500 | 0.8901 | 0.8679 | | 0.0566 | 9.0 | 221000 | 0.8997 | 0.8677 | | 0.0443 | 9.02 | 221500 | 0.9009 | 0.8683 | | 0.0358 | 9.04 | 222000 | 0.9193 | 0.8680 | | 0.0317 | 9.07 | 222500 | 0.9319 | 0.8687 | | 0.0384 | 9.09 | 223000 | 0.9155 | 0.8699 | | 0.0432 | 9.11 | 223500 | 0.9243 | 0.8685 | | 0.0408 | 9.13 | 224000 | 0.9251 | 
0.8693 | | 0.0443 | 9.15 | 224500 | 0.9322 | 0.8677 | | 0.0438 | 9.17 | 225000 | 0.9371 | 0.8666 | | 0.0379 | 9.19 | 225500 | 0.9283 | 0.8693 | | 0.0411 | 9.21 | 226000 | 0.9147 | 0.8703 | | 0.036 | 9.23 | 226500 | 0.9167 | 0.8703 | | 0.0394 | 9.25 | 227000 | 0.9254 | 0.8688 | | 0.0363 | 9.27 | 227500 | 0.9288 | 0.8704 | | 0.0492 | 9.29 | 228000 | 0.9242 | 0.8693 | | 0.0411 | 9.31 | 228500 | 0.9325 | 0.8677 | | 0.0408 | 9.33 | 229000 | 0.9370 | 0.8690 | | 0.0326 | 9.35 | 229500 | 0.9417 | 0.8705 | | 0.038 | 9.37 | 230000 | 0.9480 | 0.8700 | | 0.0412 | 9.39 | 230500 | 0.9398 | 0.8693 | | 0.0588 | 9.41 | 231000 | 0.9174 | 0.8707 | | 0.0417 | 9.43 | 231500 | 0.9204 | 0.8715 | | 0.0362 | 9.45 | 232000 | 0.9319 | 0.8701 | | 0.0283 | 9.47 | 232500 | 0.9562 | 0.8696 | | 0.0353 | 9.49 | 233000 | 0.9525 | 0.8690 | | 0.0384 | 9.51 | 233500 | 0.9561 | 0.8687 | | 0.0406 | 9.53 | 234000 | 0.9375 | 0.8715 | | 0.0356 | 9.55 | 234500 | 0.9575 | 0.8690 | | 0.044 | 9.57 | 235000 | 0.9429 | 0.8708 | | 0.0444 | 9.6 | 235500 | 0.9413 | 0.8690 | | 0.0421 | 9.62 | 236000 | 0.9412 | 0.8689 | | 0.038 | 9.64 | 236500 | 0.9352 | 0.8695 | | 0.0355 | 9.66 | 237000 | 0.9362 | 0.8689 | | 0.04 | 9.68 | 237500 | 0.9403 | 0.8691 | | 0.0356 | 9.7 | 238000 | 0.9402 | 0.8706 | | 0.0383 | 9.72 | 238500 | 0.9466 | 0.8692 | | 0.0534 | 9.74 | 239000 | 0.9378 | 0.8700 | | 0.0383 | 9.76 | 239500 | 0.9390 | 0.8697 | | 0.0418 | 9.78 | 240000 | 0.9404 | 0.8694 | | 0.0335 | 9.8 | 240500 | 0.9390 | 0.8705 | | 0.0398 | 9.82 | 241000 | 0.9430 | 0.8696 | | 0.0336 | 9.84 | 241500 | 0.9438 | 0.8698 | | 0.045 | 9.86 | 242000 | 0.9414 | 0.8703 | | 0.0401 | 9.88 | 242500 | 0.9425 | 0.8696 | | 0.0454 | 9.9 | 243000 | 0.9405 | 0.8696 | | 0.0361 | 9.92 | 243500 | 0.9394 | 0.8696 | | 0.0458 | 9.94 | 244000 | 0.9400 | 0.8690 | | 0.0329 | 9.96 | 244500 | 0.9402 | 0.8693 | | 0.0469 | 9.98 | 245000 | 0.9401 | 0.8691 |
28bd884f769b392132129566cab9faab
creativeml-openrail-m
['text-to-image']
false
noda model Dreambooth model trained by tommy19970714 with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: noda (use that on your prompt) ![noda 0](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%281%29.jpg)![noda 1](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%282%29.jpg)![noda 2](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%283%29.jpg)![noda 3](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%284%29.jpg)![noda 4](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%285%29.jpg)![noda 5](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%286%29.jpg)![noda 6](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%287%29.jpg)![noda 7](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%288%29.jpg)![noda 8](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%289%29.jpg)![noda 9](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2810%29.jpg)![noda 10](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2811%29.jpg)![noda 11](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2812%29.jpg)![noda 12](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2813%29.jpg)![noda 13](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2814%29.jpg)![noda 
14](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2815%29.jpg)![noda 15](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2816%29.jpg)![noda 16](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2817%29.jpg)![noda 17](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2818%29.jpg)![noda 18](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2819%29.jpg)![noda 19](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2820%29.jpg)![noda 20](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2821%29.jpg)![noda 21](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2822%29.jpg)![noda 22](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2823%29.jpg)![noda 23](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2824%29.jpg)![noda 24](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2825%29.jpg)![noda 25](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2826%29.jpg)![noda 26](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2827%29.jpg)![noda 27](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2828%29.jpg)![noda 28](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2829%29.jpg)![noda 29](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2830%29.jpg)![noda 30](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2831%29.jpg)![noda 31](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2832%29.jpg)![noda 32](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2833%29.jpg)![noda 
33](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2834%29.jpg)![noda 34](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2835%29.jpg)
878e1ccb00c192a7ecec7b687e51228c
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-timit-finetune This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset. It achieves the following results on the evaluation set: - Loss: 972.3115 - Wer: 1.0
b8c7c0ec552ded0869e4d8803f8a1009
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 60 - mixed_precision_training: Native AMP
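The `total_train_batch_size` above is derived from the per-device batch size and gradient accumulation (several forward/backward passes before one optimizer step). A minimal sketch of the relationship — the helper function is ours, for illustration only, not part of the training script:

```python
def effective_batch_size(per_device_batch: int, accumulation_steps: int, num_devices: int = 1) -> int:
    """Effective (total) train batch size seen by each optimizer step."""
    return per_device_batch * accumulation_steps * num_devices

# The values from this section: 32 per device, 2 accumulation steps.
print(effective_batch_size(32, 2))  # → 64, matching total_train_batch_size
```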
f5583fb5fb19e99c57d56ebb863984f4
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 3325.1297 | 1.39 | 100 | 4054.7283 | 1.0 | | 1624.4673 | 2.77 | 200 | 1100.8928 | 1.0 | | 1079.3557 | 4.17 | 300 | 1009.5025 | 1.0 | | 1026.4995 | 5.55 | 400 | 979.0 | 1.0 | | 1005.6487 | 6.94 | 500 | 964.3292 | 1.0 | | 1000.4138 | 8.33 | 600 | 972.3115 | 1.0 |
159ef42476b4ea51eccbb7c310f99d73
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Small Western Frisian (Netherlands) This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 fy-NL dataset. It achieves the following results on the evaluation set: - Loss: 0.5703 - Wer: 21.8466
c9f260899a40f5102a79f20fc708692e
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0078 | 10.01 | 1000 | 0.5184 | 23.0973 | | 0.0009 | 21.0 | 2000 | 0.5653 | 22.5434 | | 0.0007 | 31.01 | 3000 | 0.5703 | 21.8466 | | 0.0004 | 42.0 | 4000 | 0.5968 | 21.9574 | | 0.0003 | 52.01 | 5000 | 0.6044 | 22.0360 |
11ace8bcc5684ae501fe4a2c6a81c6f0
apache-2.0
['generated_from_trainer']
false
ApacheBertBaseCase This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2008
479b89be91ea9e3e12162e39b9dd86a4
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.2938 | 1.0 | 20881 | 0.2663 | | 0.2345 | 2.0 | 41762 | 0.2134 | | 0.2182 | 3.0 | 62643 | 0.2008 |
c76bfdcfbda97580a667753b9818d1c0
apache-2.0
['translation']
false
eng-afa * source group: English * target group: Afro-Asiatic languages * OPUS readme: [eng-afa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-afa/README.md) * model: transformer * source language(s): eng * target language(s): acm afb amh apc ara arq ary arz hau_Latn heb kab mlt rif_Latn shy_Latn som tir * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.eval.txt)
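As the list above notes, a sentence-initial language token of the form `>>id<<` selects the target language. A minimal sketch of preparing input text this way (the helper is ours; actual translation would load this checkpoint with the Marian classes from `transformers`):

```python
# Sketch: prepend the required target-language token before tokenization.
# Valid target IDs for this model include e.g. heb, ara, amh, mlt, som.

def add_target_token(text: str, target_lang: str) -> str:
    """Prefix a sentence with the >>id<< token expected by multilingual OPUS-MT models."""
    return f">>{target_lang}<< {text}"

print(add_target_token("How are you?", "heb"))  # → >>heb<< How are you?
```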
6d1f9a5cb756771a83d44dddc5d02293
apache-2.0
['translation']
false
Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng-amh.eng.amh | 11.6 | 0.504 | | Tatoeba-test.eng-ara.eng.ara | 12.0 | 0.404 | | Tatoeba-test.eng-hau.eng.hau | 10.2 | 0.429 | | Tatoeba-test.eng-heb.eng.heb | 32.3 | 0.551 | | Tatoeba-test.eng-kab.eng.kab | 1.6 | 0.191 | | Tatoeba-test.eng-mlt.eng.mlt | 17.7 | 0.551 | | Tatoeba-test.eng.multi | 14.4 | 0.375 | | Tatoeba-test.eng-rif.eng.rif | 1.7 | 0.103 | | Tatoeba-test.eng-shy.eng.shy | 0.8 | 0.090 | | Tatoeba-test.eng-som.eng.som | 16.0 | 0.429 | | Tatoeba-test.eng-tir.eng.tir | 2.7 | 0.238 |
df1db7cd6c81dd476df8088381393969
apache-2.0
['translation']
false
System Info: - hf_name: eng-afa - source_languages: eng - target_languages: afa - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-afa/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'so', 'ti', 'am', 'he', 'mt', 'ar', 'afa'] - src_constituents: {'eng'} - tgt_constituents: {'som', 'rif_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau_Latn', 'acm', 'ary'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: afa - short_pair: en-afa - chrF2_score: 0.375 - bleu: 14.4 - brevity_penalty: 1.0 - ref_len: 58110.0 - src_name: English - tgt_name: Afro-Asiatic languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: afa - prefer_old: False - long_pair: eng-afa - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
9f1979eb1ccd4398566e16a00db6c9cd
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Large Vietnamese - Drishti Sharma This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.3681 - Wer: 16.6594
a4d1bbe8f148d62ecf3c67d57ae0df6a
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 9.5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - training_steps: 600 - mixed_precision_training: Native AMP
6c3424c9742fb21fd2bde9a739297665
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0667 | 1.73 | 600 | 0.3681 | 16.6594 |
1a07d60e46d33b0035642cf18cc14b80
mit
['generated_from_trainer', 'BACnet']
false
BACnet-Klassifizierung-Gewerke-bert-base-german-cased This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on the [gart-labor](https://huggingface.co/gart-labor) "klassifizierung_gewerke" dataset. It achieves the following results on the evaluation set: - Loss: 0.0394 - F1: [0.96296296 0.8 0.97297297 1. 0.99469027 0.98979592 0.98969072]
de0a8fc69d722a66b83444cf78533abe
mit
['generated_from_trainer', 'BACnet']
false
Model description This model makes it possible to classify the components of the technical building equipment described with the BACnet standard into different trades. The model is based on a German-language data set.
e6e615159f572c7a2de59e4785bee6e9
mit
['generated_from_trainer', 'BACnet']
false
Intended uses & limitations The model divides descriptive texts into the following building services trades: Waste_water_water_gas_systems, Other_systems, Building_automation, Refrigeration_systems, Air_technical_systems, Heavy_current_systems and Heat_supply_systems
fef92f1e2ee26c202be0feeae6fc4331
mit
['generated_from_trainer', 'BACnet']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0
26166609a180d0977aed6978002904cb
mit
['generated_from_trainer', 'BACnet']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:-------------------------------------------------------------------------------:| | 0.4309 | 0.99 | 45 | 0.0736 | [0.89655172 0.84210526 0.97297297 0.98901099 0.9929078 0.99492386 0.98701299] | | 0.0722 | 1.99 | 90 | 0.0511 | [0.92307692 0.875 0.96 1. 0.99295775 0.98979592 0.98714653] | | 0.0431 | 2.99 | 135 | 0.0460 | [1. 0.8 0.97297297 1. 0.99469027 0.98979592 0.99224806] | | 0.0313 | 3.99 | 180 | 0.0365 | [1. 0.84210526 0.97297297 1. 0.99646643 0.98979592 0.99224806] | | 0.0238 | 4.99 | 225 | 0.0394 | [0.96296296 0.8 0.97297297 1. 0.99469027 0.98979592 0.98969072] |
c115bc20b181be16a71e67c48e2c7acd
apache-2.0
['generated_from_trainer']
false
swin-base-finetuned-cifar100 This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on the cifar100 dataset. It achieves the following results on the evaluation set: - Accuracy: 0.9201 - Loss: 0.3670
3aab9cf149e62c8b730b87ddcdabb018
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5
ac62f93a0620b05a67aa22d696c5c739
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:----:|:--------:|:---------------:| | 0.3536 | 1.0 | 781 | 0.9052 | 0.3141 | | 0.3254 | 2.0 | 1562 | 0.9117 | 0.2991 | | 0.0936 | 3.0 | 2343 | 0.9138 | 0.3322 | | 0.1054 | 4.0 | 3124 | 0.9158 | 0.3483 | | 0.0269 | 5.0 | 3905 | 0.9201 | 0.3670 |
9f382c1fc95f676fd48fecb1e8b7aef8
apache-2.0
[]
false
This model detects **Offensive Content** in **Malayalam code-mixed language**. The "mono" in the name refers to the monolingual setting, where the model is trained using only Malayalam (pure and code-mixed) data. The weights are initialized from pretrained XLM-RoBERTa-Base, further pretrained with Masked Language Modelling on the target dataset, and then fine-tuned using Cross-Entropy Loss. This model is the best of several trained for the **EACL 2021 Shared Task on Offensive Language Identification in Dravidian Languages**. Genetic-algorithm-based ensembling of test predictions achieved the highest weighted F1 score on the leaderboard (weighted F1 on the held-out test set: this model - 0.97, ensemble - 0.97)
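The genetic-algorithm ensembling mentioned above searches for per-model weights used to combine class probabilities from several fine-tuned models. This is not the paper's exact procedure — the following is a hedged, self-contained sketch on toy data (all function names, GA settings, and the accuracy-based fitness are our assumptions). Seeding the population with one-hot weights and keeping the best individual (elitism) guarantees the ensemble never scores below the best single model:

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_fitness(weights, probs, labels):
    """Accuracy of the weighted ensemble (stand-in fitness for weighted F1)."""
    mix = np.tensordot(weights, probs, axes=1)   # (n_samples, n_classes)
    return float((mix.argmax(axis=1) == labels).mean())

def ga_ensemble(probs, labels, pop=20, gens=30, sigma=0.1):
    """Elitist GA over simplex weights; probs has shape (n_models, n_samples, n_classes)."""
    n_models = probs.shape[0]
    # Seed with one-hot weights (each single model alone) plus random mixes.
    population = list(np.eye(n_models)) + [
        rng.dirichlet(np.ones(n_models)) for _ in range(pop - n_models)
    ]
    best_w, best_fit = None, -1.0
    for _ in range(gens):
        fits = [ensemble_fitness(w, probs, labels) for w in population]
        order = np.argsort(fits)[::-1]
        if fits[order[0]] > best_fit:
            best_fit, best_w = fits[order[0]], population[order[0]]
        parents = [population[i] for i in order[: pop // 2]]
        children = []
        for _ in range(pop - len(parents) - 1):
            a, b = rng.choice(len(parents), 2, replace=False)
            # Crossover by averaging, mutation by Gaussian noise, renormalize.
            child = (parents[a] + parents[b]) / 2 + rng.normal(0, sigma, n_models)
            child = np.clip(child, 0, None)
            children.append(child / child.sum())
        population = [best_w] + parents + children   # elitism keeps best-so-far
    return best_w, best_fit

# Toy demo: 3 "models" of varying noise, 200 samples, 2 classes.
labels = rng.integers(0, 2, 200)
probs = np.stack([
    np.clip(np.eye(2)[labels] + rng.normal(0, s, (200, 2)), 1e-6, None)
    for s in (0.4, 0.6, 0.8)
])
probs /= probs.sum(axis=-1, keepdims=True)

w, fit = ga_ensemble(probs, labels)
singles = [ensemble_fitness(np.eye(3)[i], probs, labels) for i in range(3)]
assert fit >= max(singles)   # guaranteed by one-hot seeds + elitism
```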
ed09f5f48c3e64d6638112e1007b2061
apache-2.0
[]
false
For more details about our paper Debjoy Saha, Naman Paharia, Debajit Chakraborty, Punyajoy Saha, Animesh Mukherjee. "[Hate-Alert@DravidianLangTech-EACL2021: Ensembling strategies for Transformer-based Offensive language Detection](https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38/)". ***Please cite our paper in any published work that uses any of these resources.*** ~~~ @inproceedings{saha-etal-2021-hate, title = "Hate-Alert@{D}ravidian{L}ang{T}ech-{EACL}2021: Ensembling strategies for Transformer-based Offensive language Detection", author = "Saha, Debjoy and Paharia, Naman and Chakraborty, Debajit and Saha, Punyajoy and Mukherjee, Animesh", booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages", month = apr, year = "2021", address = "Kyiv", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38", pages = "270--276", abstract = "Social media often acts as breeding grounds for different forms of offensive content. For low resource languages like Tamil, the situation is more complex due to the poor performance of multilingual or language-specific models and lack of proper benchmark datasets. Based on this shared task {``}Offensive Language Identification in Dravidian Languages{''} at EACL 2021; we present an exhaustive exploration of different transformer models, We also provide a genetic algorithm technique for ensembling different models. Our ensembled models trained separately for each language secured the first position in Tamil, the second position in Kannada, and the first position in Malayalam sub-tasks. The models and codes are provided.", } ~~~
de741bb63efca5bea88f9bff4c5b2b87
apache-2.0
['whisper-event', 'hf-asr-leaderboard', 'generated_from_trainer']
false
Whisper Medium Lithuanian CV11 This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the mozilla-foundation/common_voice_11_0 lt dataset. It achieves the following results on the evaluation set: - Loss: 0.354951 - Wer: 20.446244
dd24eb797648f684c3f86ebcf633fb94
apache-2.0
['whisper-event', 'hf-asr-leaderboard', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0056 | 9.42 | 1000 | 0.3252 | 20.5534 | | 0.0023 | 18.8 | 2000 | 0.3549 | 20.4462 |
1831b1a1adb92ef7ef9fff3a2f9a652b
mit
[]
false
Model Description This model is based on [bigbird-small-indonesian](https://huggingface.co/ilos-vigil/bigbird-small-indonesian) and was finetuned on 2 datasets. It is intended to be used for zero-shot text classification.
ae06168f9426778dab8249eefe01420b
mit
[]
false
How to use > Inference for ZSC (Zero Shot Classification) task ```py >>> pipe = pipeline( ... task='zero-shot-classification', ... model='./tmp/checkpoint-28832' ... ) >>> pipe( ... sequences='Fakta nomor 7 akan membuat ada terkejut', ... candidate_labels=['clickbait', 'bukan clickbait'], ... hypothesis_template='Judul video ini {}.', ... multi_label=False ... ) { 'sequence': 'Fakta nomor 7 akan membuat ada terkejut', 'labels': ['clickbait', 'bukan clickbait'], 'scores': [0.6102734804153442, 0.38972654938697815] } >>> pipe( ... sequences='Samsung tuntut balik Apple dengan alasan hak paten teknologi.', ... candidate_labels=['teknologi', 'olahraga', 'bisnis', 'politik', 'kesehatan', 'kuliner'], ... hypothesis_template='Kategori berita ini adalah {}.', ... multi_label=True ... ) { 'sequence': 'Samsung tuntut balik Apple dengan alasan hak paten teknologi.', 'labels': ['politik', 'teknologi', 'kesehatan', 'bisnis', 'olahraga', 'kuliner'], 'scores': [0.7390161752700806, 0.6657379269599915, 0.4459509551525116, 0.38407933712005615, 0.3679264783859253, 0.14181996881961823] } ``` > Inference for NLI (Natural Language Inference) task ```py >>> pipe = pipeline( ... task='text-classification', ... model='./tmp/checkpoint-28832', ... return_all_scores=True ... ) >>> pipe({ ... 'text': 'Nasi adalah makanan pokok.',
9fdd05bb52bd4404e70ad468d887c24e
mit
[]
false
Hypothesis ```py ... }) [ {'label': 'entailment', 'score': 0.25495028495788574}, {'label': 'neutral', 'score': 0.40920916199684143}, {'label': 'contradiction', 'score': 0.33584052324295044} ] >>> pipe({ ... 'text': 'Python sering digunakan untuk web development dan AI research.', ... 'text_pair': 'AI research biasanya tidak menggunakan bahasa pemrograman Python.' ... }) [ {'label': 'entailment', 'score': 0.12508109211921692}, {'label': 'neutral', 'score': 0.22146646678447723}, {'label': 'contradiction', 'score': 0.653452455997467} ] ```
0d0f27e02fcaa767a867b85ed3ed76ac
mit
[]
false
Limitation and bias This model inherits the limitations/biases of its parent model and of the 2 datasets used for fine-tuning. Like most language models, it is also sensitive to changes in the input. Here's an example. ```py >>> from transformers import pipeline >>> pipe = pipeline( ... task='zero-shot-classification', ... model='./tmp/checkpoint-28832' ... ) >>> text = 'Resep sate ayam enak dan mudah.' >>> candidate_labels = ['kuliner', 'olahraga'] >>> pipe( ... sequences=text, ... candidate_labels=candidate_labels, ... hypothesis_template='Kategori judul artikel ini adalah {}.', ... multi_label=False ... ) { 'sequence': 'Resep sate ayam enak dan mudah.', 'labels': ['kuliner', 'olahraga'], 'scores': [0.7711364030838013, 0.22886358201503754] } >>> pipe( ... sequences=text, ... candidate_labels=candidate_labels, ... hypothesis_template='Kelas kalimat ini {}.', ... multi_label=False ... ) { 'sequence': 'Resep sate ayam enak dan mudah.', 'labels': ['kuliner', 'olahraga'], 'scores': [0.7043636441230774, 0.295636385679245] } >>> pipe( ... sequences=text, ... candidate_labels=candidate_labels, ... hypothesis_template='{}.', ... multi_label=False ... ) { 'sequence': 'Resep sate ayam enak dan mudah.', 'labels': ['kuliner', 'olahraga'], 'scores': [0.5986711382865906, 0.4013288915157318] } ```
27051a1f6c9672a045f7744e89e5f2ea
mit
[]
false
Training, evaluation and testing data This model was finetuned with [IndoNLI](https://huggingface.co/datasets/indonli) and [multilingual-NLI-26lang-2mil7](https://huggingface.co/datasets/MoritzLaurer/multilingual-NLI-26lang-2mil7). Although the `multilingual-NLI-26lang-2mil7` dataset is machine-translated, it slightly improves results on the NLI benchmark and substantially improves results on the ZSC benchmark. Both the evaluation and testing data are based only on the IndoNLI dataset.
384431627f06644f192b0caf74b5bff3
mit
[]
false
Training Procedure The model was finetuned on a single RTX 3060 for 16 epochs (28,832 steps) with an accumulated batch size of 64. The AdamW optimizer is used with a learning rate of 1e-4, weight decay of 0.05, learning-rate warmup for the first 6% of steps (1,730 steps), and linear decay of the learning rate afterwards. Note that while the model weights at epoch 9 have the lowest loss/highest accuracy, they perform slightly worse on the ZSC benchmark. Additional information can be seen in the Tensorboard training logs.
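The schedule described above (linear warmup for the first 6% of steps, then linear decay to zero) can be sketched as a pure function. This mirrors the linear schedule commonly used with `transformers`; the constants are the ones stated in this section:

```python
def linear_warmup_decay_lr(step: int, total_steps: int = 28832,
                           warmup_steps: int = 1730, peak_lr: float = 1e-4) -> float:
    """LR at a given optimizer step: linear warmup to peak_lr, then linear decay to 0."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    # Linear decay from peak_lr at the end of warmup down to 0 at total_steps.
    remaining = max(0, total_steps - step)
    return peak_lr * remaining / (total_steps - warmup_steps)

print(linear_warmup_decay_lr(0))      # → 0.0
print(linear_warmup_decay_lr(1730))   # → 0.0001 (peak LR)
print(linear_warmup_decay_lr(28832))  # → 0.0
```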
959ae5ab460e0538d5fd8c2c2be67e85
mit
[]
false
Benchmark as NLI model Both benchmarks include the results of 2 other models as additional comparison. An additional benchmark using the IndoNLI dataset is available in its paper, [IndoNLI: A Natural Language Inference Dataset for Indonesian](https://aclanthology.org/2021.emnlp-main.821/). | Model | bigbird-small-indonesian-nli | xlm-roberta-large-xnli | mDeBERTa-v3-base-xnli-multilingual-nli-2mil7 | | ------------------------------------------ | ---------------------------- | ---------------------- | -------------------------------------------- | | Parameters | 30.6M | 559.9M | 278.8M | | Multilingual | | V | V | | Finetuned on IndoNLI | V | | V | | Finetuned on multilingual-NLI-26lang-2mil7 | V | | | | Test (Lay) | 0.6888 | 0.2226 | 0.8151 | | Test (Expert) | 0.5734 | 0.3505 | 0.7775 |
b57aafd36371b42aa1b9391e5e6eee7b
mit
[]
false
Benchmark as ZSC model The [Indonesian-Twitter-Emotion-Dataset](https://github.com/meisaputri21/Indonesian-Twitter-Emotion-Dataset/) is used to perform the ZSC benchmark. The benchmark covers 4 parameter combinations (multi-label on/off × hypothesis template on/off), which affect each model's performance differently. The hypothesis templates used are `Kalimat ini mengekspresikan perasaan {}.` and `{}.`. Note that the F1 score only counts the label with the highest probability. | Model | Multi-label | Use template | F1 Score | | -------------------------------------------- | ----------- | ------------ | ------------ | | bigbird-small-indonesian-nli | V | V | 0.3574 | | | V | | 0.3654 | | | | V | 0.3985 | | | | | _0.4160_ | | xlm-roberta-large-xnli | V | V | _**0.6292**_ | | | V | | 0.5596 | | | | V | 0.5737 | | | | | 0.5433 | | mDeBERTa-v3-base-xnli-multilingual-nli-2mil7 | V | V | 0.5324 | | | V | | _0.5499_ | | | | V | 0.5269 | | | | | 0.5228 |
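Since the F1 measurement above only counts the label with the highest probability, each zero-shot output is first reduced to a single prediction. A minimal sketch of that reduction (pure Python; the function name is ours):

```python
def top_label(zsc_output: dict) -> str:
    """Reduce a zero-shot-classification output to its highest-scoring label."""
    scores = zsc_output["scores"]  # pipeline returns labels sorted by score, but don't rely on it
    best = max(range(len(scores)), key=scores.__getitem__)
    return zsc_output["labels"][best]

out = {
    "sequence": "Resep sate ayam enak dan mudah.",
    "labels": ["kuliner", "olahraga"],
    "scores": [0.77, 0.23],
}
print(top_label(out))  # → kuliner
```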
8420bcc3b33b4625039c3adc144cb08f
apache-2.0
['generated_from_trainer']
false
t5-small-finetuned-wikihow_3epoch_b4_lr3e-5 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wikihow dataset. It achieves the following results on the evaluation set: - Loss: 2.4351 - Rouge1: 26.1071 - Rouge2: 9.3627 - Rougel: 22.0825 - Rougelsum: 25.4514 - Gen Len: 18.474
d1fab528ce41493bdbad53a62a684586
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP
b248a15cf2df5359583fd033054a941f
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:------:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.9216 | 0.13 | 5000 | 2.6385 | 23.8039 | 7.8863 | 20.0109 | 23.0802 | 18.3481 | | 2.8158 | 0.25 | 10000 | 2.5884 | 24.2567 | 8.2003 | 20.438 | 23.5325 | 18.3833 | | 2.7743 | 0.38 | 15000 | 2.5623 | 24.8471 | 8.3768 | 20.8711 | 24.1114 | 18.2901 | | 2.7598 | 0.51 | 20000 | 2.5368 | 25.1566 | 8.6721 | 21.1896 | 24.4558 | 18.3561 | | 2.7192 | 0.64 | 25000 | 2.5220 | 25.3477 | 8.8106 | 21.3799 | 24.6742 | 18.3108 | | 2.7207 | 0.76 | 30000 | 2.5114 | 25.5912 | 8.998 | 21.5508 | 24.9344 | 18.3445 | | 2.7041 | 0.89 | 35000 | 2.4993 | 25.457 | 8.8644 | 21.4516 | 24.7965 | 18.4354 | | 2.687 | 1.02 | 40000 | 2.4879 | 25.5886 | 8.9766 | 21.6794 | 24.9512 | 18.4035 | | 2.6652 | 1.14 | 45000 | 2.4848 | 25.7367 | 9.078 | 21.7096 | 25.0924 | 18.4328 | | 2.6536 | 1.27 | 50000 | 2.4761 | 25.7368 | 9.1609 | 21.729 | 25.0866 | 18.3117 | | 2.6589 | 1.4 | 55000 | 2.4702 | 25.7738 | 9.1413 | 21.7492 | 25.114 | 18.4862 | | 2.6384 | 1.53 | 60000 | 2.4620 | 25.7433 | 9.1356 | 21.8198 | 25.0896 | 18.489 | | 2.6337 | 1.65 | 65000 | 2.4595 | 26.0919 | 9.2605 | 21.9447 | 25.4065 | 18.4083 | | 2.6375 | 1.78 | 70000 | 2.4557 | 26.0912 | 9.3469 | 22.0182 | 25.4428 | 18.4133 | | 2.6441 | 1.91 | 75000 | 2.4502 | 26.1366 | 9.3143 | 22.058 | 25.4673 | 18.4972 | | 2.6276 | 2.03 | 80000 | 2.4478 | 25.9929 | 9.2464 | 21.9271 | 25.3263 | 18.469 | | 2.6062 | 2.16 | 85000 | 2.4467 | 26.0465 | 9.3166 | 22.0342 | 25.3998 | 18.3777 | | 2.6126 | 2.29 | 90000 | 2.4407 | 26.1953 | 9.3848 | 22.1148 | 25.5161 | 18.467 | | 2.6182 | 2.42 | 95000 | 2.4397 | 26.1331 | 9.3626 | 22.1076 | 25.4627 | 18.4413 | | 2.6041 | 2.54 | 100000 | 2.4375 | 26.1301 | 9.3567 | 22.0869 | 25.465 | 18.4929 | | 2.5996 | 2.67 | 105000 | 2.4367 | 26.0956 | 9.3314 | 22.063 | 25.4242 | 18.5074 | | 2.6144 | 2.8 | 110000 | 
2.4355 | 26.1764 | 9.4157 | 22.1231 | 25.5175 | 18.4729 | | 2.608 | 2.93 | 115000 | 2.4351 | 26.1071 | 9.3627 | 22.0825 | 25.4514 | 18.474 |
9e54e0d770d22bdcd182907f070d8bde
apache-2.0
['part-of-speech', 'token-classification']
false
XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Irish This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
ebe4c0f5665039960476fd988a278215
apache-2.0
['part-of-speech', 'token-classification']
false
Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-ga") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-ga") ```
eaf48fdc3c5438aa634a1b50541af92b
mit
['generated_from_keras_callback']
false
nandysoham/Wayback_Machine-clustered This model is a fine-tuned version of [nandysoham16/20-clustered_aug](https://huggingface.co/nandysoham16/20-clustered_aug) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3070 - Train End Logits Accuracy: 0.9410 - Train Start Logits Accuracy: 0.8924 - Validation Loss: 0.4163 - Validation End Logits Accuracy: 0.6667 - Validation Start Logits Accuracy: 1.0 - Epoch: 0
71ea65c438944c80a51b35afd9aefd41
mit
['generated_from_keras_callback']
false
Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 0.3070 | 0.9410 | 0.8924 | 0.4163 | 0.6667 | 1.0 | 0 |
a674c1a2ea86f37f35e99eb30659f772
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-06 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3
2af1720d4458348ac2730bd255c64ba5
apache-2.0
['generated_from_trainer']
false
favs_filter_classification_v2 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the filter_v2 dataset. It achieves the following results on the evaluation set: - Loss: 0.2016 - F1: 0.9762 - Roc Auc: 0.9844 - Accuracy: 0.9545
fdf961a9eb1abdfee5d3094a50f34ab1
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20
cc0496eb14578c39408d4116b8c7ef17
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:| | 0.6596 | 1.0 | 16 | 0.6086 | 0.2687 | 0.5474 | 0.0 | | 0.5448 | 2.0 | 32 | 0.5354 | 0.3824 | 0.6063 | 0.0 | | 0.5106 | 3.0 | 48 | 0.4874 | 0.4444 | 0.6382 | 0.0455 | | 0.4353 | 4.0 | 64 | 0.4301 | 0.5352 | 0.6889 | 0.1818 | | 0.3699 | 5.0 | 80 | 0.3890 | 0.6579 | 0.7640 | 0.3636 | | 0.349 | 6.0 | 96 | 0.3663 | 0.6667 | 0.7633 | 0.3182 | | 0.3104 | 7.0 | 112 | 0.3327 | 0.7105 | 0.7953 | 0.4545 | | 0.3023 | 8.0 | 128 | 0.2971 | 0.7733 | 0.8303 | 0.5455 | | 0.2676 | 9.0 | 144 | 0.2766 | 0.8395 | 0.8861 | 0.7727 | | 0.2374 | 10.0 | 160 | 0.2541 | 0.8537 | 0.8980 | 0.7727 | | 0.2238 | 11.0 | 176 | 0.2399 | 0.9024 | 0.9293 | 0.8182 | | 0.2084 | 12.0 | 192 | 0.2221 | 0.9286 | 0.9531 | 0.8636 | | 0.2143 | 13.0 | 208 | 0.2138 | 0.9286 | 0.9531 | 0.8636 | | 0.1846 | 14.0 | 224 | 0.2016 | 0.9762 | 0.9844 | 0.9545 | | 0.1812 | 15.0 | 240 | 0.1957 | 0.9762 | 0.9844 | 0.9545 | | 0.1756 | 16.0 | 256 | 0.1881 | 0.9647 | 0.9806 | 0.9091 | | 0.1662 | 17.0 | 272 | 0.1845 | 0.9762 | 0.9844 | 0.9545 | | 0.1715 | 18.0 | 288 | 0.1802 | 0.9762 | 0.9844 | 0.9545 | | 0.1585 | 19.0 | 304 | 0.1782 | 0.9762 | 0.9844 | 0.9545 | | 0.1595 | 20.0 | 320 | 0.1775 | 0.9762 | 0.9844 | 0.9545 |
bb842994141a8d0653d7e9551089fa5b
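The results above report both F1 (0.9762) and a lower Accuracy (0.9545), a pattern typical of multi-label evaluation: micro-averaged F1 credits every correctly predicted label, while subset accuracy only credits a sample when its entire label vector matches. A minimal sketch with hypothetical binary label vectors (the data is an assumption for illustration, not taken from the card):

```python
def micro_f1(y_true, y_pred):
    # micro-averaged F1: pool TP/FP/FN over every (sample, label) pair
    pairs = [(t, p) for yt, yp in zip(y_true, y_pred) for t, p in zip(yt, yp)]
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def subset_accuracy(y_true, y_pred):
    # exact-match accuracy: a sample counts only if ALL its labels are right
    return sum(yt == yp for yt, yp in zip(y_true, y_pred)) / len(y_true)

# hypothetical multi-label predictions: one stray label on the second sample
y_true = [[1, 0, 1], [0, 1, 0], [1, 1, 0], [0, 0, 1]]
y_pred = [[1, 0, 1], [0, 1, 1], [1, 1, 0], [0, 0, 1]]
print(round(micro_f1(y_true, y_pred), 4))  # 0.9231 -- one false positive barely hurts
print(subset_accuracy(y_true, y_pred))     # 0.75   -- but it fails the whole sample
```

A single wrong label lowers micro-F1 only slightly while costing a full point of subset accuracy, which is why the two metrics diverge in the table above.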
apache-2.0
['audio', 'speech', 'speech-gender-recognition']
false
Requirement packages ```bash !pip install git+https://github.com/huggingface/datasets.git !pip install git+https://github.com/huggingface/transformers.git !pip install torchaudio !pip install librosa ``` ```bash !git clone https://github.com/m3hrdadfi/soxan.git . ```
585b99ae85b5a6cb9592bc428cee3198
apache-2.0
['audio', 'speech', 'speech-gender-recognition']
false
Prediction ```python import torch import torch.nn as nn import torch.nn.functional as F import torchaudio from transformers import AutoConfig, Wav2Vec2FeatureExtractor from src.models import Wav2Vec2ForSpeechClassification, HubertForSpeechClassification import librosa import IPython.display as ipd import numpy as np import pandas as pd ``` ```python device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model_name_or_path = "m3hrdadfi/hubert-base-persian-speech-gender-recognition" config = AutoConfig.from_pretrained(model_name_or_path) feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name_or_path) sampling_rate = feature_extractor.sampling_rate model = HubertForSpeechClassification.from_pretrained(model_name_or_path).to(device) ``` ```python def speech_file_to_array_fn(path, sampling_rate): speech_array, _sampling_rate = torchaudio.load(path) resampler = torchaudio.transforms.Resample(_sampling_rate, sampling_rate) speech = resampler(speech_array).squeeze().numpy() return speech def predict(path, sampling_rate): speech = speech_file_to_array_fn(path, sampling_rate) inputs = feature_extractor(speech, sampling_rate=sampling_rate, return_tensors="pt", padding=True) inputs = {key: inputs[key].to(device) for key in inputs} with torch.no_grad(): logits = model(**inputs).logits scores = F.softmax(logits, dim=1).detach().cpu().numpy()[0] outputs = [{"Label": config.id2label[i], "Score": f"{score * 100:.1f}%"} for i, score in enumerate(scores)] return outputs ``` ```python path = "/path/to/female.wav" outputs = predict(path, sampling_rate) ``` ```bash [{'Label': 'F', 'Score': '98.2%'}, {'Label': 'M', 'Score': '1.8%'}] ```
481939580c47c91526326d16030db4e3
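The class scores in the prediction section's sample output come from `F.softmax` over the two logits, so the two percentages always sum to 100. A minimal plain-Python sketch of that step; the logit values are assumptions chosen to reproduce a roughly 98/2 split, not taken from the model:

```python
import math

def softmax(logits):
    # numerically stable softmax, mirroring F.softmax(logits, dim=1) on one row
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# hypothetical logits for ['F', 'M']; a gap of 4 yields roughly 98.2% / 1.8%
scores = softmax([2.0, -2.0])
print([f"{s * 100:.1f}%" for s in scores])  # ['98.2%', '1.8%']
```

Subtracting the row maximum before exponentiating changes nothing mathematically but prevents overflow for large logits, which is the standard trick softmax implementations use.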