Dataset columns:
- license: string (2-30 chars)
- tags: string (2-513 chars)
- is_nc: bool (1 class)
- readme_section: string (201-597k chars)
- hash: string (32 chars)
apache-2.0
['multiberts', 'multiberts-seed_3', 'multiberts-seed_3-step_40k']
false
How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on TensorFlow:

```python
from transformers import BertTokenizer, TFBertModel

tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_3-step_40k')
model = TFBertModel.from_pretrained("google/multiberts-seed_3-step_40k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

PyTorch version:

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_3-step_40k')
model = BertModel.from_pretrained("google/multiberts-seed_3-step_40k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
0e9dd4071e5fc487dff0167e477c1b69
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-jm-distilled-clinc_hub This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.1291 - Accuracy: 0.9426
19d36f39dcb886db1267f35c4cfd3b55
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
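For orientation, a minimal sketch of how these settings map onto `transformers.TrainingArguments` (the `output_dir` is a placeholder; Adam with these betas/epsilon and a linear schedule are the Trainer defaults):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",  # placeholder
    learning_rate=2e-05,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=48,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```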
fcc3311ed381b60f5161e7f602e720f9
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1473 | 1.0 | 318 | 0.7476 | 0.7529 |
| 0.5789 | 2.0 | 636 | 0.3733 | 0.8858 |
| 0.3175 | 3.0 | 954 | 0.2273 | 0.9194 |
| 0.2106 | 4.0 | 1272 | 0.1733 | 0.9335 |
| 0.1666 | 5.0 | 1590 | 0.1521 | 0.9365 |
| 0.1452 | 6.0 | 1908 | 0.1408 | 0.9416 |
| 0.133 | 7.0 | 2226 | 0.1349 | 0.9432 |
| 0.1257 | 8.0 | 2544 | 0.1316 | 0.9439 |
| 0.1218 | 9.0 | 2862 | 0.1298 | 0.9426 |
| 0.1197 | 10.0 | 3180 | 0.1291 | 0.9426 |
f92ce04f43581a075188a943bacf2169
apache-2.0
['automatic-speech-recognition', 'th']
false
exp_w2v2t_th_r-wav2vec2_s730 Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (th)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
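Since the model was fine-tuned with HuggingSound, transcription can be run through that tool as well. A minimal sketch, assuming the checkpoint is published on the Hub under this name and that `audio.wav` is a 16kHz recording:

```python
from huggingsound import SpeechRecognitionModel

# Repo id and audio path are assumptions for illustration.
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_th_r-wav2vec2_s730")
transcriptions = model.transcribe(["audio.wav"])
print(transcriptions[0]["transcription"])
```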
01184eb547de8f98271a127eb55e53ee
mit
['question-generation']
false
T5 for question-generation This is a [t5-base](https://arxiv.org/abs/1910.10683) model trained for the answer-aware question generation task. The answer spans are highlighted within the text with special highlight tokens. You can play with the model using the inference API: just highlight the answer spans with `<hl>` tokens and end the text with `</s>`. For example: `<hl> 42 <hl> is the answer to life, the universe and everything. </s>` For more details see [this](https://github.com/patil-suraj/question_generation) repo.
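As a plain `transformers` sketch of the same highlight format (the generation settings here are illustrative; the repo's pipeline in the next section handles this formatting for you):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("valhalla/t5-base-qg-hl")
model = AutoModelForSeq2SeqLM.from_pretrained("valhalla/t5-base-qg-hl")

# Highlight the answer span with <hl> tokens and end with </s>, as described above.
text = "<hl> 42 <hl> is the answer to life, the universe and everything. </s>"
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, num_beams=4, max_length=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```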
3f585a1155028a4821d05b4f5d0edb31
mit
['question-generation']
false
Model in action 🚀 You'll need to clone the [repo](https://github.com/patil-suraj/question_generation). [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb)

```python3
from pipelines import pipeline

nlp = pipeline("question-generation", model="valhalla/t5-base-qg-hl")
nlp("42 is the answer to life, universe and everything.")
# => [{'answer': '42', 'question': 'What is the answer to life, universe and everything?'}]
```
e365fb17c2cb86545fdf36865b2b228f
mit
['ja', 'japanese', 'bart', 'lm', 'nlp']
false
bart-base-japanese-news(base-sized model) This repository provides a Japanese BART model. The model was trained by [Stockmark Inc.](https://stockmark.co.jp) An introductory article on the model can be found at the following URL. [https://tech.stockmark.co.jp/blog/bart-japanese-base-news/](https://tech.stockmark.co.jp/blog/bart-japanese-base-news/)
d400b6d5e7b2ddd82bc7206fdaf3344e
mit
['ja', 'japanese', 'bart', 'lm', 'nlp']
false
Model description BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering).
6ae04759bdc75e9f4f55902201c4e968
mit
['ja', 'japanese', 'bart', 'lm', 'nlp']
false
Simple use

```python
from transformers import AutoTokenizer, BartModel

model_name = "stockmark/bart-base-japanese-news"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = BartModel.from_pretrained(model_name)

inputs = tokenizer("今日は良い天気です。", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
5e7f314b5a68e48bd4c218ba4c5853d8
mit
['ja', 'japanese', 'bart', 'lm', 'nlp']
false
Sentence Permutation

```python
import torch
from transformers import AutoTokenizer, BartForConditionalGeneration

model_name = "stockmark/bart-base-japanese-news"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = BartForConditionalGeneration.from_pretrained(model_name)

if torch.cuda.is_available():
    model = model.to("cuda")
```
3813392874de227dacd104290311ce5a
mit
['ja', 'japanese', 'bart', 'lm', 'nlp']
false
```python
# correct order text is "明日は大雨です。電車は止まる可能性があります。ですから、自宅から働きます。"
text = "電車は止まる可能性があります。ですから、自宅から働きます。明日は大雨です。"

inputs = tokenizer([text], max_length=128, return_tensors="pt", truncation=True)
text_ids = model.generate(inputs["input_ids"].to(model.device), num_beams=3, max_length=128)
output = tokenizer.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
print(output)
```
ecc901468e8aae80585aa3a516349816
mit
['ja', 'japanese', 'bart', 'lm', 'nlp']
false
Mask filling

```python
import torch
from transformers import AutoTokenizer, BartForConditionalGeneration

model_name = "stockmark/bart-base-japanese-news"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = BartForConditionalGeneration.from_pretrained(model_name)

if torch.cuda.is_available():
    model = model.to("cuda")

text = "今日の天気は<mask>のため、傘が必要でしょう。"

inputs = tokenizer([text], max_length=128, return_tensors="pt", truncation=True)
text_ids = model.generate(inputs["input_ids"].to(model.device), num_beams=3, max_length=128)
output = tokenizer.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
print(output)
```
723a83c0faaf6920d4847623ce4b6e24
mit
['ja', 'japanese', 'bart', 'lm', 'nlp']
false
Text generation *NOTE:* You can use the raw model for text generation. However, the model is mostly meant to be fine-tuned on a supervised dataset.

```python
import torch
from transformers import AutoTokenizer, BartForConditionalGeneration

model_name = "stockmark/bart-base-japanese-news"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = BartForConditionalGeneration.from_pretrained(model_name)

if torch.cuda.is_available():
    model = model.to("cuda")

text = "自然言語処理(しぜんげんごしょり、略称:NLP)は、人間が日常的に使っている自然言語をコンピュータに処理させる一連の技術であり、人工知能と言語学の一分野である。「計算言語学」(computational linguistics)との類似もあるが、自然言語処理は工学的な視点からの言語処理をさすのに対して、計算言語学は言語学的視点を重視する手法をさす事が多い。"

inputs = tokenizer([text], max_length=512, return_tensors="pt", truncation=True)
text_ids = model.generate(inputs["input_ids"].to(model.device), num_beams=3, min_length=0, max_length=40)
output = tokenizer.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
print(output)
```
27028ff8289e662c93684cd33da822cd
mit
['ja', 'japanese', 'bart', 'lm', 'nlp']
false
Tokenization The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer. The vocabulary was first trained on a selected subset from the training data using the official sentencepiece training script.
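A quick sketch of the tokenizer in isolation, following the usage pattern from the earlier examples:

```python
from transformers import AutoTokenizer

# trust_remote_code is required because the tokenizer class ships with the repo.
tokenizer = AutoTokenizer.from_pretrained(
    "stockmark/bart-base-japanese-news", trust_remote_code=True
)
print(tokenizer.tokenize("今日は良い天気です。"))  # sentencepiece subword pieces
```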
6edd9da9905a403f5d1e10b81efd5fb2
mit
['ja', 'japanese', 'bart', 'lm', 'nlp']
false
Licenses The pretrained models are distributed under the terms of the [MIT License](https://opensource.org/licenses/mit-license.php). *NOTE:* Only tokenization_bart_japanese_news.py is [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0). Please see tokenization_bart_japanese_news.py for license details.
381de9ea66a5ed93ad38681749189987
mit
['donut', 'image-to-text', 'vision']
false
Donut (base-sized model, fine-tuned on ZhTrainTicket) Donut model fine-tuned on ZhTrainTicket. It was introduced in the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim et al. and first released in [this repository](https://github.com/clovaai/donut). Disclaimer: The team releasing Donut did not write a model card for this model so this model card has been written by the Hugging Face team.
5544f8130eb93ec85d16b63828ea2c49
mit
['donut', 'image-to-text', 'vision']
false
Model description Donut consists of a vision encoder (Swin Transformer) and a text decoder (BART). Given an image, the encoder first encodes the image into a tensor of embeddings (of shape batch_size, seq_len, hidden_size), after which the decoder autoregressively generates text, conditioned on the encoding of the encoder. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/donut_architecture.jpg)
7f500c07b0dc923c2c1ede39efa941ae
mit
['donut', 'image-to-text', 'vision']
false
Intended uses & limitations This model is fine-tuned on ZhTrainTicket, a document parsing dataset. We refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/donut) which includes code examples.
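A minimal inference sketch along the lines of those docs; the checkpoint name, image path, and the `<s_zhtrainticket>` task prompt below are assumptions for this fine-tune:

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "naver-clova-ix/donut-base-finetuned-zhtrainticket"  # assumed repo id
processor = DonutProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("ticket.png").convert("RGB")  # placeholder input image
pixel_values = processor(image, return_tensors="pt").pixel_values
task_prompt = "<s_zhtrainticket>"  # assumed task start token
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.token2json(processor.batch_decode(outputs)[0]))
```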
1cff7f96ce25924c5dcb6a05dee7ed5e
mit
['donut', 'image-to-text', 'vision']
false
BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2111-15664,
  author     = {Geewook Kim and Teakgyu Hong and Moonbin Yim and Jinyoung Park and Jinyeong Yim and Wonseok Hwang and Sangdoo Yun and Dongyoon Han and Seunghyun Park},
  title      = {Donut: Document Understanding Transformer without {OCR}},
  journal    = {CoRR},
  volume     = {abs/2111.15664},
  year       = {2021},
  url        = {https://arxiv.org/abs/2111.15664},
  eprinttype = {arXiv},
  eprint     = {2111.15664},
  timestamp  = {Thu, 02 Dec 2021 10:50:44 +0100},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2111-15664.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```
c45c89b64cfe26abc5c85edc8fb9f6db
mit
['generated_from_trainer']
false
my_awesome_wnut_model This model is a fine-tuned version of [facebook/muppet-roberta-base](https://huggingface.co/facebook/muppet-roberta-base) on the wnut_17 dataset. It achieves the following results on the evaluation set: - Loss: 0.2298 - Precision: 0.5607 - Recall: 0.5097 - F1: 0.5340 - Accuracy: 0.9501
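A minimal sketch of running the fine-tuned model for NER; the model id here is the placeholder name from this card, so substitute the actual Hub repo:

```python
from transformers import pipeline

ner = pipeline("token-classification", model="my_awesome_wnut_model",
               aggregation_strategy="simple")
print(ner("My name is Sarah and I live in London."))
```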
a868410f3733be0b5a5ff895e92e4532
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 213 | 0.2331 | 0.5333 | 0.4310 | 0.4767 | 0.9459 |
| No log | 2.0 | 426 | 0.2298 | 0.5607 | 0.5097 | 0.5340 | 0.9501 |
2275154245ee72efe8f0814944cec9c5
mit
['generated_from_trainer']
false
roberta-base-iphone-2 This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1359 - Accuracy: 0.9833
22790c8b29ae75b6e7c17762cddadfd7
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 27 | 0.2765 | 0.8333 |
| No log | 2.0 | 54 | 0.1359 | 0.9833 |
cdfa05c871a807fc6d91ddc060f12605
apache-2.0
[]
false
Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants. This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra) You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer

More resources by HFL: https://github.com/ymcui/HFL-Anthology
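As a loading sketch (the exact repo id is an assumption; see https://huggingface.co/hfl for the released checkpoints):

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-electra-small-discriminator")
model = AutoModel.from_pretrained("hfl/chinese-electra-small-discriminator")
outputs = model(**tokenizer("今天天气真好", return_tensors="pt"))
print(outputs.last_hidden_state.shape)
```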
e1b38aadff1b4bf8fc2074fc789c968f
apache-2.0
[]
false
Citation If you find our resource or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922

```bibtex
@inproceedings{cui-etal-2020-revisiting,
  title     = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
  author    = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping",
  booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
  month     = nov,
  year      = "2020",
  address   = "Online",
  publisher = "Association for Computational Linguistics",
  url       = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
  pages     = "657--668",
}
```
0cb7f7aa5d9a53969351d3399fe62900
mit
['generated_from_trainer']
false
mbti-career This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3516
f2502fca8d28d454b20b424cb59d6dbe
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 15
a748dd16caf68674a3c1730adaa8272c
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6547 | 0.59 | 100 | 0.6169 |
| 0.5967 | 1.18 | 200 | 0.5943 |
| 0.5872 | 1.76 | 300 | 0.5696 |
| 0.554 | 2.35 | 400 | 0.5287 |
| 0.5041 | 2.94 | 500 | 0.4890 |
| 0.4773 | 3.53 | 600 | 0.4895 |
| 0.4691 | 4.12 | 700 | 0.4840 |
| 0.4253 | 4.71 | 800 | 0.4573 |
| 0.4002 | 5.29 | 900 | 0.4240 |
| 0.3813 | 5.88 | 1000 | 0.4031 |
| 0.3561 | 6.47 | 1100 | 0.3943 |
| 0.3359 | 7.06 | 1200 | 0.3864 |
| 0.3126 | 7.65 | 1300 | 0.3889 |
| 0.2948 | 8.24 | 1400 | 0.3869 |
| 0.2816 | 8.82 | 1500 | 0.3788 |
| 0.2522 | 9.41 | 1600 | 0.3891 |
| 0.2451 | 10.0 | 1700 | 0.3849 |
| 0.2148 | 10.59 | 1800 | 0.3784 |
| 0.2132 | 11.18 | 1900 | 0.3716 |
| 0.1882 | 11.76 | 2000 | 0.3659 |
| 0.1754 | 12.35 | 2100 | 0.3737 |
| 0.169 | 12.94 | 2200 | 0.3711 |
| 0.1559 | 13.53 | 2300 | 0.3672 |
| 0.1537 | 14.12 | 2400 | 0.3391 |
| 0.1427 | 14.71 | 2500 | 0.3516 |
8eeb7483b142f697f13cc2960e5835aa
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2128 - Accuracy: 0.925 - F1: 0.9248
e7056c1719a3c8ce21b3f5ed6f95a098
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8215 | 1.0 | 250 | 0.3033 | 0.9105 | 0.9078 |
| 0.2435 | 2.0 | 500 | 0.2128 | 0.925 | 0.9248 |
73a1aa320884e4fd510a58592d028847
['cc0-1.0']
['collaborative-filtering', 'recommender', 'tabular-classification']
false
Model description This repo contains the model and the notebook on [how to build and train a Keras model for Collaborative Filtering for Movie Recommendations](https://keras.io/examples/structured_data/collaborative_filtering_movielens/). Full credits to [Siddhartha Banerjee](https://twitter.com/sidd2006).
308c9f529816d2742cd0dbd47144967d
['cc0-1.0']
['collaborative-filtering', 'recommender', 'tabular-classification']
false
Intended uses & limitations Given a user and the movies they have rated highly in the past, this model outputs the predicted rating (between 0 and 1) that the user would give to a movie they haven't seen yet. These predictions can be used to surface the top recommended movies for that user.
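A sketch of that ranking step, assuming the trained Keras model from this repo; the model path and the encoded ids below are placeholders:

```python
import numpy as np
from tensorflow import keras

model = keras.models.load_model("collaborative_filtering_model")  # placeholder path
user_id = 42                                  # encoded user index (assumption)
unseen = np.array([10, 11, 12, 13, 14])       # encoded ids of movies not yet rated
pairs = np.stack([np.full_like(unseen, user_id), unseen], axis=1)
scores = model.predict(pairs).flatten()       # predicted ratings in [0, 1]
print(unseen[np.argsort(scores)[::-1][:3]])   # top-3 recommendations
```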
932788e7571d20afbfcc3b58dd99c337
['cc0-1.0']
['collaborative-filtering', 'recommender', 'tabular-classification']
false
Training hyperparameters The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
3bc2f852a835635134fa42f5d1d6fb6d
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
dogcg Dreambooth model trained by horizonial with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
88b259c1d7117116b1ee085f2951a070
apache-2.0
['generated_from_trainer']
false
codeparrot-ds-500sample-gpt-neo-10epoch This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 1.5456 - eval_runtime: 87.6603 - eval_samples_per_second: 149.817 - eval_steps_per_second: 4.689 - epoch: 2.97 - step: 16000
7d48a993ab4816a80a1a20f05e7ff692
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
4d0ec66a365bbffd724dbbcac4c850c4
mit
['generated_from_trainer']
false
results This model is a fine-tuned version of [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2872 - F1: 0.6095
05034cffb2f39ac1ef1be5a8c8d73cf9
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 21
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
ecc6836f1ec8402f80c63b561ed6d657
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.3356 | 1.0 | 1033 | 0.2558 | 0.3761 |
| 0.2588 | 2.0 | 2066 | 0.2352 | 0.5246 |
| 0.2252 | 3.0 | 3099 | 0.2292 | 0.5996 |
| 0.2044 | 4.0 | 4132 | 0.2417 | 0.5950 |
| 0.189 | 5.0 | 5165 | 0.2433 | 0.6102 |
| 0.1718 | 6.0 | 6198 | 0.2671 | 0.5894 |
| 0.1627 | 7.0 | 7231 | 0.2686 | 0.6319 |
| 0.1513 | 8.0 | 8264 | 0.2779 | 0.6079 |
| 0.1451 | 9.0 | 9297 | 0.2848 | 0.6195 |
| 0.1429 | 10.0 | 10330 | 0.2872 | 0.6095 |
c2415eef7781f1088c0dfcf8f72bdc07
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-new3-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2224 - Accuracy: 0.9465
d151f34d312d0946f624384468fb1c64
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 164 | 0.4312 | 0.8747 |
| No log | 2.0 | 328 | 0.2722 | 0.9290 |
| No log | 3.0 | 492 | 0.2424 | 0.9404 |
| 0.4446 | 4.0 | 656 | 0.2189 | 0.9450 |
| 0.4446 | 5.0 | 820 | 0.2224 | 0.9465 |
b8e5e27c9a759905272a809a1b396124
apache-2.0
['generated_from_trainer']
false
canine-c-finetuned-cola This model is a fine-tuned version of [google/canine-c](https://huggingface.co/google/canine-c) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6246 - Matthews Correlation: 0.0990
3a57a0d44b88cd5d808ff8a9556f979d
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6142 | 1.0 | 535 | 0.6268 | 0.0 |
| 0.607 | 2.0 | 1070 | 0.6234 | 0.0 |
| 0.6104 | 3.0 | 1605 | 0.6226 | 0.0 |
| 0.5725 | 4.0 | 2140 | 0.6246 | 0.0990 |
| 0.5426 | 5.0 | 2675 | 0.6866 | 0.0495 |
1d94fb36bf33be6bb61bce091063d6f6
apache-2.0
['generated_from_trainer']
false
anil_bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0610 - Precision: 0.9352 - Recall: 0.9517 - F1: 0.9434 - Accuracy: 0.9862
ea666b5f2e1d23217359a10a4c96c980
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0897 | 1.0 | 1756 | 0.0690 | 0.9246 | 0.9325 | 0.9285 | 0.9820 |
| 0.0329 | 2.0 | 3512 | 0.0629 | 0.9301 | 0.9492 | 0.9395 | 0.9862 |
| 0.0172 | 3.0 | 5268 | 0.0610 | 0.9352 | 0.9517 | 0.9434 | 0.9862 |
94404e851bddcce0ab1340e468dd4dc7
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetunded-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1584 - Accuracy: 0.9365 - F1: 0.9365
589fe3c1a4e0f0b2fa47f1ba6f0cd6a4
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
d4ad1fa3f797918c58ca894eaf9c0b4a
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.2735 | 0.9155 | 0.9134 |
| No log | 2.0 | 500 | 0.1727 | 0.932 | 0.9321 |
| No log | 3.0 | 750 | 0.1584 | 0.9365 | 0.9365 |
0ad0b65cea8bef5154acde105c8a1e41
apache-2.0
['speech', 'audio', 'hubert', 'audio-classification']
false
Model description This is a ported version of [S3PRL's Hubert for the SUPERB Speaker Identification task](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/voxceleb1). The base model is [hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960), which is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. For more information refer to [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051).
43cce86b6aa0f1a415b1068d1fe8a59a
apache-2.0
['speech', 'audio', 'hubert', 'audio-classification']
false
Task and dataset description Speaker Identification (SI) classifies each utterance by its speaker identity as a multi-class classification task, where the speakers are in the same predefined set for both training and testing. The widely used [VoxCeleb1](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html) dataset is adopted. For the original model's training and evaluation instructions, refer to the [S3PRL downstream task README](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream).
9f3f18faeb1cc96785d945417d4fb5de
apache-2.0
['speech', 'audio', 'hubert', 'audio-classification']
false
Usage examples You can use the model via the Audio Classification pipeline:

```python
from datasets import load_dataset
from transformers import pipeline

dataset = load_dataset("anton-l/superb_demo", "si", split="test")

classifier = pipeline("audio-classification", model="superb/hubert-base-superb-sid")
labels = classifier(dataset[0]["file"], top_k=5)
```

Or use the model directly:

```python
import torch
import librosa
from datasets import load_dataset
from transformers import HubertForSequenceClassification, Wav2Vec2FeatureExtractor

def map_to_array(example):
    speech, _ = librosa.load(example["file"], sr=16000, mono=True)
    example["speech"] = speech
    return example
```
b43b04bbd06249e065f008f0361f010c
apache-2.0
['speech', 'audio', 'hubert', 'audio-classification']
false
```python
# load a demo dataset and read audio files
dataset = load_dataset("anton-l/superb_demo", "si", split="test")
dataset = dataset.map(map_to_array)

model = HubertForSequenceClassification.from_pretrained("superb/hubert-base-superb-sid")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("superb/hubert-base-superb-sid")
```
4270baa87f775266bcfb310e920793eb
apache-2.0
['speech', 'audio', 'hubert', 'audio-classification']
false
```python
# compute attention masks and normalize the waveform if needed
inputs = feature_extractor(dataset[:2]["speech"], sampling_rate=16000, padding=True, return_tensors="pt")

logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
labels = [model.config.id2label[_id] for _id in predicted_ids.tolist()]
```
ba3240a17a07e3af27159e18248d93fc
apache-2.0
['speech', 'audio', 'hubert', 'audio-classification']
false
BibTeX entry and citation info

```bibtex
@article{yang2021superb,
  title={SUPERB: Speech processing Universal PERformance Benchmark},
  author={Yang, Shu-wen and Chi, Po-Han and Chuang, Yung-Sung and Lai, Cheng-I Jeff and Lakhotia, Kushal and Lin, Yist Y and Liu, Andy T and Shi, Jiatong and Chang, Xuankai and Lin, Guan-Ting and others},
  journal={arXiv preprint arXiv:2105.01051},
  year={2021}
}
```
b75fcc39ca5feccc8675c0f02b51786a
apache-2.0
['generated_from_keras_callback']
false
KenP/mt5-small-finetuned-amazon-en-es This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 4.0378 - Validation Loss: 3.3712 - Epoch: 7
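For inference, a short sketch using the summarization pipeline with the checkpoint named above (the review text is a placeholder):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="KenP/mt5-small-finetuned-amazon-en-es")
review = "I loved this book: the characters are vivid and the plot never drags."
print(summarizer(review)[0]["summary_text"])
```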
31d7775f7442dea265c9b2689029fbe7
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 9672, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
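A sketch of rebuilding this optimizer with the `transformers` Keras helper; a polynomial decay with power=1.0 is linear, and the config above uses no warmup:

```python
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=5.6e-5,
    num_train_steps=9672,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```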
fb6e743e10e206de59da869bbfe6f816
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 9.9112 | 4.3131 | 0 |
| 5.8947 | 3.7701 | 1 |
| 5.1149 | 3.5826 | 2 |
| 4.6940 | 3.5080 | 3 |
| 4.4064 | 3.4388 | 4 |
| 4.2301 | 3.4012 | 5 |
| 4.1037 | 3.3755 | 6 |
| 4.0378 | 3.3712 | 7 |
e4e47d33ddd55bee9c666dd877982908
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1345 - F1: 0.8593
0a9aa84523d6523211309354efc48f0d
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
fb7b66c79ce954113cf7acde38e84f1a
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 263 | 0.1807 | 0.8065 |
| 0.2218 | 2.0 | 526 | 0.1365 | 0.8485 |
| 0.2218 | 3.0 | 789 | 0.1345 | 0.8593 |
345e8dd24f1520f030e0d2af74c1b229
apache-2.0
['chinese', 'classical chinese', 'literary chinese', 'ancient chinese', 'bert', 'pytorch', 'quotation detection']
false
Guwen Quote A Classical Chinese Quotation Detector. Note: there are some problems with decoding using the default sequence classification model; use the CRF model to achieve the best results. For CRF-related code, please refer to [Guwen Models](https://github.com/ethan-yt/guwen-models). See also: <a href="https://github.com/ethan-yt/guwen-models"> <img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwen-models&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" /> </a> <a href="https://github.com/ethan-yt/cclue/"> <img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=cclue&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" /> </a> <a href="https://github.com/ethan-yt/guwenbert/"> <img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwenbert&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" /> </a>
a5708fd2858798958f131d6b955afd71
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xls-r-300m-hindi-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 2.3273 - Wer: 0.9698
ad052aa8eb03d0a0c4ac916eee66da9f
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 60
- mixed_precision_training: Native AMP
ad0ce7d230b5044edc1995dcae1ee345
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.6006 | 44.42 | 400 | 2.3273 | 0.9698 |
06495795afafe9e076c2e33562d6118b
apache-2.0
['generated_from_trainer']
false
distilbert_sa_GLUE_Experiment_data_aug_cola_96 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.6274 - Matthews Correlation: 0.1072
8407896a27be15d546c2b2e6914e43d3
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5845 | 1.0 | 835 | 0.6274 | 0.1072 |
| 0.4862 | 2.0 | 1670 | 0.6843 | 0.1085 |
| 0.4221 | 3.0 | 2505 | 0.7307 | 0.0681 |
| 0.3829 | 4.0 | 3340 | 0.7969 | 0.1046 |
| 0.3557 | 5.0 | 4175 | 0.8648 | 0.0959 |
| 0.3328 | 6.0 | 5010 | 0.8932 | 0.0792 |
d2e0f50c6af296cee78e20e3ee92106f
apache-2.0
['speechbrain', 'embeddings', 'Speaker', 'Verification', 'Identification', 'pytorch', 'ECAPA-TDNN']
false
Speaker Identification with ECAPA-TDNN embeddings on Voxceleb This repository provides a pretrained ECAPA-TDNN model using SpeechBrain. The system can also be used to extract speaker embeddings. Since we could not find any SpeechBrain- or HuggingFace-compatible checkpoints trained only on the VoxCeleb2 development data, we decided to pre-train an ECAPA-TDNN system from scratch.
9ff464f52fd9914676439566ecbdbc97
apache-2.0
['speechbrain', 'embeddings', 'Speaker', 'Verification', 'Identification', 'pytorch', 'ECAPA-TDNN']
false
Pipeline description This system is composed of an ECAPA-TDNN model: a combination of convolutional and residual blocks. The embeddings are extracted using attentive statistical pooling, and the system is trained with Additive Margin Softmax loss. We use FBank features (16kHz, 25ms frame length, 10ms hop length, 80 filter-bank channels) as input. The model was trained with an initial learning rate of 0.001 and a batch size of 512, using a cyclical learning rate (CLR) policy, for 20 epochs on 4 A100 GPUs. We add noise and reverberation from the [MUSAN](http://www.openslr.org/17/) and [RIR](http://www.openslr.org/28/) datasets to augment the training data. Pre-training takes approximately ten days for the ECAPA-TDNN model.
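For intuition, a sketch of comparable FBank features computed with torchaudio (SpeechBrain computes its own features internally; the file name is a placeholder):

```python
import torchaudio
import torchaudio.compliance.kaldi as kaldi

waveform, sr = torchaudio.load("spk1_snt1.wav")  # expects 16 kHz mono audio
feats = kaldi.fbank(
    waveform,
    sample_frequency=16000.0,
    frame_length=25.0,   # ms
    frame_shift=10.0,    # ms
    num_mel_bins=80,
)
print(feats.shape)  # (num_frames, 80)
```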
70845f7788d9dbb356df796101d7d627
apache-2.0
['speechbrain', 'embeddings', 'Speaker', 'Verification', 'Identification', 'pytorch', 'ECAPA-TDNN']
false
Performance **VoxCeleb1-O** is the original verification test set from VoxCeleb1, consisting of 40 speakers; all speakers whose names start with "E" are reserved for testing. **VoxCeleb1-E** uses the entire VoxCeleb1 dataset, covering 1251 speakers. **VoxCeleb1-H** is a harder evaluation set of 552536 pairs from 1190 speakers, where both utterances in a pair share the same nationality and gender; there are 18 nationality-gender combinations, each with at least 5 individuals.

| Splits | Backend | S-norm | EER(%) | minDCF(0.01) |
|:-------------:|:--------------:|:--------------:|:--------------:|:--------------:|
| VoxCeleb1-O | cosine | no | 1.29 | 0.13 |
| VoxCeleb1-O | cosine | yes | 1.19 | 0.11 |
| VoxCeleb1-E | cosine | no | 1.42 | 0.16 |
| VoxCeleb1-E | cosine | yes | 1.31 | 0.14 |
| VoxCeleb1-H | cosine | no | 2.66 | 0.26 |
| VoxCeleb1-H | cosine | yes | 2.48 | 0.23 |

- VoxCeleb1-O: includes 37611 test pairs with 40 speakers.
- VoxCeleb1-E: includes 579818 test pairs with 1251 speakers.
- VoxCeleb1-H: includes 550894 test pairs with 1190 speakers.
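As a sketch of the cosine backend scored above, two utterance embeddings are compared and accepted as same-speaker when the score exceeds a threshold tuned on held-out data (the embeddings and threshold below are placeholders):

```python
import torch
import torch.nn.functional as F

emb1 = torch.randn(1, 192)  # placeholder: output of encode_batch, squeezed
emb2 = torch.randn(1, 192)
score = F.cosine_similarity(emb1, emb2).item()
print(score > 0.5)  # illustrative threshold only
```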
bd394a02d4d71abaf94b54151f12efd7
apache-2.0
['speechbrain', 'embeddings', 'Speaker', 'Verification', 'Identification', 'pytorch', 'ECAPA-TDNN']
false
Compute the speaker embeddings The system is trained with recordings sampled at 16kHz (single channel).

```python
import torch
import torchaudio
from speechbrain.pretrained.interfaces import Pretrained
from speechbrain.pretrained import EncoderClassifier

class Encoder(Pretrained):

    MODULES_NEEDED = [
        "compute_features",
        "mean_var_norm",
        "embedding_model"
    ]

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    def encode_batch(self, wavs, wav_lens=None, normalize=False):
```
b85d17ae845d955c0c7ee0443065e030
apache-2.0
['speechbrain', 'embeddings', 'Speaker', 'Verification', 'Identification', 'pytorch', 'ECAPA-TDNN']
false
Computing features and embeddings

```python
        feats = self.mods.compute_features(wavs)
        feats = self.mods.mean_var_norm(feats, wav_lens)
        embeddings = self.mods.embedding_model(feats, wav_lens)
        if normalize:
            embeddings = self.hparams.mean_var_norm_emb(
                embeddings,
                torch.ones(embeddings.shape[0], device=self.device)
            )
        return embeddings

classifier = Encoder.from_hparams(
    source="yangwang825/ecapa-tdnn-vox2"
)
signal, fs = torchaudio.load('spk1_snt1.wav')
embeddings = classifier.encode_batch(signal)
# >>> torch.Size([1, 1, 192])
```

We will release our training results (models, logs, etc.) shortly.
c9c9dea4aeeca137b370c0f6b966a7ba
apache-2.0
['speechbrain', 'embeddings', 'Speaker', 'Verification', 'Identification', 'pytorch', 'ECAPA-TDNN']
false
References 1. Ravanelli et al., SpeechBrain: A General-Purpose Speech Toolkit, 2021 2. Desplanques et al., ECAPA-TDNN: Emphasized Channel Attention, Propagation and Aggregation in TDNN Based Speaker Verification, 2020
56d0b2e3d006f1c28e71bb04da4149ee
apache-2.0
['exbert']
false
Model description Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference between english and English.
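The MLM objective can be seen directly with the reference PyTorch checkpoint (this repo ships the ONNX export, which is used in the next section):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")
print(unmasker("Hello I'm a [MASK] model."))
```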
6c7ff0c555e45cce3d96356c145f7218
apache-2.0
['exbert']
false
How to use Download the model by cloning the repository via `git clone https://huggingface.co/OWG/bert-base-uncased`. Then you can use the model with the following code:

```python
from onnxruntime import InferenceSession, SessionOptions, GraphOptimizationLevel
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

options = SessionOptions()
options.graph_optimization_level = GraphOptimizationLevel.ORT_ENABLE_ALL
session = InferenceSession("path/to/model.onnx", sess_options=options)
session.disable_fallback()

text = "Replace me by any text you want to encode."
input_ids = tokenizer(text, return_tensors="pt", return_attention_mask=True)
inputs = {k: v.cpu().detach().numpy() for k, v in input_ids.items()}
outputs_name = session.get_outputs()[0].name
outputs = session.run(output_names=[outputs_name], input_feed=inputs)
```
f6ddeb1fd4ecf1e66f827697f4efec73
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
Demo: How to use in ESPnet2

```bash
cd espnet
git checkout 8ab3d9f2191f250cb62deff222d2e6addb3842dc
pip install -e .
cd egs2/aidatatang_200zh/asr1
./run.sh --skip_data_prep false --skip_train true --download_model sw005320/aidatatang_200zh_conformer
```

<!-- Generated by scripts/utils/show_asr_result.sh -->
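For Python-side decoding, a sketch with ESPnet's inference API (assumes `espnet_model_zoo` is installed so the model name resolves, and that `example.wav` is a 16kHz recording):

```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained("sw005320/aidatatang_200zh_conformer")
speech, rate = soundfile.read("example.wav")
text, *_ = speech2text(speech)[0]  # best hypothesis
print(text)
```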
8860769482e4a5c9c40e8eeca8fa486c
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
Environments
- date: `Fri Dec 24 23:34:58 EST 2021`
- python version: `3.8.5 (default, Sep 4 2020, 07:30:14) [GCC 7.3.0]`
- espnet version: `espnet 0.10.5a1`
- pytorch version: `pytorch 1.7.1`
- Git hash: `a5bacd349a47889aef795f999563018cf201ae64`
- Commit date: `Wed Dec 22 14:08:29 2021 -0500`
6f9e0d6fc7e0d4316c72cc6568919307
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
WER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_transformer_zh_char_valid.loss.ave_asr_model_valid.acc.ave/dev|24216|24216|81.5|18.5|0.0|0.0|18.5|18.5|
|decode_asr_lm_lm_train_lm_transformer_zh_char_valid.loss.ave_asr_model_valid.acc.ave/test|48144|48144|79.0|21.0|0.0|0.0|21.0|21.0|
2f643765e396f82c2e750b73669bb9c4
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
CER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_transformer_zh_char_valid.loss.ave_asr_model_valid.acc.ave/dev|24216|234524|96.6|3.0|0.5|0.1|3.6|18.5|
|decode_asr_lm_lm_train_lm_transformer_zh_char_valid.loss.ave_asr_model_valid.acc.ave/test|48144|468933|95.9|3.6|0.4|0.2|4.3|21.0|
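For reference, the CER above is edit distance over reference characters, i.e. (Sub + Del + Ins) / total characters; an illustrative sketch:

```python
def cer(ref: str, hyp: str) -> float:
    # Levenshtein distance between character sequences, normalized by |ref|.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,           # deletion
                          d[i][j - 1] + 1,           # insertion
                          d[i - 1][j - 1] + cost)    # substitution
    return d[-1][-1] / len(ref)

print(cer("今天天气很好", "今天天气号"))  # 2 edits / 6 chars ≈ 0.33
```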
0f8484c0c602afb4ec6cca7db3c01b64
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
ASR config <details><summary>expand</summary> ``` config: conf/train_asr_conformer.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_conformer_raw_zh_char_sp ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 50 patience: null val_scheduler_criterion: - valid - acc early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 5 grad_clip_type: 2.0 grad_noise: false accum_grad: 4 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 4000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_zh_char_sp/train/speech_shape - exp/asr_stats_raw_zh_char_sp/train/text_shape.char valid_shape_file: - exp/asr_stats_raw_zh_char_sp/valid/speech_shape - exp/asr_stats_raw_zh_char_sp/valid/text_shape.char batch_type: numel valid_batch_type: null fold_length: - 51200 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_sp/wav.scp - speech - sound - - dump/raw/train_sp/text - text - text valid_data_path_and_name_and_type: - - dump/raw/dev/wav.scp - speech - sound - - dump/raw/dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.0005 scheduler: warmuplr scheduler_conf: warmup_steps: 30000 token_list: - <blank> - <unk> - 我 - 的 - 你 - 么 - 不 - 是 - 了 - 一 - 有 - 天 - 什 - 好 - 在 - 个 - 怎 - 吗 - 话 - 要 - 给 - 电 - 上 - 没 - 人 - 说 - 到 - 啊 - 就 - 这 - 时 - 来 - 下 - 想 - 打 - 点 - 去 - 还 - 看 - 道 - 多 - 明 - 那 - 知 - 以 - 今 - 能 - 会 - 哪 - 都 - 可 - 大 - 吧 - 机 - 样 - 里 - 十 - 现 - 们 - 过 - 吃 - 开 - 家 - 回 - 发 - 中 - 呢 - 听 - 候 - 为 - 也 - 日 - 爱 - 歌 - 三 - 起 - 小 - 二 - 心 - 子 - 手 - 生 - 最 - 儿 - 学 - 放 - 信 - 女 - 号 - 几 - 和 - 老 - 晚 - 少 - 车 - 叫 - 快 - 用 - 自 - 年 - 睡 - 问 - 事 - 后 - 五 - 乐 - 安 - 出 - 找 - 帮 - 意 - 觉 - 气 - 国 - 得 - 情 - 请 - 早 - 地 - 做 - 首 - 真 - 公 - 近 - 对 - 办 - 很 - 行 - 己 - 呀 - 八 - 友 - 如 - 六 - 节 - 喜 - 新 - 欢 - 西 - 间 - 月 - 班 - 他 - 网 - 方 - 分 - 播 - 笑 - 查 - 息 - 名 - 四 - 成 - 东 - 美 - 零 - 市 - 饭 - 世 - 朋 - 玩 - 州 - 果 - 才 - 七 - 别 - 把 - 谁 - 九 - 再 - 平 - 太 - 干 - 思 - 关 - 谢 - 高 - 语 - 理 - 些 - 界 - 着 - 长 - 钱 - 动 - 曲 - 感 - 聊 - 片 - 何 - 面 - 男 - 音 - 工 - 南 - 午 - 本 - 通 - 火 - 经 - 路 - 星 - 唱 - Q - 业 - 讲 - 英 - 北 - 服 - 短 - 妈 - 海 - 文 - 跟 - 作 - 票 - 只 - 等 - 刚 - 码 - 字 - 影 - 附 - 婆 - 见 - 又 - 祝 - 无 - 该 - 提 - 末 - 让 - 法 - 定 - 买 - 告 - 照 - 体 - 考 - 床 - 醒 - 记 - 前 - 题 - 走 - 加 - 主 - 从 - 视 - 张 - 身 - 两 - 钟 - 京 - 于 - 收 - 阳 - 哈 - 店 - 山 - 院 - 站 - 百 - 宝 - 所 - 诉 - 期 - 之 - 嘛 - 夜 - 第 - 游 - 比 - 系 - 昨 - 费 - 交 - 水 - 应 - 次 - 周 - 亲 - 联 - 全 - 福 - 江 - 孩 - 区 - 广 - 头 - 接 - O - 校 - 已 - 空 - 门 - 认 - 相 - 度 - 实 - 活 - 色 - 假 - 白 - 算 - 外 - 流 - 啦 - 花 - 然 - 结 - 每 - 休 - 边 - 部 - 位 - 场 - 半 - 王 - 声 - 件 - 力 - 金 - 重 - 识 - 正 
- 华 - 光 - 衣 - 载 - 死 - 价 - 翻 - 图 - 城 - 脑 - 同 - 久 - 译 - 特 - 物 - 搜 - 务 - 报 - 线 - 哦 - 卡 - E - 当 - A - 爸 - 圣 - 完 - 幺 - 合 - P - 雨 - 黄 - 种 - 司 - 直 - I - 她 - 哥 - 书 - 银 - 试 - 解 - 穿 - 酒 - 准 - 换 - 望 - 被 - S - 原 - 内 - 诞 - 带 - 介 - 口 - 清 - N - 马 - 习 - 否 - 置 - 啥 - 索 - 戏 - 与 - 懂 - 飞 - 需 - 性 - 错 - 送 - 级 - 器 - 单 - 离 - 远 - 备 - 师 - 课 - 注 - 因 - 难 - 其 - 像 - 元 - 消 - 表 - 便 - 球 - 风 - 教 - 故 - 科 - 李 - 常 - 林 - 龙 - 呵 - 数 - 代 - 总 - 忘 - 商 - 变 - 婚 - 苹 - 红 - 格 - 坐 - 绍 - 答 - 量 - 冷 - 青 - 询 - 春 - 神 - 省 - 蛋 - 姐 - 陪 - 兴 - 利 - 台 - 句 - 万 - 计 - 保 - 刘 - 传 - 深 - 管 - 运 - 德 - 医 - 容 - 品 - 越 - 亮 - 词 - 河 - 化 - 宁 - 始 - 武 - 希 - 洗 - 复 - 设 - 处 - 技 - 房 - T - 您 - 取 - 眼 - 县 - 笨 - 术 - 温 - 永 - 受 - 更 - 先 - 尔 - 程 - 彩 - 演 - 忙 - 专 - 愿 - 进 - 湖 - 建 - 况 - 伤 - 喝 - 底 - 卖 - 功 - 录 - 改 - H - 剧 - 预 - 梦 - L - 达 - 连 - 馆 - 包 - 写 - 客 - C - 汉 - 条 - G - 幸 - 民 - 读 - 职 - 目 - 但 - 贝 - 妹 - 资 - 较 - 雪 - 赛 - 除 - 招 - 园 - 住 - 超 - 汽 - 病 - B - 软 - 反 - 而 - 证 - 员 - 黑 - 庆 - D - 求 - 排 - 装 - 岁 - 顾 - 产 - 航 - 言 - 斯 - 拨 - 历 - 烦 - 及 - 药 - 入 - 式 - 军 - 餐 - 志 - 至 - 双 - 米 - 版 - 掉 - 千 - 者 - 充 - 微 - 失 - 转 - M - 亚 - 克 - 座 - 丽 - 络 - 战 - 使 - 猪 - 具 - 闹 - 限 - 址 - 基 - 油 - 漂 - 陈 - Y - 川 - 强 - 挺 - 奇 - 杰 - 政 - 向 - 速 - 康 - 差 - 贵 - 搞 - 义 - 奖 - 份 - 户 - 楼 - 苏 - 任 - 健 - 易 - 毛 - 型 - 石 - 礼 - 款 - 持 - 卫 - 怕 - 恋 - 邮 - 集 - R - 铁 - 圳 - 拿 - 云 - 队 - 鱼 - 慢 - 顺 - 害 - 属 - 傻 - 营 - 菜 - 货 - 麻 - 咋 - 坏 - 冒 - 累 - 杨 - 闻 - 治 - 选 - 段 - K - 香 - 闭 - 兰 - 牌 - 局 - 留 - 舍 - 非 - 推 - 室 - 简 - 拉 - 修 - 终 - 郑 - 切 - U - 将 - 村 - 沙 - 存 - 帅 - 诗 - 率 - 密 - 巴 - 频 - 士 - 初 - 楚 - 股 - 热 - 古 - 制 - 支 - 肉 - 岛 - 统 - 适 - 肥 - 鸡 - 调 - 街 - 类 - 牛 - 导 - 农 - 值 - 食 - 镇 - 棍 - 移 - 韩 - W - 嗯 - 订 - 呼 - 命 - V - 必 - 宿 - 皮 - 升 - 确 - 随 - 步 - 育 - 标 - 唐 - 精 - 决 - 木 - 由 - 弟 - 往 - 肯 - 够 - 或 - 指 - 阿 - 象 - 料 - 念 - 助 - 许 - 共 - 母 - 约 - 罗 - 板 - 秋 - 配 - 魔 - 宜 - 般 - 荐 - 扰 - 舒 - 逼 - 狗 - 嘿 - 博 - 售 - 满 - 疼 - 脸 - 整 - 抱 - 季 - 减 - 养 - 怀 - 免 - 未 - 乘 - F - 社 - 妇 - 列 - 爷 - 删 - 旦 - 弄 - 概 - 停 - 拜 - 维 - 领 - 示 - 套 - 汇 - 昌 - 晨 - 痛 - 购 - 奥 - 铃 - 案 - 济 - 鬼 - 背 - 港 - 待 - 浪 - 桥 - 血 - 冬 - 烧 - 优 - 拍 - 际 - 急 - 杭 - 称 - 遇 - 赶 - 旅 - 智 - 角 - 财 - 玉 - 团 - 形 - 论 - 静 - 景 - 退 - 普 - 呗 - 乡 - 参 - 胡 - 伦 - 讨 - 艺 - 辈 - 毒 - 此 - 轻 - 苦 - 咱 - 画 - 泰 - 宾 - 雄 - 销 - 奶 - 突 - 波 - 各 - 冰 - 块 - 夏 - 低 - 兵 - 厅 - 羊 - 杀 - 紧 - 泉 - 朝 - 谈 - 足 - 孕 - 夫 - 厂 - 聪 - 续 - 庄 - 诺 - 牙 - 质 - 立 - 依 - 仙 - 跑 - 盘 - 豆 - 它 - 怪 - 猜 - 漫 - 毕 - 兄 - 颜 - 险 - 厦 - 验 - 防 - 登 - 敢 - 乖 - 晓 - 护 - 迎 - 逗 - 摩 - 佳 - 观 - 骗 - 烟 - 细 - 临 - 惠 - 围 - 寞 - 效 - 源 - 寂 - 肚 - 暖 - 饺 - 斗 - 模 - 端 - 疗 - 付 - 绝 - 秘 - 展 - 乎 - 按 - 富 - 靠 - 范 - 规 - 刻 - 折 - 娘 - 厌 - 申 - 章 - 补 - 笔 - 锅 - 破 - 田 - 齐 - 滨 - 皇 - 族 - 典 - 史 - 左 - 蓝 - 灵 - 澡 - 秀 - 诚 - 土 - 测 - 凤 - 剑 - 响 - 倒 - 睛 - 惯 - 乌 - 币 - 扣 - 吴 - 输 - 徐 - 弃 - 纪 - 堂 - 环 - 甲 - 菲 - 缘 - 讯 - 根 - 落 - 启 - 泡 - 饿 - 积 - 府 - 递 - 绩 - 择 - 吉 - 布 - 显 - 童 - 租 - 洋 - 组 - 划 - 编 - 签 - 舞 - 困 - 贴 - 负 - 派 - 裤 - 担 - 桂 - 却 - 丝 - 丰 - 箱 - 赵 - 群 - 序 - 训 - 酸 - 惜 - 圆 - 评 - 压 - 俩 - 状 - 官 - 酷 - 鲁 - 孙 - 草 - 极 - 势 - 斤 - 腾 - 泽 - 素 - 尽 - 姓 - 屏 - 聚 - 莞 - 乱 - 雅 - 尼 - 趣 - 伟 - 肤 - 勇 - 右 - 徽 - 投 - 丹 - 尾 - 托 - 争 - 鸟 - 激 - 印 - 良 - 眠 - 松 - 跳 - 途 - 篮 - 粉 - 脚 - 屁 - 鞋 - 麦 - 则 - 估 - 津 - 努 - 距 - 胸 - 央 - 珍 - 盖 - 哭 - 洲 - 练 - 敏 - 雷 - 曾 - 恩 - 挂 - 据 - 览 - 耳 - 材 - 泪 - 吸 - 味 - 劳 - 父 - 孤 - 玛 - 旁 - 阴 - 态 - 创 - 树 - 脱 - 研 - 驾 - 拾 - 灯 - 虎 - 爆 - 嘉 - 湾 - 躺 - 猫 - 莫 - 昆 - 痘 - 阅 - 射 - 刷 - 卓 - 珠 - 峰 - 胖 - 坚 - 造 - 举 - 棒 - 梅 - 引 - 吵 - 蒙 - 详 - 借 - 瓜 - 池 - 束 - 芳 - 淘 - 寻 - 释 - 沈 - 虑 - 锦 - 胜 - 荣 - 委 - 默 - 另 - 浏 - 并 - 检 - 冠 - 独 - 厉 - 顶 - 钓 - 骂 - 且 - 欧 - 威 - 熟 - 获 - 兽 - 严 - 炎 - 含 - 厕 - 盛 - 翼 - 盟 - 余 - 姨 - 洛 - 映 - 狼 - 谅 - 众 - 宽 - 断 - 止 - 狂 - 凉 - 姑 - 辉 - 若 - 册 - 谷 - 幻 - 篇 - 瓶 - 席 - 恐 - 柔 - 迪 - 供 - 追 - 控 - 爽 - 互 - 嫁 - 宋 - 宫 - 瑞 - 滚 - 增 - 额 - 页 - 刀 - 娱 - 茶 - 钢 - 疯 - 梁 - 承 - 娜 - 须 - 陆 - 燕 - 迟 - 君 - 恶 - 遍 - 纸 - 项 - 丁 - 腿 - 误 - 殊 - 
迅 - 锁 - 宇 - 媳 - 培 - 居 - 寄 - 纯 - 嘴 - 浙 - 境 - 搭 - 杯 - 插 - 朱 - 溪 - 甘 - 权 - 窝 - 警 - 糖 - 迷 - 圈 - 凯 - 帝 - 暴 - 逛 - 艳 - 击 - 颗 - 坦 - 杂 - 冲 - 谓 - 救 - 轮 - 晕 - 虽 - 塔 - 叔 - 凰 - 懒 - 议 - 肖 - 郎 - 辛 - 透 - 拥 - 鼠 - 顿 - 批 - 兔 - 尚 - 聘 - 藏 - 赚 - 继 - 享 - 欺 - 潮 - 即 - 甜 - 骨 - 悲 - 幕 - 滴 - 闲 - 液 - 缺 - 琴 - 蜜 - 善 - 暗 - 镜 - 蔡 - 吹 - 核 - 忆 - 键 - 辑 - 岗 - 例 - 涛 - 宏 - 刺 - 郭 - 降 - 秦 - 剩 - 绿 - 桌 - 咖 - 呐 - 叶 - 贸 - 架 - 账 - 亡 - 佛 - 哎 - 乳 - 归 - 忍 - 异 - 侠 - 龄 - 炒 - 洁 - 似 - 虚 - 贷 - 征 - 抽 - 败 - 枪 - 幼 - 丫 - 危 - 慰 - 究 - 婷 - 肃 - 箭 - 灰 - 届 - 律 - 秒 - 淡 - 偷 - 炫 - 鲜 - 浦 - 萨 - 旧 - 硬 - 操 - 混 - 施 - 散 - 咨 - 妻 - 吻 - 榜 - 呆 - 废 - 野 - 糕 - 骑 - 炼 - 震 - 恭 - 悔 - 跨 - 曼 - 啡 - 俊 - 晶 - 胃 - 汤 - 尊 - 貌 - 封 - 羽 - 赞 - 尸 - 隐 - 丢 - 霸 - 醉 - 盗 - 盐 - 浩 - 著 - 档 - 赢 - 幽 - 责 - 鼻 - 辣 - 恒 - 朵 - 慕 - 旗 - 娃 - 饰 - 仁 - 亦 - 竟 - 柳 - 郁 - 唯 - 夕 - 钻 - 均 - 劲 - 庭 - 巧 - 饮 - 涨 - 辆 - 傅 - 企 - 趟 - 避 - 党 - 染 - 扬 - 玲 - 筋 - 烤 - 桃 - 唉 - 慧 - 欲 - 寒 - 闷 - 某 - 恨 - 私 - 淮 - 惊 - 弱 - 弹 - 沉 - 兼 - 弯 - 残 - 偶 - 锋 - 贺 - 咯 - 纳 - 戴 - 抢 - 宗 - 浴 - 宵 - 莲 - 嗨 - 喊 - 奕 - 壁 - 症 - 冻 - 致 - 屋 - 喽 - 伊 - 绵 - 玫 - 固 - 籍 - 监 - 耐 - 井 - 寝 - 露 - 虫 - 盒 - 凡 - 摇 - 傲 - 烈 - 姿 - 陕 - 裸 - 袋 - 帐 - 凌 - 寿 - 茂 - 鹏 - 寓 - 柴 - 妞 - 森 - 既 - 紫 - 萝 - 层 - 苗 - 腊 - 邓 - 宣 - 锡 - 袜 - 陌 - 狮 - 碰 - 晴 - 塘 - 妃 - 祥 - 苍 - 针 - 敌 - 腰 - 犯 - 欠 - 垃 - 卸 - 迹 - 暑 - 祖 - 泳 - 阵 - 熊 - 励 - 澳 - 添 - 拳 - 岳 - 益 - 瘦 - 虹 - 圾 - 植 - 坡 - 攻 - 略 - 墙 - 描 - 遗 - 噢 - 窗 - 吐 - 肌 - 陵 - 逃 - 浮 - 摸 - 戒 - 哟 - 翰 - 勿 - 库 - 涯 - 妖 - 宠 - 脾 - 革 - 探 - 糊 - 采 - 惹 - 衡 - 赤 - 魏 - 羡 - 综 - 舟 - 疆 - 痴 - 催 - 朗 - 坛 - 悠 - 岭 - 驶 - 括 - 嘻 - 辽 - 粥 - 煮 - 灭 - 杜 - 域 - 令 - 替 - 翔 - 坤 - 潘 - 抓 - 铜 - 构 - 卷 - 茫 - 丑 - 涂 - 掌 - 饱 - 肝 - 疾 - 罩 - 谱 - 愚 - 抗 - 琳 - 夸 - 汪 - 墨 - 沟 - 翅 - 肠 - 患 - 柏 - 僵 - 稳 - 延 - 胆 - 伴 - 爬 - 滋 - 歉 - 轩 - 尿 - 铺 - 忠 - 黎 - 膀 - 邯 - 郸 - 愉 - 霉 - 翁 - 妙 - 隆 - 鸭 - 锻 - 涵 - 挣 - 副 - 罪 - 穷 - 恢 - 巨 - 吓 - 眉 - 棉 - 汗 - 溜 - 奏 - 滩 - 愁 - X - 执 - 霞 - 魂 - 姆 - 摄 - 偏 - 纠 - 瑰 - 洪 - 协 - 牧 - 飘 - 炸 - 悦 - 艾 - 织 - 敬 - 驹 - 欣 - 董 - 邦 - 勒 - 守 - 伙 - 狐 - 税 - 湘 - 遥 - 储 - 脏 - 坊 - 腐 - 横 - 仔 - 仪 - 判 - 忽 - 哇 - 罚 - 爹 - 怖 - 竹 - 孔 - 捡 - 挑 - 肿 - 漠 - 尘 - 焦 - 塞 - 熬 - 谊 - 樱 - 返 - 莉 - 堵 - 捷 - 惑 - 绕 - 蛇 - 竞 - 耍 - 违 - 卧 - 蝶 - J - 俗 - 滑 - 占 - 怜 - 舅 - 乔 - 泸 - 臭 - 策 - 骚 - 莱 - 岩 - 魅 - 兑 - 姥 - 兆 - 萍 - 烂 - 损 - 述 - 撒 - 烫 - 炮 - 忧 - 遵 - 桑 - 俺 - 彭 - 净 - 胶 - 柯 - 绑 - 碟 - 卜 - 饼 - 船 - 佩 - 妆 - 齿 - 厚 - 娟 - 醋 - 丘 - 恼 - 萧 - 析 - 润 - 潭 - 番 - 鹰 - 葡 - 萄 - 唤 - 胎 - 逊 - 峡 - 舰 - 障 - 伯 - 猴 - 膜 - 访 - 贤 - 耀 - 晒 - 狠 - 豪 - 剪 - 帖 - 幂 - 融 - 诱 - 韶 - 晋 - 拼 - 洞 - 氧 - 察 - 裁 - 寨 - 熙 - 喂 - 拖 - 污 - 乾 - 湿 - 嫌 - 拒 - 蕉 - 哲 - 薇 - 绒 - 婴 - 莎 - 稿 - 瞎 - 寺 - 徒 - 伞 - 碎 - 阜 - 填 - 琪 - 敦 - 柜 - 侣 - 搬 - 孟 - 蓉 - 筒 - 偿 - 献 - 径 - 畅 - 粤 - 悟 - 隔 - 赖 - 慈 - 哄 - 襄 - 扮 - 睁 - 彻 - 陶 - 瓷 - 荷 - 寸 - 牵 - 痒 - 芝 - 繁 - 倍 - 闪 - 梧 - 怒 - 蝴 - 嵩 - 赣 - 嘞 - 狱 - 猛 - 咳 - 媒 - 斌 - 斑 - 奋 - 叉 - 龟 - 贱 - 疑 - 暂 - 靓 - 叹 - 仓 - 撞 - 姜 - 疤 - 矿 - 芬 - 勤 - 纱 - 帆 - 迁 - 囧 - 佑 - 囊 - 侯 - 鼓 - 葛 - 沃 - 莹 - 诊 - 筑 - 酱 - 咬 - 糟 - 拯 - 鹤 - 驴 - 胞 - 枝 - 俄 - 呃 - 鹿 - 磨 - 姚 - 灾 - 扫 - 荡 - 吊 - 犬 - 菊 - 茹 - 链 - 嫉 - 妒 - 旺 - 夺 - 裙 - 湛 - 氏 - 鞍 - 抵 - 娇 - 耶 - 截 - 辞 - 硫 - 禁 - 怡 - 跌 - 刮 - 苑 - 媛 - 摆 - 盾 - 械 - 旋 - 卢 - 霆 - 驰 - 擦 - 符 - 肺 - 谜 - 霍 - 仅 - 迈 - 碗 - 邪 - 曹 - 咪 - 煌 - 疫 - 屠 - 握 - 奔 - Z - 燃 - 沧 - 谦 - 馨 - 嫖 - 阻 - 冯 - 振 - 雕 - 闯 - 薄 - 宙 - 倾 - 嗽 - 椒 - 墓 - 尤 - 夹 - 潇 - 骤 - 壮 - 屈 - 颖 - 菠 - 吞 - 鸣 - 渴 - 堰 - 厨 - 督 - 驻 - 腹 - 岸 - 蛮 - 翠 - 肾 - 娼 - 券 - 尖 - 丸 - 鸿 - 厘 - 召 - 劝 - 牡 - 韦 - 拔 - 灏 - 弦 - 萌 - 惩 - 倩 - 诸 - 扎 - 庙 - 炉 - 潜 - 措 - 磊 - 脂 - 郊 - 虾 - 霜 - 猎 - 蝎 - 玄 - 钰 - 审 - 蜂 - 巷 - 敷 - 拟 - 钥 - 匙 - 婉 - 纽 - 芜 - 贾 - 串 - 靖 - 抛 - 彼 - 亏 - 挽 - 贼 - 穴 - 授 - 鼎 - 孝 - 玮 - 氓 - 劫 - 俞 - 谎 - 莆 - 隋 - 钠 - 赔 - 谐 - 纶 - 闰 - 昏 - 逆 - 璇 - 樊 - 禽 - 宅 - 碳 - 妮 - 亭 - 杆 - 蠢 - 鄙 - 蜀 - 阶 - 贫 - 辰 - 盼 - 呜 - 芦 - 株 - 腔 - 巾 - 羞 - 堡 - 亿 - 踩 - 憾 - 浓 - 阔 - 塑 - 趋 - 蓄 - 桶 - 葱 - 菇 - 咒 - 蟹 - 肩 - 柿 - 缓 - 漳 - 祸 - 挤 - 巢 - 抚 - 詹 - 豫 - 俱 
- 悉 - 溶 - 粒 - 谭 - 诛 - 贡 - 沿 - 躲 - 慌 - 芙 - 蒋 - 乃 - 雀 - 姻 - 岂 - 悄 - 辕 - 斜 - 捕 - 扇 - 割 - 啤 - 纲 - 纤 - 祛 - 躁 - 殖 - 珊 - 氢 - 允 - 丈 - 蹈 - 邀 - 哼 - 坑 - 吾 - 淋 - 扩 - 愤 - 潍 - 尺 - 耗 - 鉴 - 闽 - 乙 - 渭 - 触 - 撑 - 咸 - 灿 - 缩 - 蔬 - 凑 - 渡 - 梭 - 粗 - 袁 - 菌 - 妓 - 稍 - 辐 - 哀 - 浆 - 厢 - 荆 - 踪 - 桐 - 邢 - 蜡 - 奉 - 淑 - 洒 - 扁 - 蕾 - 燥 - 硕 - 牢 - 蛙 - 仍 - 侵 - 稀 - 芒 - 吕 - 跪 - 绪 - 誓 - 旭 - 阁 - 屌 - 凭 - 裹 - 崇 - 纬 - 援 - 怨 - 茄 - 埋 - 棋 - 誉 - 瑜 - 蹲 - 扯 - 跃 - 昧 - 螺 - 毅 - 叮 - 喷 - 壶 - 喉 - 脆 - 瓦 - 碧 - 奴 - 煤 - 伍 - 娶 - 雁 - 骄 - 泣 - 眷 - 屯 - 赏 - 覆 - 揍 - 绯 - 逸 - 屎 - 彦 - 辨 - 攀 - 涉 - 泥 - 廊 - 菱 - 薛 - 衍 - 荒 - 铭 - 沂 - 麟 - 咏 - 扑 - 祈 - 喔 - 磁 - 歇 - 栋 - 沫 - 漏 - 玻 - 璃 - 逝 - 葵 - 溃 - 堆 - 锐 - 楠 - 毫 - 谋 - 勾 - 梯 - 氯 - 杏 - 赌 - 鑫 - 崔 - 颠 - 邱 - 肪 - 掘 - 昭 - 悬 - 奈 - 筷 - 轨 - 诵 - 葫 - 挡 - 梨 - 缠 - 僧 - 抬 - 邻 - 栏 - 饶 - 庚 - 灌 - 呦 - 摊 - 狄 - 汕 - 缴 - 罢 - 瞌 - 腺 - 辖 - 摔 - 棵 - 弗 - 琼 - 揭 - 淀 - 仑 - 粮 - 扔 - 剂 - 邵 - 辅 - 悍 - 袖 - 侨 - 巡 - 仗 - 逢 - 挥 - 翘 - 柱 - 狸 - 赫 - 耽 - 押 - 昂 - 瘤 - 枣 - 癌 - 伏 - 秤 - 脉 - 穹 - 敲 - 贪 - 促 - 拆 - 勉 - 祷 - 弊 - 膏 - 禾 - 契 - 挨 - 纵 - 疲 - 蜘 - 蛛 - 冈 - 雾 - 娄 - 甫 - 裂 - 侦 - 愈 - 臂 - 甩 - 戈 - 钙 - 簿 - 淄 - 蓬 - 夷 - 汁 - 凶 - 匹 - 皆 - 凝 - 仰 - 叛 - 蒲 - 谣 - 砖 - 呈 - 浅 - 瞬 - 丞 - 粘 - 痕 - 癫 - 禺 - 靴 - 尝 - 枫 - 鹅 - 衷 - 暮 - 媚 - 堪 - 臣 - 瑟 - 榕 - 蘑 - 遂 - 舌 - 藤 - 遭 - 芭 - 暧 - 犹 - 砸 - 浇 - 晰 - 矮 - 禹 - 隶 - 蚊 - 塌 - 峪 - 渊 - 摘 - 崩 - 瞧 - 炭 - 瑶 - 纷 - 毁 - 瞒 - 橙 - 渣 - 霹 - 雳 - 粽 - 侧 - 胀 - 捐 - 栈 - 颈 - 伪 - 役 - 予 - 钝 - 菏 - 铠 - 稻 - 赠 - 芽 - 龚 - 幅 - 莓 - 轿 - 炖 - 炬 - 溢 - 扭 - 垂 - 坎 - 嚏 - 枯 - 绣 - 蒸 - 旬 - 迫 - 浒 - 肇 - 庸 - 蒂 - 踏 - 雯 - 埃 - 础 - 狙 - 陷 - 伽 - 滔 - 沦 - 祭 - 唠 - 瀑 - 矛 - 乒 - 乓 - 窍 - 渠 - 泛 - 陇 - 蒜 - 捉 - 扶 - 诀 - 纹 - 踢 - 馋 - 薪 - 坪 - 廉 - 荔 - 骏 - 颁 - 伸 - 贞 - 沾 - 疮 - 兮 - 擎 - 驱 - 馒 - 挖 - 韵 - 姬 - 砍 - 矫 - 巫 - 疙 - 瘩 - 峨 - 抄 - 函 - 歪 - 倚 - 昔 - 涕 - 憨 - 淇 - 宴 - 埠 - 渐 - 胳 - 膊 - 趁 - 擅 - 刑 - 渝 - 噬 - 斋 - 妍 - 债 - 邹 - 嫂 - 娥 - 践 - 禅 - 牲 - 帽 - 吨 - 腻 - 掖 - 榴 - 啸 - 纺 - 鞭 - 豚 - 爵 - 蹄 - 咙 - 澈 - 疹 - 氛 - 抑 - 绸 - 抹 - 奎 - 酬 - 坟 - 诶 - 勋 - 卑 - 沪 - 蚁 - 揉 - 锄 - 泌 - 槽 - 镖 - 卿 - 甸 - 帕 - 镁 - 盲 - 汾 - 携 - 宰 - 虞 - 瓣 - 辩 - 豌 - 樟 - 璐 - 沁 - 钦 - 蔚 - 彬 - 卦 - 轰 - 锈 - 茎 - 蹦 - 拐 - 坝 - 饥 - 捏 - 碑 - 嗓 - 澄 - 惨 - 沽 - 鄂 - 逻 - 谍 - 屿 - 聋 - 憋 - 泼 - 枕 - 盆 - 衫 - 慎 - 黛 - 轶 - 咽 - 匠 - 蚂 - 捶 - 脊 - 蚌 - 剥 - 穆 - 喇 - 叭 - 凳 - 滥 - 撤 - 蓑 - 笠 - 黔 - 诡 - 颐 - 闵 - 稚 - 茨 - 捆 - 芯 - 涩 - 哑 - 盈 - 衰 - 奢 - 贩 - 循 - 韭 - 绘 - 鸳 - 唇 - 恳 - 妥 - 杠 - 刊 - 戚 - 巩 - 胁 - 蜗 - 筝 - 漆 - 劈 - 泄 - 噩 - 椎 - 渔 - 氨 - 橘 - 仲 - 洱 - 绥 - 仿 - 耿 - 蚕 - 倦 - 葬 - 捞 - 拓 - 冤 - 御 - 忌 - 慨 - 弥 - 寡 - 昵 - 撕 - 鲤 - 隧 - 倡 - 臀 - 毙 - 蕊 - 甚 - 睹 - 哒 - 仇 - 栓 - 抒 - 滁 - 讶 - 皱 - 剖 - 闸 - 耻 - 顽 - 茅 - 碱 - 霏 - 坠 - 邑 - 嗦 - 缝 - 枚 - 垫 - 畜 - 侄 - 悴 - 庞 - 鸯 - 俏 - 铅 - 衔 - 浑 - 抖 - 逮 - 犀 - 滕 - 遮 - 淹 - 挪 - 柠 - 檬 - 荨 - 沛 - 喻 - 尹 - 抉 - 爪 - 甄 - 冀 - 蝉 - 汰 - 丧 - 愧 - 畏 - 屑 - 屉 - 娩 - 艰 - 弓 - 炜 - 框 - 娅 - 酵 - 掩 - 宪 - 枉 - 淫 - 糗 - 奸 - 岚 - 诅 - 釜 - 萱 - 窦 - 喆 - 浣 - 庐 - 阑 - 劣 - 窄 - 赈 - 茉 - 帜 - 缸 - 嫩 - 迦 - 憔 - 鸽 - 朴 - 洽 - 榆 - 烹 - 箫 - 荚 - 箍 - 稣 - 肢 - 磷 - 袭 - 橡 - 鸦 - 瞅 - 匡 - 禧 - 痣 - 勃 - 翡 - 篱 - 烽 - 衢 - 讪 - 烛 - 宥 - 铝 - 镯 - 钉 - 披 - 昼 - 跆 - 笈 - 喘 - 惫 - 唧 - 螂 - 涌 - 揣 - 旨 - 袄 - 笼 - 蛔 - 毯 - 凸 - 倪 - 碌 - 懈 - 履 - 鱿 - 菩 - 汝 - 赴 - 焉 - 钛 - 畔 - 掰 - 骆 - 崖 - 髓 - 彪 - 啰 - 撸 - 拌 - 漯 - 犒 - 蔽 - 漱 - 赐 - 饪 - 玖 - 弘 - 卵 - 沭 - 梓 - 禄 - 晖 - 籁 - 熏 - 祠 - 荟 - 伐 - 柄 - 昕 - 琶 - 鞠 - 豹 - 萎 - 裕 - 曰 - 苇 - 沌 - 牺 - 轴 - 薯 - 潞 - 痫 - 曦 - 腋 - 坞 - 隙 - 妊 - 娠 - 蝙 - 蝠 - 赘 - 咧 - 翩 - 棚 - 冕 - 旱 - 棱 - 巍 - 偕 - 杉 - 梵 - 嫦 - 煎 - 泊 - 辟 - 丛 - 艘 - 懦 - 郫 - 搅 - 佬 - 阖 - 焰 - 澜 - 琢 - 挚 - 嫣 - 啧 - 兜 - 趴 - 皂 - 窃 - 嘟 - 崛 - 睿 - 刃 - 绳 - 哗 - 窟 - 嗑 - 吭 - 朔 - 喵 - 粹 - 酶 - 辜 - 诫 - 筹 - 亩 - 椅 - 佐 - 俑 - 狡 - 陛 - 曙 - 攒 - 诈 - 叙 - 杖 - 馅 - 锌 - 矜 - 绮 - 刁 - 阙 - 亢 - 讼 - 驼 - 晃 - 逍 - 仕 - 芋 - 拇 - 掏 - 瘾 - 腕 - 魁 - 鲍 - 殷 - 荤 - 亨 - 凄 - 硝 - 嬛 - 藻 - 诣 - 桔 - 疡 - 氰 - 佰 - 鸠 - 埔 - 皋 - 谚 - 麒 - 廖 - 鳄 - 蹉 - 阎 - 琦 - 丙 - 烯 - 涮 - 絮 - 潢 - 郴 - 遛 - 琵 - 殿 - 蹭 - 笛 - 钾 - 辙 - 炊 - 廷 - 拦 - 
哆 - 逐 - 钞 - 赋 - 孽 - 沸 - 龈 - 雌 - 玟 - 麓 - 焊 - 谨 - 衬 - 灸 - 栖 - 卉 - 脐 - 栽 - 扒 - 酚 - 肱 - 闺 - 猥 - 钩 - 羁 - 吱 - 吼 - 蹊 - 跷 - 磕 - 坷 - 蝇 - 唔 - 褶 - 钮 - 鹭 - 咔 - 沐 - 棠 - 锷 - 滞 - 肛 - 糜 - 噜 - 涧 - 儒 - 琅 - 捎 - 泵 - 葩 - 芥 - 轲 - 猾 - 拱 - 墅 - 蕲 - 馁 - 佚 - 渤 - 崎 - 峻 - 赎 - 霄 - 羯 - 缅 - 韧 - 勘 - 皖 - 顷 - 喀 - 忏 - 圭 - 槟 - 榔 - 兹 - 坂 - 镒 - 堕 - 蟒 - 芹 - 浃 - 哉 - 晏 - 绐 - 陀 - 茵 - 倘 - 缆 - 浊 - 碍 - 惰 - 濮 - 杵 - 削 - 裘 - 嗅 - 呕 - 绊 - 哩 - 腩 - 撇 - 郝 - 铿 - 锵 - 赃 - 缪 - 卤 - 吝 - 涟 - 冶 - 匪 - 婿 - 蛳 - 搏 - 圩 - 旷 - 汞 - 鹦 - 茱 - 粪 - 崂 - 陋 - 掐 - 郡 - 哮 - 邸 - 帘 - 柚 - 鬓 - 剃 - 忻 - 羔 - 聆 - 刹 - 嗷 - 罕 - 沥 - 钗 - 尴 - 尬 - 莽 - 捧 - 拽 - 懵 - 噶 - 虐 - 囚 - 囡 - 颓 - 亥 - 傍 - 疏 - 乞 - 丐 - 皓 - 孜 - 愣 - 檐 - 橱 - 绅 - 噻 - 痊 - 鳞 - 瞳 - 衩 - 捂 - 吔 - 螳 - 暇 - 嘎 - 缤 - 镍 - 吟 - 斥 - 饲 - 鲢 - 猩 - 狒 - 腼 - 腆 - 轼 - 梗 - 熨 - 荫 - 糙 - 妾 - 粕 - 烘 - 壹 - 骥 - 秽 - 熔 - 歹 - 谬 - 侈 - 蜈 - 蚣 - 婵 - 渍 - 斩 - 棕 - 辱 - 醇 - 磅 - 礴 - 颊 - 彝 - 庾 - 叠 - 忒 - 稽 - 幢 - 嘱 - 醛 - 砂 - 炳 - 拂 - 殇 - 邬 - 冥 - 擒 - 汶 - 罐 - 镑 - 祁 - 氮 - 怆 - 羌 - 拧 - 芸 - 堀 - 婊 - 暄 - 挎 - 躬 - 噎 - 菅 - 奂 - 龌 - 龊 - 睬 - 燎 - 鲈 - 拢 - 啬 - 脖 - 尧 - 馗 - 皎 - 滤 - 镶 - 椭 - 狈 - 澎 - 阉 - 侃 - 婕 - 脓 - 桨 - 阪 - 湃 - 溏 - 箕 - 蚯 - 蚓 - 呛 - 矩 - 彤 - 惟 - 鹉 - 讽 - 募 - 惦 - 飓 - 抠 - 肮 - 溟 - 膝 - 芗 - 逞 - 娌 - 湮 - 舵 - 挫 - 椰 - 螃 - 绽 - 蟑 - 聂 - 拘 - 萸 - 洼 - 弛 - 澧 - 玺 - 芊 - 枢 - 鲨 - 毋 - 搂 - 跎 - 趾 - 琐 - 徘 - 徊 - 濡 - 咩 - 钏 - 舔 - 烷 - 胺 - 拙 - 溺 - 竖 - 蕴 - 巅 - 魄 - 吖 - 啵 - 庇 - 灼 - 遣 - 怠 - 枭 - 乏 - 缕 - 掂 - 秩 - 蜕 - 泾 - 汀 - 肆 - 倔 - 吒 - 矣 - 豁 - 仨 - 俯 - 嘲 - 瞪 - 唬 - 骋 - 辍 - 曝 - 泻 - 鼾 - 捣 - 妨 - 撵 - 撮 - 猕 - 浜 - 哺 - 睫 - 荧 - 噪 - 栗 - 垣 - 獒 - 冼 - 瞄 - 刍 - 硅 - 翊 - 泓 - 枥 - 凋 - 匣 - 孢 - 飙 - 俭 - 珑 - 嵊 - 佣 - 祟 - 枞 - 蓟 - 斧 - 镕 - 棺 - 痔 - 娴 - 苔 - 笙 - 蔻 - 芮 - 迭 - 暨 - 诏 - 癜 - 芷 - 臧 - 驿 - 珂 - 藕 - 笋 - 竭 - 歼 - 铉 - 恹 - 雇 - 诲 - 漓 - 扳 - 寰 - 颂 - 缈 - 砣 - 戳 - 疣 - 寮 - 甥 - 牦 - 衅 - 湄 - 汨 - 褐 - 腑 - 啼 - 惭 - 痰 - 梳 - 驮 - 阮 - 壳 - 慷 - 牟 - 捺 - 瘁 - 锂 - 狩 - 沱 - 烁 - 摞 - 楷 - 楞 - 瑾 - 饯 - 灶 - 薰 - 伎 - 忐 - 忑 - 煽 - 骁 - 娲 - 赁 - 锑 - 嵌 - 苞 - 咫 - 锴 - 岐 - 蓓 - 毽 - 黏 - 攸 - 恰 - 惶 - 矶 - 簸 - 坨 - 踝 - 掺 - 榨 - 阀 - 婢 - 纨 - 搓 - 闫 - 瘫 - 垢 - 蚀 - 貂 - 壑 - 婧 - 腥 - 兖 - 觅 - 壤 - 珉 - 胭 - 惧 - 僻 - 峥 - 炀 - 蔗 - 铂 - 宛 - 巳 - 氟 - 秸 - 菁 - 鹃 - 疱 - 矢 - 拭 - 缀 - 朦 - 胧 - 筏 - 贯 - 汐 - 蛤 - 蟆 - 迩 - 犁 - 馈 - 叽 - 喳 - 袈 - 裟 - 啃 - 敞 - 踊 - 雏 - 朽 - 撩 - 恙 - 亵 - 淤 - 垦 - 眺 - 熄 - 衲 - 伺 - 墟 - 孚 - 墩 - 猬 - 堤 - 鞘 - 署 - 陂 - 鬟 - 萤 - 悯 - 恃 - 峙 - 咄 - 奠 - 跺 - 笆 - 啄 - 殆 - 赅 - 锭 - 铛 - 枷 - 姗 - 驭 - 嘀 - 煲 - 腚 - 霖 - 孪 - 翟 - 濒 - 邂 - 逅 - 筱 - 霓 - 窈 - 窕 - 眨 - 耸 - 羚 - 尉 - 谀 - 竿 - 蛟 - 籽 - 铲 - 潼 - 匆 - 肽 - 戬 - 岔 - 奚 - 裴 - 嘏 - 玥 - 妯 - 昙 - 烨 - 吏 - 鼹 - 筵 - 崭 - 涪 - 來 - 瘆 - 彰 - 杞 - 疽 - 琥 - A - 栾 - 庵 - 窘 - 擀 - 痤 - 蟾 - 唾 - 嚼 - 癖 - 蛹 - 浸 - 狭 - 迂 - 脍 - 炙 - 覃 - 悖 - 阆 - 铸 - 洮 - 瑙 - 呷 - 呸 - 谛 - 膨 - 柑 - 眯 - 奘 - 吆 - 孰 - 珈 - 曜 - 拈 - 麝 - 嘘 - 缚 - 徕 - 糸 - 崴 - 藓 - 婺 - 揽 - 溧 - 熠 - 膳 - 犊 - 贬 - 脯 - 剿 - 鼬 - 焕 - 胛 - 拷 - 勺 - 鲫 - 炅 - 卒 - 刨 - 糯 - 瘪 - 雍 - 襟 - 酋 - 胤 - 戟 - 褔 - 惆 - 怅 - 阂 - 扉 - 锚 - 砌 - 祺 - 淅 - 濠 - 匀 - 隍 - 氦 - 绫 - 濑 - 佝 - 偻 - 翎 - 颌 - 咚 - 疖 - 媲 - 祗 - 寅 - 靡 - 稞 - 骝 - 锏 - 焖 - 栀 - 蝗 - 甭 - 罄 - 酪 - 酮 - 嘢 - 钨 - 涎 - 沼 - 嚯 - 阱 - 驸 - 爰 - 酌 - 绛 - 畴 - 辄 - 藜 - 碚 - 馥 - 茧 - 鲛 - 溅 - 浯 - 沮 - 蹿 - 诠 - 姊 - 藉 - 骡 - 褪 - 酞 - 臻 - 靛 - 譬 - 粼 - 肘 - 孺 - 苟 - 瓯 - 蕨 - 冉 - 稠 - 蒿 - 锤 - 焙 - 蜃 - 淌 - 瘸 - 汲 - 噼 - 啪 - 橇 - 虔 - 裳 - 煞 - 淳 - 锟 - 摧 - 篷 - 癞 - 凹 - 汹 - 樵 - 睐 - 叁 - 飒 - 舶 - 驷 - 嘚 - 垮 - 妩 - 焚 - 扪 - 溥 - 鹊 - 鹄 - 汴 - 妁 - 廓 - 谙 - 苛 - 喏 - 嬉 - 裆 - 谔 - 哝 - 岑 - 喧 - 咆 - 茁 - 霎 - 泷 - 笃 - 沣 - 戮 - 蓦 - 滢 - 碜 - 滇 - 妤 - 盯 - 眶 - 婶 - 侍 - 崽 - 辘 - 轳 - 斓 - 郢 - 泞 - 窖 - 镭 - 痹 - 缉 - 镐 - 膛 - 睦 - 歧 - 扦 - 筛 - 嵘 - 茗 - 戎 - 萦 - 柒 - 咀 - 诋 - 搁 - 婪 - 漾 - 瀚 - 绎 - 盏 - 庹 - 吩 - 咐 - 堇 - 矾 - 茯 - 苓 - 潦 - 嘁 - 噫 - 窑 - 鳗 - 孵 - 彷 - 徨 - 耕 - 晗 - 撂 - 猿 - 昊 - 淼 - 驯 - 垒 - 铤 - 胱 - 桦 - 铮 - 坳 - 厥 - 叨 - 烙 - 苷 - 殴 - 鸥 - 蜥 - 蜴 - 湟 - 衙 - 敖 - 阐 - 穗 - 攥 - 俾 - 锥 - 粱 - 绰 - 漕 - 钕 - 硼 - 蚤 - 铢 - 疚 - 挟 - 昱 - 栅 - 煦 - 鳝 - 枸 - 锯 - 茜 - 悼 - 跤 - 犍 
- 衿 - 筐 - 恪 - 琛 - 砝 - 秆 - 歆 - 晾 - 慑 - 蜍 - 诃 - 盔 - 寇 - 璧 - 鹩 - 恤 - 匿 - 踉 - 焗 - 戍 - 憎 - 桓 - 裔 - 梢 - 蝼 - 贿 - 诽 - 橄 - 榄 - 蔺 - 鲅 - 鳖 - 荞 - 槐 - 砚 - 癣 - 胚 - 沅 - 菀 - 荀 - 亳 - 铵 - 垌 - 釉 - 摁 - 瑕 - 疵 - 泗 - 逵 - 饵 - 旌 - 磺 - 彗 - 娣 - 晟 - 惘 - 棘 - 屹 - 逾 - 淞 - 逑 - 茴 - 楹 - 珀 - <sos/eos>
init: null
input_size: null
ctc_conf:
    dropout_rate: 0.0
    ctc_type: builtin
    reduce: true
    ignore_nan_grad: true
joint_net_conf: null
model_conf:
    ctc_weight: 0.3
    lsm_weight: 0.1
    length_normalized_loss: false
use_preprocessor: true
token_type: char
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
    fs: 16k
specaug: specaug
specaug_conf:
    apply_time_warp: true
    time_warp_window: 5
    time_warp_mode: bicubic
    apply_freq_mask: true
    freq_mask_width_range:
    - 0
    - 30
    num_freq_mask: 2
    apply_time_mask: true
    time_mask_width_range:
    - 0
    - 40
    num_time_mask: 2
normalize: global_mvn
normalize_conf:
    stats_file: exp/asr_stats_raw_zh_char_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
    output_size: 256
    attention_heads: 4
    linear_units: 2048
    num_blocks: 12
    dropout_rate: 0.1
    positional_dropout_rate: 0.1
    attention_dropout_rate: 0.0
    input_layer: conv2d
    normalize_before: true
    pos_enc_layer_type: rel_pos
    selfattention_layer_type: rel_selfattn
    activation_type: swish
    macaron_style: true
    use_cnn_module: true
    cnn_module_kernel: 15
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
    attention_heads: 4
    linear_units: 2048
    num_blocks: 6
    dropout_rate: 0.1
    positional_dropout_rate: 0.1
    self_attention_dropout_rate: 0.0
    src_attention_dropout_rate: 0.0
required:
- output_dir
- token_list
version: 0.10.5a1
distributed: false
```

</details>
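The config above describes a conformer encoder with relative positional self-attention and a transformer decoder, decoded jointly with CTC (ctc_weight: 0.3) over a character vocabulary. A minimal inference sketch with ESPnet2 follows, assuming the checkpoint is packaged for espnet_model_zoo; the model name is a placeholder, since the card does not state the published identifier.

```
import soundfile

from espnet2.bin.asr_inference import Speech2Text
from espnet_model_zoo.downloader import ModelDownloader

# The model name below is a placeholder: substitute the published id.
downloader = ModelDownloader()
speech2text = Speech2Text(
    **downloader.download_and_unpack("espnet/your_zh_conformer_model"),
    ctc_weight=0.3,  # matches model_conf.ctc_weight in the config
    beam_size=10,
)

# frontend_conf sets fs: 16k, so feed 16 kHz mono audio.
speech, rate = soundfile.read("sample_16k.wav")
nbests = speech2text(speech)
text, *_ = nbests[0]  # each hypothesis is (text, tokens, token_ids, hyp)
print(text)
```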
af0271b7fa89ef034b4d0ac21398ce26
apache-2.0
['multiberts', 'multiberts-seed_4', 'multiberts-seed_4-step_1000k']
false
MultiBERTs, Intermediate Checkpoint - Seed 4, Step 1000k

MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with hyper-parameters similar to [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which vary the initial weights and the order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (28 checkpoints saved for each of the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #4, captured at step 1000k.
6a139c7637c1cb99e121ee119d51222b
apache-2.0
['multiberts', 'multiberts-seed_4', 'multiberts-seed_4-step_1000k']
false
How to use

Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on TensorFlow:

```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_4-step_1000k')
model = TFBertModel.from_pretrained("google/multiberts-seed_4-step_1000k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

PyTorch version:

```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_4-step_1000k')
model = BertModel.from_pretrained("google/multiberts-seed_4-step_1000k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
4e4349f98a4b6ae7adc7dc30aac4ac1f
apache-2.0
['generated_from_trainer']
false
NER_ehealth_Spanish_mBERT_fine_tuned

This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.6563
- Precision: 0.8094
- Recall: 0.8330
- F1: 0.8210
- Accuracy: 0.9051
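For quick qualitative checks, the fine-tuned tagger can be driven through the standard token-classification pipeline. A hedged sketch follows; the repository id is a placeholder, since the card does not give the published model path.

```
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="your-org/NER_ehealth_Spanish_mBERT_fine_tuned",  # placeholder id
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

# A toy Spanish eHealth sentence, not from the actual evaluation set.
for entity in ner("El paciente presenta fiebre y dolor abdominal."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```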
ee86c1692c997b146047f8c53332133c
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
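Expressed with the transformers API, these settings correspond to roughly the following TrainingArguments. This is a sketch: the output directory is a placeholder, and everything not listed on the card is assumed to stay at the Trainer defaults.

```
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="ner-ehealth-mbert",  # placeholder output path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=12,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-8 is already the
    # optimizer default, so no extra arguments are needed for it.
)
```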
7c5316a0607b2ec35d637075f3296f95
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 100  | 0.5335          | 0.8018    | 0.8307 | 0.8160 | 0.9047   |
| No log        | 2.0   | 200  | 0.5034          | 0.8110    | 0.8253 | 0.8181 | 0.9067   |
| No log        | 3.0   | 300  | 0.5632          | 0.7932    | 0.8230 | 0.8078 | 0.9038   |
| No log        | 4.0   | 400  | 0.5904          | 0.8004    | 0.8299 | 0.8149 | 0.9027   |
| 0.017         | 5.0   | 500  | 0.5958          | 0.7993    | 0.8330 | 0.8158 | 0.9071   |
| 0.017         | 6.0   | 600  | 0.6168          | 0.7980    | 0.8352 | 0.8162 | 0.9022   |
| 0.017         | 7.0   | 700  | 0.6219          | 0.8079    | 0.8314 | 0.8195 | 0.9062   |
| 0.017         | 8.0   | 800  | 0.6441          | 0.8046    | 0.8299 | 0.8171 | 0.9038   |
| 0.017         | 9.0   | 900  | 0.6338          | 0.8086    | 0.8253 | 0.8168 | 0.9051   |
| 0.0066        | 10.0  | 1000 | 0.6482          | 0.8021    | 0.8261 | 0.8139 | 0.9029   |
| 0.0066        | 11.0  | 1100 | 0.6578          | 0.8039    | 0.8291 | 0.8163 | 0.9038   |
| 0.0066        | 12.0  | 1200 | 0.6563          | 0.8094    | 0.8330 | 0.8210 | 0.9051   |
7adf4d66dc65ce8831a17e73ee11dcf3
apache-2.0
['automatic-speech-recognition', 'fr']
false
exp_w2v2t_fr_wav2vec2_s227

Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
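A minimal transcription sketch with HuggingSound, assuming the audio files are already 16kHz; the repository id is a placeholder for wherever this checkpoint is published.

```
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("your-org/exp_w2v2t_fr_wav2vec2_s227")  # placeholder id
transcriptions = model.transcribe(["audio_fr_16k.wav"])  # list of file paths
print(transcriptions[0]["transcription"])
```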
b37b7e9574b5deddd6f5262609a675e3
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-tweets

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.2703
- Accuracy: 0.9068
- F1: 0.9081
7549eb993e5727fe2c86a18bda9c0905
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3212        | 1.0   | 143  | 0.2487          | 0.8989   | 0.8991 |
| 0.2031        | 2.0   | 286  | 0.2268          | 0.9077   | 0.9074 |
| 0.1474        | 3.0   | 429  | 0.2385          | 0.9094   | 0.9107 |
| 0.1061        | 4.0   | 572  | 0.2516          | 0.9103   | 0.9111 |
| 0.0804        | 5.0   | 715  | 0.2703          | 0.9068   | 0.9081 |
c9991c9edeb7c22cec989bdc64d307aa
apache-2.0
['generated_from_trainer']
false
Tagged_One_250v5_NER_Model_3Epochs_AUGMENTED

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one250v5_wikigold_split dataset. It achieves the following results on the evaluation set:
- Loss: 0.3623
- Precision: 0.5500
- Recall: 0.4923
- F1: 0.5196
- Accuracy: 0.8950
00a9f87483e60468f9ad1c44db6baa06
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 91   | 0.3950          | 0.2800    | 0.2138 | 0.2424 | 0.8558   |
| No log        | 2.0   | 182  | 0.3633          | 0.4938    | 0.4306 | 0.4601 | 0.8887   |
| No log        | 3.0   | 273  | 0.3623          | 0.5500    | 0.4923 | 0.5196 | 0.8950   |
218ddb3dfb611b65df25b53d6ef0f635
apache-2.0
['generated_from_trainer']
false
english-filipino-wav2vec2-l-xls-r-test-06

This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the filipino_voice dataset. It achieves the following results on the evaluation set:
- Loss: 1.5806
- Wer: 0.6568
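Wer here is the word error rate: (substitutions + deletions + insertions) divided by the number of reference words. A hedged sketch of computing it with the evaluate library, using toy strings rather than data from the actual evaluation set:

```
import evaluate

wer_metric = evaluate.load("wer")
wer = wer_metric.compute(
    predictions=["magandang umaga po"],
    references=["magandang umaga po sa inyo"],
)
print(f"WER: {wer:.4f}")  # 2 deletions / 5 reference words = 0.4000
```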
d1a13d332ebd32e1075ac8289191ed61
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
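Note that total_train_batch_size = train_batch_size x gradient_accumulation_steps (8 x 2 = 16). Conceptually the Trainer does something like the following; this is a simplified, self-contained PyTorch sketch of gradient accumulation with a toy model, not the actual Trainer internals.

```
import torch
from torch import nn

# Toy stand-ins for the real model and data: the point is only the
# accumulation arithmetic, 8 examples x 2 micro-steps -> one update over 16.
model = nn.Linear(10, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=2e-3)
loader = [(torch.randn(8, 10), torch.randn(8, 1)) for _ in range(4)]
loss_fn = nn.MSELoss()
accumulation_steps = 2  # gradient_accumulation_steps from the card

optimizer.zero_grad()
for step, (inputs, targets) in enumerate(loader):
    loss = loss_fn(model(inputs), targets) / accumulation_steps  # average over micro-batches
    loss.backward()  # gradients accumulate in each parameter's .grad
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()       # one optimizer update per 16 examples
        optimizer.zero_grad()
```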
197ab5bd46f439f25fa61a45473d82f3
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.0031        | 2.09  | 400  | 1.2366          | 0.8780 |
| 0.9084        | 4.19  | 800  | 1.0653          | 0.8081 |
| 0.6484        | 6.28  | 1200 | 1.1648          | 0.8258 |
| 0.5335        | 8.38  | 1600 | 1.0903          | 0.7542 |
| 0.4359        | 10.47 | 2000 | 0.9466          | 0.7058 |
| 0.3629        | 12.57 | 2400 | 0.9266          | 0.7048 |
| 0.3057        | 14.66 | 2800 | 1.0879          | 0.7018 |
| 0.2477        | 16.75 | 3200 | 1.1113          | 0.7022 |
| 0.208         | 18.85 | 3600 | 1.1345          | 0.6742 |
| 0.1781        | 20.94 | 4000 | 1.3117          | 0.6974 |
| 0.1465        | 23.04 | 4400 | 1.3248          | 0.6916 |
| 0.1288        | 25.13 | 4800 | 1.4306          | 0.6523 |
| 0.1108        | 27.23 | 5200 | 1.5155          | 0.6685 |
| 0.099         | 29.32 | 5600 | 1.5806          | 0.6568 |
42aceb249a667f9f8f7633a560b9b26c
apache-2.0
['summarization', 'generated_from_trainer']
false
mt5-small-finetuned-18jan-4

This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 2.6070
- Rouge1: 5.8518
- Rouge2: 0.3333
- Rougel: 5.8423
- Rougelsum: 5.7268
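These ROUGE figures are on the 0-100 scale used by the summarization training scripts. A hedged sketch of reproducing the metric with the evaluate library, using toy inputs; recent evaluate versions return fractions, hence the multiplication by 100.

```
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["the cat sat on the mat"],   # toy prediction
    references=["the cat was sitting on the mat"],
)
for key in ("rouge1", "rouge2", "rougeL", "rougeLsum"):
    print(key, round(scores[key] * 100, 4))  # rescale to the card's 0-100 scale
```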
97ce462901e7c3f7b31a1b46bdd36c5a
apache-2.0
['summarization', 'generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
10f9370d32a6c37113041c87112fa399
apache-2.0
['summarization', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 7.6303        | 1.0   | 60   | 3.0842          | 6.1768 | 1.2345 | 6.2047 | 6.1838    |
| 3.8899        | 2.0   | 120  | 2.7540          | 7.9407 | 1.0    | 7.8852 | 7.9087    |
| 3.4335        | 3.0   | 180  | 2.7391          | 8.5431 | 0.5667 | 8.5448 | 8.4406    |
| 3.2524        | 4.0   | 240  | 2.6775          | 8.7375 | 0.4167 | 8.6926 | 8.569     |
| 3.0853        | 5.0   | 300  | 2.6776          | 7.7823 | 0.1667 | 7.7548 | 7.6573    |
| 2.974         | 6.0   | 360  | 2.6641          | 8.375  | 0.1667 | 8.3333 | 8.2167    |
| 2.9018        | 7.0   | 420  | 2.6233          | 7.2137 | 0.3333 | 7.147  | 7.0595    |
| 2.859         | 8.0   | 480  | 2.6238          | 6.6125 | 0.4167 | 6.656  | 6.4595    |
| 2.8123        | 9.0   | 540  | 2.5961          | 6.4262 | 0.3333 | 6.3682 | 6.2131    |
| 2.7843        | 10.0  | 600  | 2.6070          | 5.8518 | 0.3333 | 5.8423 | 5.7268    |
96be60587ad1a3f7ea22fe7d8567d2b6
apache-2.0
['summarization', 'generated_from_trainer']
false
mt5-small-finetuned-12feb-1

This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 2.4285
- Rouge1: 18.23
- Rouge2: 5.42
- Rougel: 18.09
e48f75ee3d293c320a8b52235e2ed7be
apache-2.0
['summarization', 'generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
8c86875ed707ba1bcee04b716875c4a7
apache-2.0
['summarization', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 3.0346        | 1.0   | 311  | 2.4880          | 17.19  | 5.28   | 17.06  |
| 2.8943        | 2.0   | 622  | 2.4751          | 17.77  | 5.18   | 17.59  |
| 2.8397        | 3.0   | 933  | 2.4719          | 17.65  | 5.38   | 17.55  |
| 2.806         | 4.0   | 1244 | 2.4614          | 18.26  | 5.23   | 18.03  |
| 2.7842        | 5.0   | 1555 | 2.4464          | 18.08  | 5.51   | 17.96  |
| 2.7855        | 6.0   | 1866 | 2.4437          | 17.9   | 5.37   | 17.8   |
| 2.7796        | 7.0   | 2177 | 2.4270          | 18.07  | 5.38   | 17.95  |
| 2.7951        | 8.0   | 2488 | 2.4267          | 17.96  | 5.36   | 17.85  |
| 2.7864        | 9.0   | 2799 | 2.4285          | 18.23  | 5.42   | 18.09  |
1f41dc8f8571547fe3498a84b501b10d
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
45994c9fe4de607505ec607e50245b60