Dataset columns:
- license: string (2–30 chars)
- tags: string (2–513 chars)
- is_nc: bool (1 class)
- readme_section: string (201–597k chars)
- hash: string (32 chars)
apache-2.0
['image-classification', 'pytorch', 'onnx']
false
Usage instructions

```python
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub

model = model_from_hf_hub("frgfm/rexnet1_0x").eval()
img = Image.open(path_to_an_image).convert("RGB")
```
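The snippet above stops after loading the image. Below is a minimal sketch of how the imported transforms could be wired up for inference; the 224x224 input size and the ImageNet normalization statistics are assumptions, not values taken from the card.

```python
import torch

# Sketch only: preprocessing pipeline built from the imports above.
# Input size and normalization stats are assumed ImageNet defaults.
transform = Compose([
    Resize((224, 224), interpolation=InterpolationMode.BILINEAR),
    PILToTensor(),
    ConvertImageDtype(torch.float32),  # uint8 [0, 255] -> float [0, 1]
    Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

with torch.no_grad():
    probs = model(transform(img).unsqueeze(0)).squeeze(0).softmax(dim=0)
print(f"Top class index: {probs.argmax().item()}")
```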
e20b0da036f2ffde2a38df683d31cc65
apache-2.0
['image-classification', 'pytorch', 'onnx']
false
Citation

Original paper

```bibtex
@article{DBLP:journals/corr/abs-2007-00992,
  author     = {Dongyoon Han and Sangdoo Yun and Byeongho Heo and Young Joon Yoo},
  title      = {ReXNet: Diminishing Representational Bottleneck on Convolutional Neural Network},
  journal    = {CoRR},
  volume     = {abs/2007.00992},
  year       = {2020},
  url        = {https://arxiv.org/abs/2007.00992},
  eprinttype = {arXiv},
  eprint     = {2007.00992},
  timestamp  = {Mon, 06 Jul 2020 15:26:01 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2007-00992.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```

Source of this implementation

```bibtex
@software{Fernandez_Holocron_2020,
  author = {Fernandez, François-Guillaume},
  month  = {5},
  title  = {{Holocron}},
  url    = {https://github.com/frgfm/Holocron},
  year   = {2020}
}
```
08aae6977ce04cee4ebc4e5524052689
mit
['generated_from_trainer']
false
nervous_wozniak This model was trained from scratch on the tomekkorbak/pii-pile-chunk3-0-50000, the tomekkorbak/pii-pile-chunk3-50000-100000, the tomekkorbak/pii-pile-chunk3-100000-150000, the tomekkorbak/pii-pile-chunk3-150000-200000, the tomekkorbak/pii-pile-chunk3-200000-250000, the tomekkorbak/pii-pile-chunk3-250000-300000, the tomekkorbak/pii-pile-chunk3-300000-350000, the tomekkorbak/pii-pile-chunk3-350000-400000, the tomekkorbak/pii-pile-chunk3-400000-450000, the tomekkorbak/pii-pile-chunk3-450000-500000, the tomekkorbak/pii-pile-chunk3-500000-550000, the tomekkorbak/pii-pile-chunk3-550000-600000, the tomekkorbak/pii-pile-chunk3-600000-650000, the tomekkorbak/pii-pile-chunk3-650000-700000, the tomekkorbak/pii-pile-chunk3-700000-750000, the tomekkorbak/pii-pile-chunk3-750000-800000, the tomekkorbak/pii-pile-chunk3-800000-850000, the tomekkorbak/pii-pile-chunk3-850000-900000, the tomekkorbak/pii-pile-chunk3-900000-950000, the tomekkorbak/pii-pile-chunk3-950000-1000000, the tomekkorbak/pii-pile-chunk3-1000000-1050000, the tomekkorbak/pii-pile-chunk3-1050000-1100000, the tomekkorbak/pii-pile-chunk3-1100000-1150000, the tomekkorbak/pii-pile-chunk3-1150000-1200000, the tomekkorbak/pii-pile-chunk3-1200000-1250000, the tomekkorbak/pii-pile-chunk3-1250000-1300000, the tomekkorbak/pii-pile-chunk3-1300000-1350000, the tomekkorbak/pii-pile-chunk3-1350000-1400000, the tomekkorbak/pii-pile-chunk3-1400000-1450000, the tomekkorbak/pii-pile-chunk3-1450000-1500000, the tomekkorbak/pii-pile-chunk3-1500000-1550000, the tomekkorbak/pii-pile-chunk3-1550000-1600000, the tomekkorbak/pii-pile-chunk3-1600000-1650000, the tomekkorbak/pii-pile-chunk3-1650000-1700000, the tomekkorbak/pii-pile-chunk3-1700000-1750000, the tomekkorbak/pii-pile-chunk3-1750000-1800000, the tomekkorbak/pii-pile-chunk3-1800000-1850000, the tomekkorbak/pii-pile-chunk3-1850000-1900000 and the tomekkorbak/pii-pile-chunk3-1900000-1950000 datasets.
2972a02f9d36655eedaabc38fbeac10d
mit
['generated_from_trainer']
false
Full config {'dataset': {'datasets': ['tomekkorbak/pii-pile-chunk3-0-50000', 'tomekkorbak/pii-pile-chunk3-50000-100000', 'tomekkorbak/pii-pile-chunk3-100000-150000', 'tomekkorbak/pii-pile-chunk3-150000-200000', 'tomekkorbak/pii-pile-chunk3-200000-250000', 'tomekkorbak/pii-pile-chunk3-250000-300000', 'tomekkorbak/pii-pile-chunk3-300000-350000', 'tomekkorbak/pii-pile-chunk3-350000-400000', 'tomekkorbak/pii-pile-chunk3-400000-450000', 'tomekkorbak/pii-pile-chunk3-450000-500000', 'tomekkorbak/pii-pile-chunk3-500000-550000', 'tomekkorbak/pii-pile-chunk3-550000-600000', 'tomekkorbak/pii-pile-chunk3-600000-650000', 'tomekkorbak/pii-pile-chunk3-650000-700000', 'tomekkorbak/pii-pile-chunk3-700000-750000', 'tomekkorbak/pii-pile-chunk3-750000-800000', 'tomekkorbak/pii-pile-chunk3-800000-850000', 'tomekkorbak/pii-pile-chunk3-850000-900000', 'tomekkorbak/pii-pile-chunk3-900000-950000', 'tomekkorbak/pii-pile-chunk3-950000-1000000', 'tomekkorbak/pii-pile-chunk3-1000000-1050000', 'tomekkorbak/pii-pile-chunk3-1050000-1100000', 'tomekkorbak/pii-pile-chunk3-1100000-1150000', 'tomekkorbak/pii-pile-chunk3-1150000-1200000', 'tomekkorbak/pii-pile-chunk3-1200000-1250000', 'tomekkorbak/pii-pile-chunk3-1250000-1300000', 'tomekkorbak/pii-pile-chunk3-1300000-1350000', 'tomekkorbak/pii-pile-chunk3-1350000-1400000', 'tomekkorbak/pii-pile-chunk3-1400000-1450000', 'tomekkorbak/pii-pile-chunk3-1450000-1500000', 'tomekkorbak/pii-pile-chunk3-1500000-1550000', 'tomekkorbak/pii-pile-chunk3-1550000-1600000', 'tomekkorbak/pii-pile-chunk3-1600000-1650000', 'tomekkorbak/pii-pile-chunk3-1650000-1700000', 'tomekkorbak/pii-pile-chunk3-1700000-1750000', 'tomekkorbak/pii-pile-chunk3-1750000-1800000', 'tomekkorbak/pii-pile-chunk3-1800000-1850000', 'tomekkorbak/pii-pile-chunk3-1850000-1900000', 'tomekkorbak/pii-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25177], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}], 'scorer_config': {}}, 'kl_gpt3_callback': {'force_call_on': [25177], 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'path_or_name': 'gpt2'}, 'objective': {'name': 'MLE'}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'nervous_wozniak', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output2', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25177, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}}
f8809fb4cd4d8c834dde9ea782629a91
mit
['generated_from_trainer']
false
gpt2-wikitext2 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unspecified dataset. It achieves the following results on the evaluation set:
- eval_loss: 6.6488
- eval_runtime: 22.5221
- eval_samples_per_second: 85.871
- eval_steps_per_second: 10.745
- epoch: 0.66
- step: 1490
fbf0a51a698bcd1418777529106b819f
apache-2.0
['generated_from_trainer']
false
finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set:
- Loss: 0.3328
- Accuracy: 0.8633
- F1: 0.8647
25ded3826e34ddc8605b4dfeba75cab5
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
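For orientation, here is a minimal sketch of how the hyperparameters listed above map onto a `transformers.TrainingArguments` object. The output directory name is a placeholder; the Adam betas/epsilon and the linear scheduler listed above are already the `Trainer` defaults.

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameter list above.
# "output_dir" is a placeholder name.
training_args = TrainingArguments(
    output_dir="finetuning-output",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10.0,
)
```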
a064a515070e8fe8a6cc6b4bf31b7740
apache-2.0
['translation']
false
opus-mt-en-ng
* source languages: en
* target languages: ng
* OPUS readme: [en-ng](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ng/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ng/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ng/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ng/opus-2020-01-08.eval.txt)
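The card lists only the raw OPUS artifacts. As a sketch, the model can presumably also be loaded through the `transformers` Marian classes; the Hub id below is an assumption based on the standard Helsinki-NLP naming convention for these releases.

```python
from transformers import MarianMTModel, MarianTokenizer

# Assumed Hub id following the Helsinki-NLP naming convention.
model_name = "Helsinki-NLP/opus-mt-en-ng"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

inputs = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
translated = model.generate(**inputs)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```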
223df1ceca44c650598e27bdd93ed47a
apache-2.0
['generated_from_trainer']
false
tiny-mlm-snli-target-glue-qnli This model is a fine-tuned version of [muhtasham/tiny-mlm-snli](https://huggingface.co/muhtasham/tiny-mlm-snli) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.4710
- Accuracy: 0.7811
2621411e8b454a7822c340466e24a84c
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6125 | 0.15 | 500 | 0.5374 | 0.7371 |
| 0.5442 | 0.31 | 1000 | 0.5321 | 0.7414 |
| 0.5223 | 0.46 | 1500 | 0.4991 | 0.7628 |
| 0.5165 | 0.61 | 2000 | 0.5155 | 0.7545 |
| 0.5118 | 0.76 | 2500 | 0.4795 | 0.7752 |
| 0.5052 | 0.92 | 3000 | 0.4663 | 0.7856 |
| 0.4916 | 1.07 | 3500 | 0.4500 | 0.7955 |
| 0.4818 | 1.22 | 4000 | 0.4669 | 0.7811 |
| 0.4685 | 1.37 | 4500 | 0.4774 | 0.7759 |
| 0.4761 | 1.53 | 5000 | 0.4710 | 0.7811 |
9c07764986e2ddec6df17bc3322aa9d8
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set:
- Loss: 0.2141
- Accuracy: 0.924
- F1: 0.9241
d560ec21e718a4a00ea46a67e91a8b41
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7828 | 1.0 | 250 | 0.2936 | 0.909 | 0.9070 |
| 0.2344 | 2.0 | 500 | 0.2141 | 0.924 | 0.9241 |
517903cd46aa4d987e5b6113d36d18d7
mit
[]
false
Garfield-Pizza-Plush on Stable Diffusion This is the `<garfield-plushy>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`:

![<garfield-plushy> 0](https://huggingface.co/sd-concepts-library/garfield-pizza-plush/resolve/main/concept_images/5.jpeg)
![<garfield-plushy> 1](https://huggingface.co/sd-concepts-library/garfield-pizza-plush/resolve/main/concept_images/3.jpeg)
![<garfield-plushy> 2](https://huggingface.co/sd-concepts-library/garfield-pizza-plush/resolve/main/concept_images/0.jpeg)
![<garfield-plushy> 3](https://huggingface.co/sd-concepts-library/garfield-pizza-plush/resolve/main/concept_images/2.jpeg)
![<garfield-plushy> 4](https://huggingface.co/sd-concepts-library/garfield-pizza-plush/resolve/main/concept_images/1.jpeg)
![<garfield-plushy> 5](https://huggingface.co/sd-concepts-library/garfield-pizza-plush/resolve/main/concept_images/4.jpeg)
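Besides the notebooks, a concept like this can typically be loaded with recent `diffusers` versions via `load_textual_inversion`. The sketch below assumes a Stable Diffusion v1.5 base checkpoint, which the card does not specify.

```python
import torch
from diffusers import StableDiffusionPipeline

# Base checkpoint is an assumption; the card only names the concept repo.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Pull the learned <garfield-plushy> embedding from the concept repo.
pipe.load_textual_inversion("sd-concepts-library/garfield-pizza-plush")

image = pipe("a photo of <garfield-plushy> on a beach").images[0]
image.save("garfield_plushy.png")
```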
f31740c59d19168753ae1aaa43774fef
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Fine-tuned French Voxpopuli wav2vec2 large model for speech recognition in French Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) on French using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :) The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
a7cf3f70f413f154bfe6ba2e13620578
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Usage The model can be used directly (without a language model) as follows... Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-fr-voxpopuli-french")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```

Writing your own inference script:

```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

LANG_ID = "fr"
MODEL_ID = "jonatasgrosman/wav2vec2-large-fr-voxpopuli-french"
SAMPLES = 10

test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
```
4bc659f99e647a149dc1c86a6e665225
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
We need to read the audio files as arrays

```python
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
    batch["speech"] = speech_array
    batch["sentence"] = batch["sentence"].upper()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)

for i, predicted_sentence in enumerate(predicted_sentences):
    print("-" * 100)
    print("Reference:", test_dataset[i]["sentence"])
    print("Prediction:", predicted_sentence)
```

| Reference | Prediction |
| ------------- | ------------- |
| "CE DERNIER A ÉVOLUÉ TOUT AU LONG DE L'HISTOIRE ROMAINE." | CE DERNIER A ÉVOLÉ TOUT AU LONG DE L'HISTOIRE ROMAINE |
| CE SITE CONTIENT QUATRE TOMBEAUX DE LA DYNASTIE ACHÉMÉNIDE ET SEPT DES SASSANIDES. | CE SITE CONTIENT QUATRE TOMBEAUX DE LA DYNESTIE ACHÉMÉNIDE ET SEPT DES SACENNIDES |
| "J'AI DIT QUE LES ACTEURS DE BOIS AVAIENT, SELON MOI, BEAUCOUP D'AVANTAGES SUR LES AUTRES." | JAI DIT QUE LES ACTEURS DE BOIS AVAIENT SELON MOI BEAUCOUP DAVANTAGE SUR LES AUTRES |
| LES PAYS-BAS ONT REMPORTÉ TOUTES LES ÉDITIONS. | LE PAYS-BAS ON REMPORTÉ TOUTES LES ÉDITIONS |
| IL Y A MAINTENANT UNE GARE ROUTIÈRE. | IL A MAINTENANT GULA E RETIREN |
| HUIT | HUIT |
| DANS L’ATTENTE DU LENDEMAIN, ILS NE POUVAIENT SE DÉFENDRE D’UNE VIVE ÉMOTION | DANS LATTENTE DU LENDEMAIN IL NE POUVAIT SE DÉFENDRE DUNE VIVE ÉMOTION |
| LA PREMIÈRE SAISON EST COMPOSÉE DE DOUZE ÉPISODES. | LA PREMIÈRE SAISON EST COMPOSÉE DE DOUZ ÉPISODES |
| ELLE SE TROUVE ÉGALEMENT DANS LES ÎLES BRITANNIQUES. | ELLE SE TROUVE ÉGALEMENT DANS LES ÎLES BRITANNIQUES |
| ZÉRO | ZÉRO |
9e7d460520ba41e4c4376b09db6e85f4
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Evaluation The model can be evaluated as follows on the French (fr) test data of Common Voice.

```python
import torch
import re
import librosa
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

LANG_ID = "fr"
MODEL_ID = "jonatasgrosman/wav2vec2-large-fr-voxpopuli-french"
DEVICE = "cuda"

CHARS_TO_IGNORE = [",", "?", "¿", ".", "!", "¡", ";", ";", ":", '""', "%", '"', "�", "ʿ", "·", "჻", "~", "՞", "؟", "،", "।", "॥", "«", "»", "„", "“", "”", "「", "」", "‘", "’", "《", "》", "(", ")", "[", "]", "{", "}", "=", "`", "_", "+", "<", ">", "…", "–", "°", "´", "ʾ", "‹", "›", "©", "®", "—", "→", "。", "、", "﹂", "﹁", "‧", "~", "﹏", ",", "{", "}", "(", ")", "[", "]", "【", "】", "‥", "〽", "『", "』", "〝", "〟", "⟨", "⟩", "〜", ":", "!", "?", "♪", "؛", "/", "\\", "º", "−", "^", "ʻ", "ˆ"]

test_dataset = load_dataset("common_voice", LANG_ID, split="test")

wer = load_metric("wer.py")
```
64811f98eab029a510bcf896e6a6ca31
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
```python
cer = load_metric("cer.py")  # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/cer.py

chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]"

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.to(DEVICE)
```
70a06aa5da66d91874efb8328b09f91c
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
We need to read the audio files as arrays

```python
import warnings

def speech_file_to_array_fn(batch):
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")
        speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
    batch["speech"] = speech_array
    batch["sentence"] = re.sub(chars_to_ignore_regex, "", batch["sentence"]).upper()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
```
e0257151865129a567a03bc24841d9bf
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
We need to read the audio files as arrays

```python
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to(DEVICE), attention_mask=inputs.attention_mask.to(DEVICE)).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

predictions = [x.upper() for x in result["pred_strings"]]
references = [x.upper() for x in result["sentence"]]

print(f"WER: {wer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
print(f"CER: {cer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
```

**Test Result**: In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well (on 2021-05-16). Note that the table below may show different results from those already reported; this may be due to differences in the evaluation scripts used.

| Model | WER | CER |
| ------------- | ------------- | ------------- |
| jonatasgrosman/wav2vec2-large-xlsr-53-french | **15.90%** | **5.29%** |
| jonatasgrosman/wav2vec2-large-fr-voxpopuli-french | 17.62% | 6.04% |
| Ilyes/wav2vec2-large-xlsr-53-french | 19.67% | 6.70% |
| Nhut/wav2vec2-large-xlsr-french | 24.09% | 8.42% |
| facebook/wav2vec2-large-xlsr-53-french | 25.45% | 10.35% |
| MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-French | 28.22% | 9.70% |
| Ilyes/wav2vec2-large-xlsr-53-french_punctuation | 29.80% | 11.79% |
| facebook/wav2vec2-base-10k-voxpopuli-ft-fr | 61.06% | 33.31% |
64eed83b4ae9182e24c79c1d2ab4d5e2
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Citation If you want to cite this model you can use this:

```bibtex
@misc{grosman2021voxpopuli-fr-wav2vec2-large-french,
  title={Fine-tuned {F}rench {V}oxpopuli wav2vec2 large model for speech recognition in {F}rench},
  author={Grosman, Jonatas},
  howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-fr-voxpopuli-french}},
  year={2021}
}
```
93ea671aad16aa79840e3be589d1fa7d
apache-2.0
[]
false
Example Usage

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("laituan245/molt5-base-caption2smiles", model_max_length=512)
model = T5ForConditionalGeneration.from_pretrained('laituan245/molt5-base-caption2smiles')

input_text = 'The molecule is a monomethoxybenzene that is 2-methoxyphenol substituted by a hydroxymethyl group at position 4. It has a role as a plant metabolite. It is a member of guaiacols and a member of benzyl alcohols.'

input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, num_beams=5, max_length=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
8ff3a563afd9330b5047ead80f4fa666
apache-2.0
[]
false
Paper For more information, please take a look at our paper. Paper: [Translation between Molecules and Natural Language](https://arxiv.org/abs/2204.11817) Authors: *Carl Edwards\*, Tuan Lai\*, Kevin Ros, Garrett Honke, Heng Ji*
f20027548f04481251fa2cb4d636d85a
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
`pyf98/aishell_e_branchformer` This model was trained by Yifan Peng using the aishell recipe in [espnet](https://github.com/espnet/espnet/). References:
- [E-Branchformer: Branchformer with Enhanced merging for speech recognition (SLT 2022)](https://arxiv.org/abs/2210.00077)
- [Branchformer: Parallel MLP-Attention Architectures to Capture Local and Global Context for Speech Recognition and Understanding (ICML 2022)](https://proceedings.mlr.press/v162/peng22a.html)
419d76b2dad452bf4e0ebff516b10f2b
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
Demo: How to use in ESPnet2 Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html) if you haven't done that already.

```bash
cd espnet
git checkout 89567acf6047737820aef96d2dd2e611825c8b1e
pip install -e .
cd egs2/aishell/asr1
./run.sh --skip_data_prep false --skip_train true --download_model pyf98/aishell_e_branchformer
```
da90f7537069934ed30133d31524055c
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
Environments
- date: `Sun Dec 18 12:21:46 CST 2022`
- python version: `3.9.15 (main, Nov 24 2022, 14:31:59) [GCC 11.2.0]`
- espnet version: `espnet 202209`
- pytorch version: `pytorch 1.12.1`
- Git hash: `26f432bc859e5e40cac1a86042d498ba7baffbb0`
- Commit date: `Fri Dec 9 02:16:01 2022 +0000`
42e31c9ef9eca92bd3524ad069641371
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
WER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_branchformer_asr_model_valid.acc.ave/dev|14326|14326|66.9|33.1|0.0|0.0|33.1|33.1|
|decode_asr_branchformer_asr_model_valid.acc.ave/test|7176|7176|65.4|34.6|0.0|0.0|34.6|34.6|
b0230e132fb95bde13d28ac1ee256b3b
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
CER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_branchformer_asr_model_valid.acc.ave/dev|14326|205341|95.9|4.0|0.1|0.1|4.2|33.1|
|decode_asr_branchformer_asr_model_valid.acc.ave/test|7176|104765|95.6|4.3|0.1|0.1|4.5|34.6|
64ab2c265ba661bdca83636f442ce646
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_e_branchformer_e12_mlp1024_linear1024_mactrue_amp.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_e_branchformer_e12_mlp1024_linear1024_mactrue_amp_raw_zh_char_sp ngpu: 1 seed: 0 num_workers: 4 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 2 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 39475 dist_launcher: null multiprocessing_distributed: true unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 60 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 5 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: true log_interval: null use_matplotlib: true use_tensorboard: true create_graph_in_tensorboard: false use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 25000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_zh_char_sp/train/speech_shape - exp/asr_stats_raw_zh_char_sp/train/text_shape.char valid_shape_file: - exp/asr_stats_raw_zh_char_sp/valid/speech_shape - exp/asr_stats_raw_zh_char_sp/valid/text_shape.char batch_type: numel valid_batch_type: null fold_length: - 51200 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_sp/wav.scp - speech - kaldi_ark - - dump/raw/train_sp/text - text - text valid_data_path_and_name_and_type: - - dump/raw/dev/wav.scp - speech - kaldi_ark - - dump/raw/dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.001 weight_decay: 1.0e-06 scheduler: warmuplr scheduler_conf: warmup_steps: 35000 token_list: - <blank> - <unk> - 的 - 一 - 在 - 十 - 中 - 是 - 人 - 有 - 二 - 上 - 了 - 不 - 国 - 市 - 大 - 业 - 为 - 年 - 三 - 发 - 个 - 分 - 出 - 会 - 公 - 行 - 地 - 成 - 这 - 和 - 到 - 五 - 产 - 时 - 对 - 房 - 百 - 能 - 场 - 来 - 以 - 新 - 之 - 日 - 者 - 将 - 现 - 四 - 要 - 家 - 资 - 多 - 月 - 也 - 方 - 后 - 机 - 下 - 前 - 零 - 比 - 于 - 生 - 点 - 开 - 动 - 高 - 经 - 进 - 报 - 体 - 赛 - 子 - 万 - 车 - 用 - 金 - 司 - 可 - 被 - 过 - 手 - 本 - 作 - 自 - 全 - 八 - 六 - 最 - 价 - 目 - 电 - 部 - 交 - 九 - 七 - 面 - 我 - 企 - 加 - 小 - 度 - 实 - 同 - 城 - 工 - 其 - 力 - 定 - 而 - 元 - 合 - 已 - 内 - 与 - 法 - 还 - 关 - 网 - 得 - 他 - 就 - 入 - 名 - 品 - 女 - 记 - 理 - 事 - 长 - 两 - 商 - 都 - 们 - 京 - 并 - 但 - 平 - 制 - 保 - 据 - 期 - 化 - 主 - 重 - 表 - 次 - 相 - 量 - 通 - 道 - 政 - 所 - 天 - 第 - 利 - 间 - 海 - 数 - 务 - 提 - 北 - 展 - 员 - 管 - 投 - 因 - 建 - 好 - 外 - 区 - 更 - 示 - 增 - 从 - 计 - 信 - 性 - 等 - 运 - 项 - 应 - 当 - 收 - 位 - 着 - 起 - 学 - 台 - 民 - 持 - 规 - 设 - 明 - 股 - 正 - 没 - 心 - 然 - 很 - 今 - 调 - 去 - 安 - 此 - 东 - 队 - 如 - 线 - 科 - 世 - 无 - 达 - 身 - 果 - 证 - 基 - 受 - 男 - 需 - 标 - 布 - 情 - 格 - 近 - 步 - 未 - 费 - 求 - 式 - 消 - 千 - 美 - 些 - 里 - 米 - 向 - 看 - 续 - 息 - 意 - 接 - 门 - 回 - 及 - 销 - 老 - 获 - 总 - 监 - 打 - 联 - 至 - 亿 - 说 - 讯 - 住 - 环 - 件 - 整 - 水 - 技 - 路 - 院 - 局 - 特 - 该 - 统 - 由 - 售 - 购 - 强 - 改 - 问 - 乐 - 楼 - 涨 - 处 - 决 - 让 - 系 - 户 - 题 - 推 - 少 - 广 
- 显 - 降 - 跑 - 影 - 只 - 选 - 称 - 创 - 易 - 战 - 首 - 完 - 案 - 策 - 常 - 查 - 参 - 种 - 牌 - 程 - 银 - 备 - 认 - 营 - 立 - 势 - 结 - 造 - 超 - 己 - 准 - 存 - 险 - 球 - 各 - 代 - 低 - 再 - 做 - 级 - 款 - 放 - 物 - 告 - 原 - 友 - 转 - 警 - 周 - 界 - 张 - 样 - 传 - 较 - 风 - 单 - 给 - 她 - 州 - 解 - 则 - 视 - 指 - 预 - 升 - 华 - 供 - 走 - 每 - 取 - 导 - 搜 - 集 - 文 - 变 - 客 - 排 - 片 - 头 - 任 - 积 - 术 - 率 - 型 - 军 - 斯 - 研 - 别 - 非 - 直 - 智 - 速 - 组 - 星 - 领 - 口 - 份 - 岁 - 马 - 王 - 快 - 专 - 社 - 使 - 团 - 模 - 器 - 难 - 活 - 拉 - 或 - 约 - 施 - 源 - 构 - 支 - 医 - 儿 - 带 - 服 - 先 - 想 - 引 - 么 - 办 - 照 - 狐 - 权 - 微 - 南 - 始 - 融 - 深 - 士 - 游 - 绩 - 仅 - 况 - 媒 - 随 - 半 - 越 - 幅 - 确 - 注 - 类 - 争 - 税 - 限 - 流 - 均 - 控 - 充 - 额 - 望 - 连 - 划 - 奥 - 亚 - 包 - 娱 - 西 - 财 - 值 - 伤 - 某 - 致 - 终 - 空 - 济 - 众 - 际 - 土 - 买 - 仍 - 育 - 师 - 汽 - 知 - 质 - 态 - 具 - 李 - 责 - 究 - 露 - 条 - 几 - 居 - 共 - 响 - 反 - 站 - 冠 - 节 - 季 - 优 - 委 - 宅 - 观 - 互 - 见 - 范 - 境 - 感 - 负 - 段 - 失 - 采 - 套 - 域 - 尔 - 举 - 何 - 光 - 气 - 落 - 博 - 教 - 锦 - 林 - 山 - 依 - 继 - 极 - 形 - 图 - 审 - 竞 - 益 - 断 - 贷 - 效 - 府 - 复 - 许 - 容 - 健 - 击 - 足 - 又 - 诉 - 助 - 孩 - 色 - 停 - 票 - 双 - 拿 - 板 - 松 - 热 - 那 - 把 - 却 - 清 - 刘 - 议 - 考 - 减 - 曾 - 疑 - 例 - 除 - 功 - 占 - 你 - 试 - 根 - 港 - 太 - 离 - 才 - 货 - 突 - 涉 - 且 - 券 - 配 - 盘 - 即 - 库 - 付 - 破 - 职 - 演 - 农 - 置 - 纪 - 论 - 真 - 龙 - 晚 - 装 - 爱 - 号 - 练 - 死 - 压 - 亲 - 严 - 评 - 田 - 话 - 托 - 护 - 火 - 协 - 红 - 江 - 克 - 卖 - 言 - 租 - 善 - 频 - 普 - 飞 - 验 - 补 - 边 - 满 - 象 - 软 - 算 - 遭 - 馀 - 闻 - 稳 - 厂 - 远 - 苹 - 钱 - 担 - 判 - 官 - 虽 - 湾 - 按 - 昨 - 校 - 必 - 园 - 略 - 救 - 希 - 底 - 执 - 够 - 征 - 拍 - 历 - 像 - 润 - 层 - 债 - 便 - 障 - 围 - 康 - 店 - 往 - 列 - 早 - 测 - 录 - 否 - 香 - 宝 - 阳 - 索 - 核 - 兴 - 检 - 状 - 英 - 村 - 料 - 云 - 留 - 夫 - 移 - 奖 - 病 - 临 - 轻 - 省 - 秒 - 激 - 请 - 革 - 属 - 遇 - 跌 - 维 - 批 - 德 - 承 - 端 - 介 - 精 - 夺 - 群 - 初 - 胜 - 卡 - 尽 - 花 - 辆 - 它 - 故 - 神 - 届 - 治 - 透 - 景 - 白 - 副 - 什 - 宣 - 铁 - 杨 - 跳 - 假 - 登 - 福 - 青 - 药 - 婚 - 养 - 幕 - 违 - 短 - 访 - 修 - 纷 - 律 - 左 - 角 - 酒 - 括 - 爆 - 嫌 - 径 - 宁 - 董 - 适 - 逐 - 刚 - 防 - 陈 - 午 - 差 - 庭 - 独 - 波 - 食 - 识 - 似 - 候 - 黄 - 亡 - 训 - 书 - 退 - 待 - 航 - 块 - 冲 - 扩 - 吴 - 甚 - 申 - 伟 - 眼 - 巴 - 觉 - 找 - 换 - 义 - 轮 - 滑 - 席 - 央 - 送 - 右 - 卫 - 乘 - 石 - 字 - 罪 - 罗 - 泳 - 孙 - 析 - 志 - 另 - 母 - 绿 - 抢 - 止 - 令 - 童 - 妈 - 史 - 刑 - 洲 - 述 - 穿 - 念 - 纳 - 损 - 富 - 免 - 毒 - 络 - 紧 - 妻 - 乎 - 豪 - 素 - 害 - 倒 - 吸 - 街 - 促 - 择 - 杀 - 追 - 巨 - 犯 - 声 - 愿 - 晨 - 思 - 谈 - 河 - 镇 - 尼 - 跟 - 庆 - 链 - 措 - 借 - 赔 - 密 - 圳 - 贴 - 苏 - 温 - 骗 - 习 - 摄 - 版 - 帮 - 币 - 阶 - 阿 - 迎 - 驾 - 黑 - 趋 - 县 - 私 - 吃 - 疗 - 细 - 虑 - 脑 - 韩 - 亮 - 旅 - 抓 - 罚 - 良 - 背 - 脸 - 绝 - 班 - 危 - 础 - 戏 - 戴 - 招 - 命 - 尚 - 缺 - 伙 - 须 - 父 - 夜 - 切 - 操 - 挥 - 派 - 延 - 撞 - 披 - 衣 - 剧 - 陆 - 竟 - 签 - 欧 - 享 - 春 - 徽 - 裁 - 偿 - 启 - 艺 - 宗 - 味 - 察 - 估 - 净 - 募 - 拥 - 释 - 喜 - 顺 - 励 - 靠 - 渐 - 兰 - 油 - 佳 - 困 - 针 - 迷 - 写 - 材 - 硬 - 桥 - 坚 - 订 - 拳 - 累 - 盖 - 室 - 束 - 截 - 距 - 驶 - 旬 - 歌 - 悉 - 烈 - 序 - 患 - 干 - 污 - 圈 - 杰 - 顶 - 败 - 伴 - 归 - 探 - 曝 - 怀 - 急 - 池 - 织 - 秀 - 姐 - 峰 - 顾 - 误 - 键 - 丰 - 玩 - 汉 - 古 - 彩 - 讨 - 朋 - 抗 - 刺 - 挑 - 血 - 凌 - 旧 - 拟 - 晒 - 附 - 惊 - 欢 - 劳 - 丈 - 播 - 徐 - 吗 - 湖 - 笑 - 馆 - 音 - 阵 - 坐 - 谷 - 异 - 怎 - 夏 - 龄 - 熟 - 若 - 惠 - 休 - 永 - 哪 - 暂 - 输 - 绍 - 印 - 冰 - 缓 - 暖 - 听 - 避 - 嘉 - 寻 - 培 - 筹 - 伦 - 雪 - 账 - 暴 - 简 - 予 - 丽 - 泽 - 刻 - 野 - 威 - 宽 - 笔 - 语 - 武 - 炒 - 虚 - 架 - 奇 - 哥 - 尤 - 座 - 迅 - 粉 - 倍 - 朱 - 屋 - 般 - 错 - 津 - 弟 - 汇 - 概 - 鼓 - 掉 - 郑 - 钟 - 召 - 礼 - 禁 - 折 - 缩 - 锁 - 涛 - 乡 - 肥 - 幸 - 雨 - 梦 - 肉 - 攻 - 冬 - 呼 - 蓝 - 综 - 码 - 杯 - 映 - 刀 - 谢 - 编 - 脚 - 晓 - 遍 - 朝 - 吉 - 洗 - 盗 - 丹 - 屏 - 盛 - 秘 - 拘 - 染 - 渠 - 扣 - 洋 - 梯 - 枪 - 久 - 诈 - 川 - 摩 - 俄 - 迪 - 毛 - 赞 - 符 - 画 - 翻 - 妹 - 筑 - 聚 - 哈 - 兵 - 肯 - 胎 - 潮 - 苦 - 逃 - 讲 - 授 - 慢 - 顿 - 遗 - 丝 - 呈 - 揭 - 挂 - 封 - 慧 - 跨 - 询 - 拆 - 森 - 孕 - 脱 - 读 - 枚 - 捐 - 桩 - 跃 - 刷 - 芯 - 斗 - 昆 - 储 - 守 - 触 - 木 - 皮 - 饭 - 添 - 莞 - 震 - 载 - 贵 - 侵 - 撑 - 爸 - 册 - 舞 - 丁 - 贸 - 奶 - 隐 - 妇 - 榜 - 睡 - 陷 - 草 - 扬 - 袭 - 偷 - 督 - 亏 - 吕 - 珠 - 赶 - 扶 - 盈 - 档 - 诺 - 返 - 既 - 
末 - 沙 - 谁 - 宏 - 摘 - 典 - 床 - 闭 - 弃 - 雷 - 毕 - 郭 - 玲 - 郎 - 芝 - 胡 - 瑞 - 盟 - 厅 - 抱 - 燃 - 铜 - 旗 - 荣 - 餐 - 牙 - 爷 - 迹 - 宇 - 途 - 潜 - 抵 - 骨 - 援 - 浪 - 玉 - 祖 - 振 - 虹 - 散 - 焦 - 勇 - 努 - 婆 - 拒 - 弹 - 梁 - 坛 - 含 - 坏 - 纯 - 烟 - 冷 - 镜 - 叫 - 赵 - 静 - 仪 - 藏 - 杂 - 痛 - 慎 - 树 - 章 - 塞 - 钢 - 狂 - 呢 - 雅 - 寿 - 恩 - 固 - 狗 - 菜 - 沟 - 献 - 叶 - 泰 - 赢 - 剩 - 窃 - 偏 - 掌 - 宜 - 课 - 趣 - 喝 - 纠 - 籍 - 替 - 炸 - 隔 - 砸 - 搭 - 诚 - 族 - 浙 - 齐 - 杆 - 晋 - 恶 - 奋 - 秋 - 鲜 - 鲁 - 冒 - 赚 - 弱 - 腿 - 祝 - 混 - 缴 - 疾 - 握 - 汪 - 辉 - 奔 - 醒 - 捕 - 骑 - 鸟 - 摆 - 灵 - 敏 - 牛 - 岛 - 恋 - 耗 - 瓦 - 拼 - 恐 - 棒 - 坦 - 厚 - 侧 - 尝 - 薪 - 堂 - 曲 - 答 - 雄 - 徒 - 碍 - 拓 - 翔 - 佛 - 佐 - 滴 - 杭 - 残 - 毫 - 射 - 拖 - 阻 - 辑 - 踪 - 症 - 姓 - 欲 - 鱼 - 船 - 恢 - 衡 - 淡 - 唯 - 乏 - 迟 - 琪 - 烧 - 唐 - 卷 - 陪 - 伏 - 劵 - 繁 - 逆 - 迁 - 诊 - 乱 - 亦 - 谓 - 矿 - 迫 - 忧 - 扮 - 巢 - 扎 - 卓 - 恒 - 庄 - 递 - 灾 - 莱 - 赴 - 煤 - 搏 - 剂 - 梅 - 吧 - 撤 - 哲 - 炳 - 尾 - 誉 - 洛 - 轨 - 署 - 党 - 惯 - 幼 - 缘 - 墨 - 莫 - 辞 - 奏 - 敢 - 垄 - 旁 - 蒙 - 箱 - 吨 - 泛 - 怕 - 闹 - 欠 - 劫 - 纸 - 岸 - 淘 - 赌 - 窗 - 洁 - 岗 - 娘 - 晶 - 劲 - 凭 - 斤 - 洪 - 液 - 槛 - 兼 - 摔 - 楚 - 昌 - 菲 - 萌 - 伍 - 沿 - 咨 - 饮 - 墙 - 沈 - 坡 - 寸 - 溢 - 仓 - 鉴 - 慈 - 柯 - 旦 - 殊 - 坠 - 诸 - 搞 - 伊 - 霸 - 绑 - 氧 - 墅 - 轿 - 蛋 - 忙 - 滨 - 井 - 逼 - 伯 - 癌 - 燕 - 赖 - 浦 - 漏 - 携 - 堪 - 阅 - 诗 - 贩 - 腐 - 倾 - 铺 - 旺 - 横 - 逊 - 允 - 窄 - 鸡 - 唱 - 贿 - 拨 - 砍 - 猛 - 碳 - 堵 - 邀 - 冕 - 栏 - 姆 - 耳 - 绕 - 览 - 聘 - 琳 - 霞 - 挖 - 庞 - 彻 - 颁 - 挺 - 沉 - 抄 - 宫 - 殴 - 垃 - 圾 - 尸 - 涵 - 娃 - 婷 - 牵 - 腾 - 卧 - 偶 - 扰 - 澳 - 迈 - 虎 - 贡 - 词 - 壁 - 宾 - 捷 - 忍 - 佩 - 喊 - 抽 - 植 - 炼 - 奸 - 吐 - 抛 - 祥 - 莉 - 泄 - 械 - 乒 - 辛 - 疯 - 凯 - 扫 - 灯 - 淀 - 毁 - 鬼 - 婴 - 淫 - 冻 - 篮 - 聊 - 帅 - 乔 - 沪 - 羽 - 舍 - 裂 - 忽 - 圆 - 拔 - 朗 - 宿 - 麻 - 眠 - 玮 - 塔 - 碰 - 怪 - 押 - 攀 - 驰 - 欣 - 踏 - 巩 - 废 - 艰 - 乳 - 句 - 侦 - 兄 - 荐 - 寓 - 厦 - 贝 - 纵 - 肖 - 杜 - 忘 - 丢 - 搬 - 曼 - 瓶 - 鹏 - 默 - 惨 - 泡 - 愈 - 敦 - 洞 - 劝 - 颖 - 酷 - 颜 - 巡 - 脏 - 仿 - 羊 - 挤 - 廉 - 麦 - 塌 - 君 - 敌 - 乌 - 俩 - 樊 - 邮 - 烯 - 详 - 舒 - 契 - 漫 - 胞 - 魔 - 宋 - 伐 - 谨 - 姿 - 姑 - 隆 - 纹 - 傅 - 茶 - 著 - 谋 - 敬 - 郁 - 驱 - 菌 - 悬 - 循 - 摊 - 闪 - 伪 - 鸿 - 娜 - 澎 - 湃 - 炉 - 暗 - 闯 - 绪 - 汰 - 稿 - 咬 - 卢 - 泉 - 涌 - 蕾 - 姻 - 熊 - 稀 - 摇 - 吊 - 桌 - 俊 - 哭 - 赠 - 逸 - 吓 - 赫 - 凡 - 俱 - 冯 - 巧 - 涯 - 啦 - 讼 - 恰 - 抚 - 肇 - 锋 - 凶 - 贯 - 悄 - 灭 - 冀 - 糕 - 伸 - 胖 - 腹 - 郊 - 斌 - 鑫 - 厉 - 肩 - 圣 - 浮 - 妙 - 饰 - 尖 - 尊 - 邱 - 诞 - 屡 - 摸 - 酬 - 闲 - 晰 - 匹 - 锻 - 甲 - 敲 - 遥 - 勒 - 兑 - 熙 - 稽 - 蔡 - 惜 - 猫 - 怒 - 驻 - 颇 - 浓 - 宴 - 仁 - 赏 - 磨 - 悲 - 骂 - 轴 - 姜 - 猪 - 割 - 歉 - 玻 - 浩 - 番 - 渡 - 肌 - 践 - 盾 - 甜 - 溺 - 尺 - 忆 - 盐 - 泥 - 薄 - 矛 - 畅 - 抑 - 颗 - 蒋 - 稍 - 碎 - 帝 - 璃 - 掀 - 拐 - 牢 - 幻 - 仔 - 粮 - 艾 - 扭 - 尿 - 刊 - 仑 - 黎 - 埃 - 臂 - 邻 - 苗 - 衔 - 桂 - 潭 - 履 - 贾 - 饼 - 惩 - 诱 - 旋 - 篇 - 辽 - 旭 - 逾 - 豆 - 潘 - 堆 - 甘 - 邦 - 氏 - 拦 - 硕 - 棋 - 裤 - 乓 - 姚 - 厘 - 邓 - 陶 - 萨 - 弗 - 辅 - 廷 - 吁 - 杠 - 绮 - 瑄 - 夹 - 槽 - 祸 - 袁 - 勾 - 赁 - 帖 - 腰 - 漂 - 裕 - 嘴 - 壮 - 弯 - 啊 - 汤 - 垫 - 魏 - 倡 - 栋 - 碑 - 颈 - 暑 - 魅 - 裸 - 疏 - 雇 - 毅 - 忠 - 疆 - 葛 - 凤 - 屈 - 悦 - 馈 - 挡 - 闫 - 氮 - 兆 - 貌 - 厕 - 谣 - 颠 - 猜 - 疲 - 框 - 揽 - 胁 - 憾 - 秩 - 艳 - 帽 - 氛 - 荷 - 泪 - 剑 - 懂 - 钻 - 遵 - 贪 - 贼 - 狱 - 姣 - 寺 - 胶 - 吵 - 催 - 削 - 丑 - 欺 - 肃 - 妥 - 烦 - 灰 - 擅 - 佣 - 萧 - 虾 - 鞋 - 捧 - 逝 - 猥 - 瓜 - 酸 - 奈 - 厨 - 紫 - 侠 - 塑 - 娇 - 辖 - 舆 - 擦 - 柏 - 澄 - 磊 - 虐 - 轰 - 曹 - 删 - 鼻 - 柳 - 屯 - 笼 - 皇 - 糖 - 珍 - 疼 - 柜 - 捡 - 址 - 肠 - 捞 - 拜 - 峻 - 吹 - 乃 - 瘦 - 肚 - 贤 - 帕 - 岳 - 勤 - 瑜 - 锅 - 沫 - 俗 - 昕 - 帆 - 茂 - 醉 - 填 - 饱 - 爬 - 轩 - 滞 - 蜜 - 汗 - 飙 - 耐 - 亨 - 媳 - 彭 - 蓄 - 蝶 - 炮 - 鼠 - 咖 - 琴 - 宠 - 棍 - 掘 - 茨 - 坑 - 湘 - 孟 - 劣 - 灿 - 虫 - 彦 - 喷 - 描 - 辩 - 尴 - 尬 - 弥 - 孤 - 峡 - 凸 - 逻 - 辰 - 孔 - 抬 - 馨 - 蔚 - 怡 - 雯 - 砖 - 崇 - 肢 - 柱 - 阔 - 彼 - 荒 - 滚 - 葡 - 萄 - 昂 - 盆 - 怨 - 瞬 - 斜 - 斩 - 睛 - 剪 - 插 - 棚 - 串 - 沃 - 柔 - 肤 - 壳 - 胸 - 陕 - 凉 - 崛 - 鸣 - 罕 - 衷 - 阴 - 盲 - 伞 - 戒 - 踢 - 狼 - 埋 - 酿 - 旨 - 戈 - 捉 - 跪 - 贺 - 谭 - 涂 - 萎 - 滋 - 昏 - 扇 - 鼎 - 楠 - 驳 - 溪 - 桑 - 钧 - 荡 - 痕 - 玛 - 躲 - 谐 - 您 - 叹 - 桶 - 晕 - 丙 - 璇 - 咚 - 烂 - 杉 - 挣 - 窝 - 亵 - 芸 - 渝 - 芳 - 妆 - 膜 - 煌 - 尘 - 侯 - 赋 - 渣 - 贫 
- 桃 - 页 - 吞 - 胀 - 竹 - 肝 - 雾 - 嫁 - 辈 - 愤 - 琐 - 殖 - 媛 - 寄 - 僵 - 逮 - 聪 - 粗 - 寒 - 弄 - 墓 - 谌 - 扔 - 役 - 呆 - 靖 - 蒂 - 芬 - 翼 - 喂 - 孵 - 谎 - 硅 - 璨 - 喀 - 盼 - 盒 - 慌 - 烫 - 秦 - 梳 - 韦 - 袋 - 钓 - 夕 - 碗 - 寨 - 塘 - 衍 - 垒 - 卿 - 滩 - 扑 - 绘 - 辱 - 炎 - 铅 - 肿 - 衰 - 厢 - 躺 - 纽 - 硫 - 睐 - 翁 - 慰 - 耍 - 缠 - 狠 - 脉 - 斥 - 脂 - 趴 - 钩 - 歧 - 椅 - 踩 - 掷 - 挽 - 锐 - 勘 - 逢 - 郝 - 宪 - 胃 - 粒 - 瞩 - 辟 - 皆 - 仰 - 腕 - 匪 - 陵 - 钥 - 缝 - 闸 - 犬 - 锡 - 弊 - 凝 - 臭 - 趁 - 拾 - 夸 - 掩 - 耀 - 炭 - 铬 - 叠 - 坊 - 挪 - 蟹 - 裹 - 狮 - 辐 - 陌 - 捅 - 疫 - 兹 - 霍 - 锈 - 娟 - 蚁 - 奢 - 吻 - 侃 - 晖 - 扳 - 冤 - 彰 - 蹈 - 畴 - 蛇 - 濠 - 啡 - 堡 - 侣 - 撒 - 铭 - 掏 - 奎 - 蜂 - 咸 - 穷 - 瞄 - 遂 - 碾 - 匿 - 瓷 - 舱 - 刹 - 柄 - 倪 - 睹 - 译 - 淇 - 猝 - 浅 - 肺 - 湿 - 顽 - 罩 - 胆 - 匙 - 渴 - 妮 - 羞 - 脆 - 魄 - 锂 - 纤 - 炫 - 裙 - 肾 - 傲 - 膝 - 叔 - 啥 - 撕 - 牲 - 猴 - 辨 - 酝 - 刮 - 惑 - 渗 - 喻 - 晴 - 淑 - 羡 - 慕 - 擂 - 骚 - 纺 - 咕 - 僧 - 悔 - 垂 - 瘫 - 剥 - 舰 - 浏 - 鲍 - 跻 - 亭 - 撰 - 卸 - 莲 - 纱 - 糊 - 朵 - 岩 - 眉 - 函 - 糟 - 仗 - 惹 - 琦 - 贞 - 氢 - 楷 - 莓 - 瞒 - 奠 - 勃 - 锤 - 妨 - 帷 - 洽 - 乞 - 牺 - 亩 - 簿 - 斑 - 翘 - 祈 - 唇 - 耕 - 扯 - 妍 - 坎 - 谱 - 盯 - 泼 - 悍 - 莎 - 汁 - 囊 - 甩 - 辣 - 浸 - 恼 - 盔 - 烤 - 坝 - 巅 - 沸 - 抹 - 邹 - 霾 - 怖 - 犹 - 擎 - 迄 - 恨 - 丧 - 坞 - 袖 - 赤 - 萍 - 爽 - 穆 - 娶 - 闷 - 捍 - 膀 - 侈 - 筋 - 逛 - 倩 - 纲 - 遮 - 御 - 姨 - 淮 - 宰 - 叉 - 绵 - 惧 - 钦 - 廊 - 鳄 - 砂 - 浆 - 禽 - 咏 - 瘾 - 饿 - 痴 - 绳 - 碟 - 韵 - 皓 - 廖 - 岭 - 蛙 - 兔 - 芽 - 剖 - 嫖 - 昔 - 哀 - 蔓 - 谦 - 滥 - 赂 - 渊 - 捣 - 佑 - 弈 - 仙 - 澡 - 骤 - 侨 - 奉 - 磅 - 慨 - 筛 - 嘲 - 竣 - 箭 - 荧 - 脖 - 彤 - 豫 - 躁 - 秉 - 鹤 - 幺 - 渔 - 罢 - 贬 - 铲 - 卵 - 逗 - 牧 - 蔬 - 苑 - 沦 - 遏 - 柴 - 庙 - 兽 - 耶 - 魂 - 溜 - 缉 - 俏 - 蕴 - 苛 - 凑 - 婿 - 铸 - 兜 - 蹭 - 鸭 - 朴 - 肋 - 噪 - 焚 - 坍 - 啤 - 钉 - 戚 - 谍 - 挫 - 艇 - 余 - 巷 - 屠 - 咋 - 詹 - 衫 - 浴 - 爹 - 孝 - 瘤 - 霖 - 崩 - 甸 - 悼 - 擒 - 浇 - 雕 - 竖 - 帐 - 萤 - 靡 - 漠 - 傻 - 撼 - 崔 - 筒 - 脊 - 嘛 - 臣 - 禾 - 龟 - 唤 - 呀 - 壤 - 灌 - 邵 - 稻 - 巾 - 葩 - 饥 - 缔 - 舌 - 窜 - 秽 - 茅 - 靓 - 阱 - 钞 - 潼 - 硝 - 墩 - 蝙 - 蝠 - 嫂 - 艘 - 嚣 - 铃 - 扒 - 佬 - 竭 - 赎 - 傍 - 熬 - 悠 - 挨 - 泊 - 攒 - 坪 - 焰 - 螺 - 薇 - 蛛 - 牟 - 忌 - 愧 - 酵 - 迭 - 饶 - 惟 - 钮 - 闵 - 碧 - 徘 - 徊 - 溯 - 棉 - 歪 - 捂 - 蚊 - 锰 - 屁 - 畸 - 肪 - 蹲 - 剔 - 榆 - 撇 - 瑟 - 讶 - 飘 - 蒸 - 诠 - 寂 - 罄 - 莹 - 鹅 - 泣 - 崖 - 珊 - 讳 - 翰 - 蜘 - 仲 - 燥 - 菱 - 滢 - 煎 - 蛮 - 瞻 - 蘑 - 菇 - 隙 - 捆 - 蕉 - 遣 - 宛 - 肆 - 丸 - 磁 - 玥 - 嵌 - 韶 - 枝 - 咪 - 愉 - 呕 - 淤 - 誓 - 辄 - 俯 - 桐 - 舅 - 蓉 - 渭 - 氯 - 溅 - 雁 - 龚 - 恺 - 妖 - 饽 - 荆 - 枯 - 仇 - 坟 - 澜 - 麟 - 藤 - 猎 - 洒 - 茹 - 碌 - 畏 - 涤 - 俞 - 勿 - 蔽 - 罐 - 尹 - 堰 - 儒 - 芮 - 孚 - 哗 - 掐 - 矶 - 椎 - 阐 - 驴 - 蝉 - 焕 - 鄂 - 耻 - 炯 - 衬 - 婉 - 愁 - 梨 - 丛 - 谅 - 膨 - 曙 - 鹿 - 骄 - 缅 - 匆 - 赃 - 蒲 - 睁 - 焱 - 灼 - 刃 - 螃 - 瑕 - 讹 - 禅 - 臀 - 姗 - 媚 - 呛 - 凰 - 瀚 - 埔 - 弓 - 阚 - 湛 - 奕 - 扛 - 齿 - 挟 - 髓 - 狭 - 栈 - 骏 - 崭 - 慑 - 殿 - 祭 - 僻 - 蹬 - 寡 - 呦 - 鞠 - 酱 - 瑰 - 馒 - 坤 - 趟 - 臻 - 咒 - 豹 - 畜 - 冉 - 绎 - 岌 - 甄 - 绞 - 宵 - 庸 - 歇 - 挠 - 氨 - 乙 - 茵 - 岔 - 淄 - 碘 - 淋 - 蓬 - 颅 - 羹 - 浑 - 昧 - 翠 - 峥 - 惕 - 睿 - 芦 - 蚀 - 颓 - 霜 - 钰 - 橘 - 堤 - 凳 - 溶 - 锯 - 幂 - 榴 - 娼 - 汹 - 茫 - 厌 - 绰 - 崎 - 溃 - 撬 - 沾 - 拇 - 疵 - 哦 - 弧 - 弘 - 咽 - 葬 - 阁 - 竿 - 篡 - 隶 - 诟 - 煮 - 丘 - 耿 - 彬 - 敞 - 泻 - 夷 - 隅 - 渎 - 淹 - 骆 - 醋 - 霆 - 涩 - 陀 - 叙 - 梗 - 冶 - 敛 - 痪 - 讽 - 疤 - 螂 - 芒 - 幢 - 炜 - 毯 - 橙 - 拢 - 俨 - 仕 - 氰 - 钾 - 呐 - 株 - 脾 - 烨 - 磕 - 薛 - 窖 - 芷 - 蜕 - 衅 - 歹 - 哒 - 诡 - 摧 - 漆 - 蟑 - 劈 - 呵 - 絮 - 抖 - 娅 - 铝 - 霉 - 芭 - 辜 - 昊 - 嘘 - 哑 - 枢 - 脐 - 庐 - 钠 - 鳌 - 矩 - 锆 - 婧 - 沛 - 饲 - 熄 - 翡 - 屹 - 膏 - 阙 - 搂 - 锣 - 幌 - 橄 - 榄 - 杖 - 旷 - 矫 - 冈 - 舟 - 腊 - 聂 - 拣 - 遛 - 勋 - 窘 - 韧 - 咱 - 拎 - 椒 - 揣 - 殷 - 揪 - 伽 - 贱 - 琼 - 菡 - 闺 - 昭 - 雏 - 蹊 - 黛 - 禹 - 鞍 - 乖 - 汝 - 甫 - 彝 - 泸 - 诬 - 拽 - 毽 - 搅 - 葵 - 旱 - 勉 - 跷 - 畔 - 肘 - 坂 - 漩 - 涡 - 倘 - 醛 - 曦 - 铀 - 杏 - 棕 - 幽 - 裴 - 阮 - 敷 - 茄 - 沧 - 剽 - 恳 - 淳 - 萱 - 袱 - 亥 - 痱 - 腔 - 嫉 - 粹 - 焊 - 诀 - 粪 - 朔 - 黯 - 谜 - 眨 - 祁 - 暧 - 魁 - 辗 - 穗 - 倦 - 剿 - 袍 - 恭 - 炙 - 娴 - 玫 - 锏 - 熏 - 窥 - 堕 - 悟 - 晃 - 缪 - 驿 - 泷 - 雀 - 惫 - 玺 - 剃 - 斐 - 袂 - 梭 - 哄 - 邪 - 岂 - 腻 - 嫩 - 榕 - 谴 - 潇 - 纬 - 侮 - 翅 - 镶 - 坷 - 彪 - 祷 - 匝 - 耽 - 萝 - 窑 - 瑾 - 滤 - 拱 - 哨 - 蠢 - 邢 - 涞 - 
恤 - 泾 - 谤 - 瀑 - 舶 - 懈 - 忱 - 烹 - 晟 - 踞 - 剁 - 珉 - 庚 - 晤 - 壶 - 砾 - 嗅 - 妒 - 匈 - 胰 - 绯 - 荼 - 爪 - 茜 - 桦 - 蜇 - 芜 - 玄 - 葫 - 蚂 - 绊 - 搁 - 霏 - 粘 - 佟 - 雍 - 垮 - 羁 - 娥 - 碱 - 磷 - 钊 - 毙 - 诿 - 绸 - 捏 - 遴 - 畊 - 厮 - 巫 - 猖 - 獗 - 掴 - 辍 - 蜡 - 赣 - 筵 - 芙 - 蒜 - 缆 - 俪 - 鹰 - 笋 - 毋 - 喆 - 鹭 - 蝴 - 汀 - 诽 - 桔 - 篷 - 莽 - 栖 - 饪 - 伺 - 戳 - 谊 - 霄 - 侄 - 滔 - 瞎 - 皱 - 蛟 - 裔 - 烽 - 猿 - 叮 - 绷 - 腺 - 暨 - 沥 - 喧 - 囤 - 掠 - 陡 - 膺 - 痒 - 饵 - 戎 - 褚 - 丐 - 渤 - 帜 - 娄 - 洼 - 禄 - 婵 - 琢 - 躯 - 禺 - 峙 - 踹 - 怜 - 炖 - 剐 - 缚 - 襄 - 枫 - 绽 - 庾 - 斧 - 穴 - 寇 - 蝇 - 鞭 - 阎 - 矢 - 糙 - 巍 - 蒿 - 殒 - 蛰 - 囧 - 卜 - 宙 - 珮 - 鸦 - 璞 - 翟 - 酗 - 褒 - 豁 - 镑 - 耷 - 棠 - 垦 - 韬 - 荫 - 窨 - 鸽 - 羲 - 懒 - 躬 - 匕 - 犀 - 吼 - 珀 - 昙 - 樱 - 蹿 - 抉 - 苍 - 汛 - 铉 - 镉 - 喔 - 邯 - 郸 - 噱 - 瓯 - 沼 - 捻 - 苯 - 蹼 - 麋 - 阀 - 煞 - 踝 - 缭 - 菊 - 竺 - 峭 - 攥 - 癖 - 肛 - 泔 - 拯 - 窟 - 靳 - 舵 - 嘱 - 昱 - 勺 - 吾 - 丫 - 觅 - 醇 - 磋 - 徙 - 陨 - 惺 - 渍 - 炬 - 栽 - 晏 - 颂 - 奴 - 榔 - 驭 - 嚼 - 赡 - 豚 - 蔷 - 梓 - 梧 - 哽 - 晗 - 汞 - 嫣 - 蕊 - 祺 - 疹 - 壹 - 噬 - 皂 - 矗 - 悚 - 憧 - 憬 - 拷 - 扁 - 廓 - 蹴 - 岚 - 瑛 - 崴 - 栗 - 囚 - 涿 - 礁 - 晔 - 殡 - 璀 - 淞 - 隋 - 踵 - 钵 - 煊 - 赘 - 瞧 - 寞 - 陋 - 骷 - 髅 - 秸 - 秆 - 夯 - 荔 - 襁 - 褓 - 笨 - 沮 - 瞅 - 怂 - 茗 - 甥 - 亟 - 杳 - 煦 - 挚 - 棵 - 祠 - 嗯 - 枕 - 粟 - 泌 - 蜀 - 寥 - 遐 - 涝 - 辫 - 籁 - 窍 - 聋 - 逍 - 跤 - 凹 - 釜 - 嘀 - 嗒 - 淝 - 藜 - 翱 - 硚 - 叼 - 痹 - 腼 - 腆 - 伎 - 骋 - 愕 - 腥 - 拮 - 轧 - 癫 - 橡 - 膊 - 觑 - 寅 - 砒 - 趾 - 颐 - 漳 - 峨 - 呜 - 淆 - 凿 - 壕 - 铨 - 莆 - 筷 - 璧 - 譬 - 岖 - 抠 - 笛 - 厥 - 砺 - 喉 - 酌 - 簧 - 鲸 - 踊 - 牡 - 嬛 - 缜 - 奂 - 熹 - 闽 - 馊 - 胯 - 喇 - 伶 - 墟 - 煜 - 耘 - 榷 - 骁 - 猩 - 辙 - 狸 - 滕 - 诵 - 窒 - 恍 - 髦 - 诫 - 榨 - 熠 - 蔺 - 薯 - 歆 - 粤 - 夭 - 拌 - 唏 - 厄 - 吝 - 眷 - 峪 - 拙 - 咎 - 粥 - 痰 - 琅 - 羚 - 莘 - 憨 - 瞰 - 炅 - 孜 - 亢 - 缮 - 焯 - 咄 - 暇 - 矮 - 汲 - 灶 - 闰 - 奚 - 汶 - 珲 - 麓 - 憋 - 崂 - 镳 - 殃 - 卉 - 诧 - 矣 - 屎 - 聆 - 芋 - 屑 - 罂 - 籽 - 绚 - 卞 - 枉 - 汕 - 懋 - 媲 - 啧 - 掣 - 嬉 - 仨 - 姬 - 懿 - 馅 - 胺 - 撂 - 睫 - 蛐 - 萃 - 眈 - 飚 - 毓 - 涅 - 昼 - 橱 - 驼 - 涠 - 谩 - 婶 - 膛 - 拄 - 绣 - 栅 - 邬 - 怠 - 鄙 - 哉 - 跺 - 帘 - 沓 - 搀 - 腌 - 羿 - 泵 - 鄞 - 郡 - 烃 - 愚 - 蕙 - 垤 - 锌 - 柠 - 檬 - 葱 - 垢 - 匮 - 卦 - 懊 - 掺 - 叱 - 坯 - 糯 - 覆 - 铆 - 琬 - 抡 - 潢 - 棺 - 塾 - 飓 - 诅 - 翩 - 揍 - 檀 - 鳝 - 讪 - 熔 - 杞 - 啃 - 昀 - 紊 - 敖 - 璐 - 蔗 - 槌 - 铐 - 搡 - 磐 - 宕 - 栓 - 叭 - 戟 - 顷 - 濒 - 窦 - 摁 - 俐 - 瞳 - 蚕 - 鹊 - 迂 - 畿 - 瓣 - 媞 - 寝 - 蹦 - 嗑 - 袒 - 殉 - 稚 - 俘 - 搪 - 沽 - 妃 - 嗓 - 胫 - 町 - 莴 - 苣 - 痘 - 蔑 - 皖 - 枞 - 忐 - 忑 - 靴 - 菁 - 姥 - 诙 - 嚷 - 焉 - 沣 - 霹 - 雳 - 僚 - 尧 - 嘎 - 诩 - 咫 - 柬 - 惮 - 狄 - 匀 - 裆 - 黏 - 釉 - 膳 - 渺 - 苟 - 瑶 - 唾 - 瘠 - 讧 - 睦 - 弦 - 庇 - 袄 - 噩 - 扼 - 戛 - 禀 - 恿 - 滁 - 麾 - 筱 - 瘀 - 褪 - 槟 - 缨 - 绒 - 犷 - 茸 - 惋 - 嗤 - 寮 - 褂 - 咳 - 缀 - 谙 - 涧 - 炽 - 缄 - 鹜 - 砌 - 贮 - 庵 - 隧 - 卤 - 跆 - 皋 - 蝗 - 洱 - 圪 - 邑 - 锄 - 荟 - 渚 - 苇 - 孰 - 鹃 - 哼 - 呃 - 琛 - 痣 - 摹 - 痼 - 镯 - 刁 - 秧 - 腩 - 鳞 - 乍 - 颚 - 慷 - 氓 - 惦 - 卑 - 挝 - 熨 - 濮 - 胳 - 瓢 - 砰 - 溧 - 锷 - 鸠 - 犒 - 姝 - 蹄 - 宸 - 侥 - 锭 - 佶 - 浊 - 婪 - 磺 - 咤 - 迢 - 檐 - 邺 - 掂 - 渲 - 嚎 - 祛 - 伢 - 叛 - 撮 - 甬 - 淌 - 瀛 - 朽 - 陂 - 帼 - 铿 - 锵 - 漓 - 驯 - 鲨 - 抒 - 茁 - 柿 - 貔 - 貅 - 钝 - 鳅 - 嚏 - 暮 - 瑚 - 荤 - 蜓 - 垣 - 颤 - 溥 - 臃 - 戮 - 枣 - 佼 - 拗 - 哆 - 嗦 - 惚 - 鸥 - 倚 - 嗨 - 舸 - 赐 - 姊 - 憔 - 悴 - 铰 - 黝 - 屿 - 秃 - 嘻 - 楞 - 棱 - 袈 - 裟 - 汴 - 揉 - 髋 - 悸 - 榻 - 逞 - 晾 - 屌 - 闳 - 痊 - 袜 - 扉 - 琶 - 摒 - 捺 - 匠 - 窈 - 窕 - 飒 - 猬 - 蜚 - 萋 - 蚯 - 蚓 - 鲟 - 澈 - 樟 - 悖 - 玖 - 俾 - 抿 - 彷 - 彿 - 虱 - 狙 - 鲶 - 槿 - 烘 - 挎 - 狰 - 狞 - 邃 - 瞪 - 俚 - 涕 - 谬 - 睬 - 蜷 - 兢 - 镍 - 砷 - 菠 - 怦 - 凄 - 卯 - 獒 - 渀 - 辘 - 滇 - 燎 - 噎 - 蝎 - 綦 - 鄢 - 捎 - 瞿 - 蜿 - 蜒 - 禧 - 榈 - 锹 - 殭 - 爵 - 盹 - 淖 - 啼 - 瓮 - 鳖 - 镖 - 珑 - 罹 - 殆 - 掖 - 柞 - 缸 - 绅 - 棘 - 祉 - 胱 - 殓 - 嗡 - 嗷 - 箍 - 圩 - 耒 - 婕 - 腑 - 萦 - 鹞 - 珜 - 啵 - 瑙 - 葆 - 逡 - 嗽 - 饕 - 餮 - 隼 - 妞 - 饺 - 叨 - 酋 - 恙 - 泗 - 弩 - 骜 - 铎 - 酶 - 蚝 - 烁 - 匾 - 侬 - 藻 - 馥 - 骥 - 槐 - 缕 - 椿 - 袆 - 琊 - 稣 - 藩 - 迸 - 蹂 - 躏 - 隽 - 俸 - 郫 - 簸 - 砥 - 骸 - 掮 - 斛 - 啸 - 璋 - 垛 - 札 - 邋 - 遢 - 蕲 - 哇 - 碴 - 邛 - 崃 - 觐 - 笙 - 裳 - 泞 - 蚌 - 醍 - 醐 - 拴 - 舜 - 沅 - 懵 - 谕 - 帚 - 螳 - 噼 - 啪 - 漱 - 郜 - 碉 - 圭 - 谀 - 轶 - 舀 - 呲 - 啶 - 氟 - 琏 - 垅 - 娩 - 乾 - 鏖 
- 牾 - 肮 - 啕 - 吏 - 涓 - 氦 - 锥 - 桎 - 吿 - 烊 - 斟 - 汾 - 岐 - 耄 - 耋 - 嗲 - 胛 - 疚 - 骇 - 癣 - 磡 - 侑 - 漾 - 碚 - 琉 - 惬 - 遁 - 耸 - 岱 - 糗 - 缙 - 肴 - 梵 - 僮 - 鸵 - 悯 - 孪 - 莅 - 戬 - 霁 - 簇 - 逵 - 倜 - 傥 - 馋 - 蓁 - 衙 - 蛀 - 蔫 - 崧 - 吟 - 琰 - 唬 - 渥 - 岷 - 仡 - 涎 - 鸳 - 鸯 - 镊 - 妧 - 嬷 - 嫦 - 嫔 - 沐 - 伉 - 嶝 - 锢 - 筐 - 蜥 - 蜴 - 泱 - 骅 - 吆 - 撩 - 怯 - 叩 - 哟 - 啬 - 岬 - 笃 - 玳 - 瑁 - 邝 - 咣 - 矜 - 嘭 - 馗 - 婀 - 黔 - 锟 - 啰 - 翌 - 铠 - 貉 - 獾 - 酣 - 楣 - 佃 - 琵 - 茆 - 皙 - 凋 - 敝 - 匣 - 嵘 - 宓 - 茎 - 楂 - 竲 - 瘪 - 侗 - 铣 - 薰 - 砲 - 羣 - 淼 - 襟 - 妊 - 娠 - 罡 - 瘁 - 椰 - 烙 - 呗 - 荃 - 皎 - 殚 - 腋 - 骼 - 腓 - 榭 - 隘 - 唉 - 铮 - 狩 - 抨 - 峁 - 粱 - 阂 - 厩 - 莠 - 吩 - 咐 - 瞌 - 蜊 - 恬 - 膑 - 踉 - 跄 - 颍 - 朐 - 疝 - 毂 - 秣 - 舛 - 炊 - 漯 - 泠 - 喘 - 撵 - 狡 - 猾 - 铂 - 钛 - 荞 - 拭 - 丞 - 漭 - 绌 - 埜 - 掰 - 狈 - 锜 - 菩 - 弛 - 寰 - 秤 - 灞 - 黍 - 蓟 - 嵛 - 榉 - 幄 - 颊 - 缤 - 朦 - 胧 - 冥 - 砝 - 镀 - 夙 - 燊 - 荚 - 浈 - 苡 - 眺 - 陬 - 寐 - 佘 - 濑 - 仄 - 楔 - 胚 - 嵩 - 洙 - 诓 - 阜 - 浚 - 觊 - 觎 - 曰 - 怵 - 兖 - 稠 - 嵋 - 艋 - 篪 - 琥 - 玟 - 褴 - 褛 - 喱 - 虞 - 魇 - 凇 - 徉 - 嘟 - 臆 - 犊 - 哎 - 靑 - 俺 - 塬 - 妯 - 娌 - 蜈 - 蚣 - 恣 - 沏 - 磴 - 霎 - 趸 - 麒 - 氪 - 缇 - 沁 - 疃 - 恸 - 瘩 - 暄 - 憩 - 祯 - 惰 - 溉 - 沱 - 诲 - 笈 - 擘 - 亳 - 孺 - 忪 - 瞟 - 擞 - 瘸 - 掬 - 唁 - 蹚 - 匡 - 粕 - 鲷 - 泓 - 叵 - 嗣 - 眯 - 炷 - 珺 - 漕 - 谑 - 咯 - 嗬 - 缰 - 卲 - 壑 - 靶 - 隍 - 唠 - 濡 - 盎 - 骊 - 腱 - 鞘 - 拧 - 痫 - 宦 - 诶 - 椋 - 鼾 - 湍 - 毗 - 酪 - 赦 - 炕 - 焘 - 奘 - 邂 - 逅 - 妄 - 骐 - 卒 - 喵 - 觥 - 眬 - 纣 - 憷 - 覃 - 孀 - 芊 - 孢 - 惶 - 迥 - 纰 - 咀 - 鸾 - 箫 - 晦 - 泯 - 砚 - 吭 - 祢 - 揩 - 刨 - 珏 - 撸 - 兀 - 痉 - 挛 - 胤 - 巿 - 纶 - 镁 - 哺 - 咔 - 嚓 - 稼 - 焖 - 妤 - 妩 - 潞 - 雌 - 栾 - 侍 - 煲 - 嫚 - 竽 - 恪 - 霈 - 赝 - 莺 - 眶 - 桓 - 槎 - 馑 - 涮 - 枭 - 徇 - 洵 - 垌 - 昵 - 褶 - 喽 - 脯 - 孱 - 遨 - 谚 - 烷 - 搽 - 酯 - 枷 - 桉 - 咧 - 窿 - 拈 - 斓 - 跛 - 蹶 - 瘟 - 俭 - 靛 - 脍 - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: null zero_infinity: true joint_net_conf: null use_preprocessor: true token_type: char bpemodel: null non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' short_noise_thres: 0.5 frontend: default frontend_conf: fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 27 num_freq_mask: 2 apply_time_mask: true time_mask_width_ratio_range: - 0.0 - 0.05 num_time_mask: 10 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_zh_char_sp/train/feats_stats.npz model: espnet model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false preencoder: null preencoder_conf: {} encoder: e_branchformer encoder_conf: output_size: 256 attention_heads: 4 attention_layer_type: rel_selfattn pos_enc_layer_type: rel_pos rel_pos_type: latest cgmlp_linear_units: 1024 cgmlp_conv_kernel: 31 use_linear_after_conv: false gate_activation: identity num_blocks: 12 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.1 input_layer: conv2d layer_drop_rate: 0.0 linear_units: 1024 positionwise_layer_type: linear use_ffn: true macaron_ffn: true merge_conv_kernel: 31 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: attention_heads: 4 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.0 src_attention_dropout_rate: 0.0 preprocessor: default preprocessor_conf: {} required: - output_dir - token_list version: '202209' distributed: true ``` </details>
2ade90079a137fe09387df961df9c888
apache-2.0
['generated_from_trainer']
false
finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set:
- Loss: 0.3140
- Accuracy: 0.88
- F1: 0.8816
9f0eb17d9e187c562a6bf504bf7004fe
apache-2.0
['finnish', 't5', 't5x', 'seq2seq']
false
T5-large-nl36 for Finnish Pretrained T5 model on the Finnish language using a span-based masked language modeling (MLM) objective. T5 was introduced in [this paper](https://arxiv.org/abs/1910.10683) and first released at [this page](https://github.com/google-research/text-to-text-transfer-transformer). **Note:** The Hugging Face inference widget is deactivated because this model needs text-to-text fine-tuning on a specific downstream task to be useful in practice. As an example of a fine-tuned Finnish T5 model, you can check [Finnish-NLP/t5-small-nl24-casing-punctuation-correction](https://huggingface.co/Finnish-NLP/t5-small-nl24-casing-punctuation-correction), which has been fine-tuned to correct missing casing and punctuation in Finnish text.
2bc4620568ed4256f60f2d79d8084c28
apache-2.0
['finnish', 't5', 't5x', 'seq2seq']
false
Model description T5 is an encoder-decoder model and treats all NLP problems in a text-to-text format. Finnish T5 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and outputs from those texts. More precisely, it was pretrained with the span-based masked language modeling (MLM) objective. Spans of the input sequence are masked by so-called sentinel tokens (a.k.a. unique mask tokens) and the output sequence is formed as a concatenation of the same sentinel tokens and the real masked tokens. This way, the model learns an inner representation of the Finnish language. This model used the [T5 v1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) improvements compared to the original T5 model during pretraining.
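To make the sentinel-token scheme described above concrete, here is the span-corruption example from the T5 paper, rendered as an input/target pair (English text for readability):

```python
# Span corruption: masked spans are replaced by sentinel tokens in the
# input; the target is the concatenation of the same sentinels with the
# dropped-out spans, closed by a final sentinel.
original = "Thank you for inviting me to your party last week ."
inputs   = "Thank you <extra_id_0> me to your party <extra_id_1> week ."
targets  = "<extra_id_0> for inviting <extra_id_1> last <extra_id_2>"
```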
0142febe8a7698477801461f229d6688
apache-2.0
['finnish', 't5', 't5x', 'seq2seq']
false
Improvements compared to the original T5 model during the pretraining:
- GEGLU activation in the feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202)
- Dropout was turned off in pretraining (quality win). Dropout should be re-enabled during fine-tuning
- Pretrained on the span-based masked language modeling (MLM) objective only, without mixing in the downstream tasks
- No parameter sharing between the embedding and classifier layer

This model also used the "efficient" T5 architecture findings presented in [this paper](https://arxiv.org/abs/2109.10686). In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures of similar parameter count. To be more precise, model depth is defined as the number of transformer blocks that are stacked sequentially. This model uses the [t5-efficient-large-nl36](https://huggingface.co/google/t5-efficient-large-nl36) architecture's layer depth, which means both the encoder and the decoder have 36 transformer layers, compared to the original T5 "large" model's 24 transformer layers. In total, this model has 1425 million parameters.
bbb2bc7f290e1ee0b97fb29289baaeb3
apache-2.0
['finnish', 't5', 't5x', 'seq2seq']
false
Intended uses & limitations This model was only pretrained in a self-supervised way, excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, like text classification, unlike Google's original T5 model. **Note:** You most likely need to fine-tune these T5 models without mixed precision, i.e. with full fp32 precision. You can also find more fine-tuning tips from [here](https://discuss.huggingface.co/t/t5-finetuning-tips), for example.
8ffbaab39eea9fd5df125f1ef19642a6
apache-2.0
['finnish', 't5', 't5x', 'seq2seq']
false
How to use Here is how to use this model in PyTorch:

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/t5-large-nl36-finnish")
model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/t5-large-nl36-finnish")
```

and in TensorFlow:

```python
from transformers import T5Tokenizer, TFT5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/t5-large-nl36-finnish")
model = TFT5ForConditionalGeneration.from_pretrained("Finnish-NLP/t5-large-nl36-finnish", from_pt=True)
```
c67d7928bb7594ef77ea13b2e33cc14a
apache-2.0
['finnish', 't5', 't5x', 'seq2seq']
false
Limitations and bias The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
28f1db03e7d7ed2996522ef4f11c1eb1
apache-2.0
['finnish', 't5', 't5x', 'seq2seq']
false
Training data This Finnish T5 model was pretrained on the combination of six datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned) - mC4 is a colossal, cleaned, multilingual version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia) - we used the Finnish subset of the Wikipedia (August 2021) dataset
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Yle Finnish News Archive 2019-2020](http://urn.fi/urn:nbn:fi:lb-2021050401)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)

Raw datasets were automatically cleaned to filter out bad quality and non-Finnish examples. Also, a [perplexity](https://huggingface.co/course/chapter7/3#perplexity-for-language-models) score was used for additional filtering, as described below.
731269338b8e8ba5b6ee2f318ab39888
apache-2.0
['finnish', 't5', 't5x', 'seq2seq']
false
A perplexity score was calculated for all texts with a KenLM model trained on very clean Finnish texts only. This perplexity score can then be used to determine how "clean" the Finnish in each text is. Lastly, all datasets were concatenated and the 90th-percentile perplexity score was used as a threshold to filter out the worst-quality 10% of texts. Together these cleaned datasets were around 76GB of text.
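A minimal sketch of the perplexity-filtering idea described above. The KenLM model path and the example texts are placeholders; the actual cleaning code lives in the dataset repo.

```python
import kenlm  # pip install kenlm
import numpy as np

# Placeholder path: a KenLM model trained on clean Finnish text.
model = kenlm.Model("fi_clean.arpa")

texts = ["Tämä on siisti suomenkielinen lause.", "asdf qwerty zxcv 1234"]
perplexities = [model.perplexity(t) for t in texts]

# Keep the 90% of texts at or below the 90th-percentile perplexity.
threshold = np.percentile(perplexities, 90)
kept = [t for t, p in zip(texts, perplexities) if p <= threshold]
```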
8dc14f2accd091b98206decb2e0c565f
apache-2.0
['finnish', 't5', 't5x', 'seq2seq']
false
Preprocessing The texts are tokenized using WordPiece and a vocabulary size of 32000. The inputs and the outputs are sequences of 512 consecutive tokens. Texts are not lowercased, so this model is case-sensitive: it makes a difference between finnish and Finnish.
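A quick sketch of what case-sensitivity means in practice: the lowercase and capitalized forms tokenize to different ids (no specific id values are implied).

```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/t5-large-nl36-finnish")

# Case is preserved, so the two forms map to different token ids.
print(tokenizer("finnish").input_ids)
print(tokenizer("Finnish").input_ids)
```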
8191526b910102e668f90366da32112e
apache-2.0
['finnish', 't5', 't5x', 'seq2seq']
false
Pretraining The model was trained on a TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 1.87M steps with a batch size of 32 (in total 31B tokens). The optimizer used was AdaFactor with learning rate warmup for 10K steps at a constant learning rate of 1e-3, followed by an inverse square root decay of the learning rate. Training code was from Google's Jax/Flax based [t5x framework](https://github.com/google-research/t5x) and some t5x task definitions were adapted from [Per's t5x work](https://huggingface.co/pere).
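The schedule described reads as the standard T5 recipe. A sketch of it as a plain function follows; the exact decay form (inverse square root anchored at the warmup step) is an assumption.

```python
def lr_at(step: int, warmup_steps: int = 10_000, base_lr: float = 1e-3) -> float:
    """Constant learning rate during warmup, then inverse square-root decay."""
    if step <= warmup_steps:
        return base_lr
    return base_lr * (warmup_steps / step) ** 0.5

print(lr_at(5_000))   # 0.001 (constant warmup phase)
print(lr_at(40_000))  # 0.0005 (decayed by sqrt(10000/40000) = 0.5)
```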
b77a96c3bb4c6f565feb12d8912c701b
apache-2.0
['finnish', 't5', 't5x', 'seq2seq']
false
Evaluation results Evaluation was done by fine-tuning the model on a downstream text classification task with two different labeled Finnish datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Classification fine-tuning was done with a sequence length of 128 tokens. When fine-tuned on those datasets, this model (the seventh row of the table) achieves the following accuracy results compared to our other T5 models and their parameter counts:

| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|Finnish-NLP/t5-tiny-nl6-finnish | 31 million |92.80 |69.07 |
|Finnish-NLP/t5-mini-nl8-finnish | 72 million |93.89 |71.43 |
|Finnish-NLP/t5-small-nl16-finnish | 184 million |94.46 |74.00 |
|Finnish-NLP/t5-small-nl24-finnish | 260 million |**94.68** |74.90 |
|Finnish-NLP/byt5-base-finnish | 582 million |92.33 |73.13 |
|Finnish-NLP/t5-base-nl36-finnish | 814 million |94.40 |**75.97** |
|Finnish-NLP/t5-large-nl36-finnish | 1425 million |94.17 |73.50 |

Fine-tuning Google's multilingual mT5 models on the same datasets, we can clearly see that our monolingual Finnish T5 models achieve much better results on Finnish text classification:

| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|google/mt5-small | 301 million |91.51 |64.10 |
|google/mt5-base | 583 million |92.71 |68.40 |
4885da85db3d34993816af11daa9957d
apache-2.0
['finnish', 't5', 't5x', 'seq2seq']
false
Team Members - Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/) - Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/) Feel free to contact us for more details 🤗
4f3d666ea10e42686d1ef24694fd5916
mit
['generated_from_trainer']
false
roberta-large-finetuned-non-code-mixed-DS This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 3.1265
- Accuracy: 0.6936
- Precision: 0.6794
- Recall: 0.6782
- F1: 0.6784
97bc597b946b10a53446bbb8f16d05d6
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 43
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
7d4b4ef520b493ac65e5c88e209d9b1f
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.0688 | 1.0 | 463 | 0.8847 | 0.6127 | 0.6038 | 0.6032 | 0.6014 |
| 0.8226 | 2.0 | 926 | 0.7622 | 0.6796 | 0.6769 | 0.6822 | 0.6716 |
| 0.6844 | 2.99 | 1389 | 0.8391 | 0.6828 | 0.6718 | 0.6563 | 0.6602 |
| 0.536 | 3.99 | 1852 | 0.8218 | 0.6990 | 0.6950 | 0.6807 | 0.6844 |
| 0.3938 | 4.99 | 2315 | 0.9616 | 0.6958 | 0.6967 | 0.7056 | 0.6880 |
| 0.2674 | 5.99 | 2778 | 1.1389 | 0.7033 | 0.6868 | 0.6895 | 0.6879 |
| 0.2073 | 6.98 | 3241 | 1.5578 | 0.6915 | 0.6786 | 0.6807 | 0.6792 |
| 0.1641 | 7.98 | 3704 | 1.9538 | 0.6850 | 0.6734 | 0.6715 | 0.6717 |
| 0.1394 | 8.98 | 4167 | 2.3230 | 0.6893 | 0.6733 | 0.6742 | 0.6736 |
| 0.1248 | 9.98 | 4630 | 2.4050 | 0.6936 | 0.6824 | 0.6819 | 0.6815 |
| 0.1002 | 10.98 | 5093 | 2.4227 | 0.6947 | 0.6832 | 0.6932 | 0.6795 |
| 0.0776 | 11.97 | 5556 | 2.5782 | 0.7012 | 0.6876 | 0.6923 | 0.6887 |
| 0.0685 | 12.97 | 6019 | 2.7967 | 0.6915 | 0.6836 | 0.6930 | 0.6820 |
| 0.045 | 13.97 | 6482 | 2.8884 | 0.7044 | 0.6873 | 0.6855 | 0.6863 |
| 0.0462 | 14.97 | 6945 | 2.9528 | 0.6947 | 0.6754 | 0.6749 | 0.6751 |
| 0.0444 | 15.97 | 7408 | 3.0356 | 0.6904 | 0.6778 | 0.6805 | 0.6778 |
| 0.0343 | 16.96 | 7871 | 3.0123 | 0.6936 | 0.6784 | 0.6762 | 0.6771 |
| 0.0245 | 17.96 | 8334 | 3.0160 | 0.6893 | 0.6720 | 0.6735 | 0.6727 |
| 0.0198 | 18.96 | 8797 | 3.1597 | 0.6904 | 0.6741 | 0.6727 | 0.6732 |
| 0.0189 | 19.96 | 9260 | 3.1265 | 0.6936 | 0.6794 | 0.6782 | 0.6784 |
3adc3d3408a12ebebf4800b2479716db
apache-2.0
['image-classification', 'timm']
false
Model card for levit_conv_128.fb_dist_in1k A LeViT image classification model using convolutional mode (with nn.Conv2d and nn.BatchNorm2d). Pretrained on ImageNet-1k using distillation by paper authors.
f726de92fe605410c8929ff037634bea
apache-2.0
['image-classification', 'timm']
false
Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 9.2
  - GMACs: 0.4
  - Activations (M): 2.7
  - Image size: 224 x 224
- **Papers:**
  - LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference: https://arxiv.org/abs/2104.01136
- **Original:** https://github.com/facebookresearch/LeViT
- **Dataset:** ImageNet-1k
cf8a50711581a2f6a3405e625d12bc58
apache-2.0
['image-classification', 'timm']
false
Image Classification

```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch

img = Image.open(
    urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model('levit_conv_128.fb_dist_in1k', pretrained=True)
model = model.eval()

# get model-specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
b070a6794a06e634a0d1f4a096bae51b
apache-2.0
['image-classification', 'timm']
false
Image Embeddings

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(
    urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'levit_conv_128.fb_dist_in1k',
    pretrained=True,
    num_classes=0,  # remove the classifier head
)
model = model.eval()

# get model-specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor
```
a03d5033ea7839b641e3a791bad8099b
apache-2.0
['image-classification', 'timm']
false
Feature Map Extraction

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(
    urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'levit_conv_128.fb_dist_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model-specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # a list of feature map tensors

for o in output:
    print(o.shape)
```
9d9c254da6f58d731fdf5a6566a6ed46
apache-2.0
['image-classification', 'timm']
false
Model Comparison |model |top1 |top5 |param_count|img_size| |-----------------------------------|------|------|-----------|--------| |levit_384.fb_dist_in1k |82.596|96.012|39.13 |224 | |levit_conv_384.fb_dist_in1k |82.596|96.012|39.13 |224 | |levit_256.fb_dist_in1k |81.512|95.48 |18.89 |224 | |levit_conv_256.fb_dist_in1k |81.512|95.48 |18.89 |224 | |levit_conv_192.fb_dist_in1k |79.86 |94.792|10.95 |224 | |levit_192.fb_dist_in1k |79.858|94.792|10.95 |224 | |levit_128.fb_dist_in1k |78.474|94.014|9.21 |224 | |levit_conv_128.fb_dist_in1k |78.474|94.02 |9.21 |224 | |levit_128s.fb_dist_in1k |76.534|92.864|7.78 |224 | |levit_conv_128s.fb_dist_in1k |76.532|92.864|7.78 |224 |
dfd7e346bd60c54b28801348dcb77b22
apache-2.0
['image-classification', 'timm']
false
Citation ```bibtex @InProceedings{Graham_2021_ICCV, author = {Graham, Benjamin and El-Nouby, Alaaeldin and Touvron, Hugo and Stock, Pierre and Joulin, Armand and Jegou, Herve and Douze, Matthijs}, title = {LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference}, booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)}, month = {October}, year = {2021}, pages = {12259-12269} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/rwightman/pytorch-image-models}} } ```
f78c2476c83ce035a84a0669af2151e8
apache-2.0
['deep-narrow']
false
T5-Efficient-SMALL-NL32 (Deep-Narrow version) T5-Efficient-SMALL-NL32 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper: > We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased > before considering any other forms of uniform scaling across other dimensions. This is largely due to > how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a > tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, > a tall base model might also generally more efficient compared to a large model. We generally find > that, regardless of size, even if absolute performance might increase as we continue to stack layers, > the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 > layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., > params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, > FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to > consider. To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. A sequence of word embeddings is therefore processed sequentially by each transformer block.
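For illustration, the checkpoint can be loaded like any other T5 model. This is a minimal sketch, assuming the checkpoint is hosted under the repo id `google/t5-efficient-small-nl32`; as a pretrained-only model it must be fine-tuned before practical use:

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Repo id assumed from the checkpoint name; adjust if it is hosted elsewhere.
tokenizer = T5Tokenizer.from_pretrained("google/t5-efficient-small-nl32")
model = T5ForConditionalGeneration.from_pretrained(
    "google/t5-efficient-small-nl32",
    torch_dtype=torch.float16,  # half precision roughly halves the memory footprint
)

print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.2f}M parameters")
```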
a9d47528be23e5dd9b5772c51405c4df
apache-2.0
['deep-narrow']
false
Details model architecture This model checkpoint - **t5-efficient-small-nl32** - is of model type **Small** with the following variations: - **nl** is **32** It has **251.49** million parameters and thus requires *ca.* **1005.96 MB** of memory in full precision (*fp32*) or **502.98 MB** of memory in half precision (*fp16* or *bf16*). A summary of the *original* T5 model architectures can be seen here: | Model | nl (el/dl) | ff | dm | kv | nh |
acaec5ee55f058025eb2eea478161bd9
apache-2.0
['tabular-regression', 'baseline-trainer']
false
Baseline Model trained on outhimar_64 to apply regression on Close

**Metrics of the best model:**

| Metric | Value |
|---|---|
| r2 | 0.999858 |
| neg_mean_squared_error | -1.067685 |

Best model: Ridge(alpha=10)

**See model plot below:**
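For orientation, such a dabl-style baseline boils down to roughly the following sketch (`outhimar_64.csv` is a hypothetical file name; column names are taken from the feature table in the plot):

```python
import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Hypothetical file; feature columns mirror the preprocessor table shown below.
df = pd.read_csv("outhimar_64.csv")
X = df[["Open", "High", "Low", "Adj Close", "Volume"]]
y = df["Close"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = Ridge(alpha=10).fit(X_train, y_train)
print("r2:", r2_score(y_test, model.predict(X_test)))
```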
606a334d1bc3b29d17b5b97f903ac999
apache-2.0
['tabular-regression', 'baseline-trainer']
false
993a22c62ab2e76c7b8cf68b83ba2c81
apache-2.0
['tabular-regression', 'baseline-trainer']
false
EasyPreprocessor feature-type detection (6 rows × 7 columns; some columns elided in the original output):

| Column | continuous | dirty_float | low_card_int | … | date | free_string | useless |
|---|---|---|---|---|---|---|---|
| Date | False | False | False | … | True | False | False |
| Open | True | False | False | … | False | False | False |
| High | True | False | False | … | False | False | False |
| Low | True | False | False | … | False | False | False |
| Adj Close | True | False | False | … | False | False | False |
| Volume | True | False | False | … | False | False | False |
97bf1de7bbc988a50f78f7a557ee568b
apache-2.0
['tabular-regression', 'baseline-trainer']
false
Fitted pipeline: `Pipeline(steps=[…, Ridge(alpha=10)])`

**In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook. On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.**
69efd657fac2d47b2867aaf272e57b75
apache-2.0
['tabular-regression', 'baseline-trainer']
false
Pipeline steps: an EasyPreprocessor (with the feature types shown above) followed by Ridge(alpha=10).

**Disclaimer:** This model is trained with the dabl library as a baseline; for better results, use [AutoTrain](https://huggingface.co/autotrain).

**Logs of training**, including the models tried in the process, can be found in logs.txt
3bc0cebeb1209eb55dc3a87012aa201e
apache-2.0
['multiberts', 'multiberts-seed_15']
false
MultiBERTs - Seed 15 MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model
539a6c7f360342cfa7210a8cf9b062a2
apache-2.0
['multiberts', 'multiberts-seed_15']
false
Model Description This model is a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
33959892b5cd87efe8b3d1f9201fd8a6
apache-2.0
['multiberts', 'multiberts-seed_15']
false
How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on TensorFlow:

```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_15')
model = TFBertModel.from_pretrained("google/multiberts-seed_15")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

PyTorch version:

```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_15')
model = BertModel.from_pretrained("google/multiberts-seed_15")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
fe8dffa39e19c9e222770bb3d62c9f79
apache-2.0
['Tensorflow']
false
Tensorpack's Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50, trained on PubTabNet for semantic segmentation of tables. The model and its training code have been mainly taken from: [Tensorpack](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN). Regarding the dataset, please check: [Xu Zhong et al. - Image-based table recognition: data, model, and evaluation](https://arxiv.org/abs/1911.10683). The model has been trained on detecting cells from tables. Note that the dataset contains tables only; therefore, a table detection task must be performed before detecting cells. The code has been adapted so that it can be used in a **deep**doctection pipeline.
8eb46c163e801d342674a3f600ef15d7
apache-2.0
['Tensorflow']
false
How this model can be used This model can be used within a full **deep**doctection pipeline, along with table recognition and OCR. Check the general instructions in this [Get_started](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Get_Started.ipynb) tutorial; a rough usage sketch follows below.
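A rough usage sketch, assuming the current **deep**doctection API (`get_dd_analyzer` builds the default pipeline; method names may differ across versions):

```python
import deepdoctection as dd

# Build the default analyzer, which chains layout analysis,
# table/cell recognition and OCR into one pipeline.
analyzer = dd.get_dd_analyzer()

df = analyzer.analyze(path="path/to/document.pdf")  # hypothetical input document
df.reset_state()  # required before iterating over the dataflow

for page in df:
    print(page.tables)
```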
6a9b019a9b11db16d11d42be5428ecda
apache-2.0
['Tensorflow']
false
This is an inference model only To reduce the size of the checkpoint, we removed all variables that are not necessary for inference. It therefore cannot be used for fine-tuning. To fine-tune this model, please check this [model](https://huggingface.co/deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_c).
5adc2c06c2cb5be9b7f2b2be498b3867
apache-2.0
['Tensorflow']
false
How this model was trained To recreate the model run with the **deep**doctection framework, run:

```python
import os

from deep_doctection.datasets import DatasetRegistry
from deep_doctection.eval import MetricRegistry
from deep_doctection.utils import get_configs_dir_path
from deep_doctection.train import train_faster_rcnn

pubtabnet = DatasetRegistry.get_dataset("pubtabnet")
pubtabnet.dataflow.categories.filter_categories(categories="CELL")

path_config_yaml = os.path.join(get_configs_dir_path(), "tp/cell/conf_frcnn_cell.yaml")
path_weights = ""

dataset_train = pubtabnet
config_overwrite = ["TRAIN.STEPS_PER_EPOCH=500", "TRAIN.STARTING_EPOCH=1",
                    "TRAIN.CHECKPOINT_PERIOD=50", "BACKBONE.FREEZE_AT=0",
                    "PREPROC.TRAIN_SHORT_EDGE_SIZE=[200,600]"]
build_train_config = ["max_datapoints=500000"]

dataset_val = pubtabnet
build_val_config = ["max_datapoints=4000"]

coco_metric = MetricRegistry.get_metric("coco")
coco_metric.set_params(max_detections=[50, 200, 600],
                       area_range=[[0, 1000000], [0, 200], [200, 800], [800, 1000000]])

train_faster_rcnn(path_config_yaml=path_config_yaml,
                  dataset_train=dataset_train,
                  path_weights=path_weights,
                  config_overwrite=config_overwrite,
                  log_dir="/path/to/dir",
                  build_train_config=build_train_config,
                  dataset_val=dataset_val,
                  build_val_config=build_val_config,
                  metric=coco_metric,
                  pipeline_component_name="ImageLayoutService")
```
db206629ffa91e7db8d96ae03547f573
other
['generated_from_trainer']
false
opt-350m-opty-350m-lectures This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3830
660fb15bb2871588127f6173b21b4ba2
other
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 5 | 2.7828 | | No log | 2.0 | 10 | 2.4889 | | No log | 3.0 | 15 | 2.3830 |
3c2664f285937c773687579dd7b44bb2
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.2896 | 1.0 | 318 | 3.2891 | 0.7429 | | 2.6283 | 2.0 | 636 | 1.8755 | 0.8374 | | 1.5481 | 3.0 | 954 | 1.1570 | 0.8961 | | 1.0149 | 4.0 | 1272 | 0.8573 | 0.9132 | | 0.7952 | 5.0 | 1590 | 0.7720 | 0.9184 |
752fae3bf7c9781d6b9b55ea4c1a3b93
mit
['generated_from_trainer']
false
deberta-base-finetuned-qqp This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.2617 - Accuracy: 0.9128 - F1: 0.8844
b558dd0a1c0c3a45c0ee32da32a76cf3
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:| | 0.2412 | 1.0 | 22741 | 0.2369 | 0.9048 | 0.8753 | | 0.1742 | 2.0 | 45482 | 0.2617 | 0.9128 | 0.8844 |
2206c00a8a5e7b03846c9667db2d2cf5
apache-2.0
['Quality Estimation', 'microtransquest']
false
Using Pre-trained Models ```python from transquest.algo.word_level.microtransquest.run_model import MicroTransQuestModel import torch model = MicroTransQuestModel("xlmroberta", "TransQuest/microtransquest-en_de-wiki", labels=["OK", "BAD"], use_cuda=torch.cuda.is_available()) source_tags, target_tags = model.predict([["if not , you may not be protected against the diseases . ", "ja tā nav , Jūs varat nepasargāt no slimībām . "]]) ```
029363875a57a1c6c560266395186df5
apache-2.0
['generated_from_trainer']
false
distilbert_add_GLUE_Experiment_logit_kd_rte_256 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE RTE dataset. It achieves the following results on the evaluation set: - Loss: 0.4234 - Accuracy: 0.4729
9e0694403f5f576923c42ffa3c6006fc
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4263 | 1.0 | 10 | 0.4235 | 0.4729 | | 0.4176 | 2.0 | 20 | 0.4241 | 0.4729 | | 0.4173 | 3.0 | 30 | 0.4234 | 0.4729 | | 0.4172 | 4.0 | 40 | 0.4245 | 0.4729 | | 0.4182 | 5.0 | 50 | 0.4243 | 0.4729 | | 0.4178 | 6.0 | 60 | 0.4236 | 0.4729 | | 0.4176 | 7.0 | 70 | 0.4238 | 0.4729 | | 0.4177 | 8.0 | 80 | 0.4240 | 0.4729 |
0f153884aa91a1c9a28a8d111e840f01
apache-2.0
['setfit', 'sentence-transformers', 'text-classification']
false
fathyshalab/massive_audio-roberta-large-v1-5-0 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer.
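As an inference sketch with the [setfit](https://github.com/huggingface/setfit) library (the example utterances are illustrative, not taken from the training data):

```python
from setfit import SetFitModel

# Download the checkpoint from the Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/massive_audio-roberta-large-v1-5-0")
preds = model(["play some jazz", "turn the volume down"])
print(preds)
```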
e7632b112364a23805c494dbd30d3094
apache-2.0
['exbert', 'multiberts']
false
MultiBERTs Seed 0 (uncased) Seed 0 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
af4e39940e2f332bacadf59abc7692ea
apache-2.0
['exbert', 'multiberts']
false
Model description MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they were pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict whether the two sentences followed each other or not.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs.
d611d2129beb5f8da679d1b0dbf7cb4f
apache-2.0
['exbert', 'multiberts']
false
Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT-2.
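For example, the raw MLM head can be exercised directly with a fill-mask pipeline. This is a sketch using the same checkpoint name as the snippet further below:

```python
from transformers import pipeline

unmasker = pipeline('fill-mask', model='multiberts-seed-0')
print(unmasker("Hello I'm a [MASK] model."))
```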
89db6a285d1d2a479bbd3a43043008e6
apache-2.0
['exbert', 'multiberts']
false
How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0') model = BertModel.from_pretrained("multiberts-seed-0") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ```
e02d8d9a1cf2986be54960c23cd1e923
apache-2.0
['exbert', 'multiberts']
false
Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased
131430d6db25ad484f035a034647095d
apache-2.0
['exbert', 'multiberts']
false
Training data The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).
e859cd0fbf4029bede32f25f48f07544
apache-2.0
['exbert', 'multiberts']
false
Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```

With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus, and in the other cases, sentence B is another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constraint is that the combined length of the two "sentences" is less than 512 tokens.

The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
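The 80/10/10 rule above can be sketched in a few lines of plain Python (illustrative only; the real pipeline operates on WordPiece token ids, not strings):

```python
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", mlm_prob=0.15):
    out = list(tokens)
    for i in range(len(tokens)):
        if random.random() < mlm_prob:  # 15% of tokens are selected for masking
            r = random.random()
            if r < 0.8:
                out[i] = mask_token            # 80%: replace with [MASK]
            elif r < 0.9:
                out[i] = random.choice(vocab)  # 10%: replace with a random token
            # remaining 10%: leave the token unchanged
    return out
```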
386b8863f2467b8c412067a088c19ccd
apache-2.0
['exbert', 'multiberts']
false
Pretraining The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.
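In 🤗 Transformers terms, that optimizer and schedule correspond roughly to the following sketch (the `torch.nn.Linear` stand-in just keeps the snippet self-contained; Adam with decoupled weight decay is written as AdamW here):

```python
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(768, 768)  # stand-in for the actual BERT model
optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01
)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=2_000_000
)
```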
8b083f865345047bd80f3a5a2a4d2902
apache-2.0
['exbert', 'multiberts']
false
BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-16163, author = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis}, journal = {CoRR}, volume = {abs/2106.16163}, year = {2021}, url = {https://arxiv.org/abs/2106.16163}, eprinttype = {arXiv}, eprint = {2106.16163}, timestamp = {Mon, 05 Jul 2021 15:15:50 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=multiberts"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
00a92577d9335b3b16e7e082bcfbcfe6
apache-2.0
['mt5-small', 'text2text-generation', 'natural language understanding', 'conversational system', 'task-oriented dialog']
false
mt5-small-nlu-all-crosswoz This model is a fine-tuned version of [mt5-small](https://huggingface.co/mt5-small) on [CrossWOZ](https://huggingface.co/datasets/ConvLab/crosswoz), covering both user and system utterances. Refer to [ConvLab-3](https://github.com/ConvLab/ConvLab-3) for the model description and usage.
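A minimal loading sketch with 🤗 Transformers (the repo id `ConvLab/mt5-small-nlu-all-crosswoz` is assumed here; see ConvLab-3 for the actual serialization and usage):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ConvLab/mt5-small-nlu-all-crosswoz")
model = AutoModelForSeq2SeqLM.from_pretrained("ConvLab/mt5-small-nlu-all-crosswoz")
```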
b5082ec7b393fda301a97d2c1a526fb3
apache-2.0
['mt5-small', 'text2text-generation', 'natural language understanding', 'conversational system', 'task-oriented dialog']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 256 - optimizer: Adafactor - lr_scheduler_type: linear - num_epochs: 10.0
9b9baa942301a0786852d5e443c056a2
mit
[]
false
xatu2 on Stable Diffusion This is the `<xatu-test>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<xatu-test> 0](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/63.jpeg) ![<xatu-test> 1](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/80.jpeg) ![<xatu-test> 2](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/43.jpeg) ![<xatu-test> 3](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/56.jpeg) ![<xatu-test> 4](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/30.jpeg) ![<xatu-test> 5](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/88.jpeg) ![<xatu-test> 6](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/24.jpeg) ![<xatu-test> 7](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/85.jpeg) ![<xatu-test> 8](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/37.jpeg) ![<xatu-test> 9](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/19.jpeg) ![<xatu-test> 10](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/54.jpeg) ![<xatu-test> 11](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/5.jpeg) ![<xatu-test> 12](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/75.jpeg) ![<xatu-test> 13](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/55.jpeg) ![<xatu-test> 14](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/66.jpeg) ![<xatu-test> 15](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/49.jpeg) ![<xatu-test> 16](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/45.jpeg) ![<xatu-test> 17](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/58.jpeg) ![<xatu-test> 18](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/39.jpeg) ![<xatu-test> 19](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/6.jpeg) ![<xatu-test> 20](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/38.jpeg) ![<xatu-test> 21](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/15.jpeg) ![<xatu-test> 22](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/20.jpeg) ![<xatu-test> 23](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/35.jpeg) ![<xatu-test> 24](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/62.jpeg) ![<xatu-test> 25](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/14.jpeg) ![<xatu-test> 26](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/9.jpeg) ![<xatu-test> 27](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/3.jpeg) ![<xatu-test> 
28](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/47.jpeg) ![<xatu-test> 29](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/0.jpeg) ![<xatu-test> 30](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/42.jpeg) ![<xatu-test> 31](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/36.jpeg) ![<xatu-test> 32](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/33.jpeg) ![<xatu-test> 33](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/17.jpeg) ![<xatu-test> 34](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/12.jpeg) ![<xatu-test> 35](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/86.jpeg) ![<xatu-test> 36](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/13.jpeg) ![<xatu-test> 37](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/2.jpeg) ![<xatu-test> 38](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/44.jpeg) ![<xatu-test> 39](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/68.jpeg) ![<xatu-test> 40](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/16.jpeg) ![<xatu-test> 41](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/65.jpeg) ![<xatu-test> 42](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/52.jpeg) ![<xatu-test> 43](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/59.jpeg) ![<xatu-test> 44](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/25.jpeg) ![<xatu-test> 45](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/50.jpeg) ![<xatu-test> 46](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/48.jpeg) ![<xatu-test> 47](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/18.jpeg) ![<xatu-test> 48](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/78.jpeg) ![<xatu-test> 49](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/22.jpeg) ![<xatu-test> 50](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/10.jpeg) ![<xatu-test> 51](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/53.jpeg) ![<xatu-test> 52](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/46.jpeg) ![<xatu-test> 53](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/41.jpeg) ![<xatu-test> 54](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/72.jpeg) ![<xatu-test> 55](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/31.jpeg) ![<xatu-test> 56](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/7.jpeg) ![<xatu-test> 57](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/51.jpeg) ![<xatu-test> 58](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/84.jpeg) ![<xatu-test> 59](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/1.jpeg) ![<xatu-test> 60](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/27.jpeg) ![<xatu-test> 61](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/70.jpeg) ![<xatu-test> 
62](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/87.jpeg) ![<xatu-test> 63](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/32.jpeg) ![<xatu-test> 64](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/81.jpeg) ![<xatu-test> 65](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/76.jpeg) ![<xatu-test> 66](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/34.jpeg) ![<xatu-test> 67](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/79.jpeg) ![<xatu-test> 68](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/69.jpeg) ![<xatu-test> 69](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/26.jpeg) ![<xatu-test> 70](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/82.jpeg) ![<xatu-test> 71](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/21.jpeg) ![<xatu-test> 72](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/23.jpeg) ![<xatu-test> 73](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/67.jpeg) ![<xatu-test> 74](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/61.jpeg) ![<xatu-test> 75](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/29.jpeg) ![<xatu-test> 76](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/60.jpeg) ![<xatu-test> 77](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/40.jpeg) ![<xatu-test> 78](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/64.jpeg) ![<xatu-test> 79](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/74.jpeg) ![<xatu-test> 80](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/83.jpeg) ![<xatu-test> 81](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/73.jpeg) ![<xatu-test> 82](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/11.jpeg) ![<xatu-test> 83](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/57.jpeg) ![<xatu-test> 84](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/28.jpeg) ![<xatu-test> 85](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/71.jpeg) ![<xatu-test> 86](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/4.jpeg) ![<xatu-test> 87](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/8.jpeg) ![<xatu-test> 88](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/77.jpeg)
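Outside the notebooks, a recent diffusers release can load the embedding directly. This is a sketch assuming a diffusers version with textual-inversion loading support and a compatible Stable Diffusion base checkpoint:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/xatu2")  # adds the <xatu-test> token

image = pipe("a photo of <xatu-test>").images[0]
image.save("xatu.png")
```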
9a5717c8740ea58de4bb7a7147425113
apache-2.0
['generated_from_trainer']
false
all-roberta-large-v1-auto_and_commute-4-16-5 This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.2614 - Accuracy: 0.4289
22b15ba35c4e0dd2e9cb0895d5f0e578
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7929 | 1.0 | 1 | 2.5690 | 0.2667 | | 2.267 | 2.0 | 2 | 2.4558 | 0.3533 | | 1.8495 | 3.0 | 3 | 2.3630 | 0.3911 | | 1.4397 | 4.0 | 4 | 2.2956 | 0.4133 | | 1.2985 | 5.0 | 5 | 2.2614 | 0.4289 |
57666663d037065013492768e044e67b
apache-2.0
['generated_from_keras_callback']
false
BeardedJohn/bert-finetuned-ner-ubb-endava-only-misc This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0015 - Validation Loss: 0.0006 - Epoch: 2
eb68e20504b58e1d84c5931fc9200027
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 705, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32
00640cc4f4785cad340e2bdbab4ef459
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.1740 | 0.0013 | 0 | | 0.0024 | 0.0007 | 1 | | 0.0015 | 0.0006 | 2 |
097f84405dda4eab8a6c7d386f3fae8e
apache-2.0
['generated_from_trainer']
false
mt5-base-finetuned-xsum-data_prep_2021_12_26___t8_54.csv___topic_text_google_mt5_base This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: nan - Rouge1: 1.4678 - Rouge2: 0.1841 - Rougel: 1.4748 - Rougelsum: 1.4701 - Gen Len: 6.4874
c3674fe8d22e2a39959697abe5653099
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP
1cd6b5b4b4f757724805e3271fd1b675
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 0.0 | 1.0 | 10645 | nan | 1.4678 | 0.1841 | 1.4748 | 1.4701 | 6.4874 |
e1aac54ad3c4710804b8dfbeedf58a90
mit
['generated_from_trainer']
false
roberta-base-finetuned-filtered-0609 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1343 - Accuracy: 0.9824 - Precision: 0.9824 - Recall: 0.9824 - F1: 0.9824
0cb88f05f75abe3fffa901ff4b52383e
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 10
85212984d318406173b6f54960022612
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.1817 | 1.0 | 3180 | 0.1883 | 0.9651 | 0.9654 | 0.9651 | 0.9651 | | 0.1647 | 2.0 | 6360 | 0.1264 | 0.9777 | 0.9778 | 0.9777 | 0.9777 | | 0.1295 | 3.0 | 9540 | 0.1514 | 0.9723 | 0.9724 | 0.9723 | 0.9723 | | 0.0991 | 4.0 | 12720 | 0.1487 | 0.9761 | 0.9763 | 0.9761 | 0.9761 | | 0.0749 | 5.0 | 15900 | 0.1119 | 0.9802 | 0.9802 | 0.9802 | 0.9802 | | 0.0532 | 6.0 | 19080 | 0.1357 | 0.9789 | 0.9790 | 0.9789 | 0.9789 | | 0.0471 | 7.0 | 22260 | 0.1397 | 0.9780 | 0.9782 | 0.9780 | 0.9780 | | 0.0153 | 8.0 | 25440 | 0.1568 | 0.9777 | 0.9778 | 0.9777 | 0.9777 | | 0.0147 | 9.0 | 28620 | 0.1274 | 0.9824 | 0.9824 | 0.9824 | 0.9824 | | 0.0135 | 10.0 | 31800 | 0.1343 | 0.9824 | 0.9824 | 0.9824 | 0.9824 |
925bbee95630525c49438658a34b5e2c
apache-2.0
['generated_from_keras_callback']
false
Haakf/allsides_right_text_padded This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.9151 - Validation Loss: 1.8887 - Epoch: 5
c4c4e75a480b558ffd53c696f53db642
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -797, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16
c13e21942e63e60afb3f6c18d58d0db2