license | tags | is_nc | readme_section | hash |
|---|---|---|---|---|
apache-2.0 | ['generated_from_trainer'] | false | swin-small-finetuned-cifar100 This model is a fine-tuned version of [microsoft/swin-small-patch4-window7-224](https://huggingface.co/microsoft/swin-small-patch4-window7-224) on the cifar100 dataset. It achieves the following results on the evaluation set: - Loss: 0.6281 - Accuracy: 0.8938 | 3d5f7c462ddf2306bcfcdf8f1a90b007 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 | 0cc9d69459781e8d0c542d5a13e6bf9a |
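As a reading aid, the hyperparameters above correspond to a `transformers` `TrainingArguments` configuration roughly like the sketch below; the output directory is a placeholder, and the Adam betas/epsilon listed on the card are already the library defaults.

```python
from transformers import TrainingArguments

# Illustrative sketch only: mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="swin-small-finetuned-cifar100",  # placeholder
    learning_rate=4e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size: 16 * 4 = 64
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=20,
)
```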
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.72 | 1.0 | 781 | 0.6691 | 0.8077 | | 0.6944 | 2.0 | 1562 | 0.4797 | 0.8495 | | 0.2794 | 3.0 | 2343 | 0.4338 | 0.869 | | 0.2569 | 4.0 | 3124 | 0.4263 | 0.879 | | 0.1417 | 5.0 | 3905 | 0.4385 | 0.8819 | | 0.0961 | 6.0 | 4686 | 0.4720 | 0.8854 | | 0.0584 | 7.0 | 5467 | 0.4941 | 0.885 | | 0.0351 | 8.0 | 6248 | 0.5253 | 0.885 | | 0.0107 | 9.0 | 7029 | 0.5598 | 0.8887 | | 0.0118 | 10.0 | 7810 | 0.5998 | 0.8858 | | 0.0097 | 11.0 | 8591 | 0.5957 | 0.8941 | | 0.0044 | 12.0 | 9372 | 0.6237 | 0.8912 | | 0.0013 | 13.0 | 10153 | 0.6286 | 0.8929 | | 0.0102 | 14.0 | 10934 | 0.6281 | 0.8938 | | 2010f343a1029bec5c69dc6f10560fb4 |
apache-2.0 | ['translation'] | false | opus-mt-tzo-es * source languages: tzo * target languages: es * OPUS readme: [tzo-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tzo-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tzo-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tzo-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tzo-es/opus-2020-01-16.eval.txt) | 6f7aac7fee7780c737f7e2d064fb79f6 |
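OPUS-MT checkpoints such as this one are typically used through the Marian classes in `transformers`; a minimal sketch, assuming the usual `Helsinki-NLP/opus-mt-tzo-es` Hub id for this card (the input sentence is a placeholder):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tzo-es"  # assumed Hub id for this card
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a batch of Tzotzil sentences to Spanish.
batch = tokenizer(["Replace with a Tzotzil sentence."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```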
mit | ['generated_from_trainer'] | false | xlm-roberta-base-finetuned_panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1928 - F1: 0.8388 | 172b4b45f1143e554d3ee4709bae1be6 |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3375 | 1.0 | 525 | 0.2216 | 0.7952 | | 0.1749 | 2.0 | 1050 | 0.1996 | 0.8206 | | 0.1094 | 3.0 | 1575 | 0.1928 | 0.8388 | | 98b8c4a29bab529b136e00b6c9017f44 |
mit | [] | false | Phan on Stable Diffusion This is the `<phan>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: | 687a69104b5680028a92c9c70f7913a7 |
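Besides the linked notebooks, learned concepts like this can be loaded directly with `diffusers`; a minimal sketch, where the embedding repo id is hypothetical:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Hypothetical repo id: point this at wherever the learned <phan> embedding lives.
pipe.load_textual_inversion("sd-concepts-library/phan")
image = pipe("a photo of <phan>").images[0]
image.save("phan.png")
```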
apache-2.0 | ['summarization', 'generated_from_trainer'] | false | mt5-small-finetuned-amazon-en-es This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.0296 - Rouge1: 18.0335 - Rouge2: 8.816 - Rougel: 17.5279 - Rougelsum: 17.6189 | 8de1373ad3ef5f7772520bcc07322672 |
apache-2.0 | ['summarization', 'generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:| | 6.9312 | 1.0 | 1209 | 3.2984 | 14.4268 | 6.4451 | 14.0547 | 14.1363 | | 3.8882 | 2.0 | 2418 | 3.1272 | 17.1618 | 8.7776 | 16.4569 | 16.5079 | | 3.578 | 3.0 | 3627 | 3.0798 | 17.9251 | 9.2806 | 17.4056 | 17.3871 | | 3.4191 | 4.0 | 4836 | 3.0671 | 17.6256 | 8.8731 | 16.975 | 17.0113 | | 3.3193 | 5.0 | 6045 | 3.0605 | 17.9539 | 8.7188 | 17.4034 | 17.4726 | | 3.2434 | 6.0 | 7254 | 3.0387 | 17.0668 | 8.2769 | 16.5612 | 16.6636 | | 3.208 | 7.0 | 8463 | 3.0338 | 17.2954 | 8.4547 | 16.7602 | 16.8175 | | 3.1812 | 8.0 | 9672 | 3.0296 | 18.0335 | 8.816 | 17.5279 | 17.6189 | | 5acb10510345cbeb58cca045f8461bbb |
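A minimal inference sketch for a summarization checkpoint like this one; the Hub id is assumed from the model name on the card, and the input text is a placeholder:

```python
from transformers import pipeline

# Assumed Hub id; the card gives only the model name, not the namespace.
summarizer = pipeline("summarization", model="mt5-small-finetuned-amazon-en-es")
review = "Replace this with an English or Spanish product review."
print(summarizer(review)[0]["summary_text"])
```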
mit | ['generated_from_trainer'] | false | roberta-base-finetuned-mbti-0901 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.0780 | 4bf7b330166bd5839ce7f600f5de8550 |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 4.3179 | 1.0 | 9920 | 4.1970 | | 4.186 | 2.0 | 19840 | 4.1264 | | 4.1057 | 3.0 | 29760 | 4.0955 | | 4.0629 | 4.0 | 39680 | 4.0826 | | 4.0333 | 5.0 | 49600 | 4.0780 | | c87a5e7c39f1a3f06b5b2844ca7de313 |
apache-2.0 | ['translation'] | false | opus-mt-es-tw * source languages: es * target languages: tw * OPUS readme: [es-tw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-tw/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-tw/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-tw/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-tw/opus-2020-01-16.eval.txt) | 82e058c2757199d71cf331a0a2a75245 |
cc-by-4.0 | ['espnet', 'audio', 'text-to-speech'] | false | `kan-bayashi/vctk_gst+xvector_tacotron2` ♻️ Imported from https://zenodo.org/record/4394598/ This model was trained by kan-bayashi using the vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/). | 4e7a0b97786117fee731cdf0a9cfa730 |
apache-2.0 | ['exbert', 'multiberts'] | false | MultiBERTs Seed 17 (uncased) Seed 17 MultiBERTs (pretrained BERT) model, pretrained on English text using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani). | b2ec650ad8d4fb24fedaeb9be6d525ed |
apache-2.0 | ['exbert', 'multiberts'] | false | How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-17') model = BertModel.from_pretrained("multiberts-seed-17") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` | 2c0b9e21985e084e070446c33d549b74 |
mit | [] | false | million-live-spade-q-style-3k on Stable Diffusion This is the `<spade_q>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: | 0d5992cf86696c1f622f594273fef944 |
apache-2.0 | ['translation'] | false | opus-mt-mh-es * source languages: mh * target languages: es * OPUS readme: [mh-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mh-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/mh-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mh-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mh-es/opus-2020-01-16.eval.txt) | 990e1007b14f7edc67ad54638eaf36ea |
creativeml-openrail-m | ['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'science'] | false | DreamBooth model for the StarTrek concept trained by vumichien on the vumichien/spaceship_star_trek dataset. <img src="https://huggingface.co/vumichien/StarTrek-starship/resolve/main/1_dlgd3k5ZecT17cJOrg2NdA.jpeg" alt="StarTrek starship"> This is a Stable Diffusion model fine-tuned on the StarTrek concept with DreamBooth. It can be used by modifying the `instance_prompt`: **StarTrek starship** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part! | 5a804633cfad6b268869ad0249745d8b |
creativeml-openrail-m | ['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'science'] | false | Examples <figure> <img src="https://huggingface.co/vumichien/StarTrek-starship/resolve/main/Leonardo%20Da%20Vinci%20style.png" alt="StarTrek starship - Leonardo Da Vinci style"> <figcaption>Text prompts for generated: A painting of StarTrek starship, Leonardo Da Vinci style </figcaption> </figure> <figure> <img src="https://huggingface.co/vumichien/StarTrek-starship/resolve/main/Michelangelo%20style.png" alt="StarTrek starship - Michelangelo style"> <figcaption>Text prompts for generated: A painting of StarTrek starship, Michelangelo style </figcaption> </figure> <figure> <img src="https://huggingface.co/vumichien/StarTrek-starship/resolve/main/Botero%20style.png" alt="StarTrek starship - Botero style"> <figcaption>Text prompts for generated: A painting of StarTrek starship, Botero style </figcaption> </figure> <figure> <img src="https://huggingface.co/vumichien/StarTrek-starship/resolve/main/Pierre-Auguste%20Renoir%20style.png" alt="StarTrek starship - Pierre-Auguste Renoir style"> <figcaption>Text prompts for generated: A painting of StarTrek starship, Pierre-Auguste Renoir style </figcaption> </figure> <figure> <img src="https://huggingface.co/vumichien/StarTrek-starship/resolve/main/Vincent%20Van%20Gogh%20style.png" alt="StarTrek starship - Vincent Van Gogh style"> <figcaption>Text prompts for generated: A painting of StarTrek starship, Vincent Van Gogh style </figcaption> </figure> <figure> <img src="https://huggingface.co/vumichien/StarTrek-starship/resolve/main/Rembrandt%20style.png" alt="StarTrek starship - Rembrandt style"> <figcaption>Text prompts for generated: A painting of StarTrek starship, Rembrandt style </figcaption> </figure> | 5836e978c77d4fb81d941ff860676702 |
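A minimal `diffusers` sketch for reproducing prompts like those above (precision and sampling settings here are illustrative, not from the card):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "vumichien/StarTrek-starship", torch_dtype=torch.float16
).to("cuda")
prompt = "A painting of StarTrek starship, Vincent Van Gogh style"
image = pipe(prompt).images[0]
image.save("starship_van_gogh.png")
```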
apache-2.0 | ['generated_from_trainer'] | false | all-roberta-large-v1-banking-2-2-1 This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.6817 - Accuracy: 0.1022 | dd9f89236bbaba5ea08e5a2eb415e7dc |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.653 | 1.0 | 5 | 2.6817 | 0.1022 | | 2e7739971bb85a4246fb6e0670648e8a |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 | 19ece0fc08de4981f9aec02bbef0a412 |
creativeml-openrail-m | ['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'landscape'] | false | DreamBooth model for the landscape concept trained by nahidalam on the nahidalam/landscape dataset. This is a Stable Diffusion model fine-tuned on the landscape concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of landscape ocean** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part! | ca28c784dbf9584c5412cf4a7c65cd2b |
apache-2.0 | ['hf-asr-leaderboard', 'generated_from_trainer'] | false | Whisper Small Km - Kak Soky This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the SLR42 dataset. It achieves the following results on the evaluation set: - Loss: 0.1471 - Wer: 35.6654 | 3fd0f41b2cca18b2f2bbeb6916839e95 |
apache-2.0 | ['hf-asr-leaderboard', 'generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP | a39bd79383a3b10551a158be7063daec |
apache-2.0 | ['hf-asr-leaderboard', 'generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.3639 | 0.76 | 1000 | 0.3452 | 71.9392 | | 0.1553 | 1.53 | 2000 | 0.2025 | 49.0494 | | 0.0565 | 2.29 | 3000 | 0.1664 | 39.9240 | | 0.0334 | 3.06 | 4000 | 0.1471 | 35.6654 | | 00edae06c9b2fcac00d0667ce507df70 |
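Fine-tuned Whisper checkpoints are usually run through the ASR pipeline in `transformers`; a minimal sketch, where the model id and audio path are placeholders (the card does not give a Hub id):

```python
from transformers import pipeline

# Placeholder id: substitute the Hub id of this fine-tuned checkpoint.
transcriber = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-small",
    chunk_length_s=30,  # chunk long audio into 30 s windows
)
print(transcriber("sample_khmer.wav")["text"])
```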
apache-2.0 | ['translation'] | false | opus-mt-et-es * source languages: et * target languages: es * OPUS readme: [et-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/et-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/et-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/et-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/et-es/opus-2020-01-16.eval.txt) | b676df5883ab05a6348985f4ec852911 |
apache-2.0 | ['generated_from_keras_callback'] | false | whisper_nosp_0020 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1825 - Train Accuracy: 0.0228 - Validation Loss: 0.8115 - Validation Accuracy: 0.0203 - Epoch: 19 | 239f5c69ee59c447407b20a678b21de6 |
apache-2.0 | ['generated_from_keras_callback'] | false | Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 7.5559 | 0.0010 | 6.3853 | 0.0013 | 0 | | 6.3227 | 0.0021 | 5.7023 | 0.0038 | 1 | | 4.9825 | 0.0063 | 3.6302 | 0.0109 | 2 | | 2.9413 | 0.0126 | 2.1959 | 0.0154 | 3 | | 1.9349 | 0.0157 | 1.6630 | 0.0172 | 4 | | 1.4741 | 0.0171 | 1.3813 | 0.0181 | 5 | | 1.1975 | 0.0181 | 1.2161 | 0.0186 | 6 | | 1.0048 | 0.0188 | 1.0990 | 0.0191 | 7 | | 0.8598 | 0.0194 | 1.0165 | 0.0194 | 8 | | 0.7431 | 0.0199 | 0.9603 | 0.0196 | 9 | | 0.6489 | 0.0203 | 0.9106 | 0.0198 | 10 | | 0.5682 | 0.0207 | 0.8787 | 0.0199 | 11 | | 0.4985 | 0.0210 | 0.8548 | 0.0200 | 12 | | 0.4372 | 0.0213 | 0.8352 | 0.0201 | 13 | | 0.3829 | 0.0216 | 0.8190 | 0.0202 | 14 | | 0.3327 | 0.0219 | 0.8148 | 0.0202 | 15 | | 0.2904 | 0.0221 | 0.8139 | 0.0202 | 16 | | 0.2492 | 0.0224 | 0.8188 | 0.0202 | 17 | | 0.2140 | 0.0226 | 0.8146 | 0.0203 | 18 | | 0.1825 | 0.0228 | 0.8115 | 0.0203 | 19 | | 70302b576c4f688dfe92707fb0cf5700 |
apache-2.0 | ['image-classification', 'generated_from_trainer'] | false | vit-base-patch16-224 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1510 - Accuracy: 0.9443 | c60e3f84e8fe6a4187ced4535697200a |
apache-2.0 | ['image-classification', 'generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 60 - eval_batch_size: 60 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 240 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 | 7e8f581e156bfda9790ee30425233f6b |
apache-2.0 | ['image-classification', 'generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1438 | 1.0 | 150 | 0.1645 | 0.9353 | | 36eb576ec58e82d1c1094d2da4061fd1 |
apache-2.0 | ['generated_from_trainer'] | false | convnext-base-224_finetuned_on_unlabelled_IA_with_snorkel_labels This model is a fine-tuned version of [facebook/convnext-base-224](https://huggingface.co/facebook/convnext-base-224) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3443 - Precision: 0.9864 - Recall: 0.9822 - F1: 0.9843 - Accuracy: 0.9884 | 603c05c81e322465d8a76ae124368c32 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP - label_smoothing_factor: 0.2 | 4685f66c4435f70d58bb5adc36f36dae |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.3611 | 1.0 | 2021 | 0.3467 | 0.9843 | 0.9729 | 0.9784 | 0.9842 | | 0.3524 | 2.0 | 4042 | 0.3453 | 0.9853 | 0.9790 | 0.9821 | 0.9868 | | 0.3466 | 3.0 | 6063 | 0.3438 | 0.9854 | 0.9847 | 0.9851 | 0.9889 | | 0.3433 | 4.0 | 8084 | 0.3434 | 0.9850 | 0.9808 | 0.9829 | 0.9873 | | 0.3404 | 5.0 | 10105 | 0.3459 | 0.9853 | 0.9790 | 0.9821 | 0.9868 | | 0.3384 | 6.0 | 12126 | 0.3453 | 0.9853 | 0.9790 | 0.9821 | 0.9868 | | 0.3382 | 7.0 | 14147 | 0.3437 | 0.9864 | 0.9822 | 0.9843 | 0.9884 | | 0.3358 | 8.0 | 16168 | 0.3441 | 0.9857 | 0.9829 | 0.9843 | 0.9884 | | 0.3349 | 9.0 | 18189 | 0.3448 | 0.9857 | 0.9829 | 0.9843 | 0.9884 | | 0.3325 | 10.0 | 20210 | 0.3443 | 0.9864 | 0.9822 | 0.9843 | 0.9884 | | 90c88af43ec0341602b0d7d9327990b7 |
apache-2.0 | ['image-classification', 'timm'] | false | Model card for maxvit_large_tf_224.in1k An official MaxViT image classification model. Trained in TensorFlow on ImageNet-1k by the paper authors. Ported from the official TensorFlow implementation (https://github.com/google-research/maxvit) to PyTorch by Ross Wightman. | 0b141edeeb5dc9beec64b76b0dffb71c |
apache-2.0 | ['image-classification', 'timm'] | false | Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 211.8 - GMACs: 43.7 - Activations (M): 127.3 - Image size: 224 x 224 - **Papers:** - MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697 - **Dataset:** ImageNet-1k | 6a8b7cd83950ba0fa40f9cbc307e3d64 |
apache-2.0 | ['image-classification', 'timm'] | false | Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model('maxvit_large_tf_224.in1k', pretrained=True) model = model.eval() data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) ``` | a26b4cfc1db07b6c53ac9831abe048e0 |
apache-2.0 | ['image-classification', 'timm'] | false | Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model( 'maxvit_large_tf_224.in1k', pretrained=True, features_only=True, ) model = model.eval() data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) for o in output: print(o.shape) ``` | 2bee8e0fe77e6ed9534971f63b09b5e8 |
apache-2.0 | ['image-classification', 'timm'] | false | Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model( 'maxvit_large_tf_224.in1k', pretrained=True, num_classes=0, ) model = model.eval() data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) ``` | b26f98e0d7bcc19131d0505ccc37debf |
cc-by-4.0 | ['generated_from_trainer'] | false | roberta-base-bne-finetuned-amazon_reviews_multi This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.3011 - Accuracy: 0.9185 | 87de9caf33a321ef13efac7cb0bbb726 |
cc-by-4.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2427 | 1.0 | 125 | 0.2109 | 0.919 | | 0.0986 | 2.0 | 250 | 0.3011 | 0.9185 | | 94c564679b84686f7564f56c960facf6 |
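A minimal inference sketch with the text-classification pipeline; the Hub id is assumed from the model name on the card, and the Spanish review is a placeholder:

```python
from transformers import pipeline

# Assumed Hub id; the card gives only the model name, not the namespace.
classifier = pipeline(
    "text-classification",
    model="roberta-base-bne-finetuned-amazon_reviews_multi",
)
print(classifier("Este producto es excelente, lo recomiendo."))
```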
apache-2.0 | ['t5', 'seq2seq'] | false | t5-v1.1-base-dutch-cased A [T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) sequence-to-sequence model pre-trained from scratch on [cleaned Dutch 🇳🇱🇧🇪 mC4](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned). This **t5-v1.1** model has **247M** parameters. It was pre-trained with a masked language modeling (denoise token span corruption) objective on the dataset `mc4_nl_cleaned` config `full` for **2** epochs over a duration of **6d6h**, with a sequence length of **1024**, batch size **64** and **1210154** total steps (**79B** tokens). Pre-training evaluation loss and accuracy are **0.96** and **0.78**. Refer to the evaluation section below for a comparison of the pre-trained models on summarization and translation. * Pre-trained T5 models need to be finetuned before they can be used for downstream tasks, therefore the inference widget on the right has been turned off. * For a demo of the Dutch CNN summarization models, head over to the Hugging Face Spaces for the **[Netherformer 📰](https://huggingface.co/spaces/flax-community/netherformer)** example application! Please refer to the original T5 papers and Scale Efficiently papers for more information about the T5 architecture and configs, though it must be noted that this model (t5-v1.1-base-dutch-cased) is unrelated to these projects and not an 'official' checkpoint. * **[Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)** by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. * **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. | 39609a9a18ad4c5a5437569b8167ee11 |
apache-2.0 | ['t5', 'seq2seq'] | false | Tokenizer The model uses a cased SentencePiece tokenizer configured with the `Nmt, NFKC, Replace multi-space to single-space` normalizers and has 32003 tokens. It was trained on Dutch mc4 with scripts from the Huggingface Transformers [Flax examples](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling). See [./raw/main/tokenizer.json](tokenizer.json) for details. | 214b45ec156c5e5e1ee194495eb73bb3 |
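A quick way to inspect the tokenizer described above; the Hub id is assumed from the author namespace of the linked dataset:

```python
from transformers import AutoTokenizer

# Assumed Hub id for this card.
tok = AutoTokenizer.from_pretrained("yhavinga/t5-v1.1-base-dutch-cased")
print(len(tok))  # expected: 32003 tokens, per the card
print(tok.tokenize("Het is een mooie dag."))
```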
apache-2.0 | ['generated_from_trainer'] | false | bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0599 - Precision: 0.9360 - Recall: 0.9520 - F1: 0.9439 - Accuracy: 0.9869 | 8be1d14958b4a0c262cb8468bb435b35 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0879 | 1.0 | 1756 | 0.0652 | 0.9236 | 0.9379 | 0.9307 | 0.9832 | | 0.0343 | 2.0 | 3512 | 0.0614 | 0.9337 | 0.9510 | 0.9423 | 0.9864 | | 0.019 | 3.0 | 5268 | 0.0599 | 0.9360 | 0.9520 | 0.9439 | 0.9869 | | 4e348f1e78da4ef9568d598c11e4b969 |
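A minimal token-classification sketch for a NER checkpoint like this one; the Hub id is assumed from the model name on the card:

```python
from transformers import pipeline

# Assumed Hub id; the card gives only the model name, not the namespace.
ner = pipeline(
    "token-classification",
    model="bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("My name is Wolfgang and I live in Berlin."))
```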
apache-2.0 | ['whisper-event', 'generated_from_trainer'] | false | Whisper Medium VI - Multi - Augmented This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the following datasets: - [mozilla-foundation/common_voice_11_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0) - [google/fleurs](https://huggingface.co/datasets/google/fleurs) - [vivos](https://huggingface.co/datasets/vivos) It achieves the following results on the evaluation set: - Loss: 0.3696 - Wer: 16.6594 - Cer: 7.7625 | 2c4fdbfb4d989b7c3db0451f2adbf075 |
apache-2.0 | ['whisper-event', 'generated_from_trainer'] | false | Training and evaluation data Training: - [mozilla-foundation/common_voice_11_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0) (train+validation) - [google/fleurs](https://huggingface.co/datasets/google/fleurs) (train+validation) - [vivos](https://huggingface.co/datasets/vivos) (train) Evaluation: - [mozilla-foundation/common_voice_11_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0) (test) - [google/fleurs](https://huggingface.co/datasets/google/fleurs) (test) - [vivos](https://huggingface.co/datasets/vivos) (test) | dadc82970d4b618ec5d1c952912a0d4b |
apache-2.0 | ['whisper-event', 'generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:| | 0.1992 | 1.8 | 1000 | 0.2726 | 17.4929 | 8.2562 | | 0.0402 | 3.6 | 2000 | 0.3317 | 17.4929 | 8.2588 | | 0.0073 | 5.4 | 3000 | 0.3429 | 17.6793 | 8.8913 | | 0.0014 | 7.19 | 4000 | 0.3599 | 19.0283 | 9.5103 | | 0.0006 | 8.99 | 5000 | 0.3696 | 16.6594 | 7.7625 | | a45d4f25454c68e9e5df4647953e9c0e |
afl-3.0 | ['t5'] | false | chunked T5 - small (cT5-small) Github: https://github.com/mtreviso/chunked-t5 A T5 model that uses a new loss where a special end-of-chunk token `</c>` is appended after sentinel tokens. The decoder has to predict the full input with masked tokens followed by `</c>`. This allows much faster auto-regressive generation since the decoder can predict multiple tokens in parallel. For example, for the input `the quick brown fox jumps over the lazy dog`: ``` encoder: the <extra_id_0> fox jumps <extra_id_1> the lazy dog T5 decoder : <extra_id_0> quick brown <extra_id_1> over <extra_id_2> cT5 decoder: <extra_id_0> quick brown </c> <extra_id_1> over </c> <extra_id_2> ``` The generation may look like this for T5 and cT5: ``` T5: <extra_id_0> T5: <extra_id_0> quick T5: <extra_id_0> quick brown T5: <extra_id_0> quick brown <extra_id_1> T5: <extra_id_0> quick brown <extra_id_1> over T5: <extra_id_0> quick brown <extra_id_1> over <extra_id_2> T5: <extra_id_0> quick brown <extra_id_1> over <extra_id_2> </s> cT5: <extra_id_0> <pad> <extra_id_1> <pad> <extra_id_2> </s> cT5: <extra_id_0> quick <pad> <extra_id_1> over <pad> <extra_id_2> </s> cT5: <extra_id_0> quick brown <pad> <extra_id_1> over </c> <extra_id_2> </s> cT5: <extra_id_0> quick brown </c> <extra_id_1> over </c> <extra_id_2> </s> ``` In the original T5, the decoder is called \\(n_s + 1 + \sum_i |s_i|\\) times autoregressively, where \\(n_s\\) is the number of sentinel tokens and \\(s_1,...,s_{n_s}\\) are the predicted chunks. In contrast, cT5's decoder is called just \\(\max_i |s_i| + 1\\) times. Generation stops once every chunk is complete, i.e., once all `</c>` tokens have been generated. Alternatively, you can also set `max_chunk_size` to manually force the model to stop after generating a chunk with `max_chunk_size` tokens. The overhead of calling the decoder with a longer input is less pronounced since this computation can be parallelized on GPUs/TPUs. | 0da5ee187274a104ca052b9f036c1166 |
afl-3.0 | ['t5'] | false | Training details cT5 models used T5's weights as a starting point and were then fine-tuned on the English [wikipedia](https://huggingface.co/datasets/wikipedia) for 3 epochs, achieving ~74% validation accuracy (ct5-small). The training script is in JAX + Flax and can be found in `pretrain_ct5.py`. Flax checkpoints can be converted to PyTorch via `convert_flax_to_pytorch.py [flax_dirname]`. | 7e05ba378782ae858991167f9567a5b1 |
afl-3.0 | ['t5'] | false | Usage ```python from transformers import AutoTokenizer from modeling_ct5 import CT5ForConditionalGeneration tokenizer = AutoTokenizer.from_pretrained("mtreviso/ct5-small-en-wiki") model = CT5ForConditionalGeneration.from_pretrained("mtreviso/ct5-small-en-wiki") ``` For training: ```python input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt").input_ids labels = tokenizer("<extra_id_0> man </c> <extra_id_1> the </c> <extra_id_2>", return_tensors="pt").input_ids outputs = model(input_ids=input_ids, labels=labels) loss = outputs.loss logits = outputs.logits ``` For generation: ```python texts = [ "The <extra_id_0> walks in <extra_id_1> park", "UN Chief says there is no way to <extra_id_0> in Syria", ] input_ids = tokenizer(texts, return_tensors="pt", padding=True).input_ids generated_ids = model.generate( input_ids, use_cache=False, max_chunk_size=5, ) sequences = tokenizer.batch_decode(generated_ids) ``` | aabbbf02e4aa7bd6e74057e6b139cf49 |
mit | ['object-detection', 'computer-vision', 'sort', 'tracker', 'bytetracker'] | false | Model Description [ByteTrack](https://arxiv.org/abs/2110.06864): Multi-Object Tracking by Associating Every Detection Box <img src="https://raw.githubusercontent.com/ifzhang/ByteTrack/main/assets/sota.png" width="500"/> | ea2126ec993d95da165321f57395d1f3 |
mit | ['object-detection', 'computer-vision', 'sort', 'tracker', 'bytetracker'] | false | BibTeX Entry and Citation Info ``` @inproceedings{zhang2022bytetrack, title={ByteTrack: Multi-Object Tracking by Associating Every Detection Box}, author={Zhang, Yifu and Sun, Peize and Jiang, Yi and Yu, Dongdong and Weng, Fucheng and Yuan, Zehuan and Luo, Ping and Liu, Wenyu and Wang, Xinggang}, booktitle={Proceedings of the European Conference on Computer Vision (ECCV)}, year={2022} } ``` | d5dadff04819748b26f873e6accbf674 |
mit | ['binary_segmentation', 'image_differences'] | false | Image Difference Segmentation For the main repository and code, please refer to the [GitHub Repo](https://github.com/Brikwerk/image-difference-segmentation). This project enables the creation of large binary segmentation datasets through the use of image differences. Certain domains, such as comic books or manga, take particularly well to the proposed approach. Creating a dataset and training a segmentation model involves two manual steps (outside of the code in this repository): 1. Finding and sorting suitable data. Ideally, your data should have two or more classes wherein the only difference between the classes is the subject to be segmented. An example would be an English page from a comic and a French page from the same comic. 2. Segmentation masks must be manually created for a small number of image differences. Using a pretrained DiffNet requires only 20-50 new masks. Re-training DiffNet from scratch requires 100-200 masks. For quickly generating binary segmentation masks, [simple-masker](https://github.com/Brikwerk/simple-masker) was written/used. | 8227bce14a750f60bc2744e8dfc8acd0 |
mit | ['binary_segmentation', 'image_differences'] | false | Prerequisites The following must be on your system: - Python 3.6+ - An accompanying Pip installation - Python and Pip must be accessible from the command line - An NVIDIA GPU that is CUDA-capable (6GB+ of VRAM likely needed) | 526574568d76ee3b77c66e41e32a44f7 |
mit | ['binary_segmentation', 'image_differences'] | false | Downloading the Weights File Weights for this project are hosted at [HuggingFace](https://huggingface.co/brikwerk/image-difference-segmentation) under the `weights` directory. Currently, a DiffNet instance trained on text differences is provided. To use this model, download it and move it to the `weights` directory in your local copy of this repository. | e50534d2f89aba264371cb195960053f |
mit | ['binary_segmentation', 'image_differences'] | false | Using Pretrained Weights Pretrained weights can be used in the `batch_process.py` file and the `evaluate.py` file. For both files, specify the path to your weights file using the `--weights_path` CLI argument. | 34f1a230977b6e4ba731ed3bf4ca4378 |
creativeml-openrail-m | ['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'landscape'] | false | DreamBooth model for the fruins concept trained on the CCMat/db-forest-ruins dataset. This is a Stable Diffusion model fine-tuned on the fruins concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of fruins ruins** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part! | 645ac2ec17271b30418eb7a8745f98b5 |
creativeml-openrail-m | ['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'landscape'] | false | Description This is a Stable Diffusion model fine-tuned on `ruins` images for the landscape theme.<br> Concept: **fruins** : forest ruins, greenery ruins<br> Pretrained Model: [prompthero/openjourney](https://huggingface.co/prompthero/openjourney)<br> Learning rate: 1e-6<br> | 27666fb68ef173db8089c0550e3e6e3b |
creativeml-openrail-m | ['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'landscape'] | false | Samples Prompt: "a photo fruins ruins in Paris in front of the Arc de triomphe, in the 1970s, vivid colors"  <br> Prompt: "high quality photo of Rome in fruins ruins with the Colosseum in the background"  <br> Prompt: "fruins ruins in London near the Tower Bridge, professional photograph"  <br> Prompt: "A futiristic post-apocalyptic town in fruins ruins trending on artstation, nostalgic lightning, unreal engine 5"  <br> Prompt: "fruins ruins in Saint Petersburg, Sovietwave"  | 30eaae1981ae6a4776770e664453a0f1 |
mit | ['generated_from_trainer'] | false | deberta-base-finetuned-aqa-squad1-newsqa This model is a fine-tuned version of [stevemobs/deberta-base-finetuned-aqa-squad1](https://huggingface.co/stevemobs/deberta-base-finetuned-aqa-squad1) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7523 | cefdf34aea4d086229ab4b05e07cfd83 |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.681 | 1.0 | 17307 | 0.7207 | | 0.4682 | 2.0 | 34614 | 0.7523 | | 41efd632def359cd4b758e007ade727c |
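A minimal question-answering sketch; the Hub id is assumed from the namespace of the base checkpoint linked on the card:

```python
from transformers import pipeline

# Assumed Hub id, following the base checkpoint's namespace.
qa = pipeline(
    "question-answering",
    model="stevemobs/deberta-base-finetuned-aqa-squad1-newsqa",
)
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The model was further fine-tuned on the NewsQA dataset.",
)
print(result["answer"], result["score"])
```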
apache-2.0 | ['image-classification', 'pytorch', 'onnx'] | false | Usage instructions ```python import torch from PIL import Image from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize from torchvision.transforms.functional import InterpolationMode from holocron.models import model_from_hf_hub model = model_from_hf_hub("frgfm/resnet34").eval() img = Image.open(path_to_an_image).convert("RGB") transform = Compose([Resize((224, 224), interpolation=InterpolationMode.BILINEAR), PILToTensor(), ConvertImageDtype(torch.float32), Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]) with torch.inference_mode(): output = model(transform(img).unsqueeze(0)) ``` The resize and normalization values shown are the standard 224x224 ImageNet defaults, assumed here. | 761aebd7360c3cc7e651b4cd32405f95 |
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | Demo: How to use in ESPnet2 ```bash cd espnet git checkout 0d8cd47dd3572248b502bc831cd305e648170233 pip install -e . cd egs2/csj/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/kan-bayashi_csj_asr_train_asr_conformer ``` | a3599e1a4a024b0a327418ea5fd0af48 |
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_conformer.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_conformer_raw_char_sp ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 4 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 47308 dist_launcher: null multiprocessing_distributed: true cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 100 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 6 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null pretrain_path: [] pretrain_key: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 15000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_sp/train/speech_shape - exp/asr_stats_raw_sp/train/text_shape.char valid_shape_file: - exp/asr_stats_raw_sp/valid/speech_shape - exp/asr_stats_raw_sp/valid/text_shape.char batch_type: numel valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_nodup_sp/wav.scp - speech - sound - - dump/raw/train_nodup_sp/text - text - text valid_data_path_and_name_and_type: - - dump/raw/train_dev/wav.scp - speech - sound - - dump/raw/train_dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 valid_max_cache_size: null optim: adam optim_conf: lr: 0.002 scheduler: warmuplr scheduler_conf: warmup_steps: 25000 token_list: - <blank> - <unk> - "\u306E" - "\u3044" - "\u3067" - "\u3068" - "\u30FC" - "\u3066" - "\u3046" - "\u307E" - "\u3059" - "\u3057" - "\u306B" - "\u3063" - "\u306A" - "\u3048" - "\u305F" - "\u3053" - "\u304C" - "\u304B" - "\u306F" - "\u308B" - "\u3042" - "\u3093" - "\u308C" - "\u3082" - "\u3092" - "\u305D" - "\u308A" - "\u3089" - "\u3051" - "\u304F" - "\u3069" - "\u3088" - "\u304D" - "\u3060" - "\u304A" - "\u30F3" - "\u306D" - "\u4E00" - "\u3055" - "\u30B9" - "\u8A00" - "\u3061" - "\u3064" - "\u5206" - "\u30C8" - "\u3084" - "\u4EBA" - "\u30EB" - "\u601D" - "\u308F" - "\u6642" - "\u65B9" - "\u3058" - "\u30A4" - "\u884C" - "\u4F55" - "\u307F" - "\u5341" - "\u30E9" - "\u4E8C" - "\u672C" - "\u8A9E" - "\u5927" - "\u7684" - "\u30AF" - "\u30BF" - "\u308D" - "\u3070" - "\u3087" - "\u3083" - "\u97F3" - "\u51FA" - "\u305B" - "\u30C3" - "\u5408" - "\u65E5" - "\u4E2D" - "\u751F" - "\u4ECA" - "\u898B" - "\u30EA" - "\u9593" - "\u8A71" - "\u3081" - "\u30A2" - "\u5F8C" - "\u81EA" - "\u305A" - "\u79C1" - "\u30C6" - "\u4E0A" - "\u5E74" - "\u5B66" - "\u4E09" - "\u30B7" - "\u5834" - "\u30C7" - "\u5B9F" - "\u5B50" - "\u4F53" - "\u8003" - "\u5BFE" - "\u7528" - "\u6587" - "\u30D1" - "\u5F53" - "\u7D50" - "\u5EA6" - "\u5165" - "\u8A33" - "\u30D5" - "\u98A8" - "\u30E0" - "\u30D7" - "\u6700" - "\u30C9" - "\u30EC" - "\u30ED" - "\u4F5C" - "\u6570" - "\u76EE" - "\u30B8" - "\u95A2" - "\u30B0" - "\u767A" - "\u8005" - "\u5B9A" - "\u3005" - "\u3050" - "\u30B3" - "\u4E8B" - "\u624B" - "\u5168" - "\u5909" - "\u30DE" - "\u6027" - "\u8868" - "\u4F8B" - "\u52D5" - 
"\u8981" - "\u5148" - "\u524D" - "\u610F" - "\u90E8" - "\u4F1A" - "\u6301" - "\u30E1" - "\u5316" - "\u9054" - "\u4ED8" - "\u5F62" - "\u73FE" - "\u4E94" - "\u30AB" - "\u3079" - "\u53D6" - "\u56DE" - "\u5E38" - "\u4F7F" - "\u611F" - "\u66F8" - "\u6C17" - "\u6CD5" - "\u7A0B" - "\u3071" - "\u56DB" - "\u591A" - "\u8272" - "\u30BB" - "\u7406" - "\u975E" - "\u30D0" - "\u58F0" - "\u5358" - "\u756A" - "\uFF21" - "\u6210" - "\u540C" - "\u901A" - "\u30A3" - "\u679C" - "\u30AD" - "\u554F" - "\u984C" - "\u69CB" - "\u56FD" - "\u6765" - "\u9AD8" - "\u6B21" - "\u9A13" - "\u3052" - "\u30C1" - "\u4EE5" - "\u3054" - "\u4EE3" - "\u30E2" - "\u30AA" - "\u51C4" - "\u7279" - "\u77E5" - "\u30E5" - "\u7269" - "\u660E" - "\u70B9" - "\u5473" - "\u767E" - "\u89E3" - "\u8FD1" - "\u8B58" - "\u5730" - "\u540D" - "\u805E" - "\u4E0B" - "\u5C0F" - "\u6559" - "\u30B5" - "\u70BA" - "\u4E5D" - "\u30D6" - "\u5BB6" - "\u30CB" - "\u521D" - "\u30D9" - "\u30E7" - "\u5C11" - "\u8A8D" - "\u8AD6" - "\u529B" - "\u516D" - "\u30D3" - "\u60C5" - "\u7FD2" - "\u30A6" - "\u7ACB" - "\u5FC3" - "\u8ABF" - "\u5831" - "\u30A8" - "\uFF24" - "\uFF2E" - "\u793A" - "\u793E" - "\u9055" - "\u969B" - "\u3056" - "\u8AAC" - "\u5FDC" - "\u98DF" - "\u72B6" - "\u9577" - "\u7814" - "\u6821" - "\u5185" - "\u639B" - "\u30DF" - "\u5916" - "\u5411" - "\u80FD" - "\u516B" - "\u9762" - "\u7A76" - "\u7136" - "\u3073" - "\u30D4" - "\u4E3B" - "\u4FC2" - "\u5024" - "\u91CD" - "\u8A5E" - "\u4F9B" - "\u5F97" - "\u5FC5" - "\u5973" - "\u78BA" - "\u7D42" - "\u30BA" - "\u6BCD" - "\u696D" - "\u7387" - "\u65B0" - "\u6D3B" - "\u697D" - "\u8449" - "\u8A08" - "\u30CA" - "\u3080" - "\u6240" - "\u4E16" - "\u6B63" - "\u30E3" - "\u8A18" - "\u671F" - "\u5207" - "\u3078" - "\u6A5F" - "\u30DA" - "\u5343" - "\u985E" - "\u5143" - "\u614B" - "\u826F" - "\u5728" - "\u6709" - "\u30C0" - "\u4E03" - "\uFF23" - "\u5225" - "\u30EF" - "\u691C" - "\u7D9A" - "\u9078" - "\u57FA" - "\u76F8" - "\u6708" - "\u4FA1" - "\u7D20" - "\u4ED6" - "\u6BD4" - "\u9023" - "\u96C6" - "\u30A7" - "\u307B" - "\u4F4D" - "\u597D" - "\uFF2D" - "\u5F37" - "\u4E0D" - "\u5FA1" - "\u6790" - "\u30DD" - "\u7121" - "\u89AA" - "\u53D7" - "\u3086" - "\u7F6E" - "\u8C61" - "\u4ED5" - "\u5F0F" - "\u30CD" - "\u6307" - "\u8AAD" - "\u6C7A" - "\u8ECA" - "\u96FB" - "\u904E" - "\u30B1" - "\u8A55" - "\u5229" - "\u6B8B" - "\u8D77" - "\u30CE" - "\u7D4C" - "\u56F3" - "\u4F1D" - "\u500B" - "\u30C4" - "\u7BC0" - "\u9053" - "\u5E73" - "\u91D1" - "\u899A" - "\uFF34" - "\u4F4F" - "\u59CB" - "\u63D0" - "\u5B58" - "\u5171" - "\u30DB" - "\u7B2C" - "\u7D44" - "\u89B3" - "\u80B2" - "\u6771" - "\u305E" - "\u958B" - "\u52A0" - "\u5F15" - "\uFF33" - "\u53E3" - "\u6C34" - "\u5BB9" - "\u5468" - "\u5B87" - "\u7D04" - "\u5B57" - "\u3076" - "\u9803" - "\u3072" - "\u5B99" - "\u6BB5" - "\u30BD" - "\u97FF" - "\u30DC" - "\u53CB" - "\u91CF" - "\u6599" - "\u3085" - "\u5CF6" - "\u8EAB" - "\u76F4" - "\u753B" - "\u7DDA" - "\u54C1" - "\u5DEE" - "\u4EF6" - "\u9069" - "\u5F35" - "\u8FBA" - "\u8FBC" - "\u91CE" - "\u69D8" - "\u578B" - "\u4E88" - "\u7A2E" - "\u5074" - "\u8FF0" - "\u5C71" - "\u5C4B" - "\u5E30" - "\u30CF" - "\u4E57" - "\u539F" - "\u683C" - "\u8CEA" - "\u666E" - "\uFF30" - "\u9020" - "\u753A" - "\u30B4" - "\u82F1" - "\u63A5" - "\u304E" - "\u6E2C" - "\u3075" - "\u7FA9" - "\u4EAC" - "\u5272" - "\u5236" - "\u7B54" - "\u5404" - "\u4FE1" - "\u754C" - "\u6211" - "\u7A7A" - "\uFF0E" - "\u7740" - "\u53EF" - "\u66F4" - "\u6D77" - "\u4E0E" - "\u9032" - "\u52B9" - "\u5F7C" - "\u771F" - "\u7530" - "\u5FB4" - "\u6D41" - "\u5177" - "\uFF32" - "\u5E02" - "\u67FB" - "\u5B89" - 
"\uFF22" - "\u5E83" - "\u50D5" - "\u6CE2" - "\u5C40" - "\u8A2D" - "\u7537" - "\u767D" - "\u30B6" - "\u53CD" - "\u6226" - "\u533A" - "\u6C42" - "\u96D1" - "\uFF29" - "\u6B69" - "\u8CB7" - "\u982D" - "\u7B97" - "\u534A" - "\u4FDD" - "\u5E03" - "\u96E3" - "\uFF2C" - "\u5224" - "\u843D" - "\u8DB3" - "\u5E97" - "\u7533" - "\u8FD4" - "\u30AE" - "\u4E07" - "\u6728" - "\u6614" - "\u8F03" - "\u7D22" - "\uFF26" - "\u30B2" - "\u6B86" - "\u60AA" - "\u5883" - "\u548C" - "\u907A" - "\u57DF" - "\u968E" - "\u542B" - "\u305C" - "\u30BC" - "\u65AD" - "\u9650" - "\u63A8" - "\u4F4E" - "\u5F71" - "\u898F" - "\u6319" - "\u90FD" - "\u307C" - "\u6848" - "\u4EEE" - "\u88AB" - "\u547C" - "\u30A1" - "\u96E2" - "\u7CFB" - "\u79FB" - "\u30AC" - "\u5DDD" - "\u6E96" - "\u904B" - "\u6761" - "\u5FF5" - "\u6C11" - "\uFF27" - "\u7236" - "\u75C5" - "\u79D1" - "\u4E21" - "\u7531" - "\u8A66" - "\u56E0" - "\u547D" - "\u795E" - "\uFF28" - "\u7570" - "\u7C21" - "\u53E4" - "\u6F14" - "\u5897" - "\u51E6" - "\u8B70" - "\u7DD2" - "\u7CBE" - "\u6613" - "\u53F7" - "\u65CF" - "\u52FF" - "\u60F3" - "\u5217" - "\u5C0E" - "\u8EE2" - "\u54E1" - "\u30E6" - "\u6BCE" - "\u8996" - "\u4E26" - "\u98DB" - "\u4F3C" - "\u6620" - "\u7D71" - "\u4EA4" - "\u30D2" - "\u6B4C" - "\u5F85" - "\u8CC7" - "\u8907" - "\u8AA4" - "\u63DB" - "\u6A19" - "\u6CC1" - "\u914D" - "\u62BD" - "\u822C" - "\u7403" - "\u9006" - "\u65C5" - "\u6628" - "\u9662" - "\u99C5" - "\u74B0" - "\u5BDF" - "\u516C" - "\u6B73" - "\u5C5E" - "\u8F9E" - "\u5947" - "\u6CBB" - "\u5E7E" - "\u82E5" - "\u58F2" - "\u632F" - "\u7686" - "\u6CE8" - "\u6B74" - "\u9805" - "\u5F93" - "\u5747" - "\u5F79" - "\u9806" - "\u53BB" - "\u56E3" - "\u8853" - "\u7DF4" - "\u6FC0" - "\u6982" - "\u66FF" - "\u7B49" - "\u98F2" - "\u53F2" - "\u88DC" - "\u901F" - "\u53C2" - "\u65E9" - "\u53CE" - "\u9332" - "\u671D" - "\u5186" - "\u5370" - "\u5668" - "\u63A2" - "\u7D00" - "\u9001" - "\u6E1B" - "\u571F" - "\u5929" - "\uFF2F" - "\u50BE" - "\u72AC" - "\u9060" - "\u5E2F" - "\u52A9" - "\u6A2A" - "\u591C" - "\u7523" - "\u8AB2" - "\u5BA2" - "\u629E" - "\u5712" - "\u4E38" - "\u50CF" - "\u50CD" - "\u6750" - "\u5DE5" - "\u904A" - "\u544A" - "\u523A" - "\u6539" - "\u8D64" - "\u8074" - "\u4ECB" - "\u8077" - "\u53F0" - "\u77ED" - "\u8AB0" - "\u7D30" - "\u672A" - "\u770C" - "\u9928" - "\u6B62" - "\u53F3" - "\u306C" - "\u3065" - "\u56F2" - "\u8A0E" - "\u6B7B" - "\u5EFA" - "\u592B" - "\u7AE0" - "\u964D" - "\u666F" - "\u706B" - "\u30A9" - "\u9E97" - "\u8B1B" - "\u72EC" - "\u5DE6" - "\u5C64" - "\uFF25" - "\u5C55" - "\u653F" - "\u5099" - "\u4F59" - "\u7D76" - "\u5065" - "\u518D" - "\u9580" - "\u5546" - "\u52DD" - "\u52C9" - "\u82B1" - "\u30E4" - "\u8EF8" - "\u97FB" - "\u66F2" - "\u6574" - "\u652F" - "\u6271" - "\u53E5" - "\u6280" - "\u5317" - "\u30D8" - "\u897F" - "\u5247" - "\u4FEE" - "\u6388" - "\u9031" - "\u5BA4" - "\u52D9" - "\u9664" - "\u533B" - "\u6563" - "\u56FA" - "\u7AEF" - "\u653E" - "\u99AC" - "\u7A4D" - "\u8208" - "\u592A" - "\u5ACC" - "\u9F62" - "\u672B" - "\u7D05" - "\u6E90" - "\u6E80" - "\u5931" - "\u5BDD" - "\u6D88" - "\u6E08" - "\u4FBF" - "\u983C" - "\u4F01" - "\u5B8C" - "\u4F11" - "\u9752" - "\u7591" - "\u8D70" - "\u6975" - "\u767B" - "\u8AC7" - "\u6839" - "\u6025" - "\u512A" - "\u7D75" - "\u623B" - "\u5E2B" - "\u5F59" - "\u6DF7" - "\u8DEF" - "\u7E70" - "\uFF2B" - "\u8A3C" - "\u713C" - "\u6562" - "\u5BB3" - "\u96F6" - "\u6253" - "\u82E6" - "\u7701" - "\u7D19" - "\u5C02" - "\u8DDD" - "\u9854" - "\u8D8A" - "\u4E89" - "\u56F0" - "\u5BC4" - "\u5199" - "\u4E92" - "\u6DF1" - "\u5A5A" - "\u7DCF" - "\u89A7" - "\u80CC" - "\u7BC9" - 
"\u6E29" - "\u8336" - "\u62EC" - "\u8CA0" - "\u590F" - "\u89E6" - "\u7D14" - "\u9045" - "\u58EB" - "\u96A3" - "\u6050" - "\u91C8" - "\u967A" - "\u5150" - "\u5BBF" - "\u6A21" - "\u77F3" - "\u983B" - "\u5B09" - "\u5EA7" - "\u7642" - "\u7E4B" - "\uFF38" - "\u5C06" - "\u8FFD" - "\u5EAD" - "\u6238" - "\u5371" - "\u5BC6" - "\u5DF1" - "\u9014" - "\u7BC4" - "\u99C4" - "\u7D39" - "\u4EFB" - "\u968F" - "\u5357" - "\uFF11" - "\u5EB7" - "\u9818" - "\u5FD8" - "\u3045" - "\u59FF" - "\u7F8E" - "\u55B6" - "\u6349" - "\u65E2" - "\u7167" - "\uFF2A" - "\u4EF2" - "\u9152" - "\u52E2" - "\u9ED2" - "\u5149" - "\u6E21" - "\u75DB" - "\u62C5" - "\u5F31" - "\u307D" - "\uFF36" - "\u7D0D" - "\u629C" - "\u5E45" - "\u6D17" - "\u7A81" - "\u671B" - "\u5373" - "\u9858" - "\u7565" - "\uFF12" - "\u9811" - "\u5FD7" - "\u5B85" - "\u7247" - "\u656C" - "\u6751" - "\u60B2" - "\u81A8" - "\u89D2" - "\u30E8" - "\u4F9D" - "\u8A73" - "\u5F8B" - "\u9B5A" - "\u52B4" - "\u5A66" - "\u6163" - "\u732B" - "\u5019" - "\u8001" - "\u558B" - "\u79F0" - "\u796D" - "\u7FA4" - "\u7E2E" - "\u6C38" - "\u616E" - "\u5EF6" - "\u7A3F" - "\u611B" - "\u8089" - "\u9589" - "\u8CBB" - "\u6295" - "\u6D3E" - "\u81F4" - "\u7BA1" - "\u7C73" - "\u5E95" - "\u7D99" - "\u6C0F" - "\u690D" - "\u501F" - "\u5727" - "\u52E4" - "\u6F22" - "\u66AE" - "\u5F27" - "\u88C5" - "\u57CE" - "\u5287" - "\u76DB" - "\u63F4" - "\u9244" - "\u8C37" - "\u5E72" - "\u7E26" - "\u8A31" - "\u6016" - "\u9A5A" - "\u8A8C" - "\uFF35" - "\u8B77" - "\u5B88" - "\u8033" - "\u6B32" - "\u8239" - "\uFF10" - "\u5178" - "\u67D3" - "\u7D1A" - "\u98FE" - "\u5144" - "\u71B1" - "\u8F09" - "\u88FD" - "\u5BFA" - "\u662D" - "\u7FFB" - "\u5426" - "\u5584" - "\u62BC" - "\u53CA" - "\u6A29" - "\u559C" - "\u670D" - "\u8CB0" - "\u8EFD" - "\u677F" - "\u61B6" - "\u98FC" - "\u5C3E" - "\u5FA9" - "\u5E78" - "\u7389" - "\u5354" - "\u679A" - "\u90CE" - "\u8840" - "\u524A" - "\u5922" - "\u63A1" - "\u6674" - "\u6B20" - "\u602A" - "\u65BD" - "\u7DE8" - "\u98EF" - "\u7B56" - "\u9000" - "\uFF39" - "\u8349" - "\u61F8" - "\u6458" - "\u58CA" - "\u4F38" - "\u85AC" - "\u9996" - "\u5BFF" - "\u53B3" - "\u606F" - "\u5C45" - "\u643A" - "\u9F3B" - "\u9280" - "\u4EA1" - "\u6CCA" - "\u8857" - "\u9759" - "\u9CE5" - "\u677E" - "\u5F92" - "\u969C" - "\u7B4B" - "\u7559" - "\u51B7" - "\u5C24" - "\u68EE" - "\u5438" - "\u5012" - "\u68B0" - "\u6D0B" - "\u821E" - "\u6A4B" - "\u500D" - "\u6255" - "\u5352" - "\u7E04" - "\u6C5A" - "\u53F8" - "\u6625" - "\u793C" - "\u66DC" - "\u6545" - "\u526F" - "\u5F01" - "\u5439" - "\u85E4" - "\u8DE1" - "\u962A" - "\u4E86" - "\u91E3" - "\u9632" - "\u7834" - "\u6012" - "\u662F" - "\u30A5" - "\u7AF6" - "\u8179" - "\u4E95" - "\u4E08" - "\u64AE" - "\u72ED" - "\u5BD2" - "\u7B46" - "\u5965" - "\u8C4A" - "\u732E" - "\u5C31" - "\u5A18" - "\u79D2" - "\u6C5F" - "\u8E0F" - "\u8A13" - "\u7372" - "\u96E8" - "\u6BBA" - "\u57CB" - "\u64CD" - "\u9AA8" - "\u8D85" - "\u6D5C" - "\u8B66" - "\u7DD1" - "\u7D61" - "\u8133" - "\u7B11" - "\u6D6E" - "\u7D66" - "\u7126" - "\u8A70" - "\u878D" - "\u738B" - "\u5C3A" - "\u5E7C" - "\u820C" - "\u663C" - "\u88CF" - "\u6CE3" - "\u67C4" - "\u9396" - "\u62E1" - "\u8A3A" - "\u7DE0" - "\u5B98" - "\u6697" - "\u820E" - "\u6298" - "\u5264" - "\u4E73" - "\u6B6F" - "\u7248" - "\u5C04" - "\u8108" - "\u9707" - "\u7802" - "\u4F34" - "\u72AF" - "\u4F50" - "\u5DDE" - "\u8FB2" - "\u8DA3" - "\u990A" - "\u675F" - "\u6E2F" - "\u8FEB" - "\u5F3E" - "\u798F" - "\u51AC" - "\u541B" - "\u6B66" - "\u77AC" - "\u67A0" - "\u6CA2" - "\u661F" - "\u5BCC" - "\u6557" - "\u5D0E" - "\u6355" - "\u8377" - "\u5F1F" - "\u95BE" - "\u7E54" - 
"\u7C89" - "\u725B" - "\u8DF5" - "\u9999" - "\u6797" - "\u83DC" - "\u62CD" - "\u63CF" - "\u888B" - "\u6607" - "\u91DD" - "\u8FCE" - "\u585A" - "\u5A46" - "\uFF49" - "\u8ECD" - "\uFF13" - "\uFF37" - "\u5BC2" - "\u8F29" - "\u3074" - "\u5DFB" - "\u4E01" - "\u504F" - "\u79CB" - "\u5E9C" - "\u6CC9" - "\u81F3" - "\u6368" - "\u7956" - "\u8584" - "\u5B97" - "\u5FB9" - "\u93E1" - "\u75C7" - "\u6CB9" - "\u8131" - "\u9CF4" - "\u7AE5" - "\u6BDB" - "\u9077" - "\u84CB" - "\u58C1" - "\u5915" - "\u5589" - "\u907F" - "\u984D" - "\u6EA2" - "\u96F0" - "\u4EE4" - "\u59C9" - "\u63E1" - "\u3077" - "\u523B" - "\u62E0" - "\u8CA1" - "\u8FF7" - "\u9063" - "\u82B8" - "\u5E8F" - "\u76E3" - "\u8457" - "\u5869" - "\u5009" - "\u7F6A" - "\u6F5C" - "\u7D5E" - "\u764C" - "\u5BAE" - "\u5E2D" - "\u8F2A" - "\u594F" - "\u846C" - "\u6C60" - "\u6CBF" - "\u5FAE" - "\u5305" - "\u76CA" - "\u76AE" - "\u4FC3" - "\u6297" - "\u5FEB" - "\u66AB" - "\u52E7" - "\u8CA9" - "\u8C46" - "\u5B63" - "\u529F" - "\u9A12" - "\uFF54" - "\u97D3" - "\u6ED1" - "\u75B2" - "\u9003" - "\u9061" - "\u5E79" - "\u60A9" - "\u83D3" - "\u672D" - "\u6804" - "\u9177" - "\u8B1D" - "\u6C96" - "\u96EA" - "\u5360" - "\u60D1" - "\u63FA" - "\u866B" - "\u62B1" - "\uFF4B" - "\u5CA1" - "\u6E9C" - "\u8535" - "\u7763" - "\u6838" - "\u4E71" - "\u4E45" - "\u9EC4" - "\u9670" - "\u7720" - "\u7B26" - "\u6B8A" - "\u628A" - "\u6291" - "\u5E0C" - "\u63C3" - "\u6483" - "\u5EAB" - "\u5409" - "\u6E6F" - "\u65CB" - "\u640D" - "\u52AA" - "\u64E6" - "\u9769" - "\u6E0B" - "\u773C" - "\u592E" - "\u8CDE" - "\u5374" - "\u5948" - "\u539A" - "\u59D4" - "\u83EF" - "\u96A0" - "\uFF4E" - "\u30CC" - "\u9BAE" - "\u515A" - "\u5C65" - "\u8A98" - "\u6469" - "\u6162" - "\u5442" - "\u7206" - "\u7BB1" - "\u6075" - "\u9678" - "\u7DCA" - "\u7E3E" - "\u5742" - "\u7B52" - "\u7532" - "\u5348" - "\u5230" - "\u8CAC" - "\u5C0A" - "\u6CF3" - "\u6279" - "\u7518" - "\u5B6B" - "\u7159" - "\u8A2A" - "\u50B7" - "\u6E05" - "\u716E" - "\u88C1" - "\u9694" - "\u8ED2" - "\uFF31" - "\u7FBD" - "\u5D29" - "\u7A74" - "\u7CD6" - "\u707D" - "\u5275" - "\u6F70" - "\u6691" - "\u87BA" - "\u653B" - "\u6577" - "\u6575" - "\u76E4" - "\u9732" - "\u7A93" - "\u63B2" - "\u81E8" - "\u53E9" - "\u5145" - "\u4FFA" - "\u8F38" - "\u967D" - "\u6B27" - "\u6687" - "\u6B6A" - "\u6DFB" - "\u60A3" - "\u5FD9" - "\u70AD" - "\u829D" - "\u8EDF" - "\u88D5" - "\u7E01" - "\u6F2B" - "\u7A1A" - "\u7968" - "\u8A69" - "\u5CB8" - "\u7687" - "\uFF4A" - "\u6627" - "\u5100" - "\u5857" - "\u8E0A" - "\u8AF8" - "\u6D74" - "\u904D" - "\u66D6" - "\u5BE7" - "\u99B4" - "\u5339" - "\u03B1" - "\u627F" - "\u30BE" - "\u6383" - "\u5375" - "\u5999" - "\u3043" - "\u66B4" - "\u62B5" - "\u604B" - "\u8863" - "\u6EB6" - "\u7DAD" - "\u514D" - "\u6392" - "\u685C" - "\u7573" - "\u7B87" - "\u6398" - "\u535A" - "\u6FC3" - "\u7FCC" - "\u8056" - "\u7DB2" - "\u885B" - "\u64EC" - "\u5E8A" - "\u9178" - "\u6669" - "\u4E7E" - "\u90AA" - "\u7551" - "\u6EDE" - "\u5802" - "\u7E41" - "\u4ECF" - "\u5FB3" - "\u7DE9" - "\u6A39" - "\u6551" - "\u633F" - "\u68D2" - "\u906D" - "\u676F" - "\u6065" - "\u6E56" - "\u6E09" - "\u81D3" - "\u8CB4" - "\u723A" - "\u7981" - "\u4F75" - "\u5263" - "\u786C" - "\u58C7" - "\u80A9" - "\u6D78" - "\u4F0A" - "\u5B9D" - "\u6094" - "\u8E8D" - "\u6DB2" - "\u99C6" - "\u6D25" - "\u307A" - "\u6D45" - "\u8B72" - "\u5CA9" - "\u9B45" - "\u587E" - "\u03B8" - "\u6696" - "\u6CB3" - "\u8A95" - "\u7F36" - "\u5507" - "\u80A2" - "\u6328" - "\u62F6" - "\u7A0E" - "\u50AC" - "\u8A34" - "\uFF58" - "\u968A" - "\u659C" - "\u770B" - "\uFF50" - "\u6D66" - "\u8352" - "\uFF41" - "\u71C3" - "\u52A3" - 
"\u5BA3" - "\u8FBF" - "\u790E" - "\u62FE" - "\u5C4A" - "\u6905" - "\u5EC3" - "\u6749" - "\u9AEA" - "\u77E2" - "\u67D4" - "\u55AB" - "\u73CD" - "\u57FC" - "\u88C2" - "\u63B4" - "\u59BB" - "\u8CA7" - "\u934B" - "\u59A5" - "\u59B9" - "\u5175" - "\uFF14" - "\u623F" - "\u5951" - "\u65E8" - "\uFF44" - "\u0394" - "\u5DE1" - "\u8A02" - "\u5F90" - "\u8CC0" - "\u7BED" - "\u9810" - "\u84C4" - "\u8846" - "\u5DE8" - "\u5506" - "\u65E6" - "\u5531" - "\u9047" - "\u6E67" - "\u8010" - "\u96C4" - "\u6D99" - "\u8CB8" - "\u822A" - "\u5104" - "\u5618" - "\u6C37" - "\u78C1" - "\u679D" - "\u8CAB" - "\u61D0" - "\u52DF" - "\u8155" - "\u65E7" - "\u7AF9" - "\u99D0" - "\u8A72" - "\uFF52" - "\u5893" - "\u518A" - "\u80F8" - "\u758E" - "\u773A" - "\uFF45" - "\u9855" - "\u631F" - "\u55A7" - "\u520A" - "\u68C4" - "\u990C" - "\u67F1" - "\u5800" - "\u8ACB" - "\u79D8" - "\u6717" - "\u96F2" - "\u8170" - "\u7A32" - "\u828B" - "\u8C9D" - "\u5C48" - "\u91CC" - "\u508D" - "\u8102" - "\u6FC1" - "\u54B2" - "\u6BD2" - "\u6EC5" - "\u5629" - "\u6442" - "\u6E7E" - "\u83CC" - "\u8150" - "\u5211" - "\u5F25" - "\u5AC1" - "\u61A7" - "\u4E18" - "\u5C90" - "\u52B1" - "\u8CA2" - "\u6C41" - "\u96C7" - "\u5076" - "\u9774" - "\u72D9" - "\u719F" - "\u900F" - "\uFF59" - "\u8CFC" - "\u5319" - "\uFF46" - "\uFF15" - "\u92AD" - "\u6D12" - "\u8A17" - "\u809D" - "\u963F" - "\u80C3" - "\uFF53" - "\u885D" - "\u621A" - "\uFF4D" - "\u84B8" - "\u4FF3" - "\u8972" - "\u5265" - "\u5BE9" - "\u6817" - "\u8A87" - "\u5237" - "\u7CF8" - "\u90F7" - "\u5049" - "\u6C57" - "\u53CC" - "\u98FD" - "\u77DB" - "\u984E" - "\u552F" - "\u6590" - "\u7DB4" - "\u5B64" - "\u90F5" - "\u76D7" - "\u9E7F" - "\u8CC3" - "\u76FE" - "\u682A" - "\u9ED9" - "\u7C8B" - "\u63DA" - "\u9808" - "\u7092" - "\u9285" - "\u5E81" - "\u9B54" - "\u75E9" - "\u9802" - "\u76BF" - "\u970A" - "\u5E55" - "\u570F" - "\u574A" - "\u72C2" - "\u8912" - "\u9451" - "\u50B5" - "\u77AD" - "\u565B" - "\u5E33" - "\u5782" - "\u8870" - "\u4ED9" - "\u9EA6" - "\u8CA8" - "\u7AAA" - "\u6F6E" - "\u6FEF" - "\u5238" - "\u7D1B" - "\u7384" - "\u7C4D" - "\uFF43" - "\u74F6" - "\u5DE3" - "\u5192" - "\u6CBC" - "\u99D2" - "\u5C3D" - "\u517C" - "\u7C97" - "\u63BB" - "\u80BA" - "\u9154" - "\uFF4C" - "\u702C" - "\u505C" - "\u6F20" - "\u673A" - "\u916C" - "\u4FD7" - "\u8986" - "\u5C3B" - "\u9375" - "\u5805" - "\u6F2C" - "\u2212" - "\u79C0" - "\u6885" - "\u9042" - "\u57F9" - "\u871C" - "\uFF42" - "\u30FB" - "\u52C7" - "\u8ECC" - "\u7F85" - "\uFF3A" - "\u5BB4" - "\u8C5A" - "\u7A3C" - "\u62AB" - "\u8CAF" - "\u9EBB" - "\u6C4E" - "\u51DD" - "\u5FE0" - "\uFF55" - "\u5F80" - "\u8AE6" - "\u8B19" - "\u6F0F" - "\u5410" - "\u3047" - "\u7652" - "\u9663" - "\u6D6A" - "\u52D8" - "\u53D9" - "\u5200" - "\u67B6" - "\u57F7" - "\u5674" - "\u5197" - "\u4E4F" - "\u837B" - "\u81ED" - "\u708A" - "\u598A" - "\u808C" - "\u8CDB" - "\u5C0B" - "\u9175" - "\u757F" - "\u5270" - "\u706F" - "\u8C6A" - "\u9685" - "\u9905" - "\u7949" - "\u80AF" - "\u62DB" - "\u7A3D" - "\u5F6B" - "\u5F69" - "\u03B2" - "\u6B04" - "\u718A" - "\u68CB" - "\u6CB8" - "\u6C88" - "\u8339" - "\u7ABA" - "\u5B9C" - "\u8217" - "\u7CA7" - "\u683D" - "\u80AA" - "\u9665" - "\u6CE1" - "\u95D8" - "\u8F3F" - "\u5353" - "\u7070" - "\u8F9B" - "\u6F01" - "\u9F13" - "\u585E" - "\u8CD1" - "\u76C6" - "\u68FA" - "\u6311" - "\u54F2" - "\u9867" - "\u8B21" - "\u8302" - "\u90A3" - "\u80DE" - "\u4F3A" - "\u5A92" - "\u708E" - "\u67D0" - "\u564C" - "\u5203" - "\u6F5F" - "\u7656" - "\u4E80" - "\u63EE" - "\u511F" - "\u4E39" - "\u7DEF" - "\u9DB4" - "\u4E4B" - "\u6BB4" - "\u4EF0" - "\u5949" - "\u7E2B" - "\u75F4" - "\u8650" - 
"\u61B2" - "\u71E5" - "\u6DC0" - "\uFF57" - "\u88F8" - "\u82BD" - "\u63A7" - "\u95A3" - "\u7587" - "\u925B" - "\u8178" - "\u5642" - "\u935B" - "\u654F" - "\u9162" - "\u938C" - "\u81E3" - "\u8E74" - "\u5A01" - "\u6D44" - "\u7965" - "\u795D" - "\u86C7" - "\u811A" - "\u4F0F" - "\u6F54" - "\u5510" - "\u6955" - "\u57A3" - "\u932F" - "\u514B" - "\u614C" - "\u6BBF" - "\u819C" - "\u61A9" - "\u9065" - "\u82DB" - "\u9676" - "\u8997" - "\u78E8" - "\u624D" - "\u5E1D" - "\u642C" - "\u722A" - "\u90CA" - "\u80A5" - "\u819D" - "\u62D2" - "\u868A" - "\u5208" - "\u5132" - "\uFF48" - "\u596E" - "\u7761" - "\u5BEE" - "\uFF17" - "\u4FB5" - "\u9B31" - "\u635C" - "\u6DBC" - "\u5A20" - "\u7363" - "\u7C92" - "\u963B" - "\u6CE5" - "\u7ADC" - "\u91A4" - "\u92ED" - "\u6606" - "\u9234" - "\u7DBF" - "\u830E" - "\u8107" - "\u7948" - "\u8A60" - "\u6B53" - "\u7F70" - "\u68DA" - "\u83CA" - "\u6069" - "\u7267" - "\u540A" - "\u8DF3" - "\u6DE1" - "\u7F72" - "\u596A" - "\u9038" - "\u6170" - "\u5EB6" - "\u9262" - "\u8B5C" - "\u5ECA" - "\u5606" - "\u62ED" - "\u8CED" - "\u99C1" - "\u7F8A" - "\u5384" - "\u7D10" - "\u9673" - "\u816B" - "\u6841" - "\u9298" - "\u96CC" - "\u636E" - "\u62DD" - "\u60E8" - "\u96DB" - "\u845B" - "\u7FA8" - "\u609F" - "\u76DF" - "\u7E4A" - "\u9192" - "\u65EC" - "\u6DAF" - "\u8CC4" - "\u6E7F" - "\u6F02" - "\u7D2B" - "\u30F4" - "\u4E9C" - "\u8AA0" - "\u5854" - "\u5E4C" - "\u80C6" - "\u64A5" - "\u865A" - "\u6F64" - "\u9699" - "\u5F84" - "\u6C72" - "\u8CE2" - "\u5BF8" - "\u8888" - "\u88DF" - "\u8266" - "\uFF19" - "\u62D8" - "\uFF47" - "\u5841" - "\u5BDB" - "\u51A0" - "\u614E" - "\u971E" - "\u731B" - "\u67CF" - "\u733F" - "\u9084" - "\u50E7" - "\u53EB" - "\u53F1" - "\u72E9" - "\u63C9" - "\u7D2F" - "\u5982" - "\u7897" - "\u6BBB" - "\u906E" - "\u5FCD" - "\u6EF4" - "\u6B96" - "\u8D08" - "\u74A7" - "\u6F38" - "\u6589" - "\u03BC" - "\u9686" - "\u6176" - "\u72A0" - "\u7272" - "\u5146" - "\u576A" - "\u6284" - "\u65D7" - "\u50DA" - "\u5C3F" - "\u51CD" - "\u902E" - "\u7B39" - "\u8F1D" - "\u5C1A" - "\u8015" - "\u51CC" - "\u632B" - "\u4F10" - "\u7BB8" - "\u4E91" - "\u5968" - "\u819A" - "\u9010" - "\u03B3" - "\u5F26" - "\u9700" - "\u5C01" - "\u5E3D" - "\u6F31" - "\u9283" - "\u507D" - "\u5875" - "\u7E1B" - "\u58A8" - "\u6020" - "\u96F7" - "\u5766" - "\u68A8" - "\u90ED" - "\u7A4F" - "\u67FF" - "\u7AFF" - "\u5E61" - "\u5F81" - "\u99B3" - "\u9EBA" - "\u03C4" - "\u8154" - "\u7C98" - "\u7409" - "\u731F" - "\u4EC1" - "\u8358" - "\u6492" - "\u7C3F" - "\u90E1" - "\u7B4C" - "\u5D8B" - "\u6FE1" - "\u618E" - "\u5446" - "\u6F15" - "\u5A29" - "\u68DF" - "\u6052" - "\uFF18" - "\u5553" - "\u5B5D" - "\u67F3" - "\u64A4" - "\u85CD" - "\u95C7" - "\u5B22" - "\u67F4" - "\u6734" - "\u6D1E" - "\u5CB3" - "\u9B3C" - "\u8DE8" - "\u3049" - "\u70C8" - "\u559A" - "\u6F84" - "\u6FEB" - "\u82A6" - "\u62D3" - "\u51FD" - "\u6843" - "\u76F2" - "\u6CA1" - "\u7A6B" - "\u6212" - "\u99FF" - "\u8D05" - "\u67AF" - "\u6C70" - "\u53F6" - "\u90A6" - "\u66C7" - "\u9A30" - "\u711A" - "\u51F6" - "\u5CF0" - "\u69FD" - "\u67DA" - "\u5320" - "\u9A19" - "\u502B" - "\u84EE" - "\u634C" - "\u61F2" - "\u8B0E" - "\u91B8" - "\u56DA" - "\u7344" - "\u6EDD" - "\u6795" - "\u60DC" - "\u7DB1" - "\u8B33" - "\u7089" - "\u5DFE" - "\u91DC" - "\u9BAB" - "\u6E58" - "\u92F3" - "\u5351" - "\uFF51" - "\u7DBB" - "\u5EF7" - "\u85A6" - "\u667A" - "\u6C99" - "\u8CBF" - "\u8098" - "\uFF16" - "\u5F0A" - "\u66F0" - "\u7881" - "\u9DFA" - "\u6676" - "\u8D74" - "\u8513" - "\u75D2" - "\u79E9" - "\u5DE7" - "\u9418" - "\u7B1B" - "\u638C" - "\u53EC" - "\u5347" - "\u6249" - "\u5A2F" - "\u8A1F" - "\u8247" - 
"\u64B2" - "\uFF56" - "\u6182" - "\u90B8" - "\u5098" - "\u7CDE" - "\u03BB" - "\u5C16" - "\u723D" - "\u7832" - "\u55A9" - "\u80CE" - "\u84B2" - "\u9DF9" - "\u755C" - "\u6897" - "\uFF4F" - "\u5023" - "\u6247" - "\u7DFB" - "\u6756" - "\u622F" - "\u5D50" - "\u6A3D" - "\u6F06" - "\u9CE9" - "\u039B" - "\u5FAA" - "\u8896" - "\u9784" - "\u6851" - "\u5D16" - "\u59A8" - "\u66A6" - "\u59D3" - "\u7A00" - "\u3041" - "\u920D" - "\u9727" - "\u9837" - "\u8105" - "\u7B20" - "\u86CD" - "\u8328" - "\u69CD" - "\u3062" - "\u59EB" - "\u6ABB" - "\u8463" - "\u6C7D" - "\u541F" - "\u807E" - "\u73E0" - "\u62B9" - "\u9D28" - "\u64AB" - "\u8607" - "\u7AC3" - "\u864E" - "\u78EF" - "\u77E9" - "\u7CCA" - "\u55AA" - "\u8A6E" - "\u82D1" - "\u98F4" - "\u6089" - "\u674F" - "\u9B42" - "\u914C" - "\u9BC9" - "\u8A50" - "\u03A3" - "\u7815" - "\u55DC" - "\u7FFC" - "\u4F0E" - "\u751A" - "\u5F66" - "\u961C" - "\u8706" - "\u6109" - "\u80F4" - "\u8776" - "\u8B00" - "\u9271" - "\u75E2" - "\u73ED" - "\u9438" - "\u92F8" - "\u62D9" - "\u6068" - "\u4EAD" - "\u4EAB" - "\u75AB" - "\u5F13" - "\u74E6" - "\u7D46" - "\u814E" - "\u62F3" - "\u9A0E" - "\u58B3" - "\u83F1" - "\u6813" - "\u5256" - "\u6D2A" - "\u5484" - "\u9591" - "\u58EE" - "\u9945" - "\u65ED" - "\u8987" - "\u80A1" - "\u86D9" - "\u724C" - "\u965B" - "\u714E" - "\u63AC" - "\u9AED" - "\u9019" - "\u5E7B" - "\u54B3" - "\u6E26" - "\u55C5" - "\u7A42" - "\u7434" - "\u5FCC" - "\u70CF" - "\u5448" - "\u91D8" - "\u611A" - "\u6C3E" - "\u8AFE" - "\u6E9D" - "\u7336" - "\u7AAF" - "\u8ACF" - "\u8CC2" - "\u57C3" - "\u51F8" - "\u7D0B" - "\u6ADB" - "\u525B" - "\u98E2" - "\u4FCA" - "\u54C0" - "\u5BB0" - "\u93AE" - "\u7435" - "\u7436" - "\u96C5" - "\u8494" - "\u85AA" - "\u8A93" - "\u59EA" - "\u62D7" - "\u8778" - "\u7169" - "\u7B51" - "\u690E" - "\u4FB6" - "\u553E" - "\u7BAA" - "\u5075" - "\u8861" - "\u03C3" - "\u88FE" - "\u95B2" - "\u805A" - "\u4E3C" - "\u633D" - "\u7E4D" - "\u82D7" - "\u9E93" - "\u03C6" - "\u03B4" - "\u4E32" - "\u51E1" - "\u5F18" - "\u85FB" - "\u61C7" - "\u817F" - "\u7A9F" - "\u6803" - "\u6652" - "\u5E84" - "\u7891" - "\u7B4F" - "\u7B25" - "\u5E06" - "\u96B7" - "\u8FB0" - "\u75BE" - "\u8FE6" - "\u8A6B" - "\u5617" - "\u582A" - "\u6842" - "\u5B9B" - "\u58F7" - "\u8AED" - "\u97AD" - "\u9310" - "\u6DF5" - "\u79E4" - "\u7525" - "\u4F8D" - "\u66FD" - "\u6572" - "\u63AA" - "\u6168" - "\u83E9" - "\u5CE0" - "\u901D" - "\u5F70" - "\u67F5" - "\u82AF" - "\u7C50" - "\u57A2" - "\u03BE" - "\u77EF" - "\u8C8C" - "\u8F44" - "\u8A89" - "\u9813" - "\u7D79" - "\u9E78" - "\u5E7D" - "\u6881" - "\u642D" - "\u54BD" - "\u82B3" - "\u7729" - "\u0393" - "\u61A4" - "\u7985" - "\u6063" - "\u5840" - "\u7149" - "\u75FA" - "\uFF06" - "\u7A40" - "\u545F" - "\u918D" - "\u9190" - "\u7901" - "\u51F9" - "\u86EE" - "\u5974" - "\u64AD" - "\u7E79" - "\u8499" - "\u8A63" - "\u4E5F" - "\u5420" - "\u4E59" - "\u8E8A" - "\u8E87" - "\u9D2C" - "\u7A92" - "\u59E5" - "\u9326" - "\u694A" - "\u8017" - "\u6F09" - "\u60E7" - "\u4FE3" - "\u6876" - "\u5CFB" - "\u905C" - "\u65FA" - "\u75D5" - "\u03A6" - "\u6234" - "\u658E" - "\u8CD3" - "\u7BC7" - "\u8429" - "\u85E9" - "\u7950" - "\u8B83" - "\u83AB" - "\u9C39" - "\u85A9" - "\u5378" - "\u4E9B" - "\u75B9" - "\u8E44" - "\u4E56" - "\uFF5A" - "\u92FC" - "\u6A3A" - "\u5B8F" - "\u7BE4" - "\u8258" - "\u81B3" - "\u7A83" - "\u7E82" - "\u5598" - "\u786B" - "\u99D5" - "\u7261" - "\u732A" - "\u62D0" - "\u60DA" - "\u60A0" - "\u7CE7" - "\u95A5" - "\u03C0" - "\u853D" - "\u6850" - "\u981A" - "\u9214" - "\u697C" - "\u8C9E" - "\u602F" - "\u817A" - "\u8305" - "\u6CF0" - "\u9913" - "\u5C51" - "\u9BDB" - "\u929B" - 
"\u9AB8" - "\u9C57" - "\u5824" - "\u9675" - "\u6DD8" - "\u64C1" - "\u81FC" - "\u6D32" - "\u8FBB" - "\u8A23" - "\u5C4F" - "\u9BE8" - "\u895F" - "\u5CE1" - "\u660C" - "\u982C" - "\u5806" - "\u865C" - "\u840E" - "\u9EB9" - "\u7CE0" - "\u68B1" - "\u8AFA" - "\u5403" - "\u66A2" - "\u5B54" - "\u5EB8" - "\u5DF3" - "\u589C" - "\u85AE" - "\u6101" - "\u664B" - "\u8236" - "\u8FC5" - "\u6B3A" - "\u9640" - "\u7709" - "\u6CC4" - "\u59FB" - "\u9688" - "\u58CC" - "\u69D9" - "\u5E87" - "\u52D2" - "\u6E07" - "\u91E7" - "\u4E43" - "\u82D4" - "\u9306" - "\u58D5" - "\u78D0" - "\u6962" - "\u65A7" - "\u5E63" - "\u03B7" - "\u7E55" - "\u83C5" - "\u7109" - "\u5112" - "\u5D07" - "\u8276" - "\u5449" - "\u7984" - "\u54C9" - "\u68AF" - "\u5937" - "\u546A" - "\u56C3" - "\u84BC" - "\u9A28" - "\u9D3B" - "\u862D" - "\u7CA5" - "\u7D3A" - "\u7D17" - "\u7164" - "\u03C9" - "\u52FE" - "\u97A0" - "\u4F3D" - "\u7AAE" - "\u6E15" - "\u0392" - "\u8D66" - "\u6597" - "\u66F9" - "\u8CE0" - "\u5CAC" - "\u847A" - "\u7D33" - "\u5B8D" - "\u6191" - "\u6357" - "\u7C9B" - "\u8CCA" - "\u9F8D" - "\u81C6" - "\u6C8C" - "\u52C5" - "\u8096" - "\u559D" - "\u8CAA" - "\u82AD" - "\u8549" - "\u919C" - "\u64B9" - "\u5740" - "\u7BE0" - "\u7D2C" - "\u75B1" - "\u52F2" - "\u86FE" - "\u88B4" - "\u8749" - "\u685F" - "\u4FF5" - "\u818F" - "\u5DF7" - "\u5072" - "\u6148" - "\u754F" - "\u96BB" - "\u606D" - "\u64B0" - "\u9D0E" - "\u52AB" - "\u63C6" - "\u914E" - "\u8106" - "\u6241" - "\u9761" - "\u8511" - "\u95CA" - "\u96BC" - "\u6CCC" - "\u5996" - "\u65A1" - "\u52C3" - "\u637B" - "\u6E13" - "\u937E" - "\u5954" - "\u6155" - "\u5984" - "\u6A0B" - "\u936C" - "\u502D" - "\u8679" - "\u03BD" - "\u60A6" - "\u8151" - "\u62EE" - "\u51E0" - "\u80E1" - "\u8FC2" - "\u8EAF" - "\u50ED" - "\u6ECB" - "\u7B8B" - "\u75F0" - "\u65AC" - "\u85AB" - "\u673D" - "\u82A5" - "\u9756" - "\u907C" - "\u6591" - "\u7953" - "\u5B95" - "\u976D" - "\u72D7" - "\u81BF" - "\u59AC" - "\u5A7F" - "\u7554" - "\u7AEA" - "\u9D5C" - "\u8CE6" - "\u7E1E" - "\u6731" - "\u7C95" - "\u69FB" - "\u6D69" - "\u511A" - "\u8CDC" - "\u8B39" - "\u68B5" - "\u5A9B" - "\u7947" - "\u5516" - "\u03C8" - "\u03C1" - "\u5A9A" - "\u540E" - "\u6FB1" - "\u7DBE" - "\u6372" - "\u67E9" - "\u6DF3" - "\u74DC" - "\u5631" - "\u51B4" - "\u6115" - "\u9211" - "\u51B6" - "\u67A2" - "\u03A9" - "\u77B0" - "\u6775" - "\u5EB5" - "\u4F2F" - "\u840C" - "\u5609" - "\u4FC4" - "\u7D06" - "\u81A0" - "\u7252" - "\u8EB0" - "\u543E" - "\u50FB" - "\u704C" - "\u646F" - "\u5091" - "\u929A" - "\u8B90" - "\u8910" - "\u8FB1" - "\u7345" - "\u7B94" - "\u73A9" - "\u4F43" - "\u583A" - "\u5504" - "\u515C" - "\u62CC" - "\u5751" - "\u75D8" - "\u69CC" - "\u77B3" - "\u79BF" - "\u66D9" - "\u5DF2" - "\u7FC1" - "\u5C3C" - "\u60BC" - "\u7F77" - "\u699C" - "\u5451" - "\u79E6" - "\u533F" - "\u03BA" - "\u7259" - "\u4F46" - "\u572D" - "\u548E" - "\u745E" - "\u7A1C" - "\u785D" - "\u6BC5" - "\u7015" - "\u8702" - "\u978D" - "\u6A2B" - "\u7566" - "\u660F" - "\u755D" - "\u4FAE" - "\u548B" - "\u6367" - "\u7F9E" - "\u803D" - "\u60B8" - "\u51E7" - "\u4EAE" - "\u9AC4" - "\u54FA" - "\u4FEF" - "\u567A" - "\u8058" - "\u8654" - "\u5B8B" - "\u93A7" - "\u968B" - "\u51B3" - "\u59D1" - "\u7078" - "\u927E" - "\u8F5F" - "\u60F0" - "\u03C7" - "\u643E" - "\u6854" - "\u7F6B" - "\u8E4A" - "\u68B6" - "\u6893" - "\u7F75" - "\u65A5" - "\u6276" - "\u6147" - "\u61C3" - "\u9949" - "\u6E25" - "\u6AD3" - "\u80E4" - "\u56A2" - "\u9CF3" - "\u6A84" - "\u8C79" - "\u50B2" - "\u50D1" - "\u7586" - "\u6134" - "\u53A8" - "\u6FB9" - "\u9320" - "\u64E2" - "\u6EBA" - "\u7624" - "\u73CA" - "\u5BC5" - "\u6977" - "\u9583" - 
"\u9CF6" - "\u7119" - "\u6912" - "\u9B4F" - "\u9798" - "\u68A2" - "\u6900" - "\u8ACC" - "\u696B" - "\u5F14" - "\u65D2" - "\u5957" - "\u9F5F" - "\u9F6C" - "\u7D18" - "\u810A" - "\u536F" - "\u727D" - "\u6BD8" - "\u6714" - "\u514E" - "\u721B" - "\u6D9C" - "\u5851" - "\u5F04" - "\u676D" - "\u63A0" - "\u80B4" - "\u626E" - "\u51F1" - "\u798D" - "\u8036" - "\u808B" - "\u7235" - "\u61AB" - "\u57D3" - "\u5983" - "\u9910" - "\u7C7E" - "\u7262" - "\u6816" - "\u9017" - "\u7058" - "\u5E5F" - "\u68F2" - "\u5687" - "\u7827" - "\u6E1A" - "\u7C9F" - "\u7A7F" - "\u7F60" - "\u68F9" - "\u8594" - "\u8587" - "\u526A" - "\u7B48" - "\u936E" - "\u892A" - "\u7AA9" - "\u58F1" - "\u30F2" - "\u7460" - "\u7483" - "\u61BE" - "\u5E16" - "\u6960" - "\u03B5" - "\u5480" - "\u56BC" - "\u56A5" - "\u6D29" - "\u6A58" - "\u6867" - "\u6A9C" - "\u63F6" - "\u63C4" - "\u88E1" - "\u6A80" - "\u900D" - "\u9081" - "\u6028" - "\u73B2" - "\u90C1" - "\u5815" - "\u8AB9" - "\u8B17" - "\u8956" - "\u51F0" - "\u9B41" - "\u5B75" - "\u7766" - "\u71FB" - "\u5243" - "\u53A9" - "\u71D7" - "\u84D1" - "\u5EFB" - "\u75D4" - "\u837C" - "\u6190" - "\u6070" - "\u8F9F" - "\u5F98" - "\u5F8A" - "\u4FA0" - "\u5830" - "\u971C" - "\u809B" - "\u76E7" - "\u5835" - "\u72DB" - "\u9D8F" - "\u9119" - "\u4F73" - "\u916A" - "\u8AE7" - "\u6973" - "\u7826" - "\u5AC9" - "\u5DEB" - "\u53E1" - "\u9716" - "\u6E23" - "\u5544" - "\u798E" - "\u6CAB" - "\u821F" - "\u6C5D" - "\u5302" - "\u99F1" - "\u6C08" - "\u308E" - "\u714C" - "\u7DAC" - "\u5F1B" - "\u586B" - "\u84C1" - "\u5039" - "\u7CFE" - "\u51A5" - "\u674E" - "\u966A" - "\u8877" - "\u59E6" - "\u5962" - "\u75BC" - "\u8A54" - "\u8599" - "\u8B5A" - "\u5CEF" - "\u684E" - "\u688F" - "\u9B92" - "\u8A1B" - "\u55B0" - "\u7960" - "\u67A1" - "\u6681" - "\u4E5E" - "\u91C7" - "\u9739" - "\u9742" - "\u687F" - "\u929C" - "\u4F51" - "\u79BE" - "\u5944" - "\u6930" - "\u87F9" - "\u8061" - "\u98AF" - "\u30C2" - "\u8E81" - "\u8E42" - "\u8E99" - "\u8695" - "\u693F" - "\u62F7" - "\u9257" - "\u8882" - "\u78CB" - "\u7422" - "\u6B3D" - "\u60B6" - "\u53C9" - "\u7E37" - "\u8A36" - "\u50C5" - "\u5C6F" - "\u5EEC" - "\u5C41" - "\u99A8" - "\u6E20" - "\u8568" - "\u699B" - "\u675C" - "\u7791" - "\u6A8E" - "\u8ECB" - "\u8F62" - "\u8700" - "\u8235" - "\u82B9" - "\u6B3E" - "\u639F" - "\u8E2A" - "\u745A" - "\u71E6" - "\u7D21" - "\u584A" - "\u8171" - "\u6753" - "\u65A4" - "\u786F" - "\u55AC" - "\u8B04" - "\u79DF" - "\u8180" - "\u80F1" - "\u6EC4" - "\u9C10" - "\u8475" - "\u8471" - "\u8461" - "\u5A49" - "\u88D4" - "\u9F0E" - "\u9187" - "\u67EF" - "\u991E" - "\u96C1" - "\u8AA6" - "\u8A62" - "\u633A" - "\u7AFA" - "\u8A82" - "\u5191" - "\u8718" - "\u86DB" - "\u70B8" - "\u932B" - "\u58C5" - "\u8087" - "\u54AC" - "\u9B8E" - "\u67D1" - "\u7D9C" - "\u5BE1" - "\u7977" - "\u522E" - "\u8CCE" - "\u9B18" - "\u884D" - "\u5FD6" - "\u685D" - "\u0398" - "\u039A" - "\u03A8" - "\u53E2" - "\u4FCE" - "\u7396" - "\u78A7" - "\u8766" - "\u8521" - "\u649A" - "\u7A14" - "\u752B" - "\u6D35" - "\u7893" - "\u9ECE" - "\u5AE1" - "\u8755" - "\u725F" - "\u6B89" - "\u6C83" - "\u7B50" - "\u619A" - "\u6E24" - "\u9B4D" - "\u9B4E" - "\u71ED" - "\u7940" - "\u6D1B" - "\u88F3" - "\u4E11" - "\u9846" - "\u9952" - "\u5EC9" - "\u689F" - "\u848B" - "\u6DD1" - "\u8737" - "\u9644" - "\u695A" - "\u9F20" - "\u5154" - "\u61AC" - "\u5F57" - "\u66FC" - "\u5D11" - "\u57DC" - "\u5F77" - "\u5F7F" - "\u5DF4" - "\u831C" - "\u6D9B" - "\u57E0" - "\u945A" - "\u92D2" - "\u5C09" - "\u53AD" - "\u7B75" - "\u7AE3" - "\u7E8F" - "\u6194" - "\u60B4" - "\u8E5F" - "\u675E" - "\u7825" - "\u8F14" - "\u9C52" - "\u4FAF" - "\u7D62" - 
"\u5475" - "\u698E" - "\u53EA" - "\u71D5" - "\u5C60" - "\u5614" - "\u74E2" - "\u9291" - "\u880D" - "\u932C" - "\u608C" - "\u8A1D" - "\u7DB8" - "\u530D" - "\u5310" - "\u637A" - "\u6A59" - "\u5BB5" - "\u9D60" - "\u57F4" - "\u7690" - "\u9021" - "\u4FF8" - "\u7A63" - "\u54A4" - "\u8309" - "\u8389" - "\u6643" - "\u6EF8" - "\u5289" - "\u5026" - "\u8944" - "\u7B4D" - "\u5239" - "\u83BD" - "\u9041" - "\u66F5" - "\u79BD" - "\u7B67" - "\u7E0A" - "\u7FD4" - "\u5BF5" - "\u834F" - "\u758B" - "\u84EC" - "\u83B1" - "\u8EAC" - "\u696E" - "\u76C8" - "\u5C13" - "\u72FC" - "\u85C9" - "\u965F" - "\u620E" - "\u4E8E" - "\u6F58" - "\u8012" - "\u5F82" - "\u5FA0" - "\u99AE" - "\u5F6D" - "\u5E47" - "\u9087" - "\u6CD3" - "\u80B1" - "\u65BC" - "\u6602" - "\u8E64" - "\u7463" - "\u9A65" - "\u4EA8" - "\u8AEE" - "\u77EE" - "\u8569" - "\u6566" - "\u30EE" - "\u6208" - "\u8229" - "\u9B6F" - "\u65E0" - "\u6159" - "\u6127" - "\u8340" - "\u6309" - "\u914B" - "\u59F6" - "\u723E" - "\u8602" - "\u986B" - "\u593E" - "\u59DA" - "\u701D" - "\u6FD8" - "\u964B" - "\u777E" - "\u5B30" - "\u5DBA" - "\u821B" - "\u7B65" - "\u95A4" - "\u68D8" - "\u9812" - "\u59BE" - "\u8B2C" - "\u4F0D" - "\u537F" - "\u8FEA" - "\u5686" - "\u60F9" - "\u80DA" - "\u6C6A" - "\u543B" - "\u9B51" - "\u8F3B" - "\u59C6" - "\u84FC" - "\u6AC2" - "\u5315" - "\u4F70" - "\u7246" - "\u5CD9" - "\u725D" - "\u9DF2" - "\u7DCB" - "\u7BAD" - "\u82EB" - "\u5366" - "\u5B5F" - "\u5323" - "\u4ED4" - "\u5D19" - "\u6787" - "\u6777" - "\u81C0" - "\u681E" - "\u9E1E" - "\u61FA" - "\u55DA" - "\u6DB8" - "\u30C5" - "\u8D16" - "\u5E9A" - "\u93D1" - "\u9149" - "\u670B" - "\u70F9" - "\u53C8" - "\u7337" - "\u7C00" - "\u5B2C" - "\u88B7" - "\u6BB7" - "\u51DB" - "\u4EC0" - "\u71FF" - "\u5556" - "\u7BC6" - "\u7DD8" - "\u5036" - "\u6AC3" - "\u8A03" - "\u540F" - "\u5CB1" - "\u8A25" - "\u958F" - "\u5DBD" - "\u722C" - "\u618A" - "\u7511" - "\u6144" - "\u5E25" - "\u7704" - "\u5A11" - "\u50E5" - "\u5016" - "\u800C" - "\u8F4D" - "\u5583" - "\u81BE" - "\u7099" - "\u85AF" - "\u97EE" - "\u4E99" - "\u8B14" - "\u86CE" - "\u7425" - "\u73C0" - "\u698A" - "\u7C3E" - "\u8D6D" - "\u8823" - "\u8299" - "\u8B01" - "\u9022" - "\u8466" - "\u6670" - "\u5398" - "\u707C" - "\u903C" - "\u9328" - "\u700B" - "\u5FF8" - "\u6029" - "\u7165" - "\u7B0F" - "\u5FFD" - "\u7708" - "\u7DEC" - "\u5C4D" - "\u75BD" - "\u6E5B" - "\u788D" - "\u8AE4" - <sos/eos> init: xavier_uniform input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false use_preprocessor: true token_type: char bpemodel: null non_linguistic_symbols: null cleaner: null g2p: null frontend: default frontend_conf: fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 30 num_freq_mask: 2 apply_time_mask: true time_mask_width_range: - 0 - 40 num_time_mask: 2 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_sp/train/feats_stats.npz encoder: conformer encoder_conf: output_size: 512 attention_heads: 8 linear_units: 2048 num_blocks: 12 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.1 input_layer: conv2d6 normalize_before: true macaron_style: false pos_enc_layer_type: rel_pos selfattention_layer_type: rel_selfattn activation_type: swish use_cnn_module: true cnn_module_kernel: 31 decoder: transformer decoder_conf: attention_heads: 8 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.1 
src_attention_dropout_rate: 0.1 required: - output_dir - token_list distributed: true ``` </details> | 91cb9e78160291173d55ac3f0bc60125 |
apache-2.0 | ['generated_from_trainer'] | false | mobilebert_sa_GLUE_Experiment_data_aug_mrpc This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 - Accuracy: 1.0 - F1: 1.0 - Combined Score: 1.0 | d5148771d7e25ded73bb3442b900e6e0 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:| | 0.1838 | 1.0 | 1959 | 0.0138 | 0.9951 | 0.9964 | 0.9958 | | 0.0406 | 2.0 | 3918 | 0.0055 | 1.0 | 1.0 | 1.0 | | 0.0267 | 3.0 | 5877 | 0.0129 | 0.9975 | 0.9982 | 0.9979 | | 0.0151 | 4.0 | 7836 | 0.0004 | 1.0 | 1.0 | 1.0 | | 0.0108 | 5.0 | 9795 | 0.0104 | 0.9975 | 0.9982 | 0.9979 | | 0.0075 | 6.0 | 11754 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0059 | 7.0 | 13713 | 0.0005 | 1.0 | 1.0 | 1.0 | | 0.0047 | 8.0 | 15672 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0033 | 9.0 | 17631 | 0.0001 | 1.0 | 1.0 | 1.0 | | 0.0031 | 10.0 | 19590 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0025 | 11.0 | 21549 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0019 | 12.0 | 23508 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0019 | 13.0 | 25467 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0014 | 14.0 | 27426 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.001 | 15.0 | 29385 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.001 | 16.0 | 31344 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0009 | 17.0 | 33303 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0009 | 18.0 | 35262 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0006 | 19.0 | 37221 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0006 | 20.0 | 39180 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0003 | 21.0 | 41139 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0003 | 22.0 | 43098 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0005 | 23.0 | 45057 | 0.0000 | 1.0 | 1.0 | 1.0 | | 7673d76d2cca2b91acbf83f9cb1d6a5c |
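A minimal inference sketch for the MRPC paraphrase classifier above. The card does not say where the checkpoint is published, so the repo id below is a hypothetical placeholder; everything else is stock `transformers` usage.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "your-namespace/mobilebert_sa_GLUE_Experiment_data_aug_mrpc"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# MRPC is a sentence-pair task: the model scores whether the two sentences are paraphrases.
inputs = tokenizer("The company said profits rose sharply.",
                   "Profits increased sharply, the company reported.",
                   return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # index 1 is conventionally the "equivalent" class for GLUE MRPC
```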
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased__hate_speech_offensive__train-16-8 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0704 - Accuracy: 0.394 | aef47aa32a59fe0ca20a7f5505e73322 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1031 | 1.0 | 10 | 1.1286 | 0.1 | | 1.0648 | 2.0 | 20 | 1.1157 | 0.3 | | 0.9982 | 3.0 | 30 | 1.1412 | 0.2 | | 0.9283 | 4.0 | 40 | 1.2053 | 0.2 | | 0.7958 | 5.0 | 50 | 1.1466 | 0.2 | | 0.6668 | 6.0 | 60 | 1.1783 | 0.3 | | 0.5068 | 7.0 | 70 | 1.2992 | 0.3 | | 0.3741 | 8.0 | 80 | 1.3483 | 0.3 | | 0.1653 | 9.0 | 90 | 1.4533 | 0.2 | | 0.0946 | 10.0 | 100 | 1.6292 | 0.2 | | 0.0569 | 11.0 | 110 | 1.8381 | 0.2 | | 0.0346 | 12.0 | 120 | 2.0781 | 0.2 | | 4f997e8e251e065dc277386410abb0eb |
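A sketch of querying this few-shot classifier, again assuming a hypothetical Hub repo id; `top_k=None` returns the score for every class rather than only the argmax, which is useful when accuracy is as low as reported above.

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="your-namespace/distilbert-base-uncased__hate_speech_offensive__train-16-8",  # hypothetical repo id
    top_k=None,  # return all class scores, not just the best one
)
print(clf("you are a wonderful person"))
```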
apache-2.0 | ['generated_from_trainer'] | false | local_test_model_with_local_dataset This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5566 - Wer: 0.0 | 953c14e13d55ef281f30f5799fe4012b |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | No log | 10.0 | 10 | 3.4660 | 85.7143 | | No log | 20.0 | 20 | 0.7373 | 10.7143 | | 3.3998 | 30.0 | 30 | 0.5920 | 0.0 | | 3.3998 | 40.0 | 40 | 0.5566 | 0.0 | | 6b04cde866816d7e83c60f46c814e91d |
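A usage sketch for the fine-tuned Whisper checkpoint above; the repo id and audio path are placeholders. The ASR pipeline decodes and resamples the input to the 16 kHz mono audio Whisper expects.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="your-namespace/local_test_model_with_local_dataset",  # hypothetical repo id
)
result = asr("sample.wav")  # placeholder path; any common audio format works
print(result["text"])
```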
apache-2.0 | ['generated_from_trainer'] | false | bert-finetuned-target This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2793 - Precision: 0.6688 - Recall: 0.7 - F1: 0.6840 - Accuracy: 0.9170 | eda04ab8aaa43cd73ed37bb9fa46159e |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 218 | 0.2489 | 0.6034 | 0.7 | 0.6481 | 0.9106 | | No log | 2.0 | 436 | 0.2453 | 0.6830 | 0.6967 | 0.6898 | 0.9192 | | 0.2156 | 3.0 | 654 | 0.2793 | 0.6688 | 0.7 | 0.6840 | 0.9170 | | 52e225bfdbf0f36d258d5a43ab1a09ae |
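Since the base model is multilingual BERT, a token-classification sketch works for any of its languages. The repo id is a hypothetical placeholder; `aggregation_strategy="simple"` merges B-/I- word pieces into whole entity spans.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="your-namespace/bert-finetuned-target",  # hypothetical repo id
    aggregation_strategy="simple",  # merge B-/I- word pieces into whole spans
)
for ent in ner("Angela Merkel visited Paris in July."):
    print(ent["entity_group"], ent["word"], round(ent["score"], 3))
```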
creativeml-openrail-m | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image'] | false | PyTorch ```bash pip install --upgrade diffusers transformers scipy ``` Running the pipeline with the default PNDM scheduler: ```python import torch from diffusers import StableDiffusionPipeline model_id = "CompVis/stable-diffusion-v1-4" device = "cuda" pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe = pipe.to(device) prompt = "a photo of an astronaut riding a horse on mars" image = pipe(prompt).images[0] image.save("astronaut_rides_horse.png") ``` **Note**: The snippet above already loads the StableDiffusionPipeline in float16 precision rather than the default float32. If you are limited by GPU memory and have less than 4GB of GPU RAM available, additionally call `pipe.enable_attention_slicing()` to reduce peak memory usage: ```py import torch pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe = pipe.to(device) pipe.enable_attention_slicing() prompt = "a photo of an astronaut riding a horse on mars" image = pipe(prompt).images[0] image.save("astronaut_rides_horse.png") ``` To swap out the noise scheduler, pass it to `from_pretrained`: ```python from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler model_id = "CompVis/stable-diffusion-v1-4" | fecb30b35e4366a1a9b0ea8b97db5998 |
creativeml-openrail-m | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image'] | false | Use the Euler scheduler here instead scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler") pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16) pipe = pipe.to("cuda") prompt = "a photo of an astronaut riding a horse on mars" image = pipe(prompt).images[0] image.save("astronaut_rides_horse.png") ``` | 0fb673d2d7e38909a382931434d7b532 |
creativeml-openrail-m | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image'] | false | shard inputs and rng params = replicate(params) prng_seed = jax.random.split(prng_seed, num_samples) prompt_ids = shard(prompt_ids) images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) ``` **Note**: If you are limited by TPU memory, please make sure to load the `FlaxStableDiffusionPipeline` in `bfloat16` precision instead of the default `float32` precision as done above. You can do so by telling diffusers to load the weights from "bf16" branch. ```python import jax import numpy as np from flax.jax_utils import replicate from flax.training.common_utils import shard from diffusers import FlaxStableDiffusionPipeline pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( "CompVis/stable-diffusion-v1-4", revision="bf16", dtype=jax.numpy.bfloat16 ) prompt = "a photo of an astronaut riding a horse on mars" prng_seed = jax.random.PRNGKey(0) num_inference_steps = 50 num_samples = jax.device_count() prompt = num_samples * [prompt] prompt_ids = pipeline.prepare_inputs(prompt) | 38eabc412d3d84641530aab6e683dbbe |
creativeml-openrail-m | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image'] | false | shard inputs and rng params = replicate(params) prng_seed = jax.random.split(prng_seed, num_samples) prompt_ids = shard(prompt_ids) images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) ``` | 01ac48adcf8a60e977d13a9cd596c837 |
mit | ['token-classification', 'sequence-tagger-model', 'pytorch', 'transformers', 'pubmedbert', 'uncased', 'radiology', 'biomedical'] | false | Stanford de-identifier was trained on a variety of radiology and biomedical documents with the goal of automating the de-identification process while reaching satisfactory accuracy for use in production. Manuscript in-proceedings. These model weights are the recommended ones among all available deidentifier weights. Associated GitHub repo: https://github.com/MIDRC/Stanford_Penn_Deidentifier | a5b3a71581bada067d3d948fd76c2d94 |
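A sketch of how such a de-identifier is typically applied: run token classification over a report and substitute each detected PHI span with its tag. The repo id is an assumption (the card above does not name one), and the replacement loop runs right to left so character offsets stay valid.

```python
from transformers import pipeline

deid = pipeline(
    "token-classification",
    model="StanfordAIMI/stanford-deidentifier-base",  # assumed repo id
    aggregation_strategy="simple",
)

report = "John Smith was seen at Stanford Hospital on 01/02/2021."
entities = deid(report)

# Replace detected spans right to left so earlier offsets are not shifted.
for ent in sorted(entities, key=lambda e: e["start"], reverse=True):
    report = report[:ent["start"]] + "[" + ent["entity_group"] + "]" + report[ent["end"]:]
print(report)
```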
mit | ['generated_from_trainer'] | false | kobart_32_4e-5_datav2_min30_lp5.0_temperature1.0 This model is a fine-tuned version of [gogamza/kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.6131 - Rouge1: 35.7499 - Rouge2: 13.0188 - Rougel: 23.5089 - Bleu1: 29.9409 - Bleu2: 17.5869 - Bleu3: 10.4195 - Bleu4: 6.1345 - Gen Len: 50.5967 | c23fa7b7023d745f7689e7edcbb008a0 |
mit | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 32 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0 | 44388b242920d102c283666eb2c370f8 |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Bleu1 | Bleu2 | Bleu3 | Bleu4 | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:------:|:-------:| | 1.7368 | 3.78 | 5000 | 2.6131 | 35.7499 | 13.0188 | 23.5089 | 29.9409 | 17.5869 | 10.4195 | 6.1345 | 50.5967 | | 5d021e0564ef96931195d4c5df200af3 |
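The run name appears to encode its decoding settings (`min30` and `lp5.0` presumably mean `min_length=30` and `length_penalty=5.0`), so a generation sketch might mirror them. The repo id and the input article are placeholders.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "your-namespace/kobart_32_4e-5_datav2_min30_lp5.0_temperature1.0"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "..."  # placeholder: a Korean article to summarize
inputs = tokenizer(article, return_tensors="pt", truncation=True)
# min_length and length_penalty mirror the values suggested by the run name.
ids = model.generate(**inputs, num_beams=4, min_length=30,
                     length_penalty=5.0, max_length=128)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```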
apache-2.0 | ['t5', 'seq2seq'] | false | t5-small-24L-dutch-english A [T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) sequence-to-sequence model pre-trained from scratch on [cleaned Dutch 🇳🇱🇧🇪 mC4 and cleaned English 🇬🇧 C4](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned). This **t5 eff** model has **249M** parameters. It was pre-trained with a masked language modeling (denoise token span corruption) objective on the dataset `mc4_nl_cleaned` config `large_en_nl` for **1** epoch and a duration of **4d10h**, with a sequence length of **512**, batch size **128** and **851852** total steps (**56B** tokens). Pre-training evaluation loss and accuracy are **1.18** and **0.74**. Refer to the evaluation section below for a comparison of the pre-trained models on summarization and translation. * Pre-trained T5 models need to be finetuned before they can be used for downstream tasks, therefore the inference widget on the right has been turned off. * For a demo of the Dutch CNN summarization models, head over to the Hugging Face Spaces for the **[Netherformer 📰](https://huggingface.co/spaces/flax-community/netherformer)** example application! Please refer to the original T5 and Scale Efficiently papers for more information about the T5 architecture and configs, though it must be noted that this model (t5-small-24L-dutch-english) is unrelated to these projects and not an 'official' checkpoint. * **[Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)** by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. * **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. | ab09eaaa7fbb7835b51b8095c08f73a1 |
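Since the card notes this checkpoint must be fine-tuned before use, here is a minimal sketch of loading it for text-to-text fine-tuning. The repo id is assumed from the model name and the input/target strings are placeholders.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "yhavinga/t5-small-24L-dutch-english"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# T5 fine-tuning pairs an input string with a target string.
batch = tokenizer(["samenvatting: ..."], text_target=["..."], return_tensors="pt")
loss = model(**batch).loss  # plug this loss into a standard training loop
print(float(loss))
```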
apache-2.0 | ['generated_from_trainer'] | false | Roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_en_es This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es) on the CRAFT dataset. It achieves the following results on the evaluation set: - Loss: 0.1750 - Precision: 0.8664 - Recall: 0.8587 - F1: 0.8625 - Accuracy: 0.9727 | 4bf7fa067e50ca3994db3946a6ec3595 |
apache-2.0 | ['generated_from_trainer'] | false | Model description This model performs Named Entity Recognition for 6 entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical from the [CRAFT](https://github.com/UCDenver-ccp/CRAFT/releases) (Colorado Richly Annotated Full Text) Corpus in Spanish and English. Entity tags have been normalized from the original three-letter codes to full names, e.g. B-Protein, I-Chemical. | f78dc74c7d2b5b65f83aa4a92668c944 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0564 | 1.0 | 1360 | 0.1459 | 0.8296 | 0.8489 | 0.8392 | 0.9696 | | 0.0222 | 2.0 | 2720 | 0.1554 | 0.8650 | 0.8320 | 0.8482 | 0.9702 | | 0.0124 | 3.0 | 4080 | 0.1670 | 0.8588 | 0.8564 | 0.8576 | 0.9717 | | 0.0052 | 4.0 | 5440 | 0.1750 | 0.8664 | 0.8587 | 0.8625 | 0.9727 | | 9ac10deb0510040a4b647827f1326f30 |
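Given the normalized tag names described above (B-Protein, I-Chemical, ...), span grouping with `aggregation_strategy="simple"` yields whole entities that can be filtered by type. The repo id is a hypothetical placeholder.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="your-namespace/Roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_en_es",  # hypothetical repo id
    aggregation_strategy="simple",  # merges B-Protein/I-Protein pieces into one span
)
for ent in ner("La proteína p53 regula la expresión del gen BAX."):
    if ent["entity_group"] in {"Protein", "Gene", "Chemical"}:
        print(ent["entity_group"], ent["word"])
```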
apache-2.0 | ['generated_from_trainer'] | false | GPT-Neo-125m-Beatles-Lyrics-finetuned-newlyrics This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on the [Cmotions - Beatles lyrics](https://huggingface.co/datasets/cmotions/Beatles_lyrics) dataset. It will complete an input prompt with Beatles-like text. | 1d6d45633d23fb36e09df3670a3b902c |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 5 | d08cd1713649581dc87571f1c23eb332 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.4438 | 1.0 | 18 | 1.8004 | | 2.1981 | 2.0 | 36 | 1.6985 | | 1.9766 | 3.0 | 54 | 1.6487 | | 1.8233 | 4.0 | 72 | 1.6384 | | 1.6137 | 5.0 | 90 | 1.6574 | | e2ad1f8b5f05ff484977c2e0f461de94 |
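A sampling sketch for completing a prompt with this checkpoint. The repo id is a hypothetical placeholder and the sampling settings are just reasonable defaults, not values from the card.

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="your-namespace/GPT-Neo-125m-Beatles-Lyrics-finetuned-newlyrics",  # hypothetical repo id
)
out = generator("In my dreams I'm walking", max_new_tokens=60,
                do_sample=True, temperature=0.9, top_p=0.95)
print(out[0]["generated_text"])
```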
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | Demo: How to use in ESPnet2 Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html) if you haven't done that already. ```bash cd espnet git checkout e62de171f1d11015cb856f83780c61bd5ca7fa8f pip install -e . cd egs2/tedlium2/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model pyf98/tedlium2_ctc_conformer_e12_linear2048 ``` <!-- Generated by scripts/utils/show_asr_result.sh --> | 34d906c7e6c4eaacac5a9b485c6e973d |
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | Environments - date: `Fri Dec 30 14:56:03 CST 2022` - python version: `3.9.15 (main, Nov 24 2022, 14:31:59) [GCC 11.2.0]` - espnet version: `espnet 202211` - pytorch version: `pytorch 1.12.1` - Git hash: `e62de171f1d11015cb856f83780c61bd5ca7fa8f` - Commit date: `Thu Dec 29 14:18:44 2022 -0500` | 819a1805450b6a3f8d8c629492cd903e |
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_ctc_asr_model_valid.cer_ctc.ave/dev|466|14671|92.4|5.4|2.2|1.2|8.9|75.1| |decode_asr_ctc_asr_model_valid.cer_ctc.ave/test|1155|27500|92.6|5.0|2.5|1.1|8.5|70.3| | 246986e8a5f33abd34cbe6f1b392af6f |
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_ctc_asr_model_valid.cer_ctc.ave/dev|466|78259|97.0|0.9|2.1|1.2|4.2|75.1| |decode_asr_ctc_asr_model_valid.cer_ctc.ave/test|1155|145066|97.0|0.9|2.1|1.2|4.2|70.3| | ec787815dacbcf6d774d7071050f4f97 |
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_ctc_asr_model_valid.cer_ctc.ave/dev|466|28296|94.6|3.1|2.4|1.2|6.6|75.1| |decode_asr_ctc_asr_model_valid.cer_ctc.ave/test|1155|52113|94.9|2.7|2.4|1.2|6.3|70.3| | 549f81c8df03b64860e98950633c4483 |
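In these tables, Err is the sum of the Sub, Del, and Ins columns, each expressed as a percentage of the reference length (Wrd). A quick way to reproduce the same word error rate on your own transcripts, using the `jiwer` package rather than the ESPnet scoring scripts:

```python
import jiwer  # pip install jiwer

reference = "the quick brown fox jumps over the lazy dog"
hypothesis = "the quick brown fox jumped over a lazy dog"

# WER = (substitutions + deletions + insertions) / reference word count
print(jiwer.wer(reference, hypothesis))  # 2 substitutions / 9 words ≈ 0.222
```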
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_ctc_conformer_e12_linear2048.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_ctc_conformer_e12_linear2048_raw_en_bpe500_sp ngpu: 1 seed: 2022 num_workers: 4 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 2 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 47181 dist_launcher: null multiprocessing_distributed: true unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 50 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - cer_ctc - min keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: true log_interval: null use_matplotlib: true use_tensorboard: true create_graph_in_tensorboard: false use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 50000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_en_bpe500_sp/train/speech_shape - exp/asr_stats_raw_en_bpe500_sp/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_en_bpe500_sp/valid/speech_shape - exp/asr_stats_raw_en_bpe500_sp/valid/text_shape.bpe batch_type: numel valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_sp/wav.scp - speech - kaldi_ark - - dump/raw/train_sp/text - text - text valid_data_path_and_name_and_type: - - dump/raw/dev/wav.scp - speech - kaldi_ark - - dump/raw/dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.002 weight_decay: 1.0e-06 scheduler: warmuplr scheduler_conf: warmup_steps: 15000 token_list: - <blank> - <unk> - s - ▁the - t - ▁a - ▁and - ▁to - d - e - ▁of - '''' - n - ing - ▁in - ▁i - ▁that - i - a - l - p - m - y - o - ▁it - ▁we - c - u - ▁you - ed - ▁ - r - ▁is - re - ▁this - ar - g - ▁so - al - b - ▁s - or - ▁f - ▁c - in - k - f - ▁for - ic - er - le - ▁be - ▁do - ▁re - ve - ▁e - ▁w - ▁was - es - ▁they - ly - h - ▁on - v - ▁are - ri - ▁have - an - ▁what - ▁with - ▁t - w - ur - it - ent - ▁can - ▁he - ▁but - ra - ce - ▁me - ▁b - ▁ma - ▁p - ll - ▁st - ▁one - 'on' - ▁about - th - ▁de - en - ▁all - ▁not - il - ▁g - ch - at - ▁there - ▁mo - ter - ation - tion - ▁at - ▁my - ro - ▁as - te - ▁le - ▁con - ▁like - ▁people - ▁or - ▁an - el - ▁if - ▁from - ver - ▁su - ▁co - ate - ▁these - ol - ci - ▁now - ▁see - ▁out - ▁our - ion - ▁know - ect - ▁just - as - ▁ex - ▁ch - ▁d - ▁when - ▁very - ▁think - ▁who - ▁because - ▁go - ▁up - ▁us - ▁pa - ▁no - ies - ▁di - ▁ho - om - ive - ▁get - id - ▁o - ▁hi - un - ▁how - ▁by - ir - et - ck - ity - ▁po - ul - ▁which - ▁mi - ▁some - z - ▁sp - ▁un - ▁going - ▁pro - ist - ▁se - ▁look - ▁time - ment - de - ▁more - ▁had - ng - ▁would - ge - la - ▁here - 
▁really - x - ▁your - ▁them - us - me - ▁en - ▁two - ▁k - ▁li - ▁world - ne - ow - ▁way - ▁want - ▁work - ▁don - ▁lo - ▁fa - ▁were - ▁their - age - vi - ▁ha - ac - der - est - ▁bo - am - ▁other - able - ▁actually - ▁sh - ▁make - ▁ba - ▁la - ine - ▁into - ▁where - ▁could - ▁comp - ting - ▁has - ▁will - ▁ne - j - ical - ally - ▁vi - ▁things - ▁te - igh - ▁say - ▁years - ers - ▁ra - ther - ▁than - ru - ▁ro - op - ▁did - ▁any - ▁new - ound - ig - ▁well - mo - ▁she - ▁na - ▁been - he - ▁thousand - ▁car - ▁take - ▁right - ▁then - ▁need - ▁start - ▁hundred - ▁something - ▁over - ▁com - ia - ▁kind - um - if - ▁those - ▁first - ▁pre - ta - ▁said - ize - end - ▁even - ▁thing - one - ▁back - ite - ▁every - ▁little - ry - ▁life - ▁much - ke - ▁also - ▁most - ant - per - ▁three - ▁come - ▁lot - ance - ▁got - ▁talk - ▁per - ▁inter - ▁sa - ▁use - ▁mu - ▁part - ish - ence - ▁happen - ▁bi - ▁mean - ough - ▁qu - ▁bu - ▁day - ▁ga - ▁only - ▁many - ▁different - ▁dr - ▁th - ▁show - ful - ▁down - ated - ▁good - ▁tra - ▁around - ▁idea - ▁human - ous - ▁put - ▁through - ▁five - ▁why - ▁change - ▁real - ff - ible - ▁fact - ▁same - ▁jo - ▁live - ▁year - ▁problem - ▁ph - ▁four - ▁give - ▁big - ▁tell - ▁great - ▁try - ▁va - ▁ru - ▁system - ▁six - ▁plan - ▁place - ▁build - ▁called - ▁again - ▁point - ▁twenty - ▁percent - ▁nine - ▁find - ▁app - ▁after - ▁long - ▁eight - ▁imp - ▁gene - ▁design - ▁today - ▁should - ▁made - ious - ▁came - ▁learn - ▁last - ▁own - way - ▁turn - ▁seven - ▁high - ▁question - ▁person - ▁brain - ▁important - ▁another - ▁thought - ▁trans - ▁create - ness - ▁hu - ▁power - ▁act - land - ▁play - ▁sort - ▁old - ▁before - ▁course - ▁understand - ▁feel - ▁might - ▁each - ▁million - ▁better - ▁together - ▁ago - ▁example - ▁help - ▁story - ▁next - ▁hand - ▁school - ▁water - ▁develop - ▁technology - que - ▁second - ▁grow - ▁still - ▁cell - ▁believe - ▁number - ▁small - ▁between - qui - ▁data - ▁become - ▁america - ▁maybe - ▁space - ▁project - ▁organ - ▁vo - ▁children - ▁book - graph - ▁open - ▁fifty - ▁picture - ▁health - ▁thirty - ▁africa - ▁reason - ▁large - ▁hard - ▁computer - ▁always - ▁sense - ▁money - ▁women - ▁everything - ▁information - ▁country - ▁teach - ▁energy - ▁experience - ▁food - ▁process - qua - ▁interesting - ▁future - ▁science - q - '0' - '5' - '6' - '9' - '3' - '8' - '4' - N - A - '7' - S - G - F - R - L - U - E - T - H - _ - B - D - J - M - ă - ō - ť - '2' - '-' - '1' - C - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: null zero_infinity: true joint_net_conf: null use_preprocessor: true token_type: bpe bpemodel: data/en_token_list/bpe_unigram500/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' short_noise_thres: 0.5 frontend: default frontend_conf: n_fft: 512 win_length: 400 hop_length: 160 fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 27 num_freq_mask: 2 apply_time_mask: true time_mask_width_ratio_range: - 0.0 - 0.05 num_time_mask: 5 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_en_bpe500_sp/train/feats_stats.npz model: espnet model_conf: ctc_weight: 1.0 lsm_weight: 0.1 length_normalized_loss: false preencoder: null preencoder_conf: {} encoder: conformer encoder_conf: output_size: 256 attention_heads: 4 linear_units: 2048 num_blocks: 12 
dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.1 input_layer: conv2d normalize_before: true macaron_style: true rel_pos_type: latest pos_enc_layer_type: rel_pos selfattention_layer_type: rel_selfattn activation_type: swish use_cnn_module: true cnn_module_kernel: 31 postencoder: null postencoder_conf: {} decoder: rnn decoder_conf: {} preprocessor: default preprocessor_conf: {} required: - output_dir - token_list version: '202211' distributed: true ``` </details> | 2d16caa700ee0145da2aaa31741c479c |
cc-by-sa-4.0 | ['japanese', 'question-answering', 'dependency-parsing'] | false | Model Description This is a RoBERTa model pretrained on 青空文庫 (Aozora Bunko) for dependency-parsing (head-detection on long-unit-words) as question-answering, derived from [roberta-base-japanese-aozora-char](https://huggingface.co/KoichiYasuoka/roberta-base-japanese-aozora-char) and [UD_Japanese-GSDLUW](https://github.com/UniversalDependencies/UD_Japanese-GSDLUW). Use [MASK] inside `context` to avoid ambiguity when specifying a word that occurs more than once as `question`. | ca22078ab3de085af9cf55bb39829c0e |
cc-by-sa-4.0 | ['japanese', 'question-answering', 'dependency-parsing'] | false | How to Use ```py from transformers import AutoTokenizer,AutoModelForQuestionAnswering,QuestionAnsweringPipeline tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-japanese-aozora-ud-head") model=AutoModelForQuestionAnswering.from_pretrained("KoichiYasuoka/roberta-base-japanese-aozora-ud-head") qap=QuestionAnsweringPipeline(tokenizer=tokenizer,model=model,align_to_words=False) print(qap(question="国語",context="全学年にわたって小学校の国語の教科書に挿し絵が用いられている")) ``` or (with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/)) ```py class TransformersUD(object): def __init__(self,bert): import os from transformers import (AutoTokenizer,AutoModelForQuestionAnswering, AutoModelForTokenClassification,AutoConfig,TokenClassificationPipeline) self.tokenizer=AutoTokenizer.from_pretrained(bert) self.model=AutoModelForQuestionAnswering.from_pretrained(bert) x=AutoModelForTokenClassification.from_pretrained if os.path.isdir(bert): d,t=x(os.path.join(bert,"deprel")),x(os.path.join(bert,"tagger")) else: from transformers.utils import cached_file c=AutoConfig.from_pretrained(cached_file(bert,"deprel/config.json")) d=x(cached_file(bert,"deprel/pytorch_model.bin"),config=c) s=AutoConfig.from_pretrained(cached_file(bert,"tagger/config.json")) t=x(cached_file(bert,"tagger/pytorch_model.bin"),config=s) self.deprel=TokenClassificationPipeline(model=d,tokenizer=self.tokenizer, aggregation_strategy="simple") self.tagger=TokenClassificationPipeline(model=t,tokenizer=self.tokenizer) def __call__(self,text): import numpy,torch,ufal.chu_liu_edmonds w=[(t["start"],t["end"],t["entity_group"]) for t in self.deprel(text)] z,n={t["start"]:t["entity"].split("|") for t in self.tagger(text)},len(w) r,m=[text[s:e] for s,e,p in w],numpy.full((n+1,n+1),numpy.nan) v,c=self.tokenizer(r,add_special_tokens=False)["input_ids"],[] for i,t in enumerate(v): q=[self.tokenizer.cls_token_id]+t+[self.tokenizer.sep_token_id] c.append([q]+v[0:i]+[[self.tokenizer.mask_token_id]]+v[i+1:]+[[q[-1]]]) b=[[len(sum(x[0:j+1],[])) for j in range(len(x))] for x in c] with torch.no_grad(): d=self.model(input_ids=torch.tensor([sum(x,[]) for x in c]), token_type_ids=torch.tensor([[0]*x[0]+[1]*(x[-1]-x[0]) for x in b])) s,e=d.start_logits.tolist(),d.end_logits.tolist() for i in range(n): for j in range(n): m[i+1,0 if i==j else j+1]=s[i][b[i][j]]+e[i][b[i][j+1]-1] h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0] if [0 for i in h if i==0]!=[0]: i=([p for s,e,p in w]+["root"]).index("root") j=i+1 if i<n else numpy.nanargmax(m[:,0]) m[0:j,0]=m[j+1:,0]=numpy.nan h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0] u=" | 41176a723895a917a74e9b90ece45502 |
cc-by-sa-4.0 | ['japanese', 'question-answering', 'dependency-parsing'] | false | text = "+text.replace("\n"," ")+"\n" for i,(s,e,p) in enumerate(w,1): p="root" if h[i]==0 else "dep" if p=="root" else p u+="\t".join([str(i),r[i-1],"_",z[s][0][2:],"_","|".join(z[s][1:]), str(h[i]),p,"_","_" if i<n and e<w[i][0] else "SpaceAfter=No"])+"\n" return u+"\n" nlp=TransformersUD("KoichiYasuoka/roberta-base-japanese-aozora-ud-head") print(nlp("全学年にわたって小学校の国語の教科書に挿し絵が用いられている")) ``` | 71c08ab08cb9a3484c548c388237bc30 |
mit | [] | false | She Mask on Stable Diffusion This is the `<she-mask>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`:     | e58fcd25ffae33535118cef5fcd333ca |
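Besides the linked notebooks, recent versions of `diffusers` can load a textual-inversion embedding directly into a pipeline. The concept repo id below is an assumption based on the sd-concepts-library naming convention.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/she-mask")  # assumed repo id

# The card describes the concept as an object, so use the token as a noun.
image = pipe("a photo of <she-mask> in a forest").images[0]
image.save("she_mask.png")
```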
apache-2.0 | ['translation'] | false | opus-mt-ln-en * source languages: ln * target languages: en * OPUS readme: [ln-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ln-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ln-en/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ln-en/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ln-en/opus-2020-01-09.eval.txt) | 4e73ba6af8d69d2dd9ccffeba258038a |
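A translation sketch for this Lingala-to-English model; the Hub id follows the standard Helsinki-NLP naming convention and is assumed here.

```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "Helsinki-NLP/opus-mt-ln-en"  # assumed Hub id for this release
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

batch = tokenizer(["Mbote na yo!"], return_tensors="pt")  # Lingala greeting
out = model.generate(**batch)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```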