license (string, 2–30 chars) | tags (string, 2–513 chars) | is_nc (bool, 1 class) | readme_section (string, 201–597k chars) | hash (string, 32 chars) |
|---|---|---|---|---|
cc0-1.0 | ['programming', 'gpt2', 'causal-lm'] | false | GPT-CSRC This is a GPT2 774M model trained on the C/C++ code of the top 10,000 most popular packages in Debian, according to the [Debian Popularity Contest](https://popcon.debian.org/). The source files were deduplicated using a process similar to the OpenWebText preprocessing (basically a locality-sensitive hash to detect near-duplicates). The model was originally trained using [NVIDIA's Megatron-LM](https://github.com/nvidia/Megatron-LM) but has been converted to Huggingface. Note that the tokenizer is *not* the standard GPT2 BPE vocab, but one that has been trained for this dataset; the tokenizer is also available from this repository. The processed dataset (in JSON format) can be found here: [csrc\_dataset\_large.json.gz](https://moyix.net/~moyix/csrc_dataset_large.json.gz). This model was used to generate snippets for the web site [This Code Does Not Exist](https://doesnotexist.codes/). | a0de71c0fc4ef19a3533be76af0dbd08 |
cc0-1.0 | ['programming', 'gpt2', 'causal-lm'] | false | Usage ``` >>> import torch >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> model = AutoModelForCausalLM.from_pretrained("moyix/csrc_774m") >>> device = torch.device("cuda") >>> model.to(device) >>> tokenizer = AutoTokenizer.from_pretrained("moyix/csrc_774m") >>> prompt = tokenizer.encode('// say hello\nvoid hello() {', return_tensors="pt") >>> output = model.generate(input_ids=prompt.to(device), max_length=32, num_return_sequences=1, do_sample=True, num_beams=4) >>> print(tokenizer.decode(output[0].tolist(),clean_up_tokenization_spaces=True)) // say hello void hello() { std::cout << "hello" << std::endl; } int main() { ``` | 9368a0a641594ec1916aa18a907c9de7 |
apache-2.0 | ['image-classification', 'image-segmentation'] | false | Keras Implementation of Point cloud classification with PointNet This repo contains the trained model of [Point cloud classification with PointNet](https://keras.io/examples/vision/pointnet/). The full credit goes to: [David Griffiths](https://dgriffiths3.github.io/) | af2f676be9d6ced9c8571e795b70c637 |
apache-2.0 | ['image-classification', 'image-segmentation'] | false | Intended uses & limitations - As stated in the paper, PointNet is a 3D perception model that applies deep learning to point clouds for object classification and scene semantic segmentation. - PointNet takes raw point cloud data as input, which is typically collected from either a lidar or radar sensor. | 55edf2e3773fc1618b4b5f8a167e7842 |
apache-2.0 | ['generated_from_keras_callback'] | false | vdsouza1/bert-finetuned-ner This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0253 - Validation Loss: 0.0587 - Epoch: 2 | 935d4b7222e37949f886773575ca66d5 |
apache-2.0 | ['generated_from_keras_callback'] | false | Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.1293 | 0.0559 | 0 | | 0.0407 | 0.0552 | 1 | | 0.0253 | 0.0587 | 2 | | f938cc3b8ec068e932720c0c18ce3da9 |
mit | ['generated_from_trainer'] | false | ClinicalBioBERT This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9404 - Accuracy: 0.77 - Precision: 0.8333 - Recall: 0.8209 - F1: 0.8271 | 031c67ce9c61fac0ce27bdd5ab83a5f5 |
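The precision, recall, and F1 reported in the card above should satisfy the harmonic-mean identity; a quick sanity check using the reported values (which are themselves rounded to four decimals):

```python
# F1 is the harmonic mean of precision and recall.
# Values taken from the ClinicalBioBERT card above.
precision = 0.8333
recall = 0.8209

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.8271, matching the card's reported F1
```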
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.693 | 1.0 | 50 | 0.6142 | 0.61 | 0.8182 | 0.5373 | 0.6486 | | 0.5547 | 2.0 | 100 | 0.5753 | 0.66 | 0.8367 | 0.6119 | 0.7069 | | 0.3912 | 3.0 | 150 | 0.5167 | 0.8 | 0.8406 | 0.8657 | 0.8529 | | 0.2618 | 4.0 | 200 | 0.6664 | 0.8 | 0.8133 | 0.9104 | 0.8592 | | 0.1648 | 5.0 | 250 | 0.5954 | 0.79 | 0.8594 | 0.8209 | 0.8397 | | 0.1446 | 6.0 | 300 | 0.6131 | 0.81 | 0.8871 | 0.8209 | 0.8527 | | 0.0841 | 7.0 | 350 | 0.8966 | 0.79 | 0.8194 | 0.8806 | 0.8489 | | 0.0708 | 8.0 | 400 | 0.9366 | 0.78 | 0.8169 | 0.8657 | 0.8406 | | 0.049 | 9.0 | 450 | 0.9523 | 0.78 | 0.8358 | 0.8358 | 0.8358 | | 0.0516 | 10.0 | 500 | 0.9404 | 0.77 | 0.8333 | 0.8209 | 0.8271 | | f3a92c18fddc98fcd0e90f5d5f1b711e |
apache-2.0 | ['generated_from_trainer'] | false | bert-base-uncased-finetuned-removed-0530 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1269 - Accuracy: 0.8745 - F1: 0.8745 | 10a0df93f9fed4ed3eeca5de73555bd1 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:| | No log | 1.0 | 3180 | 0.5939 | 0.8113 | 0.8113 | | No log | 2.0 | 6360 | 0.6459 | 0.8189 | 0.8183 | | No log | 3.0 | 9540 | 0.6523 | 0.8597 | 0.8604 | | No log | 4.0 | 12720 | 0.8159 | 0.8522 | 0.8521 | | No log | 5.0 | 15900 | 0.9294 | 0.8601 | 0.8599 | | No log | 6.0 | 19080 | 1.0066 | 0.8594 | 0.8592 | | No log | 7.0 | 22260 | 1.0268 | 0.8686 | 0.8689 | | 0.2451 | 8.0 | 25440 | 1.0274 | 0.8758 | 0.8760 | | 0.2451 | 9.0 | 28620 | 1.0850 | 0.8726 | 0.8727 | | 0.2451 | 10.0 | 31800 | 1.1269 | 0.8745 | 0.8745 | | 7a05a5dd770fe9a891b954fb298b3745 |
creativeml-openrail-m | ['text-to-image', 'stable-diffusion'] | false | Astronauts Dreambooth model trained by JacobPerera with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept: | 0fc9a7974aaa63d25a5495a1e71a5856 |
bsd-3-clause | ['generated_from_trainer'] | false | ast-fleurs-langid-dropout-0.2 This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the fleurs dataset. It achieves the following results on the evaluation set: - Loss: 7.3600 - Accuracy: 0.1819 | 31ba232e58e130e06309e7a5f073f572 |
bsd-3-clause | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 - mixed_precision_training: Native AMP | aad7719ed62773ae0e7f8eeaf77f7f12 |
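The `total_train_batch_size` listed above is derived rather than set directly: with gradient accumulation, the effective batch size is the per-device batch size times the number of accumulation steps (a single device is assumed here, as the card implies):

```python
# Effective (total) train batch size under gradient accumulation,
# using the hyperparameters from the card above.
train_batch_size = 4            # per-device batch size
gradient_accumulation_steps = 4

total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)   # 16, as reported in the card
```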
bsd-3-clause | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.0251 | 1.0 | 16987 | 6.7973 | 0.1689 | | 0.0007 | 2.0 | 33974 | 7.3461 | 0.1787 | | 0.0 | 3.0 | 50961 | 7.3600 | 0.1819 | | 971021b68dc80bdf07c6a509f7abfa2f |
apache-2.0 | ['generated_from_trainer'] | false | Flan-T5 (small) fine-tuned on OpenAI summarize_from_feedback for summarizing This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the summarize_from_feedback dataset. It achieves the following results on the evaluation set: - Loss: 2.1488 - Rouge1: 27.2966 - Rouge2: 9.5886 - Rougel: 22.1999 - Rougelsum: 23.6317 - Gen Len: 18.9310 | 392e723c0592abbfff4e317182dc5004 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 | f71ea8d4d41012b2065d175b4792f811 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.2472 | 1.0 | 2902 | 2.1882 | 26.2033 | 8.83 | 21.3673 | 22.7758 | 18.9234 | | 2.1142 | 2.0 | 5804 | 2.1608 | 27.1972 | 9.4269 | 22.1761 | 23.6252 | 18.8796 | | 2.0484 | 3.0 | 8706 | 2.1524 | 27.0963 | 9.4578 | 21.9866 | 23.5124 | 18.9033 | | 2.0055 | 4.0 | 11608 | 2.1519 | 27.2428 | 9.5514 | 22.1542 | 23.6036 | 18.9347 | | 1.9647 | 5.0 | 14510 | 2.1488 | 27.2966 | 9.5886 | 22.1999 | 23.6317 | 18.9310 | | 1.9547 | 6.0 | 17412 | 2.1488 | 27.5602 | 9.673 | 22.3768 | 23.8399 | 18.9236 | | 7be02bb49ccb70039d85951ed998bcba |
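The Rouge1 scores above are unigram-overlap F-measures, reported here on a 0–100 scale. A minimal sketch of ROUGE-1 with made-up example sentences (real evaluations use the `rouge_score` package, which also applies stemming and bootstrap aggregation):

```python
from collections import Counter

def rouge1_f(candidate: str, reference: str) -> float:
    """Minimal ROUGE-1 F-measure: clipped unigram overlap between candidate and reference."""
    cand = Counter(candidate.split())
    ref = Counter(reference.split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f("the cat sat on the mat", "the cat lay on the mat"))  # ≈ 0.8333 (= 5/6)
```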
apache-2.0 | ['automatic-speech-recognition', 'th'] | false | exp_w2v2t_th_xls-r_s590 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (th)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool. | 7519510aeb7e2c9229035dfbbbec0830 |
apache-2.0 | ['generated_from_trainer'] | false | bert-finetuned-ner-trainer This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0607 - Precision: 0.9392 - Recall: 0.9515 - F1: 0.9453 - Accuracy: 0.9868 | f508f20b086bf81c21d46797baec87c1 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0861 | 1.0 | 1756 | 0.0623 | 0.9173 | 0.9310 | 0.9241 | 0.9832 | | 0.0342 | 2.0 | 3512 | 0.0644 | 0.9297 | 0.9483 | 0.9389 | 0.9856 | | 0.0165 | 3.0 | 5268 | 0.0607 | 0.9392 | 0.9515 | 0.9453 | 0.9868 | | 4ab6e230a998421a218bc873bfd7a163 |
apache-2.0 | ['translation'] | false | opus-mt-fr-guw * source languages: fr * target languages: guw * OPUS readme: [fr-guw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-guw/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-guw/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-guw/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-guw/opus-2020-01-09.eval.txt) | 38352af285dd58086fde8812022a8c0f |
mit | ['generated_from_trainer'] | false | xlm-robereta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1637 - F1: 0.8621 | e8a9119dfe1053460aa7ff95bec34a27 |
mit | ['text-classification'] | false | Multi2ConvAI-Logistics: finetuned Bert for English This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Logistics (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: English (en) - model type: finetuned Bert | 6f94f0bc157b2427124896b3f6097914 |
cc-by-4.0 | [] | false | HindAlBERT HindAlBERT is a Hindi AlBERT model trained on publicly available Hindi monolingual datasets. [project link](https://github.com/l3cube-pune/MarathiNLP) More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2211.11418) ([pdf](http://dx.doi.org/10.13140/RG.2.2.14606.84809)) ``` @article{joshi2022l3cubehind, title={L3Cube-HindBERT and DevBERT: Pre-Trained BERT Transformer models for Devanagari based Hindi and Marathi Languages}, author={Joshi, Raviraj}, journal={arXiv preprint arXiv:2211.11418}, year={2022} } ``` | ed64251cc3a9cf028246faaa9155dff5 |
apache-2.0 | ['generated_from_trainer'] | false | t5-small-transferLearning-NL2BASH_seqTrain This model is a fine-tuned version of [kevinum/t5-small-finetuned-English-to-BASH](https://huggingface.co/kevinum/t5-small-finetuned-English-to-BASH) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6524 - Bleu: 48.0701 - Gen Len: 8.9028 | a9b3e24be8f21b99f8eb2c9258902859 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:| | No log | 1.0 | 36 | 0.6524 | 48.0701 | 8.9028 | | No log | 2.0 | 72 | 0.6524 | 48.0701 | 8.9028 | | No log | 3.0 | 108 | 0.6524 | 48.0701 | 8.9028 | | No log | 4.0 | 144 | 0.6524 | 48.0701 | 8.9028 | | No log | 5.0 | 180 | 0.6524 | 48.0701 | 8.9028 | | 06493fb8b98dc76cb44af5ae2299addb |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7475 - Matthews Correlation: 0.5570 | 797c339d70771e795b16431f2b4831be |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5251 | 1.0 | 535 | 0.5304 | 0.4272 | | 0.3474 | 2.0 | 1070 | 0.4874 | 0.5136 | | 0.2356 | 3.0 | 1605 | 0.6454 | 0.5314 | | 0.1699 | 4.0 | 2140 | 0.7475 | 0.5570 | | 0.1244 | 5.0 | 2675 | 0.8525 | 0.5478 | | de6ab8b11a839ef0fca870a89f4dce0a |
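Matthews correlation (the evaluation metric in the table above) is computed from all four cells of the binary confusion matrix, which makes it informative on imbalanced data such as CoLA. A minimal sketch, using made-up example counts:

```python
import math

def matthews_corrcoef(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews correlation coefficient from binary confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Hypothetical counts for illustration only (not from the card above).
print(matthews_corrcoef(tp=90, tn=40, fp=10, fn=20))  # ≈ 0.59
```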
creativeml-openrail-m | ['text-to-image', 'stable-diffusion'] | false | 1cryenginebeta Dreambooth model trained by abbiepam with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept: | 10c316e9da38546c569dc3d4d54efb38 |
apache-2.0 | ['generated_from_trainer'] | false | openai/whisper-base This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6082 - Wer: 16.5259 | 18eab26ec3074f6688a843646e05a19e |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.2939 | 4.02 | 1000 | 0.3712 | 14.9737 | | 0.1381 | 8.04 | 2000 | 0.4280 | 16.5207 | | 0.0248 | 13.01 | 3000 | 0.5326 | 16.9985 | | 0.0063 | 17.02 | 4000 | 0.5855 | 16.4293 | | 0.0048 | 21.04 | 5000 | 0.6082 | 16.5259 | | 11cb1a2f7c1d093b2d8b5c2c4e2bb91c |
mit | [] | false | XLNet (large-sized model) XLNet model pre-trained on English language. It was introduced in the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Yang et al. and first released in [this repository](https://github.com/zihangdai/xlnet/). Disclaimer: The team releasing XLNet did not write a model card for this model so this model card has been written by the Hugging Face team. | ddc2af75f3b1eadea0740eb1d9fb88db |
mit | [] | false | Model description XLNet is a new unsupervised language representation learning method based on a novel generalized permutation language modeling objective. Additionally, XLNet employs Transformer-XL as the backbone model, exhibiting excellent performance for language tasks involving long context. Overall, XLNet achieves state-of-the-art (SOTA) results on various downstream language tasks including question answering, natural language inference, sentiment analysis, and document ranking. | 61591b5b805670d532a2e7bd948b7c70 |
mit | [] | false | Intended uses & limitations The model is mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?search=xlnet) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2. | 109aef30591c388db9ad8935f2c1a678 |
mit | [] | false | Usage Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import XLNetTokenizer, XLNetModel tokenizer = XLNetTokenizer.from_pretrained('xlnet-large-cased') model = XLNetModel.from_pretrained('xlnet-large-cased') inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` | b957863c3dcc01117665c85db526d280 |
mit | [] | false | BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1906-08237, author = {Zhilin Yang and Zihang Dai and Yiming Yang and Jaime G. Carbonell and Ruslan Salakhutdinov and Quoc V. Le}, title = {XLNet: Generalized Autoregressive Pretraining for Language Understanding}, journal = {CoRR}, volume = {abs/1906.08237}, year = {2019}, url = {http://arxiv.org/abs/1906.08237}, eprinttype = {arXiv}, eprint = {1906.08237}, timestamp = {Mon, 24 Jun 2019 17:28:45 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1906-08237.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` | e5f493808f733676699a503a3ba5c885 |
cc-by-sa-4.0 | [] | false | nlp-waseda/gpt2-xl-japanese This is a Japanese GPT2 model with approximately 1.5B parameters, pretrained on Japanese Wikipedia and CC-100. The model architecture is based on [Radford+ 2019](https://paperswithcode.com/paper/language-models-are-unsupervised-multitask). | 45ddf86bc91069847917392969eccea1 |
cc-by-sa-4.0 | [] | false | Intended uses & limitations You can use the raw model for text generation or fine-tune it to a downstream task. Note that the texts should be segmented into words using [Juman++](https://github.com/ku-nlp/jumanpp) in advance. | 22eac83b38d46bba4478a667cda15282 |
cc-by-sa-4.0 | [] | false | How to use You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python from transformers import pipeline, set_seed generator = pipeline('text-generation', model='nlp-waseda/gpt2-xl-japanese') ``` | 052d53c02127cb427b349ef26e00a119 |
cc-by-sa-4.0 | [] | false | ```python generator = pipeline('text-generation', model='nlp-waseda/gpt2-xl-japanese', device=0) set_seed(42) generator("早稲田 大学 で 自然 言語 処理 を", max_length=30, do_sample=True, pad_token_id=2, num_return_sequences=5) [{'generated_text': '早稲田 大学 で 自然 言語 処理 を 勉強 して いる 大学生 です. 自然 言語 処理 や 音声 認識, 機械 学習 等 に 興味 が あり, 特に 画像'}, {'generated_text': '早稲田 大学 で 自然 言語 処理 を 学んで いる と ある 方 と お 会い して き ました. 今日 は お 話 する 時間 が 少なかった のです が,'}, {'generated_text': '早稲田 大学 で 自然 言語 処理 を 研究 して いる が 、 それ を 趣味 と は 思わず 、 会社 を 作る ため の 手段 と とらえて いる ようです 。'}, {'generated_text': '早稲田 大学 で 自然 言語 処理 を 専門 的に 学ぶ サークル です 。 日本 語 教育 センター で 日本 語 を 勉強 した 中国 の 人 たち と 交流 する'}, {'generated_text': '早稲田 大学 で 自然 言語 処理 を 専攻 した 時 に 、 数学 の 知識 ・ プログラミング 言語 の 知識 が 身 に ついて いた の は 、 とても 役'}] ``` ```python from transformers import AutoTokenizer, GPT2Model tokenizer = AutoTokenizer.from_pretrained('nlp-waseda/gpt2-xl-japanese') model = GPT2Model.from_pretrained('nlp-waseda/gpt2-xl-japanese') text = "早稲田 大学 で 自然 言語 処理 を" encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` | 5a523de93a7e452bd22bf3b6ef0c1afe |
cc-by-sa-4.0 | [] | false | Preprocessing The texts are normalized using [neologdn](https://github.com/ikegami-yukino/neologdn), segmented into words using [Juman++](https://github.com/ku-nlp/jumanpp), and tokenized by [BPE](https://huggingface.co/docs/tokenizers/api/models). | e7dfbc9cf5b383db39eefdd37cb0d9c5 |
cc-by-sa-4.0 | [] | false | Acknowledgments This work was supported by Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures (JHPCN) through General Collaboration Project no. jh221004, "Developing a Platform for Constructing and Sharing of Large-Scale Japanese Language Models". For training models, we used the [mdx](https://mdx.jp/): a platform for the data-driven future. | e8fa13d72039be277a9184269a7bd973 |
mit | ['generated_from_trainer'] | false | CR_XLNet_5E This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6034 - Accuracy: 0.9067 | 1428c6393f86694f83b8a132b0b70e15 |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5384 | 0.33 | 50 | 0.4165 | 0.8533 | | 0.3633 | 0.66 | 100 | 0.3059 | 0.8867 | | 0.2642 | 0.99 | 150 | 0.2582 | 0.9267 | | 0.2626 | 1.32 | 200 | 0.3324 | 0.9 | | 0.1859 | 1.66 | 250 | 0.4076 | 0.9067 | | 0.2631 | 1.99 | 300 | 0.4334 | 0.8867 | | 0.1449 | 2.32 | 350 | 0.4264 | 0.9 | | 0.1815 | 2.65 | 400 | 0.4334 | 0.8933 | | 0.1316 | 2.98 | 450 | 0.4436 | 0.9 | | 0.0725 | 3.31 | 500 | 0.6165 | 0.9 | | 0.0708 | 3.64 | 550 | 0.6737 | 0.8933 | | 0.0821 | 3.97 | 600 | 0.5777 | 0.9067 | | 0.0381 | 4.3 | 650 | 0.6052 | 0.9 | | 0.0441 | 4.64 | 700 | 0.5853 | 0.9133 | | 0.0237 | 4.97 | 750 | 0.6034 | 0.9067 | | 758d2d138e13f942236dcabafe8f6823 |
apache-2.0 | ['generated_from_trainer'] | false | small This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 2.0998 - Rouge1: 33.2675 - Rouge2: 11.0862 - Rougel: 26.1709 - Rougelsum: 26.1668 - Gen Len: 28.0123 | 16d37b8d73bc75da6bb856fe6c7fbf63 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 3.0 | daa3c5306371c8ca1ace6e0677c7e001 |
apache-2.0 | ['generated_from_keras_callback'] | false | syp1229/koelectra-base-v3-generator-finetuned-koidiom-epoch5 This model is a fine-tuned version of [monologg/koelectra-base-v3-generator](https://huggingface.co/monologg/koelectra-base-v3-generator) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.1280 - Validation Loss: 1.8541 - Epoch: 4 | 607cf2464b7580ecf5d2c59b1bb3c2e9 |
apache-2.0 | ['generated_from_keras_callback'] | false | Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.4450 | 2.1108 | 0 | | 2.2462 | 1.9578 | 1 | | 2.1990 | 1.9394 | 2 | | 2.1306 | 1.9433 | 3 | | 2.1280 | 1.8541 | 4 | | 496e7fa75605e616cd42a22502d9b67f |
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | Demo: How to use in ESPnet2 ```bash cd espnet git checkout 060fdb8b231b980c67b88a00fb8dd644aebbb1c0 pip install -e . cd egs2/librispeech_100/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model pyf98/librispeech_100h_conformer ``` <!-- Generated by scripts/utils/show_asr_result.sh --> | 4532ee932720d83571ca2061b9aacd7e |
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | Environments - date: `Mon Feb 7 21:28:00 EST 2022` - python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]` - espnet version: `espnet 0.10.6a1` - pytorch version: `pytorch 1.10.1` - Git hash: `060fdb8b231b980c67b88a00fb8dd644aebbb1c0` - Commit date: `Mon Feb 7 21:26:51 2022 -0500` | ab21c9965f63b068d3381f919e50c7d2 |
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |beam1_ctc0.3/dev_clean|2703|54402|93.6|5.3|1.1|1.5|8.0|58.5| |beam1_ctc0.3/dev_other|2864|50948|83.7|14.3|2.0|3.2|19.5|81.2| |beam1_ctc0.3/test_clean|2620|52576|93.3|5.6|1.1|1.7|8.4|59.4| |beam1_ctc0.3/test_other|2939|52343|83.5|14.4|2.1|2.9|19.4|83.3| |beam20_ctc0.3/dev_clean|2703|54402|94.5|5.1|0.4|0.8|6.3|56.3| |beam20_ctc0.3/dev_other|2864|50948|84.6|13.9|1.5|2.1|17.4|79.9| |beam20_ctc0.3/test_clean|2620|52576|94.3|5.3|0.4|0.8|6.5|57.0| |beam20_ctc0.3/test_other|2939|52343|84.7|13.7|1.6|2.0|17.3|81.6| | 753fa547fadd3aae384b9d5bffc81b73 |
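WER in the table above decomposes errors into substitutions (Sub), deletions (Del), and insertions (Ins); it is a word-level Levenshtein edit distance normalized by reference length. A minimal sketch with made-up sentences (real scoring uses the toolkit's own scripts, with text normalization):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # all deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # all insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on a mat"))  # ≈ 0.1667 (1 substitution / 6 words)
```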
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |beam1_ctc0.3/dev_clean|2703|288456|97.4|1.2|1.4|1.4|4.0|58.5| |beam1_ctc0.3/dev_other|2864|265951|92.5|4.5|3.0|3.2|10.7|81.2| |beam1_ctc0.3/test_clean|2620|281530|97.3|1.2|1.5|1.5|4.2|59.4| |beam1_ctc0.3/test_other|2939|272758|92.6|4.3|3.1|2.9|10.3|83.3| |beam20_ctc0.3/dev_clean|2703|288456|98.2|1.1|0.7|0.7|2.5|56.3| |beam20_ctc0.3/dev_other|2864|265951|93.3|4.2|2.5|2.0|8.7|79.9| |beam20_ctc0.3/test_clean|2620|281530|98.1|1.1|0.8|0.6|2.5|57.0| |beam20_ctc0.3/test_other|2939|272758|93.5|4.0|2.6|1.9|8.4|81.6| | 38f8e31d29b77411be84c9b16d45954d |
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |beam1_ctc0.3/dev_clean|2703|69558|91.0|5.5|3.5|1.4|10.4|58.5| |beam1_ctc0.3/dev_other|2864|64524|80.2|14.7|5.1|4.2|24.0|81.2| |beam1_ctc0.3/test_clean|2620|66983|91.0|5.6|3.4|1.6|10.6|59.4| |beam1_ctc0.3/test_other|2939|66650|80.0|14.4|5.6|3.7|23.7|83.3| |beam20_ctc0.3/dev_clean|2703|69558|91.9|5.0|3.1|0.6|8.7|56.3| |beam20_ctc0.3/dev_other|2864|64524|81.0|13.5|5.5|2.3|21.3|79.9| |beam20_ctc0.3/test_clean|2620|66983|92.0|5.0|3.0|0.6|8.6|57.0| |beam20_ctc0.3/test_other|2939|66650|81.2|13.0|5.8|2.0|20.9|81.6| | 764f911ff992bf4b3439b1b6c33c5e10 |
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | ASR config <details><summary>expand</summary> ``` config: conf/train_asr_conformer_win400_hop160_ctc0.3_lr2e-3_warmup15k_timemask5_amp_no-deterministic.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_conformer_win400_hop160_ctc0.3_lr2e-3_warmup15k_timemask5_amp_no-deterministic ngpu: 1 seed: 2022 num_workers: 4 num_att_plot: 0 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: false collect_stats: false write_collected_feats: false max_epoch: 70 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 4 no_forward_run: false resume: true train_dtype: float32 use_amp: true log_interval: 400 use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 16000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_en_bpe5000_sp/train/speech_shape - exp/asr_stats_raw_en_bpe5000_sp/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_en_bpe5000_sp/valid/speech_shape - exp/asr_stats_raw_en_bpe5000_sp/valid/text_shape.bpe batch_type: numel valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 
num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_clean_100_sp/wav.scp - speech - kaldi_ark - - dump/raw/train_clean_100_sp/text - text - text valid_data_path_and_name_and_type: - - dump/raw/dev/wav.scp - speech - kaldi_ark - - dump/raw/dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.002 weight_decay: 1.0e-06 scheduler: warmuplr scheduler_conf: warmup_steps: 15000 token_list: - <blank> - <unk> - ▁THE - S - ▁AND - ▁OF - ▁TO - ▁A - ▁IN - ED - ▁I - ▁HE - ▁WAS - ▁THAT - ING - ▁IT - '''' - ▁HIS - ▁HAD - ▁WITH - ▁YOU - ▁FOR - T - ▁AS - ▁HER - LY - ▁NOT - ▁BUT - ▁SHE - ▁BE - D - E - ▁IS - ▁AT - ▁ON - ▁HIM - ▁THEY - ▁BY - ▁HAVE - Y - ▁MY - ▁SO - ▁ALL - ▁THIS - ▁WERE - ▁WHICH - ▁ME - ▁FROM - ▁ONE - ▁SAID - ▁WE - N - ER - ▁NO - ▁THERE - ▁WHEN - ▁AN - ▁THEIR - ▁OR - ▁WOULD - ▁WHO - ▁THEM - R - ▁IF - ▁WHAT - ▁ARE - ▁BEEN - ▁OUT - ▁UP - M - ▁WILL - ▁DO - ▁MAN - ▁COULD - C - ▁THEN - ▁INTO - ▁MORE - ▁SOME - ES - P - ▁VERY - ▁NOW - ▁YOUR - ▁LITTLE - ▁TIME - ▁ABOUT - ▁DID - ▁THAN - ▁LIKE - ▁HAS - L - G - AL - IN - ▁UPON - ▁CAN - ▁WELL - ▁OTHER - ▁OVER - US - ▁TWO - ▁ONLY - ▁ANY - ▁OUR - O - EN - RE - ▁MADE - U - ▁AFTER - ▁SEE - ▁S - ▁DOWN - ▁BEFORE - LL - ST - B - ▁OLD - ▁DAY - ▁MISS - ▁GREAT - ▁US - ▁KNOW - OR - ▁SUCH - ▁GOOD - ▁WAY - A - ▁THESE - ▁CAME - ▁UN - ▁SHOULD - ▁HOW - ▁MISTER - ▁GO - ▁MUCH - ▁WHERE - ▁MUST - ▁NEVER - ▁COME - ▁BACK - ION - 'ON' - ▁LONG - F - ▁AGAIN - ▁FIRST - LE - ▁MEN - ▁EVEN - NESS - ▁MIGHT - ▁OWN - ▁MAY - K - ▁HIMSELF - ▁SAY - ▁JUST - ▁THROUGH - ▁RE - ▁AM - ▁ITS - ▁WENT - ▁THOUGHT - ▁ - ▁DE - ▁MAKE - I - ▁HAND - ▁THINK - ▁HOUSE - ▁HERE - IC - H - ATION - ▁LIFE - IT - ▁EYES - ▁MOST - ▁WITHOUT - ▁TOO - ▁THOSE - ABLE - ▁EVERY - ▁DON - ▁MANY - ▁AWAY - ITY - VE - W - ▁STILL - ▁BEING - ▁C - ▁LAST - ▁NIGHT - ▁O - ▁HEAD - AN - ▁FOUND - ▁NOTHING - ▁YOUNG - ▁WHILE - ▁TAKE - ▁GET - ▁PEOPLE - RO - ▁OFF - ▁THOUGH - EST - ▁YET - ▁THREE - TH 
- ▁RIGHT - ▁UNDER - AR - ▁FACE - IES - ▁ROOM - ▁NEW - ▁SAW - RA - V - ▁ASKED - ▁TELL - ERS - ▁SAME - MENT - ▁HEART - LESS - ▁WORK - ▁PLACE - ▁ANOTHER - ▁EVER - ▁LEFT - ▁SHALL - ▁FATHER - ▁PUT - ▁ONCE - ▁TOOK - ▁LET - ▁ALWAYS - ▁SEEMED - ▁PART - IL - UR - ▁WHY - ▁TOLD - ▁GIVE - ▁LOVE - CE - ▁MIND - ▁LOOKED - ▁HEARD - ▁SOON - ▁LOOK - ▁MOTHER - ▁FAR - IVE - ▁BECAUSE - ▁HOME - OUS - ▁T - EL - ▁D - ▁SOMETHING - ▁SIDE - ▁KING - IS - ATE - ▁MOMENT - ENT - RY - ▁THINGS - ▁ST - ▁LIGHT - ▁FIND - ▁GOING - ▁THING - ▁WORLD - IR - AT - ▁WATER - ▁END - ▁DOOR - ISH - ▁KNEW - ▁WOMAN - ▁SIR - ▁EACH - RI - ▁HAVING - ▁AGAINST - ▁FEW - ▁E - ▁BEGAN - ▁BETTER - ▁YES - ▁NAME - ▁ENOUGH - ET - ▁HARD - ▁VOICE - ▁YEARS - ▁GOT - ▁WHOLE - ▁WHITE - ▁WANT - ▁GIRL - ▁DONE - ▁SEEN - ▁HUNDRED - ▁CALLED - ▁BETWEEN - ▁MORNING - FUL - AS - ▁FELT - TER - ▁KIND - X - CH - ▁HERSELF - ANT - ▁TOWARD - ▁HALF - ▁OH - ▁AMONG - ▁HOWEVER - ▁TURNED - ▁ALSO - ▁BOTH - ▁POOR - ▁PERHAPS - ▁REPLIED - ▁COURSE - UL - ▁QUITE - ▁REST - ▁DOES - ▁MYSELF - NG - LO - ANCE - ▁MA - ▁SET - ▁SMALL - ▁B - ▁SURE - ▁F - ▁GAVE - ▁PRESENT - ▁HIGH - ▁ALMO - ▁R - CK - ▁WHOM - ▁NEAR - ▁CARE - ▁WAR - ▁GOD - ▁TOGETHER - ▁SAT - ▁SHOW - TE - NE - ▁BEST - ▁UNTIL - ▁OPEN - ▁W - ▁FOUR - ▁DEAR - ▁HANDS - ▁WORDS - ▁SINCE - ▁LAND - ▁DIS - MAN - ▁ANYTHING - ▁FEET - ▁NEXT - ▁GENERAL - LING - ▁LAY - ▁NOR - ▁STOOD - ▁BLACK - ▁POWER - ▁BROUGHT - Z - IE - ▁ROUND - ▁BELIEVE - ▁LARGE - ▁ALONG - ▁HELP - ▁DAYS - ▁FIVE - ▁K - ▁HOPE - AM - ▁CO - ▁KEEP - ▁FULL - ▁WALK - ▁MASTER - ATED - ▁NATURE - ▁JOHN - ▁POINT - ▁DUR - ▁MATTER - ▁MONEY - ▁CHILD - ▁LOOKING - ▁RATHER - ▁AIR - IA - ▁P - ▁TWENTY - ▁FIRE - OL - ▁LESS - ▁SHORT - ▁PASSED - ▁INDEED - TY - ▁CASE - ▁WORD - ▁WISH - ▁COUNTRY - LED - ID - ▁BOY - ▁SOUND - ▁FORM - ▁CRIED - LA - ▁FRIEND - TON - ▁FACT - ▁UNCLE - ▁TAKEN - ▁AL - ▁TEN - IAN - ▁GONE - ▁SEA - ▁REASON - TING - ▁WHOSE - ▁OTHERS - AC - ▁LI - ▁DEATH - ▁CERTAIN - ▁ANSWERED - ▁THEMSELVES - ▁LADY - ▁STATE - ▁CAR - ▁WIFE - ▁THOUSAND - ▁TRUE - ▁BEHIND - 
AGE - ▁DOCTOR - ▁FEAR - ▁OFTEN - OM - ▁TILL - ▁HA - IOUS - ▁AROUND - IST - ▁SENT - ▁SPEAK - ▁WOMEN - ▁GROUND - VER - ENCE - NA - ▁TALK - ▁CHILDREN - TION - CO - MO - ▁HEAR - ▁ORDER - ▁LEAVE - ▁PRO - ▁ALREADY - ▁LA - ▁FINE - SE - ▁BA - PP - ▁THUS - AD - ▁NEED - ▁SIGHT - ▁CALL - ▁FELL - ▁MANNER - MP - ▁BECAME - UM - ▁WATCH - OW - ▁FOOT - ▁CANNOT - ▁BODY - ▁TOWN - ▁LIVE - INE - ▁RETURNED - ▁WONDER - MA - ▁G - UT - ▁CLOSE - UN - IM - ▁ALONE - ▁DIDN - ▁LORD - ▁RED - ARY - ▁GIVEN - ▁SIX - ▁EVERYTHING - ▁DARK - ▁DEAD - ▁STRONG - ▁SON - ▁COMING - URE - ▁HELD - ▁ABOVE - ▁REALLY - ▁BEAUTIFUL - ▁SECOND - ARD - ▁EVENING - ▁CON - ▁HOUR - ▁FELLOW - ▁ROSE - ▁PERSON - ▁EX - ▁CH - ▁FORCE - ▁MO - ▁ARM - ▁CAUSE - ▁TURN - ▁CITY - ▁DOUBT - ▁QUESTION - TIC - ▁DEEP - ▁HAIR - ICAL - ▁MEAN - ▁DI - ▁CLEAR - ▁SOMETIMES - ▁STRANGE - ▁FEEL - ▁HO - ▁IMP - WARD - AUGHT - ▁CAPTAIN - ▁USE - ▁UNDERSTAND - ▁KEPT - ▁BR - ▁WOOD - ▁PRE - ▁YEAR - ▁TI - ▁LEAST - ▁BED - ▁SA - ▁TABLE - ▁BECOME - ▁FREE - ▁FAMILY - ME - ▁EYE - ▁WHETHER - ▁MAKING - ▁WITHIN - ▁SORT - ▁ANSWER - ▁PO - ▁SAYS - ▁EARTH - ▁RETURN - ▁SUDDENLY - ▁FRIENDS - ▁GREEN - ▁SUN - ▁FAIR - ▁TH - ▁FALL - ▁EITHER - ▁BO - ▁PRINCE - ▁THOU - ▁ITSELF - ▁CHURCH - ▁BIG - ▁ABLE - ▁DIFFERENT - ▁SEVERAL - ▁DAUGHTER - ▁WON - ▁WIND - ▁BAD - ▁LOST - ▁READ - ▁STORY - ▁APPEARED - DE - ▁NUMBER - ▁SP - ▁LOW - ▁ROAD - ▁POSSIBLE - ▁HUMAN - ▁RIVER - ▁STREET - ▁GA - ▁COLD - ▁MET - ▁ACT - ▁BROTHER - ▁AGE - ▁KNOWN - ▁CONTINUED - ▁BRING - ▁ILL - ▁RUN - ▁LAW - ▁SUBJECT - ▁CUT - J - PER - ▁PA - ▁TROUBLE - ▁GLAD - HE - ▁SLEEP - MEN - ▁LATE - ▁MEANS - ▁ASK - ▁REACHED - ▁RAN - AK - ▁HORSE - ▁USED - WAY - OP - ▁WINDOW - ▁SNOW - ▁PAST - ▁OBJECT - ▁THEREFORE - IONS - ▁TREE - ▁COMP - ▁BLUE - CA - ▁VI - ▁SIGN - ▁EIGHTEEN - ▁GARDEN - ▁BUSINESS - ▁PETER - ▁FOLLOWED - ▁SEEM - ▁HOLD - ▁HAPPY - ▁LONGER - ▁ACROSS - ▁BU - BE - ▁ELSE - ▁PLAY - ▁SOUL - ▁STAND - ▁ARMS - ▁SCHOOL - ▁PRINCESS - ▁CERTAINLY - LT - ▁ENGLISH - ▁SEVEN - ▁PER - ▁IDEA - ▁LE - ▁BOOK - ▁FEELING - ▁HUSBAND - ▁LINE - 
PT - THOUGH - ▁OUGHT - ▁RICH - IP - ▁VIEW - ▁DREAM - ▁SENSE - ▁LO - ▁READY - ▁CARRIED - ▁M - ▁REGARD - ▁CHANCE - ▁WANTED - ▁LIVED - ▁LATER - ▁INTEREST - ▁EN - ▁EFFECT - ▁CLA - ▁CHANGE - ▁CA - ▁REAL - ▁SUPPOSE - LES - ▁ART - ▁TIMES - ▁MAR - IF - ▁WILD - ▁ADDED - ▁LETTER - IAL - ▁THANK - ▁PARTY - LAND - ▁PAY - ▁BREATH - ▁TAKING - ▁COURT - ▁COUNT - ILY - ▁COMMON - ▁PUBLIC - ▁PURPOSE - ▁PRETTY - ▁TRUTH - ▁STAY - ▁EM - NT - ▁SH - ▁REMEMBER - ▁ENTERED - ▁RECEIVED - RED - ▁SPOKE - ▁USUAL - ▁THY - ▁FIGURE - ▁LED - ▁TREES - ▁TRIED - ▁FORWARD - NED - ▁HAT - ▁BLOOD - ▁BEYOND - ▁BANK - ▁LIVING - ▁JOY - ▁HOURS - ▁ENGLAND - ▁STONE - VI - GE - ▁SWEET - ▁POSITION - ▁FRONT - ▁GIRLS - ▁VISIT - ▁CHARACTER - ▁SPIRIT - ▁TA - BO - QUE - QUI - ▁OPENED - ▁OCCASION - ▁MEET - ▁EIGHT - ▁REMAIN - ▁PASS - TO - ▁NORTH - ▁SERVICE - ▁SISTER - ▁SE - ▁BEAR - ▁PLEASURE - ▁CHIEF - ▁FOREST - ▁BELL - ▁EXPERIENCE - ▁STRUCK - ▁CARRY - ORY - ▁WARM - 'NO' - ▁WORTH - ▁SAYING - ▁SILENCE - ▁CROSS - ▁JE - ▁H - ▁BEAUTY - PH - ▁DEAL - KE - ▁SECRET - DY - ▁MILES - ▁LU - ▁DOING - ▁BOYS - ▁CROWD - ▁ACCOUNT - REW - ISM - TI - ▁FE - ▁NONE - ▁RO - ▁NEARLY - ▁CHA - ▁YOUTH - ▁CAP - HA - ▁BIT - ▁LIE - ▁ATTENTION - ▁STANDING - ▁STAR - ▁RESPECT - ▁FURTHER - ATIONS - ▁ROCK - ▁BOW - EM - ▁EARLY - ▁MOUTH - ▁BOAT - UB - ▁IMMEDIATELY - ▁EXCEPT - SHIP - ▁PICTURE - ▁BRIGHT - ▁WA - ▁GREW - ▁LEAD - ▁CUR - ▁TONE - RRY - RS - ▁WIDE - CHE - ▁FORTH - IG - OS - ▁NEITHER - ▁YOURSELF - ▁SMILE - ▁DRESS - ▁OPINION - ▁HAPPENED - ▁WAIT - ▁SIT - ▁SHIP - ▁AH - ▁DESIRE - ▁THICK - ▁THIRD - ▁GRAND - ▁FOLLOW - ▁GATHER - ▁HILL - ALLY - ▁COMPANY - ▁CHAIR - DER - ▁TOP - ▁PAR - ▁LENGTH - ▁THIRTY - ▁MINE - ▁MI - ▁EAT - ▁EQUAL - ▁AFRAID - ▁FRESH - ▁TAIL - ▁FILLED - ▁SU - ▁MINUTES - ▁FAST - BU - ▁ENTER - ▁QUEEN - ▁UTTER - AG - ▁FLOOR - ▁SHA - DI - ▁HEAVEN - ▁STOPPED - ▁GUARD - ▁HALL - ▁BAR - ▁COMPLETE - ▁NINE - ▁WEEK - ▁GOLD - VA - ▁FIFTY - ▁BEAT - ▁PRESS - ▁ATTEMPT - ▁EXCLAIMED - DO - ▁CONF - ▁SEEMS - ▁STARTED - ▁EL - ▁HAR - ▁EXPRESSION - ▁TRA - 
▁WONDERFUL - ▁SAINT - ▁APPEARANCE - ▁GRAVE - ▁OFFICE - ▁INSTEAD - ▁SILENT - ▁SOUTH - ▁AGO - ▁CAMP - ▁LOVED - ▁PATH - ▁LEARN - ▁PLAN - ▁GOVERNMENT - OUR - PPED - ▁SITTING - ▁SEAT - TEN - RESS - SIDE - ▁MOVED - ▁DIE - ▁RESULT - ▁SPRING - ▁PLEASE - ▁RI - ▁NATURAL - ▁ANNE - ▁STA - ▁CORNER - ▁WALL - ▁IMPOSSIBLE - ▁BROWN - ▁SUIT - ▁MUSIC - PI - ▁TRY - ▁DIED - ▁TEARS - ▁JU - ▁COMFORT - ▁DANGER - ▁MEASURE - ▁PROPERTY - ▁BORN - CON - ▁CR - ▁BROKEN - ▁MASS - EVER - IER - ▁EXPRESS - ▁POCKET - ▁SCARCE - ▁SELF - NY - ▁MADAME - ▁LAUGHED - ▁TOUCH - ▁APPEAR - ▁LONDON - ▁SAFE - ▁SHARP - ▁ATTACK - ▁JANE - ▁COVERED - ▁OUTSIDE - ▁WHATEVER - ▁PLACED - ▁RACE - ▁SHORE - ▁LAID - ▁ROMAN - ▁PERSONAL - UP - AU - ▁REMAINED - ▁HAPPINESS - ▁AFTERNOON - ▁DISTANCE - ▁STORM - ▁MARRIED - ▁FRANK - ▁VALLEY - ▁BOUND - ▁TALKING - ▁JO - ▁QUICK - ▁STEP - AND - ▁ARMY - ▁EFFORT - ▁FRENCH - ▁V - LEY - ▁PARTICULAR - ▁START - ATING - OO - LU - ▁TRANS - ▁HAPPEN - ▁HABIT - ▁VILLAGE - ▁BELOW - ▁GENTLEMAN - BLE - ▁BILL - ▁SAVE - ACT - ▁SOCIETY - ▁MAJOR - ▁QUARTER - ▁SKY - ▁GUESS - CY - ▁SAD - ILE - ▁SL - ▁PLEASANT - ▁STRAIGHT - ▁STRENGTH - ▁FORTUNE - ▁WRONG - ▁COMMAND - ▁BOX - ▁QUIET - ISE - ▁JA - IBLE - ▁TREAT - ▁GLANCE - ▁NECESSARY - ▁FORGET - ▁MOUNTAIN - ▁WINTER - ▁DREW - ▁WAV - ▁PLAIN - ▁ENTIRELY - ▁TEA - ▁SOFT - ▁QUICKLY - ▁INFLUENCE - ▁DINNER - ▁FOOD - ▁CHAPTER - ▁YE - ▁REACH - ▁GETT - ▁PAPER - ▁GIVING - ▁BEGINNING - ▁SEND - ▁FIGHT - ▁SCENE - ▁RUSH - ▁PI - ▁MARK - ▁NA - ▁BROKE - ▁CLASS - ▁BATTLE - ▁EASY - ▁GROUP - BY - ▁STOP - ▁DIRECTION - ▁BESIDE - ▁MOR - HAM - UFF - ▁WEST - ▁OBLIG - ▁COLOR - ▁SINGLE - ▁EASILY - ▁PALE - ▁ACTION - ▁INTER - ▁STRANGER - ▁WI - ▁CONVERSATION - ▁BLOW - ▁MARY - ▁MU - ▁TERRIBLE - ▁THINKING - ▁PULL - ▁MOON - AB - ▁REP - ▁ESPECIALLY - ▁HEAVY - ▁SICK - ▁LUCK - ▁TRAIN - ▁GUN - ▁GU - ▁WAITING - ▁TURNING - ITIES - ▁BREAD - ▁BELONG - ▁LOUD - ▁REPORT - ▁AMERICAN - ▁JOURNEY - ▁ANXIOUS - ▁LIPS - ▁KILLED - IGHT - GO - ▁CONSIDER - ▁PROBABLY - ▁PALACE - ▁HISTORY - ▁LAKE - ▁SHUT - ▁SIMPLY - WA 
- ▁PAIN - ▁HORSES - ▁SEEING - FULLY - ▁EXPECTED - ▁EVIL - ▁BURN - ▁SIMPLE - ▁DIRECT - IFIED - HER - ▁SLOWLY - ▁LEG - UGH - ▁SAIL - RIC - ▁WISHED - ▁RULE - ▁LAD - ▁MORAL - ▁MOVE - ▁FOLLOWING - ▁SILVER - ▁SEARCH - ▁CHANGED - ▁HANDSOME - ▁COULDN - ▁PASSION - ▁HU - ▁SMILED - ▁STREAM - ▁CONCERN - ▁PRESENCE - STER - ▁CONTENT - ▁BOARD - ▁SHAPE - ▁DECIDED - ▁MARRY - ▁PERFECT - ▁STEPS - ▁CLOSED - ABLY - DEN - ▁WEAK - ▁SUFFICIENT - ▁SHADOW - ▁EXPECT - ▁SPOT - ▁DUTY - ▁SPEAKING - ▁BESIDES - ▁FIELD - ▁ROLL - ▁TRYING - ▁EAR - ▁VER - ▁MARRIAGE - ▁SHOT - ▁SLAVE - ▁MILL - ▁NATION - ▁NECK - ▁ARRIVED - ▁TALL - ▁GRACE - LIN - ▁FORTY - ▁BROAD - ▁SUMMER - ▁COUSIN - ▁BEGIN - ▁CATCH - ▁FO - ▁PE - ▁MEANT - ▁THIN - IO - ▁GROW - ▁TRO - ▁NOTICE - ▁CRY - ▁FISH - ▁COM - ▁DEGREE - ▁HONOUR - ▁UNDERSTOOD - ▁SHOP - ▁TRUST - ▁CONDITION - ▁FARM - IZ - ▁SUDDEN - ▁SUCCESS - ▁SURPRISE - ORS - ▁THOUGHTS - UND - ▁ALLOWED - ITE - ▁NARROW - ▁GLASS - ▁SERIOUS - ▁STICK - ▁GAME - ▁SPENT - ▁SELL - ▁GRA - ▁LOWER - ▁RAISED - ▁PIN - ▁ALLOW - ▁CALM - FT - ▁L - ▁PU - ▁FIT - ACH - ▁SUFFER - ▁LEGS - ▁SUPPORT - ▁FRANCE - ▁LATTER - OV - ▁TASTE - ▁GATE - ▁INSTANT - ▁MINUTE - ▁OFFER - ▁GREATER - ▁PORT - ILL - ▁INDIVIDUAL - ▁AUNT - ▁EAST - ▁ADVANTAGE - ▁FASHION - ▁SWORD - ▁TWELVE - ▁HONOR - ▁MOVEMENT - ▁ISLAND - ACK - ▁WOODS - NCH - ▁PLEASED - ▁ENEMY - ▁RAIN - ▁VARIOUS - ▁OBSERVED - ▁LADIES - ▁BELIEVED - ▁CAST - ▁RISE - ▁BALL - ▁MONTHS - ICE - ▁MURDER - ▁CONDUCT - ▁SOCIAL - ▁TENDER - ▁LEARNED - ▁FRA - ▁FIRM - CLOCK - ▁PREVENT - ▁RING - LIE - ▁GOLDEN - ▁DECLARED - ▁BUILDING - ▁WRITE - ▁ATTEND - ▁CARRIAGE - ▁SITUATION - IDE - ▁NOBLE - ▁HUNG - ▁RUNN - ▁YELLOW - ▁KNOWLEDGE - ▁YORK - ▁PUSH - ▁LEAVING - ▁POST - ▁CIRCUMSTANCES - ▁SEEK - ▁FINALLY - ▁MAIN - ▁LETTERS - ▁POL - ▁ADD - FE - ▁ANCIENT - ▁MARCH - ▁WINE - ▁STATES - ▁WALLS - ▁PRISONER - ▁ISABEL - ▁TEMPER - ▁JUDGE - ▁FAINT - ▁POND - ▁GRASS - ▁FAM - OUT - ▁LAUGH - ▁GRAY - IGN - ▁ESCAPE - ▁KILL - ▁PRAY - ▁COMES - ▁ABSOLUTE - ▁BLIND - ▁WIN - ▁HOST - ▁MERELY - ▁RID - ▁EVERYBODY 
- ▁MATERIAL - ▁STRETCH - ▁DUE - ▁ROW - ▁TIN - ▁PROMISE - ▁LISTEN - ▁WALKING - ▁COMPANION - ▁INDIAN - ▁BREAK - ▁BENEATH - ▁RUIN - ▁EDGE - ▁WOR - ▁FORMER - ▁WORSE - ▁EVIDENTLY - ▁HARM - ▁CENT - ▁PIECE - ▁LOT - ▁PRESIDENT - ▁SPECIAL - ▁LABOR - ▁HEALTH - GA - ▁PLACES - ▁BEN - ▁SOMEWHAT - ▁DROPPED - ▁AFFECTION - ▁EXACTLY - ▁DARKNESS - ▁FALLEN - ▁DRESSED - ▁BILLY - ▁ACCEPT - ▁FL - ▁HOT - ▁REPEATED - ▁MEETING - PA - ▁PERIOD - ▁HONEST - ▁INSTANCE - ▁FLA - ▁PASSAGE - ▁NE - ▁POSSESSION - ▁WEAR - ▁PEACE - ▁COAT - ▁HOUSES - ▁MOUNTAINS - ▁FIFTEEN - ▁WELCOME - ▁YARD - ▁PROPER - ▁MUS - ADE - ▁RECEIVE - ▁SKIN - ▁GROWN - ▁AFTERWARDS - ANG - ▁DA - ▁DIFFICULT - ▁PERSONS - ▁ACCORDING - ▁FARMER - ▁SPEECH - ▁IMPORTANT - PAR - ▁PERFECTLY - ▁MIN - ▁CONSIDERED - ▁NU - ▁DEPEND - ▁MORROW - ▁MOUNT - ▁KISS - ▁LYING - ▁SUFFERING - ▁EXIST - ERY - OOK - BA - ▁PAINT - AH - ▁CAT - ▁PURE - ▁WISE - ▁PRIVATE - ▁REBECCA - ▁VESSEL - ▁CLEAN - ▁GENTLEMEN - ▁IRON - ▁STORE - ▁FUR - ▁INDIANS - ▁LOSE - ▁BATH - ▁NEWS - ▁CHI - ▁FA - ▁CHARGE - ▁PRIEST - ▁WRITTEN - ▁FORGOTTEN - ▁TRAIL - ▁CLOTHES - ▁ALIVE - ▁SUB - ▁REPLY - ▁THROW - ▁AB - ▁SOLDIERS - ▁ISN - ▁COTTAGE - ▁COURAGE - ▁CONTAIN - ▁BUILT - ▁PAID - ▁HUNT - ▁CASTLE - HOOK - ▁MERE - GGED - ▁NI - ▁UNC - ▁PREPARED - ▁BARE - ▁SMILING - ▁SPREAD - ▁WEATHER - ▁EDWARD - ▁GERMAN - ▁CURIOUS - ▁SERVANT - ▁DISCOVERED - ▁TRAVEL - EY - ▁DANCE - ▁PEN - BR - GEN - ▁BREAKFAST - ▁CHAMBER - ▁WILLIAM - ▁TERROR - ▁SPITE - ▁TIRED - ▁LOCK - ▁CONSIDERABLE - TLE - ▁MANAG - ▁DRY - ▁FINISHED - ▁MILLION - ▁FRE - ▁MIS - ▁PASSING - ▁DRAW - ▁BON - ▁VA - ▁VEN - ▁MAKES - ▁VAIN - ▁BOTTOM - ▁DRINK - ▁FUTURE - ▁RACHEL - ▁SORROW - ▁SIXTEEN - ▁KNIT - ▁PROUD - WI - ▁TOBY - ▁NOISE - ▁SLIGHT - ▁PROCEED - ▁FER - ▁COVER - ▁DRAWING - ▁FAVOR - ▁CATHERINE - ▁NEWSPAPER - ▁NOBODY - ▁ROOF - ▁WEALTH - ▁PROVE - ▁DRAWN - TTED - OKE - ▁DETERMINED - ▁DOG - ▁REMEMBERED - ▁OPENING - ▁FLOWERS - ▁GENTLE - ▁KNIGHT - ▁RECOVER - ▁DESERT - ▁MOTION - ▁NICE - ▁INTENTION - ▁GROWING - ▁CLOUD - ▁MONTH - HOOD - ▁POT - UDE - 
▁PLANT - ▁MAD - ▁ENJOY - ▁FAT - ▁COR - ▁KNOWING - ▁IDEAS - IZED - ▁CHEEK - ▁EUROPE - ▁KNOCK - ▁ALARM - ▁TONGUE - ▁SPACE - ▁PATSY - ▁MISTRESS - ▁HENRY - ▁JERRY - ▁LIKED - ▁PLAYED - ▁BOOKS - ▁MODER - ▁CORN - ▁ELIZABETH - ▁CLUB - ▁BRAIN - ▁TROOP - ▁COOK - ▁DU - ▁FUN - DAY - ▁QUA - ▁FLOW - ▁DARE - ▁DELIGHT - ▁WOUND - ▁DESCEND - ▁EVERYWHERE - ▁FRIGHTENED - ▁GEORGE - ▁PECULIAR - ▁MACHINE - ▁PATIENT - ▁MEADOW - ▁PEASANT - ▁BURST - ▁ORDINAR - ▁SONG - ▁BRAVE - ▁EXISTENCE - ▁LUCY - ▁J - ▁CAREFULLY - ▁PRESENTLY - ▁GEN - ▁COW - LLY - ▁PROMISED - UOUS - ▁LIFTED - ▁MEANING - ALL - ▁FAIL - NER - ▁REGULAR - ▁VIRTUE - ▁STUDY - ▁PROTECT - ▁FOND - ▁FANCY - ▁STOCK - ▁KEY - ▁JUSTICE - ▁PACK - LET - ▁AFFAIRS - ▁DIFFICULTY - ▁WORE - ▁COST - ▁HEAT - ▁SHOULDER - ▁OFFERED - ▁MISTAKE - ▁DOLLARS - ▁LOOKS - QUA - ▁BREAST - ▁PRINCIPLE - ▁CHARLES - ▁TEETH - ▁OCCUPIED - ▁DROP - ▁PAPA - ▁SHEEP - ▁KNOWS - ▁DECK - ▁BORE - ▁EXC - ▁SURPRISED - ▁STATION - ▁PL - ▁PR - ▁OURSELVES - ▁SYMPATHY - ▁RUTH - ▁EXCITED - ▁CONTROL - ▁ANGRY - ▁IMAGINATION - ▁WITNESS - ▁HOLDING - THER - DA - ▁TRADE - ▁CREATURE - ▁SISTERS - ▁JOIN - LAS - ▁ALTOGETHER - ▁CIVIL - ▁EMPTY - ▁LEAP - ▁HURT - ▁BOLD - ▁TASK - ▁POLICE - ▁DRAGON - ▁MAID - ▁CLAIM - ▁SHAME - ▁PHYSICAL - ▁CONC - ▁SEIZED - ▁OB - ▁LIVES - ▁HEIGHT - ▁GI - ▁PAL - ▁CHARMING - ▁FEELINGS - ▁SERVANTS - ▁DELIVER - ▁FRUIT - ▁SATISFIED - ▁STRUGGLE - ▁WROTE - ▁CONCEAL - ▁MOVING - ▁FLASH - ▁OPPOSITE - ▁HURRY - ▁ROUGH - ▁PRICE - ▁AWFUL - ▁SAND - ▁SLIPP - ▁SHOWN - ▁SPRA - ▁AGREED - ▁FIXED - ▁PERCEIVED - ▁UPPER - ▁FINGER - ▁FINGERS - ▁EAGER - LF - ▁EARS - LIGHT - ▁IMAGINE - ▁LIKELY - ▁COAST - ▁UNITED - ▁VAN - ▁EXPLAINED - ▁TELLING - ▁DANGEROUS - ▁DICK - ▁COOL - ▁CAL - ▁INSIST - BI - ▁SECURE - ▁HILLS - ▁SAN - ▁CHEER - ▁FILL - ▁BUY - ZA - HI - ▁CLOTH - ▁POSSESSED - ▁ADVANCE - ▁METHOD - ATIVE - ▁GREATLY - ▁SMOKE - ▁HIGHER - ▁COMPANIONS - ▁ANIMALS - ▁GALL - ▁QUIETLY - ▁TRAVELL - ▁RESOLVED - ▁FLEW - ▁CARLYLE - ▁MEMORY - ▁RESIST - ▁GRAHAM - ▁LAUGHING - ▁FAITH - ▁BIRD - CRI - ▁LEAVES - 
▁AMERICA - ▁DEMAND - BOARD - ▁AWAKE - ▁CURIOSITY - ▁LANGUAGE - ▁VIOLENT - ▁AWARE - ▁DOUBLE - ▁LOOSE - LIKE - ▁ADAM - ▁RISING - ▁HOTEL - ▁BAND - ▁ENGAGED - ▁HEADS - ▁LOG - ▁FORMED - ▁WINDOWS - ▁PREFER - RUS - ▁THROWN - ▁ARCH - ▁PAUSE - ▁SERVE - KIN - ▁FALLING - ▁VO - ▁WHISPERED - ▁POWERFUL - ▁ER - ▁DEPART - ▁CRUEL - ▁EXAMPLE - ▁SMOOTH - ▁INTRODUC - ▁RELIGION - ▁SEVENTEEN - ▁ABSENCE - ▁PRINT - ▁SHINING - ▁ICE - ▁POET - ▁DREADFUL - ▁REQUIRED - ▁ORIGINAL - ▁POINTED - ▁INSIDE - ▁BROTHERS - ▁PRODUCED - ▁SPOKEN - ▁CREATURES - ▁FLY - ▁TOM - ▁PURSU - ▁SYSTEM - ▁EXCELLENT - ▁EXCITEMENT - ▁MIDDLE - ▁FALSE - ▁REGRET - ▁RAY - ▁PHYSICIAN - ▁COP - ▁VALUE - ▁TOUCHED - ▁FLAT - ▁OAK - ▁SUM - ▁LOSS - ▁PAPERS - ▁STEPP - ▁REVER - ▁SHADE - SOME - ▁LISTENED - ▁N - ▁DISCOVER - ▁BITTER - TERN - ▁HOLE - ▁ADVANCED - ▁PICK - ARTAGNAN - ▁CORPORAL - ▁ASLEEP - ▁TEMPLE - ▁INDICAT - IUM - ▁FARTHER - ▁EXCUSE - ▁FLU - ▁NOSE - ▁SIXTY - ▁SUPPOSED - ▁PROVED - ▁RATE - ▁SHOULDERS - ▁AFFAIR - ▁FIELDS - ▁REMARKED - AVE - ▁WEEKS - ▁ESTABLISH - ▁PARIS - ▁ADMIT - ▁NEIGHBOR - ▁ATTRACT - ▁CUSTOM - ▁DISTINGUISH - ▁SURFACE - ▁COUPLE - ▁DEVIL - ▁LIMIT - ▁ROYAL - ▁FOOL - ▁RARE - ▁PRIDE - ▁PROFESSOR - ▁SAKE - ▁DALE - ▁VAST - ▁REFUSED - ▁FAILED - ▁BAG - ▁ROB - ▁WASH - ▁FAIRY - ▁FREQUENT - ▁MARILLA - ▁PROGRESS - ▁RELIEF - ▁DROVE - ▁DOZEN - ▁AHEAD - ▁ADVENTURE - ▁GRANT - ▁PRIM - ▁MENTAL - ▁PAIR - ▁IMPRESSION - ▁WOUNDED - ▁FULLY - ▁DISAPPEARED - ▁MILE - ▁DRIVE - ▁MUD - ▁SIZE - ▁ANIMAL - ZE - ▁GRE - ▁REPRESENT - ▁ACQUAINTANCE - ▁INSTRUMENT - ▁SPLENDID - ▁UNKNOWN - ▁CORONEL - ▁EMPEROR - ▁EARNEST - ▁EXTEND - ▁BRIEF - ▁RENDER - ▁PARENTS - ▁GENTLY - ▁CALLING - ▁TRIBE - ▁CHRISTIAN - ▁INTERESTING - ▁LAMP - ▁JIMM - ▁DIV - ▁LOVER - UCH - ▁HID - ▁NEEDED - ▁ORDERED - ▁MEAL - ▁SLOW - ▁DAM - ▁CLOUDS - ▁DAN - ▁GAR - ▁EXPLAIN - ▁QUI - ▁CLIMB - ▁HURRIED - ▁MURMUR - ▁SWIFT - ▁ARTHUR - ▁JEFF - ▁KINGDOM - ▁MESSAGE - ▁PROTEST - ▁ORGAN - ▁RISK - ▁FORGIVE - ▁OCCURRED - ▁PEARL - ▁ODD - ▁INFORMATION - ▁BUSY - ▁TRI - ▁LACK - ▁BAY - ▁FLEET - 
▁CROWN - ▁WAITED - ▁BIRDS - ▁PITY - ▁SUCCEEDED - ▁INFORMED - ▁WISHES - ▁DIRECTLY - ▁CABIN - ▁AUGUST - ▁COUNTENANCE - ▁HORROR - ▁PHILIP - ▁POPULAR - ▁PREVIOUS - ▁CONTRARY - ▁ARTICLE - ▁DIFFERENCE - ▁HIDDEN - ▁HUGE - ▁AUTHORITY - ▁POUND - ▁JUMP - ▁SPI - ▁SHAKE - ▁EVENTS - ▁FRO - ▁LEAN - ▁CRO - ▁TRIM - ▁SHARE - ▁FISHER - ▁SETTLED - ▁QUESTIONS - ▁SI - ▁VAL - ▁APPROACHED - ▁SUGGESTED - ▁CONTINU - ▁PERFORM - ▁ACKNOWLEDG - ▁CLIFF - ▁COLONEL - ▁GHOST - ▁MAJESTY - ▁EMOTION - ▁SUPPER - ▁DISTANT - ▁INTERESTED - ▁JACK - ▁HUM - ▁TRAMP - ▁BRI - ▁POUR - ▁SHIPS - ▁CHAIN - ▁DY - ▁RANK - ▁MATTERS - ▁LOVELY - AW - ▁PAT - ▁WORKING - ▁CONSEIL - ▁EVIDENCE - ▁MERCHANT - ▁SOLEMN - ▁CONSTANT - ▁MINISTER - ▁OFFICIAL - ▁SENTIMENT - ▁CENTURY - ▁DELAY - ▁JAMES - ▁MATCH - ▁FOREIGN - ▁AROSE - ▁BEAST - ▁BAB - ▁WIT - ▁REMARKABLE - ▁THOR - ▁COMPAR - ▁MAL - ▁NEARER - ▁FOURTH - ▁GREY - ▁MENTION - ▁RUBB - ▁CHARM - ▁BARON - ▁DESIRED - SCAR - ▁HOPED - ▁TEACHER - ▁MON - ITCH - BEL - ▁PARTS - ▁EIGHTY - LAC - GGING - ▁REFLECT - ▁COLLECT - ▁BULL - ▁CONSCIOUS - ▁MOMENTS - ▁DISTURB - ▁COLLEGE - ▁EGGS - ▁STUPID - ▁YESTERDAY - ▁EXAMINE - ▁FAULT - ▁DEPTH - ▁ROOT - ▁MOUSE - ▁SOUGHT - ▁TURTLE - ▁NATIVE - ▁CRACK - ▁SOLD - ▁INVIT - ▁PICKED - ▁CEASED - ▁HEARING - ▁MIDS - ▁PLAYING - ▁STAGE - ▁UNTO - ▁GAIN - ▁MIST - ▁ORDERS - ▁KNEES - ▁TALE - ▁DISTINCT - ▁BENT - ▁DESPAIR - ▁TRIUMPH - ▁SQUARE - ▁THROAT - ▁BOUGHT - ▁PERMIT - ▁SPEND - ▁TRIP - ▁THREATEN - ▁ROME - INESS - ▁EXPOS - GON - ▁WRITING - ▁INCREASED - ▁PORTION - ▁TENT - IUS - ▁YO - ▁INTENDED - ▁NAMED - RATION - ▁NOTIC - ▁PIPE - ▁WILLING - ▁INSTANTLY - ▁SERVED - ▁BAL - ▁POSSESS - ▁CRE - ▁ADMIRATION - ▁LIBERTY - ▁OPPORTUNITY - ▁SELDOM - ▁BIRTH - ▁GLOW - ▁INCLUD - ▁REQUEST - ▁TYPE - ▁SLEPT - ▁CRIME - ▁MOTIVE - ▁ELSIE - ▁BEGUN - ▁CONSENT - ▁ADMITTED - ▁AVOID - ▁ADDRESS - ▁HATE - ▁DEMANDED - ▁APPARENTLY - ▁SUGGESTION - ▁CONSIDERATION - ▁BLESS - ▁PROCEEDED - NCY - ▁PRISON - ▁CONT - ▁SHOUTED - ▁FACES - ▁SPIRITS - ▁DEVELOP - ▁ACCIDENT - ▁ADVICE - ▁INNOCENT - ▁INSTINCT - 
▁UNCONSCIOUS - ▁MYSTERIOUS - ▁PRETEND - ▁PEEP - ▁ANYONE - ▁DUKE - ▁PLUM - VILLE - ▁SEVERE - ▁ALAS - ▁DELIGHTED - ▁ISSUE - ▁ASKING - ▁CROW - ▁ACCEPTED - ▁RIDE - ▁DOORS - ▁TAR - ▁PREPAR - ▁SUGGEST - WOOD - ▁CITIZEN - ▁ENTRANCE - ▁LINCOLN - ▁POLITICAL - ▁PRACTICAL - ▁STIFF - ▁WIDOW - ▁CAPITAL - ▁CLEVER - ▁MAMMA - ▁CREDIT - ▁OBEY - ▁STRING - ▁DAILY - ▁ARGUMENT - ▁HEAP - ▁APARTMENT - ▁FLIGHT - ▁ELDER - ▁PUR - ▁PAGE - ▁DUST - ▁GAZE - ▁NATIONAL - ▁BABY - DDING - ISTS - ▁TEACH - ▁STREETS - CAL - ▁GE - AFF - ▁GOES - ▁POSSIBL - UNG - ▁LINES - GUE - ▁VOTE - ▁HUNTING - ▁QUO - ▁RESEMBL - ▁BASKET - ▁CIRCLE - ▁CONSEQUENCE - ▁KITCHEN - ▁TREASURE - ▁NEVERTHELESS - ▁FANCI - ▁ASSEMBL - ▁GRIEF - ▁VEIL - ▁SEASON - ▁INVENT - ▁VIRGINIA - ▁HUT - ▁GUEST - ▁ROAR - ▁BEHOLD - ▁VICTORY - ▁CAPABLE - ▁DULL - ▁SHOE - ▁FLOAT - ▁MERRY - ▁IMMEDIATE - ETH - ▁ELEANOR - ▁EXPLANATION - ▁PARLIAMENT - ▁PRINCIPAL - ▁PROPORTION - ▁RESOLUTION - ▁UNUSUAL - ▁BLUFF - ▁NINETEEN - ▁SENSATION - ▁VISIBLE - ▁INCOME - ▁FATE - ▁SUPER - ▁LAUGHTER - ▁EASE - ▁LOAD - ▁JEW - ▁ZE - ▁FEVER - ▁WEDDING - ▁JOINED - ▁TRACE - ▁LEADER - ▁CLEARLY - ▁FLOWER - ▁TERMS - ▁EMPLOYED - OCK - ▁PARTICULARLY - ▁MEMBERS - ▁CONFESS - ▁GRO - ▁ADDRESSED - ▁CHRIST - ▁ACCOMPANI - ▁AFFORD - ▁AMOUNT - ▁BRILLIANT - ▁COMMUNICAT - ▁FIERCE - ▁RECORD - ▁SACRIFICE - ▁TEMPT - ▁CORDIAL - ▁COLOUR - ▁PROOF - ▁ESTATE - ▁PARDON - ▁ADVIS - ▁ATTITUDE - ▁IMPORTANCE - ▁BOOT - ▁SHOCK - ▁FIR - ▁PLENT - ▁HIT - ▁MEMBER - ▁SUR - ▁SEATED - ▁MAG - AVING - ▁FAVOUR - ▁REMARK - ▁DIM - ▁FAITHFUL - ▁SAVED - CHI - ▁SIN - THE - ▁CONFIDENCE - ▁EXTRAORDINARY - ▁FORTUNATE - ▁MISFORTUNE - ▁PATIENCE - ▁RELIGIOUS - ▁SATISFACTION - ▁POSITIVE - ▁SIMILAR - ▁EXCHANG - ▁RETREAT - ▁FLESH - ▁ADMIRE - ▁SPIRITUAL - ▁DAWN - ▁BURIED - ▁URGE - ▁SUNDAY - ▁FOX - ▁EMMA - ▁NURSE - ▁SNAPP - ▁PARK - ▁OBTAIN - ▁RECOGNIZED - ▁SPEED - ▁MAGIC - ▁LAWS - ▁REMOVED - ▁HAM - ▁PRESERV - ▁AID - HOUSE - ▁MENTIONED - ▁CONSCIENCE - ▁CONTEMPT - ▁DETAIL - ▁IMMENSE - ▁NERVOUS - ▁PRISCILLA - ▁UNFORTUNATE - ▁UNHAPPY - 
▁COMPLAIN - ▁TWICE - ▁WHISTL - ▁SNAKE - ▁WASHINGTON - ▁PIRATE - ▁WICKED - ▁BODIES - ▁DESIGN - ▁JASON - ▁VAGUE - ▁CONSIST - ▁GIFT - ▁ANGEL - ▁RODE - ▁FOLD - ▁BRIDE - ▁ANGER - ▁BASE - ITUDE - ▁CONCLUDED - ▁ALTER - ▁FRI - ▁PANT - ▁BID - ▁HIGHEST - ▁SAILOR - MPLE - ▁OBSERV - ▁CHEERFUL - IFICATION - RID - ▁DESCRIBED - ▁BIN - ▁JEWEL - ▁ARTIST - ▁PEER - ▁NORA - ▁SKI - ▁DIAMOND - ▁ENCOURAGE - ▁PRIVILEGE - ▁PROJECT - ▁ANYBODY - ▁ENCOUNTER - ▁HOLLOW - ▁YIELD - ▁BOBBY - ▁SAVAGE - ▁SOMEBODY - ▁OTHERWISE - ▁PRAISE - ▁PROBLEM - ▁DISTRESS - ▁UGLY - ▁WARRIOR - ▁MOURN - ▁RELIEV - ▁DESK - ▁FOOLISH - ▁STARTLED - ▁SKILL - SHONE - ▁LONE - ▁OBSERVATION - ▁DENI - ▁NEST - ▁SOLDIER - ▁RELATION - ▁TRULY - ▁VISITOR - ▁OFFICERS - ERSON - ▁YA - ▁EVIDENT - ▁DREAMS - ▁KEEPING - ▁PLAINLY - ▁DRUNK - ▁EMBRAC - ▁INTELLIGENCE - ▁LIEUTENANT - ▁PERSUADE - ▁SURROUNDING - ▁UNIVERSAL - ▁GLEAM - ▁SUPERIOR - ▁WHEEL - ▁JEALOUS - ▁QUEER - ▁PIERRE - ▁MILK - ▁RAIL - ▁FLUSH - ▁STAIRS - ▁JESUS - ▁HORN - ▁REGION - ▁SAFETY - ▁KA - ▁GUIDE - ▁CAKE - ▁CUP - ▁INQUIRED - ▁DEFI - ▁LESSON - ▁WRETCHED - ▁PACE - ▁TEST - ▁READING - ▁ENTIRE - ▁NET - ▁DOGS - ▁COMMANDER - ▁PRODUCE - ▁GAINED - ▁ARRIVAL - ▁FAMILIAR - ▁MEANWHILE - ▁SUSPICION - ▁CHOICE - ▁IMPULSE - ▁THRUST - ▁PROCESS - ▁SUMMON - ▁SHEPHERD - ▁HASTILY - ▁GRASP - ▁COUNTESS - ▁STYLE - ▁DWELL - ▁MERIT - ▁PITCH - ▁HUNGRY - ▁SPORT - ▁LOUISE - ▁STERN - ▁PROVIDED - ▁ASSUME - ▁EARLIE - ▁RAGE - ▁U - ▁RAPIDLY - PORT - ▁SUCCESSFUL - ▁FLED - ▁AGREE - ▁CONDITIONS - ▁RELATIONS - ▁DREAD - ▁NATURALLY - ▁EARL - ▁GAY - ▁HYPNOTI - ▁PUTT - ▁GAZ - ▁JIM - ▁PAUS - ▁PROPOS - ▁ADMINISTRATION - ▁ELEVEN - ▁HOSPITAL - ▁MAGISTRATE - ▁STRIKE - ▁DIGNITY - ▁GLORY - ▁BOTTLE - ▁THRONE - ▁RECKON - ▁COSETTE - ▁MOREOVER - ▁APPLI - ▁HIND - ▁PRODUCT - ▁POOL - ▁TRIAL - HAN - ▁ERIC - ▁CUB - ▁PIECES - ▁EXCEPTION - ▁ENJOYED - ▁DARED - ▁TRU - ▁CLOSELY - ▁RAPID - ▁AFFECTED - ▁REQUIRE - ▁SOFTLY - ▁BROW - UCK - ▁MARKED - ▁SEVENT - ▁ELECT - ▁FORGOT - ▁CORRECT - ▁FRANCS - ▁MARGUERITE - ▁SCIENCE - ▁UNEXPECTED - 
▁FOUGHT - ▁MILITA - ▁THUNDER - ▁VOYAGE - ▁GANEM - ▁FREEDOM - ▁NODDED - ▁CAPTURE - ▁MORTAL - ▁OWNER - ▁POLITE - ▁VISION - ▁EDUCATION - ▁GOVERNOR - ▁RAV - ▁REWARD - ▁HASTE - ▁REPEAT - ▁DETERMIN - ▁PITI - ▁KNEE - LINE - ▁DEVOTED - ▁INTERRUPTED - ▁FOLKS - ▁EXTREME - ▁APPROACH - ▁CONTINUE - ▁BEARING - ▁CHAP - ▁ACQUAINTED - ▁GLIMPSE - ▁GRADUALLY - ▁SUNSHINE - ▁PRACTICE - ▁SUPPLI - ▁DAVID - ▁DRIFT - ▁SHOWING - ▁LEVEL - ▁PROMPT - ▁QUARREL - ▁REPRESENTATIVE - ▁PLUNG - ▁GIANT - FALL - ▁STOUT - CHA - WEPT - ▁GLANC - ▁SALT - ▁CHOSEN - ▁BUCK - ▁REALIZED - ▁REALITY - ▁TUR - ▁DRIVEN - ▁CARD - ▁PRAYER - ▁TERM - AID - ▁HOLY - ▁ENDURE - ▁RANGE - ▁HANG - ▁SAM - LAN - ▁CAVE - INA - ▁GRI - ▁SIGH - ▁NEIGHBOUR - ▁COUNCIL - ▁EXERCISE - ▁NAUTILUS - ▁SOMEWHERE - ▁SYLVIA - ▁THOROUGH - ▁VICTIM - ▁BRIDGE - ▁COMPELLED - ▁INCLINED - ▁OVERCOME - ▁RESERVE - ▁ARREST - ▁PRECIOUS - ▁DUTCH - ▁OCEAN - ▁ACQUIR - ▁RECALL - ▁DESTIN - ▁ATTACH - ▁SLIM - ▁WEEP - ▁CONSCIOUSNESS - ▁TIGHT - ▁WAKE - ▁COMFORTABLE - ▁ACTIVE - ▁WINGS - ▁GRIN - ▁AFFECT - ▁WHIT - ▁IDEAL - ▁EASTER - ▁APPROACHING - ▁CREATED - ▁PLANS - ▁INCREASE - ▁FLYING - ▁SHOUT - OES - MISSION - ▁ARMED - ABILITY - ▁BLUSH - ▁CONNECTION - ▁MATTHEW - ▁MEDICINE - ▁REMIND - ▁EXHIBIT - ▁BLOCK - ▁DESERVE - ▁LISTENING - ▁TITLE - ▁FLOUR - ▁FLAME - ▁AGENT - ▁USEFUL - ▁BRIG - ▁BOIL - ▁ASSURED - ▁REFLECTION - ▁PINE - ▁WAG - ▁YOUNGER - ▁BEARD - ▁KINDNESS - CTUALLY - ▁ACTUAL - ▁WEIGHT - ▁LILY - ▁IMPRESS - ▁DESCRIBE - ▁BEHELD - ▁COMMUNITY - ▁DESPERATE - ▁DISPLAY - ▁ENEMIES - ▁MELANCHOLY - ▁MIRROR - ▁RECOMMEND - ▁SPANISH - ▁BLAME - ▁VOLUME - ▁SHOOT - ▁COMBIN - ▁SHAKING - ▁SOUTHERN - ▁MYSTERY - ▁EVERYONE - ▁COMMISSION - ▁COMPOSED - ▁UDO - ▁IMAGE - ▁DECEIV - ▁FAILURE - ▁PATTY - ▁ALICE - ▁FRAME - ▁MODEST - ▁MAGNIFICENT - ▁BRANCHES - ▁REIGN - ▁RAG - ▁PARISH - ▁KATE - ▁AMID - ▁SLEEPING - ▁ANNOUNCED - ▁EAGERLY - ▁WIRE - ▁LAP - ▁ARAB - ▁EATING - ▁RUM - ▁CAREFUL - ▁DISCUSS - WORTH - ▁DISTRICT - ▁FOREHEAD - ▁FRANCIS - ▁INCIDENT - ▁APPEAL - ▁EMBARRASS - ▁MAINTAIN - ▁PRONOUNC 
- ▁FURNISH - ▁STRAIN - ▁ELEMENT - ▁SILK - ▁FEAST - ▁RECENT - ▁DANCING - ▁LODGE - ▁ASHAMED - ▁TRICK - ▁BOBO - ▁STUFF - ▁ET - ▁ASSERT - ▁SANK - ▁TREATMENT - ECI - ▁SWIM - ▁BECOMING - ▁SINGING - ▁PLATE - ▁SCATTERED - ▁EXTREMELY - ▁GRIM - ▁SANG - ▁FIGHTING - ▁FACTOR - ▁PAINFUL - ▁HIDE - ▁FUNN - ▁AFTERWARD - ▁FROG - ▁VENTURE - ▁DISAPPOINT - ▁COMRADE - ▁MONSIEUR - ▁OBVIOUS - ▁PASSENGER - ▁PROFOUND - ▁PUBLISH - ▁ACCUSTOM - ▁BLOOM - ▁SMITH - ▁RELATIVE - ▁ACCUSE - ▁MANIFEST - ▁SOLID - ▁MONSTER - ▁MARIUS - ▁CANDLE - ▁PROCUR - ▁INTERFERE - ▁HOUSEHOLD - ▁DEVELOPMENT - ▁AGREEABLE - ▁HALT - ▁NECESSITY - FOLD - ▁CITIES - ▁REGI - ▁GLOOMY - BBL - ▁SEPARATED - ▁CHEST - ▁STRIP - ▁SPAR - ▁DUN - ▁SETTLE - ▁STARED - ▁HANGING - ▁FEATURES - ▁PILE - ▁ORIGIN - ARIES - ▁LION - ▁ALI - ▁ASTONISHMENT - ▁COMPLIMENT - ▁DELICATE - ▁COUNSEL - ▁FIFTH - ▁SUPPRESS - ▁BURDEN - ▁COMPLEX - ▁ADDITION - ▁CRUSH - ▁TWIST - ▁PIANO - ▁BRUSH - ▁CHECK - ▁ANNIE - ▁SHELTER - ▁IMPROV - ▁WESTERN - ▁LOCAL - ▁APPLE - ▁GREET - ▁MASK - ▁RUSSIAN - ▁TOWER - ▁CREW - ▁TIP - ▁WANDERING - ▁READER - ▁WANDERED - ▁DESTROY - ▁OBSERVE - MORE - ▁ESCAPED - ▁PET - ▁BUILD - ▁REAR - ▁DESTROYED - HIN - ▁OWE - ▁RANG - ▁TEAR - ▁NED - ▁OFFICER - ▁TRAP - ▁OCCUR - ▁APPOINTED - ▁ATMOSPHERE - ▁CHOOSE - ▁CONCLUSION - ▁CULTIVAT - ▁DESCRIPTION - ▁ENORMOUS - ▁EXHAUSTED - ▁LANDSCAPE - ▁NATASHA - ▁PROSPECT - ▁REFRESH - ▁SPECIES - ▁SURROUNDED - ▁WEAPON - ▁BLANK - ▁DEFEND - ▁EDITH - ▁HORRIBL - ▁BETRAY - ▁FERKO - ▁LABOUR - ▁NEGRO - ▁RESUMED - ▁LEAF - ▁MUSKET - ▁INTENSE - ▁MERCY - ▁ADOPT - ▁SCORE - ▁DASH - ▁LAWYER - ▁SLOPE - ▁CHUCK - ▁ASSISTANCE - ▁BROOK - ▁BREAKING - ▁ASSIST - ▁GROAN - ▁HELEN - ▁BEHAV - ▁MAIDEN - ▁CRIS - ▁SHOUTING - ▁NAY - ▁PIG - ▁ACCORDINGLY - ETTE - ▁DESIR - ▁RUB - ▁GRU - ▁PIT - ▁HEAVI - ▁OBTAINED - ▁SPARE - ▁BRANCH - ▁COUNTER - ▁APART - ▁AMBITION - ▁ASTONISHED - ▁CORRESPOND - ▁DRIVING - ▁ENERGY - ▁HISTORIAN - ▁REVOLUTION - ▁SWEEP - ▁TREMBLING - ▁CRAFT - ▁FAMILIES - ▁LITERATURE - SBURG - ▁FEMALE - ▁TILNEY - ▁GENEROUS - ▁SUBMIT - 
▁INTELLECTUAL - ▁ORCHARD - ▁STORIES - ▁DIANA - ▁VEIN - ▁TRIFL - ▁TWIN - ▁WORSHIP - ▁MARBLE - ▁GALLANT - ▁SENSIBLE - ▁NEAT - ▁BROWNIE - ▁JUNE - ▁SHAW - ▁WORST - ▁USELESS - ▁FISHING - ▁CRYING - ▁MAYBE - ▁VARI - ▁PRESERVE - ▁VOL - ▁EMPLOY - ▁INTERRUPT - ▁SLIGHTLY - ▁ACCOMPLISHED - NEY - ▁STEAM - ▁BALANC - ▁LEANING - ▁SIGHED - ▁REFUSE - ▁IMAGINED - ▁DATE - GROUND - ▁ENTERTAIN - ▁PERCEIVE - ▁ABROAD - ▁CHEESE - ▁DESTRUCTION - ▁ESSENTIAL - ▁EXPEDITION - ▁GRANDFATHER - ▁INFINITE - ▁LIBRARY - ▁MULTITUDE - ▁NEGLECT - ▁SWALLOW - ▁VILLEFORT - ▁BELOVED - ▁COMMITTEE - ▁CONFIDENT - ▁PURPLE - ▁PURCHAS - ▁SCRAP - ▁SPOIL - ▁LIKEWISE - ▁EXTRA - ▁STRAW - ▁SALUT - ▁SOURCE - ▁HASTENED - ▁RESENT - ▁FLOCK - ▁LOFT - ▁FLO - ▁CLO - ▁CONVINCED - ▁GOODNESS - ▁HYPNOTIZ - ▁SETTING - ▁HAIL - ▁PHI - ▁GROVE - ▁DISCOVERY - ▁DAMP - ▁WHISPER - ▁LIFT - ▁HOP - ▁SUSPECTED - ▁SCR - OLI - ▁FAC - ▁BUSH - ▁FOREVER - ▁BARRICADE - ▁CONSTITUTION - ▁ENDEAVOR - ▁ENTHUSIASM - ▁EXECUTION - ▁HYACINTH - ▁PERCEVAL - ▁PSYCHE - ▁REPROACH - ▁THIRTEEN - ▁ABSORB - ▁GRATITUDE - ▁MERCER - ▁REPUTATION - ▁SCREAM - ▁PUPIL - ▁RETIRED - ▁STEEP - ▁SUMMIT - ▁MISERABLE - ▁STRICT - ▁MINGLED - ▁DEFEAT - ▁REVEAL - ▁LOVING - ▁GOOSE - ▁ECHO - ▁AWAIT - ▁MOOD - ▁CRAWLEY - ▁CELL - ▁ENGAGEMENT - ▁PRECED - ▁SOMEONE - ▁ARRANGEMENT - ▁PICKET - ▁GASP - ▁HUMOR - ▁INVITATION - ▁JOB - WITHSTAND - ▁LAMENT - ▁CLASSES - ▁HUNGER - ▁DISPOSED - ▁STEAMER - ▁FEARFUL - ▁GER - ▁FINAL - ▁FLAG - ▁JULY - ▁DIG - WORK - ▁OPPOS - ▁ANXIETY - ▁AUDIENCE - ▁BACHELOR - ▁COLUMN - ▁HANDKERCHIEF - ▁IMPATIENT - ▁JUDGMENT - ▁KNIFE - ▁SOVEREIGN - ▁STRIKING - ▁THOMPSON - ▁EMPIRE - ▁FULFIL - ▁CONSULT - ▁JENNY - ▁THENARDIER - ▁POYSER - ▁FOURTEEN - ▁JAPANESE - ▁INDULG - ▁MARTIAN - ▁COUNTRIES - ▁FETCH - ▁CRITIC - ▁ROBBER - ▁CROOK - ▁DEPARTURE - ▁MABEL - ▁PREACH - ESCENT - ▁WHIP - ▁NAIL - ▁DELIGHTFUL - ▁DISCUSSION - ▁SENTENCE - ▁LANE - ▁ENGINEER - ▁ARRANGED - MMY - ▁LEST - ▁RENT - MMED - ▁LIST - ▁ROBE - ▁MISSION - ▁GRACEFUL - ▁LIGHTN - STONE - COURT - ▁CONCEPTION - ▁CONTRACT - 
▁DROWN - ▁EXPERIMENT - ▁HITHERTO - ▁PLAGUE - ▁PORTHOS - ▁SHRIEK - ▁DETECT - ▁ACCENT - ▁ERECT - ▁SAZEN - ▁PROFIT - ▁VIVID - ▁SQUIRE - ▁OPERATION - ▁SMELL - ▁SIMON - ▁EXTENT - ▁KEEN - ▁EMERG - ▁REVIV - ▁REGIMENT - ▁DISAPPOINTMENT - ▁STOLE - ▁DIVINE - ▁GUILTY - ▁COWARD - ▁EXPECTATION - ▁SIGNOR - ▁MODE - ▁CENTRE - ▁FIL - HOW - ▁WEARI - ▁TOTAL - ▁VICTOR - ▁GOVERN - ▁RAISE - ▁ABANDON - ▁ABSURD - ▁ASPECT - ▁CRIMINAL - ▁DEFINITE - ▁DELIBERAT - ▁FEATHER - ▁FLORINA - ▁MIDNIGHT - ▁RICHMOND - ▁SATISFY - ▁SINGULAR - ▁STEADILY - ▁SUPREME - ▁TIMBER - ▁PSYCHOLOG - ▁GESTURE - ▁VALUABLE - ▁INTERVAL - ▁CONFUSION - ▁FLUTTER - ▁SACRED - ▁DISEASE - ▁UNDERTAKE - ▁PENETRAT - ▁MARVEL - ▁NORTHERN - ▁GRIEV - ▁GENIUS - ▁SADDLE - ▁NOVEL - ▁MISERY - ▁CONVICTION - ▁SINK - ▁WAGON - ▁ARISE - ▁COMMENT - ▁BARN - UPON - ▁FENCE - ▁ASSOCIATION - ▁BONES - ▁IDLE - ▁DOUBTFUL - ▁PREPARATION - IZZ - ▁RAIS - ▁BITTERLY - ▁JOE - ▁RELI - ADI - ▁METAL - ▁EXACT - ▁GLOOM - FIELD - ▁DANGLARS - ▁DISGRACE - ▁EXAMINATION - ▁FASCINAT - ▁GLITTER - ▁INCREASING - ▁MESSENGER - ▁PATRIOT - ▁PLATFORM - ▁PROVISION - ▁QUALITIES - ▁SELECT - ▁STEADY - ▁POVERTY - ▁POWDER - ▁PROPHET - ▁HOLLAND - ▁TRUNK - ▁VARIETY - ▁PLANCHET - ▁CONQUER - ▁CONCEIVE - ▁COMBAT - ▁STOOP - ▁SHIRT - ▁GENERATION - ▁COMMITTED - ▁INSULT - ▁CONFUSED - ▁RADIAN - ▁DEBT - ▁IMITAT - ▁DART - ▁CAROLINE - ▁SWAM - ▁WREN - ▁CHILDHOOD - ▁BRAND - ▁JOKE - ▁FRIENDSHIP - ▁DIRT - ▁JOLL - ▁BUSHES - ▁MINK - ▁ROUT - ▁EQUALITY - ▁HESITATED - ▁BARK - ▁ANTI - ▁STATEMENT - PHER - ▁SUNK - ▁DAT - ▁BACKWARD - ▁SUSPECT - ▁OBJECTION - ▁RAP - ▁CHIN - ▁MATE - ▁REDUC - ▁GREGG - ▁ACCOMPANY - ▁ANYWHERE - ▁BENEFIT - ▁CLERK - ▁EXPENSE - ▁FETNAH - ▁INTERPRET - ▁LUKASHKA - ▁NUMEROUS - ▁SURGEON - ▁PUZZL - ▁RESCUE - ▁GRATEFUL - ▁APPROV - ▁RIVAL - ▁NIECE - ▁FLOOD - ▁VANISHED - ▁ERROR - ▁BLAZ - ▁TUMBL - ▁WENDY - ▁PERSIST - ▁CONSOL - ▁SOAP - ▁HUMOUR - ▁FITTED - ▁HOUSEKEEPER - ▁ENABL - ▁OCCASIONALLY - ▁HATRED - ▁SWELL - ▁WORRY - ▁RUST - ▁PURSUIT - ▁INTIMATE - ▁SEAL - ▁COLLECTION - ▁TREMBLED - ▁DENY 
- ▁HUMANITY - ▁FATAL - ▁COCK - ▁DRIVER - ▁HOPELESS - ▁MISTAKEN - ▁LUC - ▁ACCOMPLISH - ▁COAL - ▁ACCORD - ▁PURSE - ▁SEPARATE - ▁ARRIVE - ▁SMOK - ▁MADAM - ▁ASSOCIAT - ▁INSTRUCT - ▁CELEBR - ▁CHANNEL - ▁CIVILIZATION - ▁DOCTRINE - ▁ENDEAVOUR - ▁GLACIER - ▁INTELLIGENT - ▁INVOLVE - ▁LEATHER - ▁MUTTERED - ▁OLENIN - ▁PENCROFT - ▁PERPLEX - ▁SPECTATOR - ▁UNIVERSITY - ▁ATTAIN - ▁INEVITABL - ▁YONDER - ▁ENCHANT - ▁REPAIR - ▁CURRENT - ▁ASCEND - ▁CREEK - ▁SPARKL - ▁RUE - ▁BEAVER - ▁INFANT - ▁CONTINUALLY - ▁CLASP - ▁IRISH - ▁ROLLIN - ▁PUNISHMENT - ▁LUNCH - ▁AGONY - ▁RUDE - ▁DRAGG - ▁INQUIRI - ▁SEX - ▁TERRIFI - ▁ROBIN - ▁PROFESSIONAL - ▁SPUR - ▁GRAIN - ▁VINE - ▁PENN - ▁ROC - ▁CHASE - ▁INFORM - ▁WRITER - ▁AVO - ▁TAP - ▁CREAT - ▁WHIL - ▁BARR - ▁ASSURE - ▁CIRCUMSTANCE - ▁OIL - ▁ROUSE - ▁COLUMB - ▁CUNNING - ▁DOMESTIC - ▁GLORIOUS - ▁INDIGNATION - ▁PRECISELY - ▁PRUDENCE - ▁RAILROAD - ▁SATURDAY - ▁UTMOST - ▁VIOLENCE - ▁WHIRL - ▁CALCULAT - ▁OVERWHELM - ▁PERPETUAL - ▁QUARLES - ▁SLENDER - ▁TELEGRAPH - ▁ALOUD - ▁OPPRESS - ▁CROPPER - ▁CANADIAN - ▁HERBERT - ▁TIMID - ▁SUPPLY - ▁STROLL - ▁CREEP - ▁OATH - ▁DUSK - ▁EXCESS - ▁HUMBLE - ▁FURIOUS - ▁RIDGE - ▁BULLET - ▁PONY - ▁STATU - ▁ENJOYMENT - ▁CONWAY - ▁DIFFICULTIES - ▁PATCH - ▁JOYCE - ▁CLOCK - ▁RESTORED - ▁ARGU - ▁WIG - ▁CHATT - ▁PLAC - ▁REMOVE - ▁TORN - ▁DISAPPEAR - TIME - WELL - ▁RECOGNIZE - ▁FISHE - ▁DECLARE - ISTIC - ▁AUTHOR - ▁WHISK - ▁COFFEE - ▁COMPREHEND - ▁DISGUISE - ▁ELZEVIR - ▁ENTERPRISE - ▁HOLIDAY - ▁HORIZON - ▁IGNORANT - ▁INTERVIEW - ▁OLIVER - ▁RONICKY - ▁CAPACITY - ▁DISPOSITION - ▁EXTERNAL - ▁OPPOSITION - ▁REPUBLIC - ▁WHEAT - ▁CORPSE - ▁DARLING - ▁THRILL - ▁INHABITANTS - ▁ORNAMENT - ▁SHIFT - ▁RECOGNISE - ▁SHIVER - ▁BOAST - ▁HINT - ▁BOSTON - ▁MULTI - IFYING - ▁STEAL - ▁INSTRUCTIONS - ▁ELECTRIC - ▁SWING - ▁SOOTH - ▁SCALE - ▁MORLAND - ▁DISLIKE - ▁FLATTER - ▁COACH - ▁LEIF - ▁STAMP - ▁ANYHOW - ▁MOTIONLESS - ▁ANDREA - ▁LOSING - ▁PAUL - ▁CAROL - ▁ADVANC - ▁IMAGIN - ▁CENTER - ▁JAR - ▁SUCCEED - ▁DISMISS - CTOR - ▁RECEIV - ▁DRAG - ▁INTENT - 
▁BARBAR - ▁PUNISH - ▁ABRUPTLY - ▁BERNARD - ▁DECISION - ▁INDEPENDENT - ▁PROVINCE - ▁SLEEVE - ▁TREMENDOUS - ▁UNPLEASANT - ▁LEISURE - ▁THRONG - ▁THUMB - ▁BANNER - ▁CONTRADICT - ▁RESTRAIN - ▁DIVIDED - ▁WRAPPED - ▁HAUNT - ▁SNEER - CHESTER - ▁JULIA - ▁MILD - ▁CONTACT - ▁MEANTIME - ▁NEEDLE - ▁BLOT - ▁BARREL - ▁ISABELLA - ▁THEATRE - ▁ESTABLISHMENT - ▁MARKET - ▁CHINA - ▁FORBID - ▁PERISH - ▁DOORWAY - ▁CARLING - ▁PERIL - ▁PRIZE - ▁HATCH - ▁CURL - ▁REFER - ▁DEVOT - EMBER - MONT - ▁CANOE - ▁PROFESSION - ▁CONVICT - ▁CRAWL - ▁ACTIVITY - ▁BEWILDER - ▁BREEZE - ▁CONTEMPLAT - ▁DISGUST - ▁FATIGUE - ▁MERRICK - ▁PRAIRIE - ▁REFORM - ▁SPECTACLE - ▁STUDENT - ▁TUMULT - ▁UNIFORM - ▁VIGOROUS - ▁CONDEMN - ▁GENUINE - ▁THOMAS - ▁ARROW - ▁PILLOW - ▁FEEBLE - ▁RALPH - ▁SCHEME - ▁COLLAR - ▁JUSTINIAN - ▁NERVE - ▁OYSTER - ▁BENNET - ▁DUTIES - ▁BINGLEY - ▁CHRISTMAS - ▁CONVEY - ▁DESPIS - ▁RATTL - ▁GARMENTS - ▁GOWN - ▁BERYL - ▁BARRIER - ▁CHARACTERISTIC - ▁MEDITAT - ▁DISCOURSE - ▁STAFF - ▁KARA - ▁MONTE - ▁READILY - ▁VENTUR - ▁HENCE - ▁ROPE - ▁CRIES - ▁ANGLE - ▁RESPECTABLE - ▁MOAN - ▁OUTLINE - BORN - ▁FIX - ▁INTEND - LIA - ▁CHILL - ▁CREP - ▁CHOSE - ▁SPECULAT - ▁ATTRIBUT - ▁BUFFALO - ▁ENTREAT - ▁ENVELOP - ▁FREDERICK - ▁IMPATIENCE - ▁INDIFFERENCE - ▁INDUSTRY - ▁INSTITUTION - ▁LYNDE - ▁RETAIN - ▁TROUTINA - ▁UNCOMFORTABL - ▁VENGEANCE - ▁JENKS - ▁CONGRESS - ▁SMART - ▁THITHER - ▁DISAGREE - ▁IMPROVEMENT - ▁PISTOL - ▁GOSSIP - ▁ETERNAL - ▁BELIEF - ▁SLEDGE - ▁AROUSED - ▁ORANGE - ▁FASTENED - ▁MONKEY - ▁WITHDREW - ▁OFFEND - ▁PIERC - ▁MOONLIGHT - ▁OARS - ▁GROOM - ▁FIDDLER - ▁BARBARA - SHIRE - ▁ATTENDANT - ▁DIVERS - ▁DUCK - ▁PROPOSAL - ▁GROWTH - ▁CURATE - ▁STEWAR - ▁MOCK - ▁SUCCESSION - ▁CREATION - ▁PARTIAL - ▁SWU - ▁FROST - ▁EIGHTH - ▁AWE - ▁PERCH - ▁LACE - SPOON - ▁ARRANGE - SERIES - ▁FOG - ▁SCU - ▁ABRAHAM - ▁ADMIRAL - ▁BARBICANE - ▁CAMPAIGN - ▁CONSEQUENTLY - ▁CULTURE - ▁GRAMMONT - ▁GWYNPLAINE - ▁HAPPILY - ▁HOOPDRIVER - ▁INDEPENDENCE - ▁LEOPOLD - ▁MISCHIEF - ▁MONTGOMERY - ▁NECESSARILY - ▁PSYCHIC - ▁RABBIT - ▁REFUGE - 
▁RESPONSIBILIT - ▁SENATOR - ▁UNCERTAIN - ▁MENSTRUA - ▁FANNY - ▁SUBSTANCE - ▁APRIL - ▁ELBOW - ▁QUALITY - ▁BORDER - ▁BRUTAL - ▁CARPET - ▁SOLITAR - ▁FROWN - ▁SCENT - ▁ANNOY - ▁NAKED - ▁BOSOM - ▁CONSUM - ▁TIGER - ▁ITALIAN - ▁PARSON - ▁DECLIN - ▁NEIGHBORHOOD - ▁GREGGORY - ▁EXCEED - ▁SILLY - ▁ICELAND - ▁HIDEOUS - ▁STRU - ▁ALTERNAT - ▁CABINET - ▁ABILITY - ▁BEECH - ▁SECRETARY - ▁CONTEST - ▁MONK - ▁PADD - ▁EVA - ▁CREST - ▁FINISH - ▁APPARENT - ▁MIX - ▁SLIP - ▁LUXURI - ▁AUTUMN - ▁CIRCULAR - ▁COMPOSITION - ▁DISPLEAS - ▁EXCELLENC - ▁FURNITURE - ▁GRADUATE - ▁INDIFFERENT - ▁JOSEPH - ▁OCCUPATION - ▁POSSIBILITY - ▁RENEWED - ▁RESPONDED - ▁PREVAIL - ▁HOARSE - ▁PRACTIS - ▁FAREWELL - ▁JULIET - ▁OVERHEAD - ▁THREAD - ▁APPLICATION - ▁SOLITUDE - ▁ADAPT - ▁FALK - ▁LARK - ▁COARSE - ▁MANKIND - ▁KICK - ▁BATTER - ▁SOLICIT - ▁RESIGN - ▁MOTOR - ▁STEEL - ▁CONTRIV - ▁AUTHORITIES - ▁HARSH - ▁FAVORITE - ▁TALENT - ▁FLEECE - ▁AGITATION - ▁ABBE - ▁STUCK - ▁HEDGE - ▁BIBLE - ▁RECOLLECTION - ▁PARTNER - ▁DAMON - ▁SHINE - ▁HOOK - ▁CONFESSION - ▁ASSENT - ▁ELDE - ▁BIGGE - ▁PEACEFUL - SCRIBED - ▁WEIGH - CARLET - ▁DECIDE - ▁RECOLLECT - ▁BOHEMIA - ▁CALIFORNIA - ▁CONSTRUCT - ▁DEMONSTRAT - ▁DISTRIBUT - ▁FRIGHTFUL - ▁GNOME - ▁IGNORANCE - ▁JANUARY - ▁JULIUS - ▁MEMORIES - ▁OCCUPY - ▁PHRASE - ▁WHIRLWIND - ▁WILMINGTON - ▁CARLINI - ▁CHAUVELIN - ▁ESTEEM - ▁GENZABURO - ▁GLOBE - ▁LECOQ - ▁MARGARET - ▁MONARCH - ▁NAPOLEON - ▁SCORN - ▁STAGGER - ▁SUSTAIN - ▁TRADITION - ▁ADJUST - ▁FROZEN - ▁IMPRISON - ▁LANTERN - ▁MICHEL - ▁STOMACH - ▁TORRENT - ▁WITHDRAW - ▁FRANZ - ▁POISON - ▁SURVEY - ▁BRITISH - ▁ELEVAT - ▁AWOKE - ▁ESTHER - ▁INHERIT - ▁TRAVERS - ▁STOPPING - ▁IRELAND - ▁COMPARATIVE - ▁SOBB - ▁FAVOURITE - ▁CANVAS - ▁CLOAK - ▁GLAR - ▁ASSISTANT - ▁DAMAGE - ▁PEAK - ▁DISTINCTION - FARE - ▁DOLLAR - ▁BEGGAR - LUSIVE - ▁MODEL - ▁SECUR - ▁DISPOS - ▁SLID - ▁PEA - ▁SPEEDI - HOLD - ▁SNAP - ▁CIGAR - ▁AFFLICT - ▁AMAZEMENT - ▁LAUNCELOT - ▁LEAGUE - ▁MARIPOSA - ▁POPULATION - ▁UNEASY - ▁BLOSSOM - ▁CATERPILLAR - ▁INCLINATION - ▁SUSPEND - ▁SYNDIC - 
▁TAYLOR - ▁WILSON - ▁CONTRAST - ▁PORTRAIT - ▁CORONER - ▁GREEK - ▁BUNDLE - ▁BLEW - ▁THORPE - ▁ORPHAN - ▁MUSCLE - ▁DEAF - ▁SURVIV - ▁EXCEEDINGLY - ▁TENDENC - ▁ISRAEL - ▁QUANTIT - ▁PENSION - ▁DRIED - TEXT - ▁REFERENCE - ▁REPOSE - ▁FOLLY - ▁REPLACE - ▁TERR - ▁ANKLE - ▁SUNLIGHT - ▁SECURITY - ▁SHOV - ▁RAW - CULAR - ▁JACKET - ▁TUNE - ▁HOBB - ▁MARTIN - DUCED - ▁FIST - ▁BEGG - ▁CHOK - ▁INQUIRE - ▁INTELLECT - ▁AMUSEMENT - ▁APPROPRIATE - ▁CONGRATULAT - ▁CONVENTION - ▁DISCOURAG - ▁EXQUISITE - ▁FOUNTAIN - ▁JUNIOR - ▁NONSENSE - ▁OBSTACLE - ▁SPECIMEN - ▁SWEAR - ▁TRANQUIL - ▁VEHICLE - ▁WISDOM - ▁ASCERTAIN - ▁CAUTIOUS - ▁CENTURIES - ▁CORRUPT - ▁EXPLOR - ▁TURKEY - ▁BARGAIN - ▁CONFOUND - ▁FUNCTION - ▁GRACIOUS - ▁MONICA - ▁ILLUSTRAT - ▁CRUMB - ▁REMEDY - ▁REMOTE - ▁REVENGE - ▁BABYLON - ▁CAUTION - ▁INTERIOR - ▁CRISTEL - ▁BRAZ - ▁THIRST - ▁PROBABLE - ▁HARMONY - ▁CHARITY - ▁DECAY - ▁COLONI - ▁AVAIL - ▁REPULS - ▁ABSENT - ▁PULSE - ▁PRESUM - ▁CRANE - ▁NEIGHBOURHOOD - ▁SUNSET - ▁CANNON - ▁GRAPE - ▁SOFA - ▁DRANK - MINOUS - ▁DECLARATION - ▁CLOSING - ▁MEEK - ▁STARV - ▁BUNCH - ▁PERFORMANCE - ▁ENTERTAINMENT - ▁STRIV - ▁EMILY - ▁VALET - MPOSED - ▁INTIMA - ▁POLISH - ▁HIRE - POST - ▁TREMBLE - ▁CEASE - ▁VIRGIN - ▁RUSSIA - COURSE - ▁EDUCAT - BOUND - ▁INHABIT - ▁SUPERINTEND - ▁BISCUIT - ▁CHICAGO - ▁CHOKICHI - ▁CONFLICT - ▁ENCLOS - ▁EXCLUSION - ▁EXECUTIVE - ▁GRANDMOTHER - ▁HEADQUARTERS - ▁INFERIOR - ▁INVISIBLE - ▁MUTUAL - ▁OPPONENT - ▁SENSITIVE - ▁STUDIED - ▁TEMPORARY - ▁UNWILLING - ▁PERMANENT - ▁BEDROOM - ▁NOVEMBER - ▁COMPLICAT - ▁DEVOUR - ▁SCRAMBL - ▁SECTION - ▁PROPOSITION - ▁DEPRIV - ▁RYNCH - ▁PLEAD - ▁TORTURE - ▁SCOUT - ▁PILOT - ▁CHERISH - ▁SPEAR - ▁SUGAR - ▁JASPER - ▁STRAY - ▁RIFLE - ▁NORMAL - ▁JERK - ▁HONEY - ▁AWAKENED - ▁QUIVER - ▁PYE - ▁APPLY - LICK - JA - ▁ANNOUNC - FORE - ▁ENGINE - ▁HESITATE - ▁PROVIDE - ▁REALIZE - ▁SEIZE - ▁RESTORE - MOUTH - FOOT - ▁DIFFER - ▁ULTIMATE - ▁ABUNDANCE - ▁APPRECIATE - ▁APPREHENSION - ▁AVENUE - ▁AWKWARD - ▁CETERA - ▁CHIMNEY - ▁CLUTCH - ▁CONVENIENT - ▁CORRIDOR - 
▁DISTRACT - ▁ELEGANT - ▁ELSEWHERE - ▁ENTHUSIASTIC - ▁EXECUTE - ▁EXTREMIT - ▁JERUSALEM - ▁MIRACLE - ▁MONSTROUS - ▁OBEDIENCE - ▁OBSCURE - ▁PHENOMENA - ▁RESIDENCE - ▁RESOURCE - ▁REVOLT - ▁SCIENTIFIC - ▁SHIELD - ▁SIMPSON - ▁UNIVERSE - VOLUNTARY - ▁ATTENTIVE - ▁BRENDA - ▁DEPOSIT - ▁MAXIM - ▁REJECT - ▁STIRRED - ▁DISORDER - ▁SERENE - ▁TOBACCO - ▁MILTON - ▁BALLOON - ▁STEPHEN - ▁STRAIT - ▁CHINESE - ▁COURTEOUS - ▁RELEASE - ▁RECESS - ▁COTTON - ▁STUMP - ▁TANK - ▁PROMOTE - ▁DERIVE - ▁LOYAL - ▁GRANIT - ▁DISMAL - ▁CATTLE - ▁DOONE - ▁CUPID - DIGNIFIED - ▁RIPE - ▁EXILE - ▁ANTIQU - UMINAT - ▁SUPPOS - ▁WRETCH - ▁IDENTI - ▁EASI - ▁SERV - ▁QUEST - TOWN - ▁ACHIEVEMENT - ▁APPETITE - ▁BUCCANEER - ▁COMMENCED - ▁DELAWARE - ▁DISCERN - ▁IMMORTAL - ▁INDIGNANT - ▁JOSIANA - ▁MECHANICAL - ▁MUSKRAT - ▁REVIEW - ▁ROBARTS - ▁SIGNIFICANT - ▁SUBSEQUENT - ▁YOURSELVES - ▁ANGRILY - ▁BORROW - ▁SUBLIME - ▁AFRICA - ▁CHICKEN - ▁DEGRAD - ▁GEORGI - ▁HUMILIAT - ▁LODGING - ▁REDCOAT - ▁VIOLET - ▁HOPKINS - ▁RAWDON - ▁PRICK - ▁WHALE - ▁FUNERAL - ▁GUINEA - ▁DISMAY - ▁PORCH - ▁HARVEST - ▁PARCEL - ▁SUBDU - ▁SYRIA - ▁PANIC - ▁BOUGHS - ▁CIGARETTE - ▁CHRON - ▁INQUIRY - ▁CRYSTAL - ▁SPELL - ▁PLUCK - ▁PATTERN - ▁DARING - ▁CRITICISM - ▁DAINT - ▁DISTURBANCE - ▁BUTCHER - ▁LITERA - ▁ABUSE - IXTURE - ▁ANIMAT - ▁WRIT - ▁BELIEV - ▁INDUCE - COMING - ▁DRAMA - ▁AGITAT - SHAW - ▁IMPERFECT - ▁MANUFACTURE - ▁AFFIRM - ▁ANGUISH - ▁ARTIFICIAL - ▁BIBBS - ▁CHARLOTTE - ▁CIRCUS - ▁CONNISTON - ▁CONSTITUTE - ▁DAZZL - ▁DEFECT - ▁DISCHARG - ▁ESCORT - ▁EXAGGERAT - ▁GWENDOLEN - ▁IRRESISTIBL - ▁PHILOSOPHY - ▁PHOTOGRAPH - ▁PILGRIM - ▁PLEASING - ▁QUIXOTE - ▁RESPONSE - ▁SCRATCH - ▁SERGEANT - ▁SHERIFF - ▁SHUDDER - ▁STRUCTURE - ▁SUFFRAGE - ▁SURRENDER - ▁SWORE - ▁VILLAIN - ▁HESITATING - ▁FLORENCE - ▁IRRITAT - ▁RIGID - ▁SINISTER - ▁STUDIO - ▁RAFT - ▁CHAMPION - ▁PAVEMENT - ▁WOLF - ▁DEVICE - ▁WRECK - ▁HESITATION - ▁LAZY - ▁ADJO - ▁DECENT - ▁INTERVEN - ▁WOOL - ▁ILLUSION - ▁HAWK - ▁IMPART - ▁LUNGS - ▁WINNING - ▁VITAL - ▁CONSPI - ▁SUBTLE - ▁CONSTANC - ▁HURL - 
▁AMIABL - ▁FOLK - GGY - ▁NECESSIT - ▁PROFESS - WASH - ▁ADMIRING - ▁AMBITIOUS - ▁ANTHONY - ▁CEREMONY - ▁CONTRIBUTE - ▁CRAGGS - ▁DETAIN - ▁DISCLOS - ▁DWELT - ▁EGYPT - ▁FELIX - ▁JOURNAL - ▁KWAIRYO - ▁LIBERAL - ▁LUMBER - ▁OCTOBER - ▁ORGANIZATION - ▁POPULACE - ▁PRECAUTION - ▁PREJUDICE - ▁PROCLAIM - ▁PROPRIETOR - ▁RESPONSIBLE - ▁RHYTHM - ▁RIDICULOUS - ▁SCHOLAR - ▁SQUEEZ - ▁SUBSTITUTE - ▁SURPASS - ▁THRESHOLD - ▁WHARTON - ▁FLICKER - ▁AMAZED - ▁BRONZE - ▁COSSACK - ▁SPILETT - ▁CASUAL - ▁DARCY - ▁PARLOUR - ▁SEXUAL - ▁INSECT - ▁NATHAN - ▁EMINENT - ▁PENCIL - ▁PETITION - ▁ROTTEN - ▁VIGIL - ▁CAESAR - ▁EAGLE - ▁TREAD - ▁REACTION - ▁TACIT - ▁PARLOR - ▁SPAIN - ▁WILDERNESS - ▁DICTAT - ▁GRATIFY - ▁STOVE - ▁SKIRT - ▁UTILI - ▁CONCERT - ▁GORGE - ▁DECORAT - ▁LATIN - ▁ANCHOR - ▁KNOT - ▁MONDAY - ▁GABLES - ▁TOLERABL - ▁ROGER - BERRIES - ▁INVAD - IMMER - OMETER - ▁PRODUC - OBIL - ▁PERMISSI - FICIENCY - ▁WANDER - RREL - PIECE - HORN - ▁COMMIT - ▁ACCUMULAT - ▁JAPAN - ▁ABUNDANT - ▁ACADEMY - ▁ALBERT - ▁BANQUET - ▁DELICIOUS - ▁DOCUMENT - ▁EXCLAMATION - ▁FEBRUARY - ▁GROTESQUE - ▁HEATHERSTONE - ▁HUMPHREY - ▁HURSTWOOD - ▁MOHAMMED - ▁MOSCOW - ▁NICHOLAS - ▁OBSTINATE - ▁PHANTOM - ▁PHILOSOPHER - ▁RECEPTION - ▁SPANIARD - ▁SWOLLEN - ▁TELEPHONE - ▁TRIBUTE - ▁TUNNEL - ▁UNREASONABL - ▁WIGWAM - ▁BUTTERFLY - ▁COLLINS - ▁DISPATCH - ▁EDITOR - ▁CONTINENT - ▁DIMINISH - ▁HORRID - ▁KEATS - ▁PROVIDENCE - ▁BEHALF - ▁CHARLEY - ▁DRAKE - ▁LAUNCH - ▁SALOON - ▁GIGANT - ▁DISPUTE - ▁HYSTERI - ▁DEFENCE - ▁SCREEN - ▁VAULT - ▁NINTH - ▁HARBOR - ▁FLANK - ▁SPECK - ▁UPRIGHT - ▁KEMP - ▁CANADA - ▁STALK - ▁OWL - ▁BRUTE - ▁FERRIS - ▁DECREE - ▁HABITUAL - ▁BRISK - ▁INSPIRE - ▁HUSH - ▁CROUCH - ▁FRIDAY - ▁MOUNTAINEER - ▁HISTORIC - ▁BATES - ▁RUSK - ▁SEMI - DICTION - ▁BUSI - ▁REMOV - MMI - ▁SUFFIC - ▁FLEE - ▁LOUIS - NLEA - ▁IMPORT - OLOGY - ▁CLERGY - ▁ADVERTISEMENT - ▁BENEVOLEN - ▁BORODINO - ▁CATHOLIC - ▁COMMERCIAL - ▁CONJECTURE - ▁CURTAIN - ▁CUTHBERT - ▁DEMOCRACY - ▁GUARANTEE - ▁HYPNOSIS - ▁INDEFINITE - ▁INVESTIGATION - ▁IRREGULAR - ▁KOYO - 
▁MERRIWIG - ▁MIRANDA - ▁NICHOLL - ▁ONLOOKER - ▁PERSECUT - ▁RECOGNITION - ▁REJOICE - ▁REMEMBRANCE - ▁REVELATION - ▁SCOLD - ▁SENIOR - ▁SQUIRREL - ▁SYMPATHETIC - ▁TEMPEST - ▁TREACHER - ▁UNDERNEATH - ▁UNEASINESS - ▁UNNECESSARY - ▁UPSTAIRS - ▁VEXATION - ▁ACCESS - ▁CHEAP - ▁ESTIMATE - ▁HAZARD - ▁HORSEBACK - ▁PLUNDER - ▁RASCAL - ▁ROSTOV - ▁ACCUR - ▁GRAVITY - ▁SITUATED - ▁INVARIABL - ▁PLENTIFUL - ▁SPENCER - ▁WALLACE - ▁POLICY - ▁WARRANT - ▁ENVY - ▁LAMB - ▁EXTRACT - ▁CORRAL - ▁PANEL - ▁LINK - ▁LILIES - ▁BECKON - ▁SENOR - ▁BORG - ▁DEBATE - ▁STEER - COGNI - COMB - ▁SETTL - ▁VENERA - ▁FEATURE - ▁TERRIBL - CAPABLE - OLOGICAL - ▁INCESSANT - ▁RESOLUTE - SHAUGHNESSY - ▁ABOLITION - ▁ASSASSIN - ▁BEHAVIOUR - ▁BLUNT - ▁COMMERCE - ▁CONSTANTINOPLE - ▁CRICKET - ▁DISCIPLINE - ▁DROUET - ▁DWARF - ▁INJUSTICE - ▁LUXURY - ▁MANUSCRIPT - ▁MISUNDERSTAND - ▁POLITICIAN - ▁REDOUBT - ▁SALVATION - ▁SERMON - ▁STRUGGLING - ▁SURPRISING - ▁TRIGGER - ▁TUESDAY - ▁TWILIGHT - ▁UNDOUBTEDLY - ▁VEGETABLE - ▁VULGAR - ▁WAISTCOAT - ▁WRINKLE - ▁ALEXANDER - ▁CEILING - ▁ECONOMIC - ▁EVERLASTING - ▁INFLICT - ▁LEVISON - ▁LOBSTER - ▁OVERFLOW - ▁SNATCH - ▁TRAGEDY - ▁DEASEY - ▁ENLIGHTEN - ▁FRIGATE - ▁INSPECT - ▁MARVELLOUS - ▁ATLANTIC - ▁LUFTON - ▁BLADE - ▁CRASH - ▁SLAUGHTER - ▁ANNUAL - ▁CONFERENCE - ▁TWIG - ▁REASSUR - ▁UNIQUE - ▁WRATH - ▁CRADLE - ▁HULLO - ▁LIQUID - ▁MIRTH - ▁EXPERT - ▁HARVEY - ▁RESTORATION - ▁PRETTI - ▁APOLOGY - ▁SLAIN - ▁BARBER - ▁UPROAR - ▁SCANT - ▁BADGER - ▁GROCER - ▁ACRES - ▁BRIDLE - ▁SPECIFI - ▁TANGLE - ▁FERTIL - ▁PATRON - WIXT - LAMOUR - ▁DARN - ▁POPE - ▁PERCEIV - ▁CONCLUDE - ▁SIMPL - ▁GUILT - ▁CARRIE - EFFICIENT - SGIVING - ▁APPOINTMENT - ▁APPRECIATION - ▁CARTRIDGE - ▁CHALLENGE - ▁CRAYFISH - ▁CRIMSON - ▁CUCUMETTO - ▁ENERGETIC - ▁EPOCH - ▁EXAMINING - ▁EXTENSIVE - ▁EXTINGUISH - ▁GLOODY - ▁INSIGNIFICANT - ▁LANDLORD - ▁LANGUID - ▁LEGISLATURE - ▁MAJESTIC - ▁PACIFIC - ▁PASTRINI - ▁PHRONSIE - ▁RECONCIL - ▁SIMULTANEOUS - ▁SKELETON - ▁SKETCH - ▁TRANSFORM - ▁UNJUST - ▁VEXED - ▁ASYLUM - ▁CLUSTER - ▁ERRAND - 
▁EXPEND - ▁NEGATIVE - ▁NORHALA - ▁SCANDAL - ▁STIMULAT - ▁SWEAT - ▁COMPOUND - ▁DECEMBER - ▁EXPAND - ▁PROLONG - ▁PURITAN - ▁CONQUEST - ▁MAGUA - ▁SANCHO - ▁TRENCH - ▁ENTITLE - ▁PEPPER - ▁DISASTER - ▁REGAIN - ▁SHREWD - ▁SULLEN - ▁CLAVIER - ▁COLOSS - ▁SHILLING - ▁ETHEL - ▁MYSTERIES - ▁BULK - ▁GRANDEUR - ▁AGNES - ▁CONVERT - ▁WRIST - ▁GLID - ▁TERRACE - ▁SONYA - ▁DANTES - ▁MOULD - ▁MAGNET - ▁PLOT - RANK - ▁CAVIT - ▁SUBSID - ▁SLAP - TURNED - ▁THREAT - BREAK - ▁ANCESTORS - ▁ANTICIPATED - ▁APPLAUSE - ▁ASSAULT - ▁ATTORNEY - ▁AUTOMATIC - ▁CARAVAN - ▁CATASTROPHE - ▁CAVALCANTI - ▁CROMWELL - ▁ENVOY - ▁EXHAUSTION - ▁FIEND - ▁GENEROSITY - ▁GIMBLET - ▁HARDQUANONNE - ▁HOUARN - ▁INJURY - ▁MACKINSON - ▁OGLETHORPE - ▁PETTICOAT - ▁RASPBERR - ▁REHNHJELM - ▁REJOICING - ▁REMNANT - ▁SCOTLAND - ▁SHRINK - ▁STANDPOINT - ▁TESTIMONY - ▁THEREAFTER - ▁THIRTIETH - ▁TWENTIETH - ▁TYRANT - ▁VENTNOR - ▁VETERAN - ▁WHITTAKER - ▁ZVERKOV - ▁ARCHITECTUR - ▁BLUNDER - ▁DENSHER - ▁FORTNIGHT - ▁JUDITH - ▁MARIANNE - ▁MEMORABLE - ▁REFINED - ▁REVOLV - ▁UNDERTAKING - ▁CLUMP - ▁GRUMBLE - ▁SYMPATHI - ▁TICKET - ▁TWITCH - ▁EDITION - ▁FALANDER - ▁CARTHAGE - ▁ORLEANS - ▁POSSUM - ▁SWITCH - ▁CLUNG - ▁CARDINAL - ▁GNAW - ▁LOCATED - ▁HARROW - ▁RASH - ▁SIEGE - ▁LOAF - ▁BRUISE - ▁REGULAT - ▁RESORT - ▁SARAH - ▁LEVIN - ▁NAVY - ▁MOOSE - ▁STOOL - ▁CHANCELLOR - ▁INGENIOUS - ▁CHALK - ▁PRETENCE - ▁REPAY - ▁ROAST - ▁PLUTO - ▁BAFFL - ▁STUMBL - ▁SPHERE - ▁PLEDGE - ▁SPRAWL - ▁WRAP - ▁FRINGE - ▁DREAR - ARRINGTON - ▁FEDERA - KEEPER - ▁PHYSIC - ▁ADVENT - HUMAN - OLOGIST - ▁ALEXANDR - ▁APPARITION - ▁BARTHOLEMY - ▁CITOYEN - ▁CLIMATE - ▁CONTEMPORAR - ▁DESOLATE - ▁DISCONTENT - ▁ELEPHANT - ▁FERNANDO - ▁FERRALTI - ▁FOLIAGE - ▁FUGITIVE - ▁GAMBLING - ▁INVOLUNTARILY - ▁LABYRINTH - ▁LEGITIMATE - ▁MILLIONAIRE - ▁PERCEPTION - ▁PROPRIETY - ▁REBELLION - ▁REFRAIN - ▁RUGGLES - ▁SCRIPTURE - ▁SPLENDOR - ▁SQUADRON - ▁STRICKEN - ▁SWARM - ▁THEODORA - ▁TOMORROW - ▁VELVET - ▁WOLVES - ▁DISREGARD - ▁GLIMMER - ▁SHROUD - ▁TWINKLING - ▁UNEQUAL - ▁CHANNING - ▁CLUMS - 
▁ENIGMA - ▁NAVIGAT - ▁TARKAS - ▁TEMPERATURE - ▁DIVISION - ▁GRATIFICATION - ▁MONUMENT - ▁SQUEAK - ▁KAVIN - ▁INTERPOSE - ▁THORNTON - ▁SOLUTION - ▁STREAK - ▁SHRILL - ▁APRON - ▁PITEOUS - ▁HAUGHTY - ▁RECKLESS - ▁EMPTI - ▁WADMAN - ▁BONNET - ▁MARTHA - ▁DUMB - ▁SHATTER - ▁ACUTE - ▁BRINK - ▁CAPRICE - ▁HURON - ▁INFERN - ▁FOWL - ▁ENRAGE - ▁ADORN - ▁CRUIS - ▁PROBABILIT - ▁EXPIR - ▁IMPETU - ▁OVERHEAR - BURTON - ▁TRANSLAT - ▁ENGAGE - ▁CONVINCE - ▁ABNORMAL - ▁GESTICULAT - ▁ABOMINABL - ▁ADVERSARY - ▁ADVERTISER - ▁ADVERTISING - ▁ANNIHILAT - ▁ARTILLERY - ▁CATHEDRAL - ▁COMPETITOR - ▁COULSON - ▁CREVICE - ▁CUSHION - ▁DEBRAY - ▁DEJECT - ▁DIETRICH - ▁DISADVANTAGE - ▁ELLISON - ▁EMPHASIS - ▁EXCURSION - ▁FANTASTIC - ▁HYPOTHES - ▁INCONVENIENCE - ▁INDESCRIBABLE - ▁INDUSTRI - ▁INVALID - ▁MERCILESS - ▁MESOPOTAMIA - ▁MOSQUITO - ▁NARRATIVE - ▁NOWADAYS - ▁OPPORTUNITIES - ▁PROMISING - ▁RECTANGLE - ▁REMONSTRANCE - ▁RESTAURANT - ▁RIBBON - ▁SCIENTIST - ▁SHALMANESER - ▁SKULL - ▁SPRUCE - ▁SUBSTANTIAL - ▁SYMBOL - ▁TEAPOT - ▁TERRITORY - ▁TRAFFIC - ▁TREASON - ▁TRUMPET - ▁TYRANN - ▁UNANIMOUS - ▁UNAWARE - ▁VICINITY - ▁WREATH - ▁ZADIG - ▁CHATEAU - ▁CONFRONT - ▁DUCHESS - ▁EMBODI - ▁FEMININ - ▁FURNACE - ▁MONTONI - ▁RENOWN - ▁SMASH - ▁HARVARD - ▁NEWBERRY - ▁PERFUME - ▁SIGNATURE - ▁SPLASH - ▁SUPPOSITION - ▁HARBOUR - ▁ASSURANCE - ▁BRISTOL - ▁BUCKINGHAM - ▁DUDLEY - ▁INTENSITY - ▁CHOPIN - ▁ENLIST - Q - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true joint_net_conf: null model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false use_preprocessor: true token_type: bpe bpemodel: data/en_token_list/bpe_unigram5000/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: default frontend_conf: n_fft: 512 win_length: 400 hop_length: 160 fs: 16k specaug: specaug specaug_conf: apply_time_warp: true 
time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 27 num_freq_mask: 2 apply_time_mask: true time_mask_width_ratio_range: - 0.0 - 0.05 num_time_mask: 5 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_en_bpe5000_sp/train/feats_stats.npz preencoder: null preencoder_conf: {} encoder: conformer encoder_conf: output_size: 256 attention_heads: 4 linear_units: 1024 num_blocks: 12 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.1 input_layer: conv2d normalize_before: true macaron_style: true rel_pos_type: latest pos_enc_layer_type: rel_pos selfattention_layer_type: rel_selfattn activation_type: swish use_cnn_module: true cnn_module_kernel: 31 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: attention_heads: 4 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.1 src_attention_dropout_rate: 0.1 required: - output_dir - token_list version: 0.10.6a1 distributed: false ``` </details> | 46a7a0acf8a3e964ec4219f20cc3e240 |
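The frontend block in the config above uses n_fft: 512, win_length: 400 and hop_length: 160 at a 16 kHz sampling rate, i.e. 25 ms analysis windows with a 10 ms hop. A minimal sketch of what that implies for frame counts, assuming the common center-padded STFT convention of `1 + n_samples // hop_length` frames (the helper name is ours, not from the config):

```python
def stft_frames(n_samples: int, hop_length: int) -> int:
    # Frame count for a center-padded STFT: one frame per hop, plus one.
    return 1 + n_samples // hop_length

fs = 16000   # sampling rate from the config (fs: 16k)
win = 400    # win_length -> 400 / 16000 s = 25 ms windows
hop = 160    # hop_length -> 160 / 16000 s = 10 ms hop

print(win / fs * 1000)       # analysis window length in milliseconds
print(stft_frames(fs, hop))  # feature frames per second of audio
```

So each second of audio becomes roughly 100 feature frames, which the conformer encoder's conv2d input layer then subsamples further.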
apache-2.0 | ['generated_from_trainer'] | false | mobilebert_sa_GLUE_Experiment_data_aug_sst2_256 This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE SST2 dataset. It achieves the following results on the evaluation set: - Loss: 0.5172 - Accuracy: 0.7867 | 22ae3bf128d6d13aa04cbfee1aec677f |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.3529 | 1.0 | 8748 | 0.5172 | 0.7867 | | 0.2729 | 2.0 | 17496 | 0.5752 | 0.7695 | | 0.2317 | 3.0 | 26244 | 0.6663 | 0.7718 | | 0.2039 | 4.0 | 34992 | 0.6987 | 0.7729 | | 0.183 | 5.0 | 43740 | 0.9113 | 0.7810 | | 0.1664 | 6.0 | 52488 | 0.8460 | 0.7844 | | 65ac147c0065a87b7f7c696691fa80b9 |
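The headline numbers reported for this checkpoint (Loss 0.5172, Accuracy 0.7867) match the epoch-1 row, which has the lowest validation loss in the table. A small sketch of that selection, with the rows transcribed by hand:

```python
# (epoch, validation_loss, accuracy) rows transcribed from the table above
results = [
    (1, 0.5172, 0.7867),
    (2, 0.5752, 0.7695),
    (3, 0.6663, 0.7718),
    (4, 0.6987, 0.7729),
    (5, 0.9113, 0.7810),
    (6, 0.8460, 0.7844),
]

# Pick the epoch with the lowest validation loss.
best_epoch, best_loss, best_acc = min(results, key=lambda row: row[1])
print(best_epoch, best_loss, best_acc)
```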
apache-2.0 | ['Recommendation'] | false | MCTI Recommendation Task (uncased) DRAFT Disclaimer: The Brazilian Ministry of Science, Technology, and Innovation (MCTI) has partially supported this project. The model [NLP MCTI Recommendation Multi](https://huggingface.co/spaces/unb-lamfo-nlp-mcti/nlp-mcti-lda-recommender) is part of the project [Research Financing Product Portfolio (FPP)](https://huggingface.co/unb-lamfo-nlp-mcti) and focuses on the task of recommendation, exploring different machine learning strategies that suggest items likely to be useful to a particular individual. Several methods were tested against each other to compare their error estimates. Using an LDA model, a simulated dataset was created. | afde111f3ad76211e16e68c3bafc9b32 |
apache-2.0 | ['Recommendation'] | false | According to the abstract, this model card presents the model's description and its classes. Intended uses are described along with a "how to use" section, which states the necessary conditions for the data used. Further in the card, the data and its limitations and biases are discussed. Tables throughout the page support the information and the tests that were made. How the recommendation is made, the datasets used and the benchmarks generated are all laid out across the model card. | bbd7af6254a339975e84907fad4047cd |
apache-2.0 | ['Recommendation'] | false | Model description The surprise library provides 11 prediction models that estimate ratings using several different collaborative-filtering techniques. The models are listed below with a brief explanation in English; for more information please refer to the package [documentation](https://surprise.readthedocs.io/en/stable/prediction_algorithms_package.html). random_pred.NormalPredictor: Algorithm predicting a random rating based on the distribution of the training set, which is assumed to be normal. baseline_only.BaselineOnly: Algorithm predicting the baseline estimate for a given user and item. knns.KNNBasic: A basic collaborative filtering algorithm. knns.KNNWithMeans: A basic collaborative filtering algorithm, taking into account the mean ratings of each user. knns.KNNWithZScore: A basic collaborative filtering algorithm, taking into account the z-score normalization of each user. knns.KNNBaseline: A basic collaborative filtering algorithm taking into account a baseline rating. matrix_factorization.SVD: The famous SVD algorithm, as popularized by Simon Funk during the Netflix Prize. matrix_factorization.SVDpp: The SVD++ algorithm, an extension of SVD taking into account implicit ratings. matrix_factorization.NMF: A collaborative filtering algorithm based on Non-negative Matrix Factorization. slope_one.SlopeOne: A simple yet accurate collaborative filtering algorithm. co_clustering.CoClustering: A collaborative filtering algorithm based on co-clustering. It is possible to pass a custom dataframe as an argument to this class. The dataframe in question needs to have 3 columns with the following names: ['userID', 'itemID', 'rating']. 
```python class Method: def __init__(self,df): self.df=df self.available_methods=[ 'surprise.NormalPredictor', 'surprise.BaselineOnly', 'surprise.KNNBasic', 'surprise.KNNWithMeans', 'surprise.KNNWithZScore', 'surprise.KNNBaseline', 'surprise.SVD', 'surprise.SVDpp', 'surprise.NMF', 'surprise.SlopeOne', 'surprise.CoClustering', ] def show_methods(self): print('The avaliable methods are:') for i,method in enumerate(self.available_methods): print(str(i)+': '+method) def run(self,the_method): self.the_method=the_method if(self.the_method[0:8]=='surprise'): self.run_surprise() elif(self.the_method[0:6]=='Gensim'): self.run_gensim() elif(self.the_method[0:13]=='Transformers-'): self.run_transformers() else: print('This method is not defined! Try another one.') def run_surprise(self): from surprise import Reader from surprise import Dataset from surprise.model_selection import train_test_split reader = Reader(rating_scale=(1, 5)) data = Dataset.load_from_df(self.df[['userID', 'itemID', 'rating']], reader) trainset, testset = train_test_split(data, test_size=.30) the_method=self.the_method.replace("surprise.", "") eval(f"exec('from surprise import {the_method}')") the_algorithm=locals()[the_method]() the_algorithm.fit(trainset) self.predictions=the_algorithm.test(testset) list_predictions=[(uid,iid,r_ui,est) for uid,iid,r_ui,est,_ in self.predictions] self.predictions_df = pd.DataFrame(list_predictions, columns =['user_id', 'item_id', 'rating','predicted_rating']) ``` Every model was used and evaluated. When faced with each other different methods presented different error estimatives. The surprise library provides 4 different methods to assess the accuracy of the ratings prediction. Those are: rmse, mse, mae and fcp. For further discussion on each metric please visit the package documentation. 
```python class Evaluator: def __init__(self,predictions_df): self.available_evaluators=['surprise.rmse','surprise.mse', 'surprise.mae','surprise.fcp'] self.predictions_df=predictions_df def show_evaluators(self): print('The avaliable evaluators are:') for i,evaluator in enumerate(self.available_evaluators): print(str(i)+': '+evaluator) def run(self,the_evaluator): self.the_evaluator=the_evaluator if(self.the_evaluator[0:8]=='surprise'): self.run_surprise() else: print('This evaluator is not available!') def run_surprise(self): import surprise from surprise import accuracy predictions=[surprise.prediction_algorithms.predictions.Prediction(row['user_id'],row['item_id'],row['rating'],row['predicted_rating'],{}) for index,row in self.predictions_df.iterrows()] self.predictions=predictions self.the_evaluator= 'accuracy.' + self.the_evaluator.replace("surprise.", "") self.acc = eval(f'{self.the_evaluator}(predictions,verbose=True)') ``` | 2b765ce57f0a9a5b3a5bbb3ef12907e9 |
apache-2.0 | ['Recommendation'] | false | Intended uses You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://www.google.com) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at a model like XXX. | 64176614a9a193d578f486b6707c6ee2 |
apache-2.0 | ['Recommendation'] | false | How to use The dataset for collaborative filtering must be: - A dataframe containing the ratings. - It must have three columns, corresponding to the user (raw) ids, the item (raw) ids, and the ratings, in this order. ```python >>> import pandas as pd >>> import numpy as np class Data: ``` The databases (ml_100k, ml_1m and jester) come built into the surprise package for collaborative filtering. ```python def __init__(self): self.available_databases=['ml_100k', 'ml_1m','jester', 'lda_topics', 'lda_rankings', 'uniform'] def show_available_databases(self): print('The avaliable database are:') for i,database in enumerate(self.available_databases): print(str(i)+': '+database) def read_data(self,database_name): self.database_name=database_name self.the_data_reader= getattr(self, 'read_'+database_name.lower()) self.the_data_reader() def read_ml_100k(self): from surprise import Dataset data = Dataset.load_builtin('ml-100k') self.df = pd.DataFrame(data.__dict__['raw_ratings'], columns=['user_id','item_id','rating','timestamp']) self.df.drop(columns=['timestamp'],inplace=True) self.df.rename({'user_id':'userID','item_id':'itemID'},axis=1,inplace=True) def read_ml_1m(self): from surprise import Dataset data = Dataset.load_builtin('ml-1m') self.df = pd.DataFrame(data.__dict__['raw_ratings'], columns=['user_id','item_id','rating','timestamp']) self.df.drop(columns=['timestamp'],inplace=True) self.df.rename({'user_id':'userID','item_id':'itemID'},axis=1,inplace=True) def read_jester(self): from surprise import Dataset data = Dataset.load_builtin('jester') self.df = pd.DataFrame(data.__dict__['raw_ratings'], columns=['user_id','item_id','rating','timestamp']) self.df.drop(columns=['timestamp'],inplace=True) self.df.rename({'user_id':'userID','item_id':'itemID'},axis=1,inplace=True) ``` Hyperparameters - `n_users` : number of simulated users in the database; `n_ratings` : number of simulated rating events in the database. 
This is a fictional dataset based on the choice of a uniformly distributed random rating (from 1 to 5) for one of the simulated users of the recommender system that is being designed in this research project. ```python def read_uniform(self): n_users = 20 n_ratings = 10000 import random opo = pd.read_csv('../oportunidades.csv') df = [(random.randrange(n_users), random.randrange(len(opo)), random.randint(1, 5)) for i in range(n_ratings)] self.df = pd.DataFrame(df, columns = ['userID', 'itemID', 'rating']) ``` Hyperparameters - `n_users` : number of simulated users in the database; `n_ratings` : number of simulated rating events in the database. This first LDA-based dataset builds a model with K = `n_users` topics. LDA topics are used as proxies for simulated users with different clusters of interest. First a random opportunity is chosen, then the proportion of a randomly chosen topic inside the description is multiplied by five. The ceiling of this result is the rating that the fictional user will give to that opportunity. Because the weight predicted by the model is diluted among various topics, it is very rare to find an opportunity with a high value for any single topic. The consequence is that this dataset has really low volatility and most ratings are equal to 1. ```python def read_lda_topics(self): n_users = 20 n_ratings = 10000 import gensim import random import math opo = pd.read_csv('../oportunidades_results.csv') | 0d0b199a3db4d1b9466cfc376c08cbc8 |
apache-2.0 | ['Recommendation'] | false | opo = opo.iloc[np.where(opo['opo_brazil']=='Y')] try: lda_model = gensim.models.ldamodel.LdaModel.load(f'models/lda_model{n_users}.model') except: import generate_users generate_users.gen_model(n_users) lda_model = gensim.models.ldamodel.LdaModel.load(f'models/lda_model{n_users}.model') df = [] for i in range(n_ratings): opo_n = random.randrange(len(opo)) txt = opo.loc[opo_n,'opo_texto'] opo_bow = lda_model.id2word.doc2bow(txt.split()) topics = lda_model.get_document_topics(opo_bow) topics = {topic[0]:topic[1] for topic in topics} user = random.sample(topics.keys(), 1)[0] rating = math.ceil(topics[user]*5) df.append((user, opo_n, rating)) self.df = pd.DataFrame(df, columns = ['userID', 'itemID', 'rating']) def read_lda_rankings(self): n_users = 9 n_ratings = 1000 import gensim import random import math import tqdm opo = pd.read_csv('../oportunidades.csv') opo = opo.iloc[np.where(opo['opo_brazil']=='Y')] opo.index = range(len(opo)) path = f'models/output_linkedin_cle_lda_model_{n_users}_topics_symmetric_alpha_auto_beta' lda_model = gensim.models.ldamodel.LdaModel.load(path) df = [] pbar = tqdm.tqdm(total= n_ratings) for i in range(n_ratings): opo_n = random.randrange(len(opo)) txt = opo.loc[opo_n,'opo_texto'] opo_bow = lda_model.id2word.doc2bow(txt.split()) topics = lda_model.get_document_topics(opo_bow) topics = {topic[0]:topic[1] for topic in topics} prop = pd.DataFrame([topics], index=['prop']).T.sort_values('prop', ascending=True) prop['rating'] = range(1, len(prop)+1) prop['rating'] = prop['rating']/len(prop) prop['rating'] = prop['rating'].apply(lambda x: math.ceil(x*5)) prop.reset_index(inplace=True) prop = prop.sample(1) df.append((prop['index'].values[0], opo_n, prop['rating'].values[0])) pbar.update(1) pbar.close() self.df = pd.DataFrame(df, columns = ['userID', 'itemID', 'rating']) ``` | c3a16170470ac8789209d256bad57037 |
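The rating rule used by read_lda_topics above — pick a topic present in the document and rate the item as the ceiling of five times that topic's proportion — can be isolated as a tiny pure-Python function. A sketch with a hypothetical topic distribution (no gensim involved; the function name is ours):

```python
import math
import random

def topic_rating(topic_proportions, rng):
    # Choose one topic as the simulated "user", then rate the item as
    # ceil(proportion * 5), mirroring the simulation rule above.
    user = rng.choice(sorted(topic_proportions))
    return user, math.ceil(topic_proportions[user] * 5)

# Hypothetical LDA output for one opportunity: {topic_id: proportion}
topics = {0: 0.62, 3: 0.30, 7: 0.08}
user, rating = topic_rating(topics, random.Random(0))
print(user, rating)  # rating always lands in 1..5 since proportions are in (0, 1]
```

Because each document's weight is spread across many topics, most draws hit small proportions, which is exactly why the text above notes that most simulated ratings equal 1.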
apache-2.0 | ['Recommendation'] | false | Limitations and bias In this project we faced some obstacles; some were overcome, but others, by the nature of the project, could not be fully solved. Databases containing profiles of possible users of the planned prototype are not available. For this reason, it was necessary to carry out simulations to represent the interests of these users, so that the recommendation system could be modeled. A simulation of clusters of latent interests was performed, based on topics present in the texts describing financial products. Because we built the dataset ourselves, there has been no interaction between real users and the items yet; we therefore lack realistic ratings, which makes the results less reliable. Later on, we used a database of scraped LinkedIn profiles. The problem is that the profiles LinkedIn shows are biased: the profiles that appeared were geographically close, or related to the user's organization and e-mail. | 175c2d9c0240e07256797e6298c4efd6 |
apache-2.0 | ['Recommendation'] | false | Checkpoints - Example ```python data=Data() data.show_available_databases() data.read_data('ml_100k') method=Method(data.df) method.show_methods() method.run('surprise.KNNWithMeans') predictions_df=method.predictions_df evaluator=Evaluator(predictions_df) evaluator.show_evaluators() evaluator.run('surprise.mse') ``` The avaliable database are: 0: ml_100k 1: ml_1m 2: jester 3: lda_topics 4: lda_rankings 5: uniform The avaliable methods are: 0: surprise.NormalPredictor 1: surprise.BaselineOnly 2: surprise.KNNBasic 3: surprise.KNNWithMeans 4: surprise.KNNWithZScore 5: surprise.KNNBaseline 6: surprise.SVD 7: surprise.SVDpp 8: surprise.NMF 9: surprise.SlopeOne 10: surprise.CoClustering Computing the msd similarity matrix... Done computing similarity matrix. The avaliable evaluators are: 0: surprise.rmse 1: surprise.mse 2: surprise.mae 3: surprise.fcp MSE: 0.9146 Next, we have the code that builds the table with the accuracy metrics for all rating prediction models built-in the surprise package. The expected return of this function is a pandas dataframe (11x4) corresponding to the 11 classifier models and 4 different accuracy metrics. ```python def model_table(label): import tqdm table = pd.DataFrame() data=Data() data.read_data(label) method=Method(data.df) for m in method.available_methods: print(m) method.run(m) predictions_df=method.predictions_df evaluator=Evaluator(predictions_df) metrics = [] for e in evaluator.available_evaluators: evaluator.run(e) metrics.append(evaluator.acc) table = table.append(dict(zip(evaluator.available_evaluators,metrics)),ignore_index=True) table.index = [x[9:] for x in method.available_methods] table.columns = [x[9:].upper() for x in evaluator.available_evaluators] return table import sys, os sys.stdout = open(os.devnull, 'w') | f37db1651acef9378a045ea5c0669215 |
apache-2.0 | ['Recommendation'] | false | # Code to re-enable the prints sys.stdout = sys.__stdout__ ``` - Usage Example This section explains how the recommendation is made for the user. ```python import gradio as gr import random import pandas as pd opo = pd.read_csv('oportunidades_results.csv', lineterminator='\n') | d13e7ee39890a936dbd6adba64e09d5b |
apache-2.0 | ['Recommendation'] | false | opo = opo.iloc[np.where(opo['opo_brazil']=='Y')] simulation = pd.read_csv('simulation2.csv') userID = max(simulation['userID']) + 1 This function creates the string that will be displayed to the user in the app, showing the opportunity's title, link and summary. def build_display_text(opo_n): title = opo.loc[opo_n]['opo_titulo'] link = opo.loc[opo_n]['link'] summary = opo.loc[opo_n]['facebook-bart-large-cnn_results'] display_text = f"**{title}**\n\nURL:\n{link}\n\nSUMMARY:\n{summary}" return display_text ``` Here, 4 random opportunities are generated. ```python opo_n_one = random.randrange(len(opo)) opo_n_two = random.randrange(len(opo)) opo_n_three = random.randrange(len(opo)) opo_n_four = random.randrange(len(opo)) evaluated = [] ``` The next function is "predict_next", which accepts an option and a rating. ```python def predict_next(option, nota): global userID global opo_n_one global opo_n_two global opo_n_three global opo_n_four global evaluated global opo global simulation ``` Here, the position of the rated opportunity in our database is taken. ```python selected = [opo_n_one, opo_n_two, opo_n_three, opo_n_four][int(option)-1] ``` Here a new dataframe called simulation is created: it takes the previous simulation and appends a new row with the ID of the user, the rated item and the rating given to the selected opportunity. 
```python simulation = simulation.append({'userID': userID, 'itemID': selected, 'rating': nota}, ignore_index=True) evaluated.append(selected) from surprise import Reader reader = Reader(rating_scale=(1, 5)) from surprise import Dataset data = Dataset.load_from_df(simulation[['userID', 'itemID', 'rating']], reader) trainset = data.build_full_trainset() from surprise import SVDpp svdpp = SVDpp() svdpp.fit(trainset) items = list() est = list() for i in range(len(opo)): if i not in evaluated: items.append(i) est.append(svdpp.predict(userID, i).est) opo_n_one = items[est.index(sorted(est)[-1])] opo_n_two = items[est.index(sorted(est)[-2])] opo_n_three = items[est.index(sorted(est)[-3])] opo_n_four = items[est.index(sorted(est)[-4])] return build_display_text(opo_n_one), build_display_text(opo_n_two), build_display_text(opo_n_three), build_display_text(opo_n_four) ``` Here we have the Gradio interface code that builds the app. ```python with gr.Blocks() as demo: with gr.Row(): one_opo = gr.Textbox(build_display_text(opo_n_one), label='Oportunidade 1') two_opo = gr.Textbox(build_display_text(opo_n_two), label='Oportunidade 2') with gr.Row(): three_opo = gr.Textbox(build_display_text(opo_n_three), label='Oportunidade 3') four_opo = gr.Textbox(build_display_text(opo_n_four), label='Oportunidade 4') with gr.Row(): option = gr.Radio(['1', '2', '3', '4'], label='Opção', value = '1') with gr.Row(): nota = gr.Slider(1,5,step=1,label="Nota 1") with gr.Row(): confirm = gr.Button("Confirmar") confirm.click(fn=predict_next, inputs=[option, nota], outputs=[one_opo, two_opo, three_opo, four_opo]) if __name__ == "__main__": demo.launch() ``` | 2f6168bf4538cfd9badc81612ee9eeab |
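One caveat in predict_next above: `items[est.index(sorted(est)[-1])]` resolves ties by always returning the first match, so two equal estimates would display the same opportunity twice. A tie-safe sketch of the same top-4 selection (the function name is ours):

```python
def top_k(items, est, k=4):
    # Pair each item id with its estimated rating, sort by estimate
    # descending, and keep the first k ids (ties resolved by position).
    ranked = sorted(zip(items, est), key=lambda pair: pair[1], reverse=True)
    return [item for item, _ in ranked[:k]]

items = [10, 11, 12, 13, 14]
est = [3.2, 4.8, 4.8, 2.1, 4.0]
print(top_k(items, est))  # -> [11, 12, 14, 10]
```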
apache-2.0 | ['Recommendation'] | false | LDA-GENERATED DATASET ranking ``` | | RMSE | MSE | MAE | FCP | |-----------------|-----------|-----------|-----------|-----------| | NormalPredictor | 1.820737 | 3.315084 | 1.475522 | 0.514134 | | BaselineOnly | 1.072843 | 1.150992 | 0.890233 | 0.556560 | | KNNBasic | 1.232248 | 1.518436 | 0.936799 | 0.648604 | | KNNWithMeans | 1.124166 | 1.263750 | 0.808329 | 0.597148 | | KNNWithZScore | 1.056550 | 1.116299 | 0.750004 | 0.669651 | | KNNBaseline | 1.134660 | 1.287454 | 0.825161 | 0.614270 | | SVD | 0.977468 | 0.955444 | 0.757485 | 0.723829 | | SVDpp | 0.843065 | 0.710758 | 0.670516 | 0.671737 | | NMF | 1.122684 | 1.260420 | 0.722101 | 0.688728 | | SlopeOne | 1.073552 | 1.152514 | 0.747142 | 0.651937 | | CoClustering | 1.293383 | 1.672838 | 1.007951 | 0.494174 | ```python | 0dfeb6c6d139475778b98cf49bc7b532 |
apache-2.0 | ['Recommendation'] | false | BENCHMARK DATASET uniform ``` | | RMSE | MSE | MAE | FCP | |-----------------|-----------|-----------|-----------|-----------| | NormalPredictor | 1.508925 | 2.276854 | 1.226758 | 0.503723 | | BaselineOnly | 1.153331 | 1.330172 | 1.022732 | 0.506818 | | KNNBasic | 1.205058 | 1.452165 | 1.026591 | 0.501168 | | KNNWithMeans | 1.202024 | 1.444862 | 1.028149 | 0.503527 | | KNNWithZScore | 1.216041 |1.478756 | 1.041070 | 0.501582 | | KNNBaseline | 1.225609 | 1.502117 | 1.048107 | 0.498198 | | SVD | 1.176273 | 1.383619 | 1.013285 | 0.502067 | | SVDpp | 1.192619 | 1.422340 | 1.018717 | 0.500909 | | NMF | 1.338216 | 1.790821 | 1.120604 | 0.492944 | | SlopeOne | 1.224219 | 1.498713 | 1.047170 | 0.494298 | | CoClustering | 1.223020 | 1.495778 | 1.033699 | 0.518509 | | 2687c00f21f248ea2de5531a54582171 |
apache-2.0 | ['Recommendation'] | false | BibTeX entry and citation info ```bibtex @unpublished{recommend22, author ={Jo\~{a}o Gabriel de Moraes Souza. and Daniel Oliveira Cajueiro. and Johnathan de O. Milagres. and Vin\'{i}cius de Oliveira Watanabe. and V\'{i}tor Bandeira Borges. and Victor Rafael Celestino.}, title ={A comprehensive review of recommendation systems: method, data, evaluation and coding}, } ``` <a href="https://huggingface.co/exbert/?model=bert-base-uncased"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a> | 43b96fc77f18b644dd60eca20bf37b1f |
apache-2.0 | ['generated_from_trainer'] | false | Article_250v0_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article250v0_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.2589 - Precision: 0.6609 - Recall: 0.6239 - F1: 0.6419 - Accuracy: 0.9219 | 855d35fb77bc2575afccd02791ce88e9 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 112 | 0.2475 | 0.5938 | 0.5559 | 0.5742 | 0.9180 | | No log | 2.0 | 224 | 0.2340 | 0.6483 | 0.6411 | 0.6447 | 0.9247 | | No log | 3.0 | 336 | 0.2589 | 0.6609 | 0.6239 | 0.6419 | 0.9219 | | d9f0b581256221e8a0acc1eb1cf31c93 |
apache-2.0 | ['exbert', 'multiberts', 'multiberts-seed-1'] | false | MultiBERTs Seed 1 Checkpoint 1200k (uncased) Seed 1 intermediate checkpoint 1200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint. The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani). | f567bf518579f7b03a8f0c4fe5dbcf1e |
apache-2.0 | ['exbert', 'multiberts', 'multiberts-seed-1'] | false | How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-1200k') model = BertModel.from_pretrained("multiberts-seed-1-1200k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` | 0921a67a792f940c1a76fc92066c1d78 |
apache-2.0 | ['generated_from_keras_callback'] | false | market_positivity This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4959 - Train Sparse Categorical Accuracy: 0.8060 - Validation Loss: 0.4484 - Validation Sparse Categorical Accuracy: 0.8187 - Epoch: 1 | 884f854ff2344f8af0fdf83ca640f1a6 |
apache-2.0 | ['generated_from_keras_callback'] | false | Training results | Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch | |:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:| | 0.6595 | 0.7184 | 0.5732 | 0.7479 | 0 | | 0.4959 | 0.8060 | 0.4484 | 0.8187 | 1 | | 6db0ee00eba358a67467b12c4dd86f32 |
creativeml-openrail-m | ['text-to-image', 'stable-diffusion'] | false | wedadams_bkdbj Dreambooth model trained by tftgregrge with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample pictures of this concept: | df3cf5029ab2c974c06cb908ba7ddd06 |
apache-2.0 | ['exbert', 'multiberts', 'multiberts-seed-2'] | false | MultiBERTs Seed 2 Checkpoint 1900k (uncased) Seed 2 intermediate checkpoint 1900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint. The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multberts-seed-2). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani). | a03a2838128ee755ab736541cf46d275 |
apache-2.0 | ['exbert', 'multiberts', 'multiberts-seed-2'] | false | How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-1900k') model = BertModel.from_pretrained("multiberts-seed-2-1900k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` | 68668cdcf51a1f3b5a1408228c7f7b97 |
apache-2.0 | ['translation'] | false | opus-mt-de-pl * source languages: de * target languages: pl * OPUS readme: [de-pl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-pl/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-pl/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-pl/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-pl/opus-2020-01-20.eval.txt) | 0e7927d8a6ecf9b250e57ca71713ecc8 |
apache-2.0 | ['summarization', 'generated_from_trainer'] | false | mt5-small-finetuned-amazon-en-es This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.0329 - Rouge1: 16.3034 - Rouge2: 7.8192 - Rougel: 16.0316 - Rougelsum: 15.9173 | fe9392293ce5080101796409d82e4f0f |
apache-2.0 | ['summarization', 'generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:| | 7.0891 | 1.0 | 1209 | 3.2989 | 13.8686 | 6.1132 | 13.3657 | 13.3454 | | 3.9283 | 2.0 | 2418 | 3.1443 | 16.3537 | 7.9374 | 15.8565 | 15.7281 | | 3.5985 | 3.0 | 3627 | 3.1004 | 17.9042 | 9.1908 | 17.5268 | 17.385 | | 3.4285 | 4.0 | 4836 | 3.0578 | 16.3118 | 8.4563 | 15.9252 | 15.9109 | | 3.3222 | 5.0 | 6045 | 3.0587 | 17.5106 | 8.6579 | 17.2096 | 17.1079 | | 3.2554 | 6.0 | 7254 | 3.0497 | 16.9153 | 8.0973 | 16.5874 | 16.4807 | | 3.2085 | 7.0 | 8463 | 3.0309 | 16.3789 | 7.9306 | 16.1233 | 16.0097 | | 3.1856 | 8.0 | 9672 | 3.0329 | 16.3034 | 7.8192 | 16.0316 | 15.9173 | | a2496e86e902b67c8d6efc499fd92131 |
apache-2.0 | ['generated_from_trainer'] | false | swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.3826 - Accuracy: 0.4865 | 9bc54ba55dbcef28577dac024a836d52 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.9 | 7 | 1.4323 | 0.4865 | | 1.5843 | 1.9 | 14 | 1.3999 | 0.4865 | | 1.5007 | 2.9 | 21 | 1.3826 | 0.4865 | | a65375683a98d81e8358387d35faf49a |
mit | [] | false | Sherhook Painting on Stable Diffusion This is the `<sherhook>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`:        | 7ea61a1214df5762269c150225a8eabe |
mit | ['exbert', 'authorship-identification', 'fire2020', 'pan2020', 'ai-soco'] | false | Model description From scratch pre-trained RoBERTa model with 1 layer and 96 attention heads using the [AI-SOCO](https://sites.google.com/view/ai-soco-2020) dataset, which consists of C++ code crawled from the CodeForces website. | 4fb48bdd46422561935100fd9bbaf0fc |
mit | ['exbert', 'authorship-identification', 'fire2020', 'pan2020', 'ai-soco'] | false | BibTeX entry and citation info ```bibtex @inproceedings{ai-soco-2020-fire, title = "Overview of the {PAN@FIRE} 2020 Task on {Authorship Identification of SOurce COde (AI-SOCO)}", author = "Fadel, Ali and Musleh, Husam and Tuffaha, Ibraheem and Al-Ayyoub, Mahmoud and Jararweh, Yaser and Benkhelifa, Elhadj and Rosso, Paolo", booktitle = "Proceedings of The 12th meeting of the Forum for Information Retrieval Evaluation (FIRE 2020)", year = "2020" } ``` <a href="https://huggingface.co/exbert/?model=aliosm/ai-soco-c++-roberta-tiny-96"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a> | e7f3ae3b033e361d807834db01e9cb44 |
creativeml-openrail-m | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image'] | false | Stable Diffusion v1-4 Model Card Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion with 🧨 Diffusers blog](https://huggingface.co/blog/stable_diffusion). The **Stable-Diffusion-v1-4** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2) checkpoint and subsequently fine-tuned on 225k steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). These weights are intended to be used with the 🧨 Diffusers library. If you are looking for the weights to be loaded into the CompVis Stable Diffusion codebase, [come here](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original) | b3b0abb1543881b93ba2f158a6c8bbd7 |
creativeml-openrail-m | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image'] | false | Limitations - The model does not achieve perfect photorealism - The model cannot render legible text - The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to "A red cube on top of a blue sphere" - Faces and people in general may not be generated properly. - The model was trained mainly with English captions and will not work as well in other languages. - The autoencoding part of the model is lossy - The model was trained on a large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material and is not fit for product use without additional safety mechanisms and considerations. - No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data. The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images. | c5d54bb3405583988b90a4fa792a204b |
mit | ['generated_from_trainer'] | false | xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1059 - F1: 0.9275 | 939348aed3251545a416170048a477c3 |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.5416 | 1.0 | 191 | 0.2322 | 0.8378 | | 0.2614 | 2.0 | 382 | 0.1544 | 0.8866 | | 0.1758 | 3.0 | 573 | 0.1059 | 0.9275 | | 59eeba3f28d6bfe96afe931e3c546b7e |
apache-2.0 | ['whisper-event', 'generated_from_trainer'] | false | Whisper Small Assamese This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 `as` (Assamese) dataset. It achieves the following results on the evaluation set: - Loss: 0.6033 - Wer: 35.4990 | dbffac5f2841300d30a5a8b6c3be6447 |
apache-2.0 | ['whisper-event', 'generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 40 - training_steps: 400 - mixed_precision_training: Native AMP | 207eea87e766174830167277e417e82a |
apache-2.0 | ['whisper-event', 'generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 1.0676 | 3.01 | 50 | 0.6487 | 62.5338 | | 0.2252 | 6.03 | 100 | 0.3487 | 36.4916 | | 0.0787 | 9.04 | 150 | 0.3934 | 35.6434 | | 0.0178 | 13.01 | 200 | 0.5057 | 36.0043 | | 0.0048 | 16.02 | 250 | 0.5589 | 35.8239 | | 0.0022 | 19.04 | 300 | 0.5882 | 35.7336 | | 0.0015 | 23.01 | 350 | 0.5985 | 35.5712 | | 0.0013 | 26.02 | 400 | 0.6033 | 35.4990 | | 0d39d16fb5565dfad3e12ca62562acd7 |
apache-2.0 | ['generated_from_trainer'] | false | M6_MLM This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.0237 | faa5cbe0906d7e81d3aafe758530bef1 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.4015 | 1.0 | 25 | 2.1511 | | 2.2207 | 2.0 | 50 | 2.1268 | | 2.168 | 3.0 | 75 | 2.0796 | | 05567e385cdc5b246c20b6ada3311c67 |
mit | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001372 - train_batch_size: 1 - eval_batch_size: 8 - seed: 3313214263 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 | a4ac0e33bf7fe913619810977315fb0b |
apache-2.0 | ['generated_from_trainer'] | false | wav2vec2-adult-child-cls This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1713 - Accuracy: 0.9460 - F1: 0.9509 | 9d6fbc51c912615962e0d54b2a158c49 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.323 | 1.0 | 96 | 0.2699 | 0.9026 | 0.9085 | | 0.2003 | 2.0 | 192 | 0.2005 | 0.9234 | 0.9300 | | 0.1808 | 3.0 | 288 | 0.1780 | 0.9377 | 0.9438 | | 0.1537 | 4.0 | 384 | 0.1673 | 0.9441 | 0.9488 | | 0.1135 | 5.0 | 480 | 0.1713 | 0.9460 | 0.9509 | | 1cde9f80d9d99bd68b8d82047e5857fa |
apache-2.0 | ['generated_from_trainer'] | false | mobilebert_add_GLUE_Experiment_logit_kd_mrpc_128 This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.5534 - Accuracy: 0.6838 - F1: 0.8122 - Combined Score: 0.7480 | 464710437c37600cd650127ebab16c80 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:| | 0.6399 | 1.0 | 29 | 0.5562 | 0.6838 | 0.8122 | 0.7480 | | 0.6101 | 2.0 | 58 | 0.5559 | 0.6838 | 0.8122 | 0.7480 | | 0.6111 | 3.0 | 87 | 0.5557 | 0.6838 | 0.8122 | 0.7480 | | 0.6104 | 4.0 | 116 | 0.5572 | 0.6838 | 0.8122 | 0.7480 | | 0.6086 | 5.0 | 145 | 0.5550 | 0.6838 | 0.8122 | 0.7480 | | 0.6058 | 6.0 | 174 | 0.5534 | 0.6838 | 0.8122 | 0.7480 | | 0.6036 | 7.0 | 203 | 0.5745 | 0.6838 | 0.8122 | 0.7480 | | 0.5969 | 8.0 | 232 | 0.5595 | 0.6838 | 0.8122 | 0.7480 | | 0.5735 | 9.0 | 261 | 0.5699 | 0.6838 | 0.8122 | 0.7480 | | 0.5597 | 10.0 | 290 | 0.5608 | 0.6838 | 0.8122 | 0.7480 | | 0.5456 | 11.0 | 319 | 0.5714 | 0.6838 | 0.8122 | 0.7480 | | 9bb6c4c518f88bfcc4f492e3c428728e |