modelId
stringlengths
4
111
lastModified
stringlengths
24
24
tags
list
pipeline_tag
stringlengths
5
30
author
stringlengths
2
34
config
null
securityStatus
null
id
stringlengths
4
111
likes
int64
0
9.53k
downloads
int64
2
73.6M
library_name
stringlengths
2
84
created
timestamp[us]
card
stringlengths
101
901k
card_len
int64
101
901k
embeddings
list
m3hrdadfi/wav2vec2-xlsr-persian-speech-emotion-recognition
2021-07-27T06:12:46.000Z
[ "transformers", "pytorch", "wav2vec2", "audio", "automatic-speech-recognition", "speech", "speech-emotion-recognition", "fa", "dataset:ShEMO", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
m3hrdadfi
null
null
m3hrdadfi/wav2vec2-xlsr-persian-speech-emotion-recognition
3
1,228
transformers
2022-03-02T23:29:05
--- language: fa datasets: - ShEMO tags: - audio - automatic-speech-recognition - speech - speech-emotion-recognition license: apache-2.0 --- # Emotion Recognition in Persian (Farsi - fa) Speech using Wav2Vec 2.0 ## How to use ### Requirements ```bash # requirement packages !pip install git+https://github.com/huggingface/datasets.git !pip install git+https://github.com/huggingface/transformers.git !pip install torchaudio !pip install librosa ``` ### Prediction ```python import torch import torch.nn as nn import torch.nn.functional as F import torchaudio from transformers import AutoConfig, Wav2Vec2FeatureExtractor import librosa import IPython.display as ipd import numpy as np import pandas as pd ``` ```python device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model_name_or_path = "m3hrdadfi/wav2vec2-xlsr-persian-speech-emotion-recognition" config = AutoConfig.from_pretrained(model_name_or_path) feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name_or_path) sampling_rate = feature_extractor.sampling_rate # NOTE: Wav2Vec2ForSpeechClassification is a custom classification head from the author's soxan repo (https://github.com/m3hrdadfi/soxan); it is not part of transformers model = Wav2Vec2ForSpeechClassification.from_pretrained(model_name_or_path).to(device) ``` ```python def speech_file_to_array_fn(path, sampling_rate): speech_array, _sampling_rate = torchaudio.load(path) resampler = torchaudio.transforms.Resample(_sampling_rate, sampling_rate) speech = resampler(speech_array).squeeze().numpy() return speech def predict(path, sampling_rate): speech = speech_file_to_array_fn(path, sampling_rate) inputs = feature_extractor(speech, sampling_rate=sampling_rate, return_tensors="pt", padding=True) inputs = {key: inputs[key].to(device) for key in inputs} with torch.no_grad(): logits = model(**inputs).logits scores = F.softmax(logits, dim=1).detach().cpu().numpy()[0] outputs = [{"Label": config.id2label[i], "Score": f"{round(score * 100, 3):.1f}%"} for i, score in enumerate(scores)] return outputs ``` ```python path = "/path/to/sadness.wav" outputs = predict(path, sampling_rate) ``` ```bash [ {'Label': 'Anger', 
'Score': '0.0%'}, {'Label': 'Fear', 'Score': '0.0%'}, {'Label': 'Happiness', 'Score': '0.0%'}, {'Label': 'Neutral', 'Score': '0.0%'}, {'Label': 'Sadness', 'Score': '99.9%'}, {'Label': 'Surprise', 'Score': '0.0%'} ] ``` ## Evaluation The following table summarizes the scores obtained by the model overall and per class. | Emotions | precision | recall | f1-score | accuracy | |:---------:|:---------:|:------:|:--------:|:--------:| | Anger | 0.95 | 0.95 | 0.95 | | | Fear | 0.33 | 0.17 | 0.22 | | | Happiness | 0.69 | 0.69 | 0.69 | | | Neutral | 0.91 | 0.94 | 0.93 | | | Sadness | 0.92 | 0.85 | 0.88 | | | Surprise | 0.81 | 0.88 | 0.84 | | | | | | Overall | 0.90 | ## Questions? Post a GitHub issue from [HERE](https://github.com/m3hrdadfi/soxan/issues).
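The card's `predict` post-processing (softmax over raw logits, formatted as percentage strings) can be sketched without torch. The label order below is assumed from the card's example output, and the logits are made-up illustrative values:

```python
import math

# Label order assumed from the card's example output (not read from the model config).
ID2LABEL = ["Anger", "Fear", "Happiness", "Neutral", "Sadness", "Surprise"]

def scores_from_logits(logits):
    """Softmax over raw logits, formatted like the card's `predict` output."""
    exps = [math.exp(z) for z in logits]
    total = sum(exps)
    return [
        {"Label": ID2LABEL[i], "Score": f"{100 * e / total:.1f}%"}
        for i, e in enumerate(exps)
    ]

# Made-up logits where "Sadness" dominates, mirroring the sadness.wav example.
outputs = scores_from_logits([-2.0, -3.0, -1.5, 0.0, 7.0, -2.5])
```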
2,989
[ [ -0.032745361328125, -0.0430908203125, 0.0247344970703125, 0.0217132568359375, -0.004924774169921875, -0.0071258544921875, -0.0307464599609375, -0.02325439453125, 0.012969970703125, 0.01141357421875, -0.050323486328125, -0.060516357421875, -0.04083251953125, ...
jojo0217/ChatSKKU5.8B
2023-10-24T12:01:26.000Z
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-generation", "ko", "dataset:jojo0217/korean_rlhf_dataset", "license:apache-2.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
jojo0217
null
null
jojo0217/ChatSKKU5.8B
0
1,228
transformers
2023-09-27T12:55:21
--- license: apache-2.0 datasets: - jojo0217/korean_rlhf_dataset language: - ko pipeline_tag: text-generation --- This is a test model built with Sungkyunkwan University industry-academia collaboration data. It was trained on the original 107,000 examples plus 2,000 additional everyday-conversation examples. ___ The model was trained with EleutherAI/polyglot-ko-5.8b as its base; the training parameters were as follows: batch_size: 128 micro_batch_size: 8 num_epochs: 3 learning_rate: 3e-4 cutoff_len: 1024 lora_r: 8 lora_alpha: 16 lora_dropout: 0.05 weight_decay: 0.1 ___ The measured KoBEST 10-shot scores are as follows: ![score](./asset/score.png) ___ The model uses kullm's prompt template. The test code is as follows: https://colab.research.google.com/drive/1xEHewqHnG4p3O24AuqqueMoXq1E3AlT0?usp=sharing ``` from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer model_name="jojo0217/ChatSKKU5.8B" model = AutoModelForCausalLM.from_pretrained( model_name, device_map="auto", load_in_8bit=True, # set to False to disable 8-bit quantization ) tokenizer = AutoTokenizer.from_pretrained(model_name) pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, device_map="auto" ) def answer(message): prompt=f"아래는 작업을 설명하는 명령어입니다. 요청을 적절히 완료하는 응답을 작성하세요.\n\n### 명령어:\n{message}" ans = pipe( prompt + "\n\n### 응답:", do_sample=True, max_new_tokens=512, temperature=0.7, repetition_penalty = 1.0, return_full_text=False, eos_token_id=2, ) msg = ans[0]["generated_text"] return msg answer('성균관대학교에대해 알려줘') ```
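The `answer` function in the card wraps the user message in kullm's instruction template before generation; a minimal sketch of just that prompt construction (pure string formatting, no model required):

```python
def build_prompt(message: str) -> str:
    """Build the kullm-style instruction prompt used by answer() in the card.

    The Korean header reads: "Below is an instruction that describes a task.
    Write a response that appropriately completes the request."
    """
    header = "아래는 작업을 설명하는 명령어입니다. 요청을 적절히 완료하는 응답을 작성하세요."
    return f"{header}\n\n### 명령어:\n{message}\n\n### 응답:"

# Example: "Tell me about Sungkyunkwan University"
prompt = build_prompt("성균관대학교에 대해 알려줘")
```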
1,565
[ [ -0.034393310546875, -0.06903076171875, 0.0165557861328125, 0.01513671875, -0.036529541015625, -0.00672149658203125, 0.00775146484375, -0.01024627685546875, 0.03692626953125, 0.011383056640625, -0.043701171875, -0.037750244140625, -0.0643310546875, 0.01014709...
HumanF-MarkrAI/pub-llama-13b-v2
2023-10-22T18:37:25.000Z
[ "transformers", "pytorch", "llama", "text-generation", "ko", "dataset:HumanF-MarkrAI/pub_COT_v2-2000", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
HumanF-MarkrAI
null
null
HumanF-MarkrAI/pub-llama-13b-v2
0
1,227
transformers
2023-10-21T12:56:10
--- language: - ko datasets: HumanF-MarkrAI/pub_COT_v2-2000 license: cc-by-nc-sa-4.0 --- **This model was developed by the LLM research consortium of MediaGroup Saram-gwa-Soop Co., Ltd. and Markr Co., Ltd.** **The license is `cc-by-nc-sa`.** ## Model Details **Model Developers** Kyujin Han (kyujinpy) **Input** Models input text only. **Output** Models generate text only. **Model Architecture** pub-llama-13b-v2 is an auto-regressive language model based on the LLaMA2 transformer architecture. **Repo Link** Github: [pub-llama📑](Not_yet) **Training Dataset** More details about the dataset: [HumanF-MarkrAI/pub_COT-2000](https://huggingface.co/datasets/HumanF-MarkrAI/pub_COT-2000).
631
[ [ 0.0007138252258300781, -0.059906005859375, 0.0076904296875, 0.0506591796875, -0.0285186767578125, 0.004482269287109375, -0.0117950439453125, -0.0220489501953125, 0.01145172119140625, 0.04730224609375, -0.039764404296875, -0.0423583984375, -0.046844482421875, ...
facebook/blenderbot-90M
2023-01-24T16:29:11.000Z
[ "transformers", "pytorch", "blenderbot-small", "text2text-generation", "convAI", "conversational", "facebook", "en", "dataset:blended_skill_talk", "arxiv:1907.06616", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
conversational
facebook
null
null
facebook/blenderbot-90M
2
1,226
transformers
2022-03-02T23:29:05
--- language: - en thumbnail: tags: - convAI - conversational - facebook license: apache-2.0 datasets: - blended_skill_talk metrics: - perplexity --- # 🚨🚨**IMPORTANT**🚨🚨 **This model is deprecated! Please use the identical model** **https://huggingface.co/facebook/blenderbot_small-90M instead** ## Model description + Paper: [Recipes for building an open-domain chatbot](https://arxiv.org/abs/1907.06616) + [Original PARLAI Code](https://parl.ai/projects/recipes/) ### Abstract Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
1,599
[ [ -0.025299072265625, -0.07269287109375, 0.0219573974609375, 0.0246734619140625, 0.01033782958984375, -0.01361846923828125, -0.0242919921875, -0.02288818359375, 0.005893707275390625, 0.048980712890625, -0.0266571044921875, -0.0118560791015625, -0.055511474609375, ...
FFusion/FFusionXL-LoRa-SDXL-Island-Generator
2023-07-24T23:28:40.000Z
[ "diffusers", "safetensors", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "en", "doi:10.57967/hf/0919", "license:other", "region:us" ]
text-to-image
FFusion
null
null
FFusion/FFusionXL-LoRa-SDXL-Island-Generator
1
1,226
diffusers
2023-07-23T15:20:06
--- license: other base_model: diffusers/stable-diffusion-xl-base-0.9 instance_prompt: a 3d island tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora inference: true library_name: diffusers badges: - alt: Name url: >- https://img.shields.io/badge/Name-FFusion%20XL%20LoRA%20%F0%9F%8F%9D%EF%B8%8F%20%20Island%20Generator-89CFF0 src: >- https://img.shields.io/badge/Name-FFusion%20XL%20LoRA%20%F0%9F%8F%9D%EF%B8%8F%20%20Island%20Generator-89CFF0 - alt: LoRA Type url: https://img.shields.io/badge/LoRA%20Type-LyCORIS%2FLoKr%2C%20Prodigy-blue src: https://img.shields.io/badge/LoRA%20Type-LyCORIS%2FLoKr%2C%20Prodigy-blue - alt: Refiner Compatible url: https://img.shields.io/badge/%F0%9F%94%A5%20Refiner%20Compatible-Yes-success src: https://img.shields.io/badge/%F0%9F%94%A5%20Refiner%20Compatible-Yes-success - alt: CLIP Tested url: >- https://img.shields.io/badge/%F0%9F%92%BB%20CLIP--ViT%2FG%20and%20CLIP--ViT%2FL%20tested-Yes-success src: >- https://img.shields.io/badge/%F0%9F%92%BB%20CLIP--ViT%2FG%20and%20CLIP--ViT%2FL%20tested-Yes-success - alt: Trained Resolution url: >- https://img.shields.io/badge/Trained%20Resolution-1024%20x%201024%20pixels-yellow src: >- https://img.shields.io/badge/Trained%20Resolution-1024%20x%201024%20pixels-yellow - alt: Training Data url: https://img.shields.io/badge/Training%20Data-3%20x%203000%20images-orange src: https://img.shields.io/badge/Training%20Data-3%20x%203000%20images-orange - alt: Tested Resolution url: >- https://img.shields.io/badge/Tested%20Resolution-Up%20to%201600%20pixels-brightgreen src: >- https://img.shields.io/badge/Tested%20Resolution-Up%20to%201600%20pixels-brightgreen - alt: Tested on url: >- https://img.shields.io/badge/Tested%20on-SDXL%200.9%20%26%20FFXL%200.001-blue src: >- https://img.shields.io/badge/Tested%20on-SDXL%200.9%20%26%20FFXL%200.001-blue - alt: Hugging Face Model url: https://img.shields.io/badge/Hugging%20Face-FFusion--BaSE-blue src: 
https://img.shields.io/badge/Hugging%20Face-FFusion--BaSE-blue - alt: GitHub url: https://img.shields.io/badge/GitHub-1e--2-green src: https://img.shields.io/badge/GitHub-1e--2-green - alt: Facebook url: https://img.shields.io/badge/Facebook-FFusionAI-blue src: https://img.shields.io/badge/Facebook-FFusionAI-blue - alt: Civitai url: https://img.shields.io/badge/Civitai-FFusionAI-blue src: https://img.shields.io/badge/Civitai-FFusionAI-blue language: - en --- # FFusion XL LoRA 🏝️Island Generator <div style="display: flex; flex-wrap: wrap; gap: 2px;"> <img src="https://img.shields.io/badge/%F0%9F%94%A5%20Refiner%20Compatible-Yes-success"> <img src="https://img.shields.io/badge/%F0%9F%92%BB%20CLIP--ViT%2FG%20and%20CLIP--ViT%2FL%20tested-Yes-success"> <img src="https://img.shields.io/badge/LoRA%20Type-LyCORIS%2FLoKr%2C%20Prodigy-blue"> <img src="https://img.shields.io/badge/Tested%20on-SDXL%200.9%20%26%20FFXL%200.001-blue"> </div> The FFusion XL LoRA Island Generator is a model designed to generate game assets like islands and objects in low polygonal landscapes, pixelated, 3D, and isometric styles, making it ideal for retro-inspired or stylized game environments. ## Specifications - **Model Name**: FFusion XL LoRA Island Generator - **LoRA Type**: LyCORIS/LoKr, Prodigy - **Trained Resolution**: 1024 x 1024 pixels - **Tested Resolution**: Up to 1600 pixels - **Training Data**: The model was trained on 9,000 images in total, across 3 separate training resumes (runs) of 3,000 images each, providing a diverse dataset for learning. 
<div style="display: flex; flex-wrap: wrap; gap: 4px;"><img src="https://img.shields.io/badge/Trained%20Resolution-1024%20x%201024%20pixels-yellow"> <img src="https://img.shields.io/badge/Training%20Data-3%20x%203000%20images-orange"> <img src="https://img.shields.io/badge/Tested%20Resolution-Up%20to%201600%20pixels-brightgreen"></div> ![image.png](https://cdn-uploads.huggingface.co/production/uploads/6380cf05f496d57325c12194/Kc1lRLznSaz5RO5_rGXkJ.png) ## Refiner Example ![idle-FF_00866_.png](https://cdn-uploads.huggingface.co/production/uploads/6380cf05f496d57325c12194/a1vYenzTmyzlzCaYRa8Lp.png) ![ComfyUI_00258_.png](https://cdn-uploads.huggingface.co/production/uploads/6380cf05f496d57325c12194/SY9od7hM1SlGYMJ1gBsC9.png) ## Usage Recommendations <img src="https://img.shields.io/badge/Name-FFusion%20XL%20LoRA%20%F0%9F%8F%9D%EF%B8%8F%20%20Island%20Generator-89CFF0"> The FFusion XL LoRA Island Generator can be utilized to quickly create game assets for a variety of game projects. It is best suited for applications where a retro or pixelated style is desired, and where low polygonal landscapes and 3D elements are prominent. Designers and developers can leverage the model to streamline the asset creation process, saving valuable time and resources. ## Limitations - The model's performance may vary when generating assets beyond the tested resolution of 1600 pixels. Using it for higher resolutions may lead to a decrease in output quality or increased generation time. - The model's ability to produce highly detailed or realistic assets might be limited due to its pixelated and low polygonal nature. - It is recommended to thoroughly evaluate the generated assets to ensure they align with the desired artistic direction and quality standards of the specific game project. ## Ethical Considerations As with any AI model, ethical considerations should be taken into account during usage. 
It is essential to avoid generating assets that might infringe upon copyrighted material or violate intellectual property rights. Additionally, the model's outputs should be monitored to ensure it does not produce inappropriate, offensive, or harmful content. ## Citations If you use the FFusion XL LoRA Island Generator in your project or research, please provide appropriate citations to acknowledge the model's contribution. ## Disclaimer The FFusion XL LoRA Island Generator is a powerful tool for generating game assets; however, it may not be perfect and might have limitations. Users are encouraged to test and validate the generated assets thoroughly before integrating them into their game projects. The developers of this model hold no responsibility for any consequences that may arise from its usage. <div style="display: flex; flex-wrap: wrap; gap: 2px; align-items: center;"> <p>These are LoRA adaption weights for</p> <a href="https://huggingface.co/stable-diffusion-xl-base-0.9" target="_new" rel="ugc"><img src="https://img.shields.io/badge/stable--diffusion--xl--base--0.9-Model-blue" alt="stable-diffusion-xl-base-0.9"></a> <p>&</p> <a href="https://huggingface.co/FFusionXL-09-SDXL" target="_new" rel="ugc"><img src="https://img.shields.io/badge/FFusionXL--09--SDXL-Model-blue" alt="FFusionXL-09-SDXL"></a> <p>The weights were trained using experimental</p> <a href="https://github.com/kohya-ss/sd-scripts" target="_new" rel="ugc"><img src="https://img.shields.io/badge/kohya--ss%2Fsd--scripts%20build-Model-blue" alt="kohya-ss/sd-scripts build"></a> <p>build</p> </div> ![img_1](./Image_1.png) ![img_2](./Image_2.png) ![img_3](./Image_3.png) ![img_4](./Image_4.png) ![img_5](./Image_5.png) ![img_6](./Image_6.png) ![img_7](./Image_7.png) <div style="display: flex; flex-wrap: wrap; gap: 2px;"> <a href="https://huggingface.co/FFusion/FFusion-BaSE" target="_new" rel="ugc"><img src="https://img.shields.io/badge/Hugging%20Face-FFusion--BaSE-blue" alt="Hugging Face Model"></a> <a 
href="https://github.com/1e-2" target="_new" rel="ugc"><img src="https://img.shields.io/badge/GitHub-1e--2-green" alt="GitHub"></a> <a href="https://www.facebook.com/FFusionAI/" target="_new" rel="ugc"><img src="https://img.shields.io/badge/Facebook-FFusionAI-blue" alt="Facebook"></a> <a href="https://civitai.com/models/82039/ffusion-ai-sd-21" target="_new" rel="ugc"><img src="https://img.shields.io/badge/Civitai-FFusionAI-blue" alt="Civitai"></a> </div> <div style="display: flex; flex-wrap: wrap; gap: 10px; align-items: center;"> <p>These are LoRA adaption weights for</p> <a href="https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9" target="_new" rel="ugc"><img src="https://img.shields.io/badge/stable--diffusion--xl--base--0.9-Model-purple" alt="stable-diffusion-xl-base-0.9"></a> <p>&</p> <a href="https://huggingface.co/FFusion/FFusionXL-09-SDXL" target="_new" rel="ugc"><img src="https://img.shields.io/badge/FFusionXL--09--SDXL-Model-pink" alt="FFusionXL-09-SDXL"></a> <p>The weights were trained using experimental</p> <a href="https://github.com/kohya-ss/sd-scripts" target="_new" rel="ugc"><img src="https://img.shields.io/badge/kohya--ss-sd--scripts-blue" alt="kohya-ss/sd-scripts build"></a> <p>build</p> </div> **Attribution:** "SDXL 0.9 is licensed under the SDXL Research License, Copyright (c) Stability AI Ltd. All Rights Reserved." ## License [SDXL 0.9 Research License](https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9/blob/main/LICENSE.md)
9,080
[ [ -0.0499267578125, -0.046356201171875, 0.018157958984375, 0.006114959716796875, -0.0200042724609375, 0.004852294921875, 0.0099029541015625, -0.047943115234375, 0.055023193359375, 0.0267791748046875, -0.0703125, -0.03399658203125, -0.043121337890625, -0.002344...
hakonmh/sentiment-xdistil-uncased
2023-11-01T21:20:43.000Z
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "finance", "financial-sentiment-analysis", "sentiment-analysis", "en", "arxiv:2303.15056", "license:mit", "endpoints_compatible", "region:us" ]
text-classification
hakonmh
null
null
hakonmh/sentiment-xdistil-uncased
1
1,221
transformers
2023-05-18T17:35:54
--- license: mit language: - en pipeline_tag: text-classification tags: - finance - financial-sentiment-analysis - sentiment-analysis library_name: transformers widget: - text: unemployment hits record low as job opportunities soar - text: unemployment hits record high as job opportunities suffers --- `Sentiment-xDistil` is a model based on [`xtremedistil-l12-h384-uncased`](https://huggingface.co/microsoft/xtremedistil-l12-h384-uncased) fine-tuned for classifying the sentiment of news headlines on a dataset annotated by [Chat GPT 3.5](https://platform.openai.com/docs/models/gpt-3-5). It is built, together with [`Topic-xDistil`](https://huggingface.co/hakonmh/topic-xdistil-uncased), as a tool for filtering out financial news headlines and classifying their sentiment. The code used to train both models and build the dataset is found [here](https://github.com/hakonmh/distilnews). *Notes*: The output labels are either `Negative`, `Neutral`, or `Positive`. The model is suitable for English. ## Performance Results Here are the performance metrics for both models on the test set: | Model | Test Set Size | Accuracy | F1 Score | | --- | --- | --- | --- | | `topic-xdistil-uncased` | 32 799 | 94.44 % | 92.59 % | | `sentiment-xdistil-uncased` | 17 527 | 94.59 % | 93.44 % | ## Data The training data consists of 300k+ news headlines and tweets, and was annotated by [Chat GPT 3.5](https://platform.openai.com/docs/models/gpt-3-5), which has been shown to [outperform crowd-workers for text annotation tasks](https://arxiv.org/pdf/2303.15056.pdf). The sentence labels are defined by the Chat GPT prompt as follows: ```python """ [...] Does the headline convey a Positive, Neutral, or Negative sentiment with \ regard to the current state or potential future impact on the economy or \ the asset described? - Positive sentiment headlines suggest growth, improvement, or \ stability in economic conditions. 
- Neutral sentiment headlines do not clearly indicate a positive or \ negative impact on the economy. - Negative sentiment headlines imply economic decline, uncertainty, \ or unfavorable conditions. [...] """ ``` ## Example Usage Here's a simple example: ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification model = AutoModelForSequenceClassification.from_pretrained("hakonmh/sentiment-xdistil-uncased") tokenizer = AutoTokenizer.from_pretrained("hakonmh/sentiment-xdistil-uncased") SENTENCE = "Global Growth Surges as New Technologies Drive Innovation and Productivity!" inputs = tokenizer(SENTENCE, return_tensors="pt") output = model(**inputs).logits predicted_label = model.config.id2label[output.argmax(-1).item()] print(predicted_label) ``` ```text Positive ``` Or, as a pipeline together with `Topic-xDistil`: ```python from transformers import pipeline topic_classifier = pipeline("sentiment-analysis", model="hakonmh/topic-xdistil-uncased", tokenizer="hakonmh/topic-xdistil-uncased") sentiment_classifier = pipeline("sentiment-analysis", model="hakonmh/sentiment-xdistil-uncased", tokenizer="hakonmh/sentiment-xdistil-uncased") SENTENCE = "Global Growth Surges as New Technologies Drive Innovation and Productivity!" print(topic_classifier(SENTENCE)) print(sentiment_classifier(SENTENCE)) ``` ```text [{'label': 'Economics', 'score': 0.9970171451568604}] [{'label': 'Positive', 'score': 0.9997037053108215}] ``` Tested on `transformers` 4.30.1, and `torch` 2.0.0.
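As the card describes, the two models are meant to be chained: `Topic-xDistil` selects financial headlines and this model labels their sentiment. A minimal sketch of that filtering step, using made-up predictions shaped like the pipeline outputs shown above:

```python
def filter_financial(headlines, topic_preds, sentiment_preds, threshold=0.9):
    """Keep headlines classified as Economics above `threshold`; attach sentiment."""
    kept = []
    for text, topic, sentiment in zip(headlines, topic_preds, sentiment_preds):
        if topic["label"] == "Economics" and topic["score"] >= threshold:
            kept.append({"headline": text, "sentiment": sentiment["label"]})
    return kept

# Made-up predictions shaped like the card's pipeline outputs.
headlines = [
    "Global Growth Surges as New Technologies Drive Innovation and Productivity!",
    "Local team wins championship",
]
topics = [{"label": "Economics", "score": 0.997}, {"label": "Other", "score": 0.91}]
sentiments = [{"label": "Positive", "score": 0.9997}, {"label": "Positive", "score": 0.80}]
result = filter_financial(headlines, topics, sentiments)
```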
3,586
[ [ -0.0377197265625, -0.05206298828125, 0.01390838623046875, 0.0208892822265625, -0.02899169921875, 0.00926971435546875, -0.016632080078125, -0.0147247314453125, 0.0284423828125, 0.01236724853515625, -0.050994873046875, -0.0633544921875, -0.051361083984375, -0....
hfl/chinese-electra-180g-base-discriminator
2021-03-03T01:26:14.000Z
[ "transformers", "pytorch", "tf", "electra", "zh", "arxiv:2004.13922", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
hfl
null
null
hfl/chinese-electra-180g-base-discriminator
9
1,220
transformers
2022-03-02T23:29:05
--- language: - zh license: "apache-2.0" --- # This model is trained on 180G of data; we recommend using it over the original version. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters compared to BERT and its variants. This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra) You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find our resources or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ```
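An ELECTRA discriminator (like this checkpoint) scores every input token as original vs. replaced, which is why it learns efficiently from all tokens rather than only masked ones. A toy sketch of that per-token decision rule, assuming made-up raw discriminator logits (not real model output):

```python
import math

def detect_replaced(tokens, logits, threshold=0.5):
    """Flag tokens the discriminator considers replaced: sigmoid(logit) > threshold."""
    flagged = []
    for token, z in zip(tokens, logits):
        prob_replaced = 1.0 / (1.0 + math.exp(-z))
        if prob_replaced > threshold:
            flagged.append(token)
    return flagged

# Made-up logits: a high logit marks a token as likely substituted by the generator.
tokens = ["the", "chef", "ate", "the", "meal"]
logits = [-3.1, -2.4, 4.2, -3.0, -1.7]
replaced = detect_replaced(tokens, logits)  # only "ate" clears the threshold
```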
1,964
[ [ -0.0238037109375, -0.046875, 0.0217437744140625, 0.006954193115234375, -0.00640869140625, -0.017303466796875, -0.035125732421875, -0.055084228515625, 0.0301361083984375, 0.0347900390625, -0.024810791015625, -0.0174713134765625, -0.01483154296875, 0.009849548...
Salesforce/codegen-2B-mono
2022-10-03T16:18:49.000Z
[ "transformers", "pytorch", "codegen", "text-generation", "arxiv:2203.13474", "license:bsd-3-clause", "endpoints_compatible", "has_space", "region:us" ]
text-generation
Salesforce
null
null
Salesforce/codegen-2B-mono
19
1,220
transformers
2022-04-11T23:18:40
--- license: bsd-3-clause --- # CodeGen (CodeGen-Mono 2B) ## Model description CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models were originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`). The checkpoint included in this repository is denoted as **CodeGen-Mono 2B** in the paper, where "Mono" means the model is initialized with *CodeGen-Multi 2B* and further pre-trained on a Python programming language dataset, and "2B" refers to the number of trainable parameters. ## Training data This checkpoint (CodeGen-Mono 2B) was first initialized with *CodeGen-Multi 2B*, and then pre-trained on the BigPython dataset. The data consists of 71.7B tokens of the Python programming language. See Section 2.1 of the [paper](https://arxiv.org/abs/2203.13474) for more details. ## Training procedure CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs. The family of models was trained using multiple TPU-v4-512 pods by Google, leveraging data and model parallelism. See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details. ## Evaluation results We evaluate our models on two code generation benchmarks: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details. ## Intended Use and Limitations As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and calculating their likelihood. 
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well. ## How to use This model can be easily loaded using the `AutoModelForCausalLM` functionality: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-2B-mono") model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-2B-mono") text = "def hello_world():" input_ids = tokenizer(text, return_tensors="pt").input_ids generated_ids = model.generate(input_ids, max_length=128) print(tokenizer.decode(generated_ids[0], skip_special_tokens=True)) ``` ## BibTeX entry and citation info ```bibtex @article{Nijkamp2022ACP, title={A Conversational Paradigm for Program Synthesis}, author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming}, journal={arXiv preprint}, year={2022} } ```
2,963
[ [ -0.034454345703125, -0.04925537109375, -0.002094268798828125, 0.02447509765625, -0.002552032470703125, 0.0218048095703125, -0.02581787109375, -0.0330810546875, -0.005008697509765625, 0.01486968994140625, -0.03997802734375, -0.036865234375, -0.03228759765625, ...
cardiffnlp/twitter-roberta-base-emotion-multilabel-latest
2023-05-28T05:08:45.000Z
[ "transformers", "pytorch", "tf", "roberta", "text-classification", "generated_from_keras_callback", "en", "endpoints_compatible", "region:us" ]
text-classification
cardiffnlp
null
null
cardiffnlp/twitter-roberta-base-emotion-multilabel-latest
9
1,220
transformers
2023-03-01T21:14:01
--- tags: - generated_from_keras_callback model-index: - name: twitter-roberta-base-emotion-multilabel-latest results: [] pipeline_tag: text-classification language: - en --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # twitter-roberta-base-emotion-multilabel-latest This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-2022-154m](https://huggingface.co/cardiffnlp/twitter-roberta-base-2022-154m) on the [`SemEval 2018 - Task 1 Affect in Tweets`](https://aclanthology.org/S18-1001/) dataset `(subtask: E-c / multilabel classification)`. ## Performance The following metrics are achieved on the test split: - F1 (micro): 0.7169 - F1 (macro): 0.5464 - Jaccard Index (samples): 0.5970 ### Usage #### 1. [tweetnlp](https://pypi.org/project/tweetnlp/) Install tweetnlp via pip. ```shell pip install tweetnlp ``` Load the model in Python. ```python import tweetnlp model = tweetnlp.load_model('topic_classification', model_name='cardiffnlp/twitter-roberta-base-emotion-multilabel-latest') model.predict("I bet everything will work out in the end :)") >> {'label': ['joy', 'optimism']} ``` #### 2. 
pipeline ```shell pip install -U tensorflow==2.10 ``` ```python from transformers import pipeline pipe = pipeline("text-classification", model="cardiffnlp/twitter-roberta-base-emotion-multilabel-latest", return_all_scores=True) pipe("I bet everything will work out in the end :)") >> [[{'label': 'anger', 'score': 0.018903767690062523}, {'label': 'anticipation', 'score': 0.28172484040260315}, {'label': 'disgust', 'score': 0.011607927270233631}, {'label': 'fear', 'score': 0.036411102861166}, {'label': 'joy', 'score': 0.8812029361724854}, {'label': 'love', 'score': 0.09591569006443024}, {'label': 'optimism', 'score': 0.9810988306999207}, {'label': 'pessimism', 'score': 0.016823478043079376}, {'label': 'sadness', 'score': 0.01889917254447937}, {'label': 'surprise', 'score': 0.02702752873301506}, {'label': 'trust', 'score': 0.4155798852443695}]] ``` ### Reference ``` @inproceedings{camacho-collados-etal-2022-tweetnlp, title={{T}weet{NLP}: {C}utting-{E}dge {N}atural {L}anguage {P}rocessing for {S}ocial {M}edia}, author={Camacho-Collados, Jose and Rezaee, Kiamehr and Riahi, Talayeh and Ushio, Asahi and Loureiro, Daniel and Antypas, Dimosthenis and Boisson, Joanne and Espinosa-Anke, Luis and Liu, Fangyu and Mart{\'\i}nez-C{\'a}mara, Eugenio and others}, booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = nov, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
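The multilabel output from `tweetnlp` (`['joy', 'optimism']`) follows from thresholding the per-label scores shown in the pipeline example above; a minimal sketch of that step, using the scores copied from the card:

```python
def multilabel_from_scores(all_scores, threshold=0.5):
    """Return labels whose independent per-label score clears the threshold."""
    return [d["label"] for d in all_scores if d["score"] >= threshold]

# Per-label scores from the card's pipeline example (rounded for readability).
scores = [
    {"label": "anger", "score": 0.0189}, {"label": "anticipation", "score": 0.2817},
    {"label": "disgust", "score": 0.0116}, {"label": "fear", "score": 0.0364},
    {"label": "joy", "score": 0.8812}, {"label": "love", "score": 0.0959},
    {"label": "optimism", "score": 0.9811}, {"label": "pessimism", "score": 0.0168},
    {"label": "sadness", "score": 0.0189}, {"label": "surprise", "score": 0.0270},
    {"label": "trust", "score": 0.4156},
]
labels = multilabel_from_scores(scores)
```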
2,822
[ [ -0.0262603759765625, -0.0491943359375, 0.0203704833984375, 0.0237579345703125, -0.023223876953125, 0.007648468017578125, -0.024627685546875, -0.034271240234375, 0.03582763671875, 0.00638580322265625, -0.044097900390625, -0.0526123046875, -0.071044921875, -0....
davizca87/c-a-g-coinmaker
2023-08-23T00:28:05.000Z
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "license:other", "has_space", "region:us" ]
text-to-image
davizca87
null
null
davizca87/c-a-g-coinmaker
2
1,219
diffusers
2023-08-23T00:27:48
--- license: other tags: - text-to-image - stable-diffusion - lora - diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: c01n widget: - text: c01n --- # C.A.G. - Coinmaker ![Image 0](1722152.jpeg) <p>Hi again! Long time no see.</p><p>Welcome to <strong>Coinmaker</strong>. Coinmaker is my second "concept art generator" here. This time I trained a coin-asset-generator LORA for SDXL 0.9 that makes creating coin assets for videogames, renders, etc. easier, more fun, and more aesthetic. It's especially useful for 3D projection and extrude modelling in 3D software. That's why, instead of generating a small coin, it generates a big coin so all the details can be extruded (as requested)... the requester also told me it could be used for logos/shields, but that's not the main purpose.</p><p><br />The token is <strong>c01n.</strong></p><p><br /><strong>IMPORTANT</strong>: This was tested under <a target="_blank" rel="ugc" href="https://github.com/comfyanonymous/ComfyUI">ComfyUI</a>; I have no idea how it will behave with Automatic1111. 
I strongly recommend using ComfyUI if you want good results, and most importantly, <s>if you want the same settings I used just drag any image into ComfyUI and it will show the full workflow, </s><strong>it seems CivitAI converted all samples to JPG, so instead of dragging the images (if the workflow is not appearing):</strong></p><p><br /><a target="_blank" rel="ugc" href="https://huggingface.co/datasets/davizca87/C01nWorkflow/tree/main">https://huggingface.co/datasets/davizca87/C01nWorkflow/tree/main</a> <strong><span>&lt;- LOAD it via JSON file in ComfyUI</span></strong></p><p><br /><strong><u>How to prompt and which values to use?</u></strong><br />********************************************<br />- The LORA works perfectly with <u>values from 1-1.2</u> and below, though IMO it's better to stay at 1 (you can see Clip and Model 1/1 values).<br />- Resolutions <strong>768x768</strong> and <strong>1024x1024 onwards</strong>; it's very important that the <strong>ratio is a SQUARE.</strong></p><p>- You should prompt with a <u>little description of what you want, keeping in mind it was trained to have a clean background and a type of meta</u>l. This is for a specific purpose, but it currently supports 4 metals (gold, silver, bronze, ancient).</p><p>- Structure it as: "<u>photograph/design/artwork of a (type of metal) c01n with a concept-you-want inscription over black background</u>". A black background makes cleaner creations, while the inscription, although optional, pushes the trained aesthetics forward.</p><p></p><p><strong>This LORA is good for</strong><br /><strong>**********************</strong></p><p>- 3D renders of coins of different types and styles. &lt;- Main purpose<br />- Variations of many of the coins' inner properties, e.g. bloody, melted, dripping, damaged...<br />- Concepts engraved in the coins. 
Some may need prompt tweaking<br />- Realistic or semi-realistic results.</p><p></p><p><strong>This LORA is not to be used for:</strong><br /><strong>**********************************</strong></p><p>- Moving the coins between or across scenes, or putting them inside any container.<br />- Engraving ANY specific figure/character; some won't work. That's an AI limitation.<br />- Filling a scene with coins.</p><p><br /><strong>Massive thanks to the original user</strong> who requested this LORA and let me upload it to CivitAI so everyone could enjoy it. Hope you all enjoy this, and thanks for the continued support on our <a target="_blank" rel="ugc" href="https://www.patreon.com/user?u=90661954">Patreon</a>.</p><p><br />Have fun! David.</p><p></p> ## Image examples for the model: ![Image 1](1722156.jpeg) ![Image 2](1722154.jpeg) ![Image 3](1722155.jpeg) ![Image 4](1722157.jpeg) ![Image 5](1722153.jpeg) ![Image 6](1722159.jpeg) ![Image 7](1722160.jpeg) ![Image 8](1722161.jpeg) ![Image 9](1722164.jpeg)
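The prompting structure the card recommends can be sketched as a small helper. This is purely illustrative (the function and the metal list are my own, inferred from the card's description), not something shipped with the LORA:

```python
# Metals the card says are currently supported (an assumption drawn from its text).
SUPPORTED_METALS = {"gold", "silver", "bronze", "ancient"}

def c01n_prompt(metal, concept, medium="photograph"):
    """Assemble a prompt following the card's recommended structure:
    '<medium> of a <metal> c01n with a <concept> inscription over black background'."""
    if metal not in SUPPORTED_METALS:
        raise ValueError(f"unsupported metal: {metal!r}")
    return f"{medium} of a {metal} c01n with a {concept} inscription over black background"

print(c01n_prompt("gold", "dragon"))
# photograph of a gold c01n with a dragon inscription over black background
```

Swap `medium` for "design" or "artwork" as the card suggests; the black background and the inscription phrase are kept because they steer toward the trained aesthetics.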
3,885
[ [ -0.0286712646484375, -0.056549072265625, 0.022674560546875, 0.00643157958984375, -0.05517578125, 0.004383087158203125, 0.00836181640625, -0.0517578125, 0.0736083984375, 0.057220458984375, -0.036376953125, -0.03521728515625, -0.038970947265625, -0.00492858886...
nayohan/polyglot-ko-5.8b-Inst
2023-10-26T10:37:12.000Z
[ "transformers", "pytorch", "gpt_neox", "text-generation", "generated_from_trainer", "polyglot-ko", "gpt-neox", "KoQuality", "ko", "dataset:DILAB-HYU/KoQuality", "license:apache-2.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
nayohan
null
null
nayohan/polyglot-ko-5.8b-Inst
0
1,219
transformers
2023-10-11T18:17:48
--- language: - ko license: apache-2.0 tags: - generated_from_trainer - polyglot-ko - gpt-neox - KoQuality datasets: - DILAB-HYU/KoQuality pipeline_tag: text-generation base_model: EleutherAI/polyglot-ko-5.8b model-index: - name: KoAlpaca-Polyglot-5.8B results: [] --- This model is a test version trained on a combination of several instruction datasets. The final version can be found at [DILAB-HYU/KoQuality-Polyglot-5.8b](https://huggingface.co/DILAB-HYU/KoQuality-Polyglot-5.8b). ## Training hyperparameters - learning_rate: 5e-5 - train_batch_size: 2 - seed: 42 - distributed_type: multi-GPU (A30 24G) + Cpu Offloading - num_devices: 2 - gradient_accumulation_steps: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2.0 ## Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu117 - Datasets 2.11.0 - deepspeed 0.9.5
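Given the hyperparameters above, the effective global batch size can be sanity-checked with one line of arithmetic (this figure is inferred from the listed values, not stated on the card):

```python
train_batch_size = 2              # per-device micro-batch, as listed above
num_devices = 2                   # multi-GPU (A30 24G)
gradient_accumulation_steps = 32  # as listed above

# Effective global batch size = micro-batch x devices x accumulation steps
effective_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
print(effective_batch_size)  # 128
```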
891
[ [ -0.037078857421875, -0.07659912109375, 0.031707763671875, 0.0010728836059570312, -0.015655517578125, -0.0093841552734375, -0.0076141357421875, -0.01922607421875, -0.0173187255859375, 0.021392822265625, -0.04449462890625, -0.03216552734375, -0.0406494140625, ...
timm/repvit_m3.dist_in1k
2023-08-25T17:36:19.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2307.09283", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/repvit_m3.dist_in1k
0
1,218
timm
2023-08-25T17:36:08
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for repvit_m3.dist_in1k A RepViT image classification model. Trained on ImageNet-1k with distillation by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 10.7 - GMACs: 1.9 - Activations (M): 13.9 - Image size: 224 x 224 - **Papers:** - RepViT: Revisiting Mobile CNN From ViT Perspective: https://arxiv.org/abs/2307.09283 - **Original:** https://github.com/THU-MIG/RepViT - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('repvit_m3.dist_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'repvit_m3.dist_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # 
torch.Size([1, 64, 56, 56]) # torch.Size([1, 128, 28, 28]) # torch.Size([1, 256, 14, 14]) # torch.Size([1, 512, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'repvit_m3.dist_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 512, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Citation ```bibtex @misc{wang2023repvit, title={RepViT: Revisiting Mobile CNN From ViT Perspective}, author={Ao Wang and Hui Chen and Zijia Lin and Hengjun Pu and Guiguang Ding}, year={2023}, eprint={2307.09283}, archivePrefix={arXiv}, primaryClass={cs.CV} } ```
3,438
[ [ -0.03717041015625, -0.02655029296875, -0.005855560302734375, 0.00986480712890625, -0.03411865234375, -0.0272064208984375, -0.01502227783203125, -0.0114288330078125, 0.024932861328125, 0.03289794921875, -0.042572021484375, -0.05364990234375, -0.04632568359375, ...
JackFram/llama-160m-base
2023-11-05T03:53:09.000Z
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "en", "dataset:wikipedia", "arxiv:2305.09781", "license:other", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
JackFram
null
null
JackFram/llama-160m-base
0
1,217
transformers
2023-09-17T18:07:53
--- license: other language: - en datasets: - wikipedia pipeline_tag: text-generation --- ## Model description This is a LLaMA-like model with only 160M parameters trained on Wikipedia and part of the C4-en and C4-realnewslike datasets. No evaluation has been conducted yet, so use it with care. The model is mainly developed as a base Small Speculative Model in the [SpecInfer](https://arxiv.org/abs/2305.09781) paper. ## Citation To cite the model, please use ```bibtex @misc{miao2023specinfer, title={SpecInfer: Accelerating Generative LLM Serving with Speculative Inference and Token Tree Verification}, author={Xupeng Miao and Gabriele Oliaro and Zhihao Zhang and Xinhao Cheng and Zeyu Wang and Rae Ying Yee Wong and Zhuoming Chen and Daiyaan Arfeen and Reyna Abhyankar and Zhihao Jia}, year={2023}, eprint={2305.09781}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
917
[ [ -0.024169921875, -0.052520751953125, 0.03369140625, 0.0048065185546875, -0.03826904296875, -0.0010519027709960938, -0.01136016845703125, -0.058258056640625, 0.048187255859375, 0.037567138671875, -0.055572509765625, -0.03790283203125, -0.033447265625, 0.01390...
facebook/maskformer-swin-large-coco
2023-09-11T20:46:05.000Z
[ "transformers", "pytorch", "safetensors", "maskformer", "vision", "image-segmentation", "dataset:coco", "arxiv:2107.06278", "license:other", "endpoints_compatible", "has_space", "region:us" ]
image-segmentation
facebook
null
null
facebook/maskformer-swin-large-coco
16
1,216
transformers
2022-03-02T23:29:05
--- license: other tags: - vision - image-segmentation datasets: - coco widget: - src: http://images.cocodataset.org/val2017/000000039769.jpg example_title: Cats - src: http://images.cocodataset.org/val2017/000000039770.jpg example_title: Castle --- # MaskFormer MaskFormer model trained on COCO panoptic segmentation (large-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169). Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/maskformer_architecture.png) ## Intended uses & limitations You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other fine-tuned versions on a task that interests you. 
### How to use Here is how to use this model: ```python from transformers import MaskFormerImageProcessor, MaskFormerForInstanceSegmentation from PIL import Image import requests # load MaskFormer fine-tuned on COCO panoptic segmentation processor = MaskFormerImageProcessor.from_pretrained("facebook/maskformer-swin-large-coco") model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-large-coco") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = processor(images=image, return_tensors="pt") outputs = model(**inputs) # model predicts class_queries_logits of shape `(batch_size, num_queries)` # and masks_queries_logits of shape `(batch_size, num_queries, height, width)` class_queries_logits = outputs.class_queries_logits masks_queries_logits = outputs.masks_queries_logits # you can pass them to processor for postprocessing result = processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0] # we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs) predicted_panoptic_map = result["segmentation"] ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer).
2,742
[ [ -0.0499267578125, -0.05633544921875, 0.0200958251953125, 0.0300445556640625, -0.0214691162109375, -0.01474761962890625, 0.004302978515625, -0.048980712890625, 0.03253173828125, 0.0537109375, -0.0615234375, -0.041046142578125, -0.059326171875, -0.014595031738...
hustvl/vitmatte-base-composition-1k
2023-09-21T09:25:07.000Z
[ "transformers", "pytorch", "vitmatte", "vision", "arxiv:2305.15272", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
hustvl
null
null
hustvl/vitmatte-base-composition-1k
2
1,216
transformers
2023-09-10T07:56:12
--- license: apache-2.0 tags: - vision --- # ViTMatte model ViTMatte model trained on Composition-1k. It was introduced in the paper [ViTMatte: Boosting Image Matting with Pretrained Plain Vision Transformers](https://arxiv.org/abs/2305.15272) by Yao et al. and first released in [this repository](https://github.com/hustvl/ViTMatte). Disclaimer: The team releasing ViTMatte did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ViTMatte is a simple approach to image matting, the task of accurately estimating the foreground object in an image. The model consists of a Vision Transformer (ViT) with a lightweight head on top. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/vitmatte_architecture.png" alt="drawing" width="600"/> <small> ViTMatte high-level overview. Taken from the <a href="https://arxiv.org/abs/2305.15272">original paper.</a> </small> ## Intended uses & limitations You can use the raw model for image matting. See the [model hub](https://huggingface.co/models?search=vitmatte) to look for other fine-tuned versions that may interest you. ### How to use We refer to the [docs](https://huggingface.co/docs/transformers/main/en/model_doc/vitmatte#transformers.VitMatteForImageMatting.forward.example). ### BibTeX entry and citation info ```bibtex @misc{yao2023vitmatte, title={ViTMatte: Boosting Image Matting with Pretrained Plain Vision Transformers}, author={Jingfeng Yao and Xinggang Wang and Shusheng Yang and Baoyuan Wang}, year={2023}, eprint={2305.15272}, archivePrefix={arXiv}, primaryClass={cs.CV} } ```
1,716
[ [ -0.054962158203125, -0.042724609375, 0.01029205322265625, 0.012603759765625, -0.028656005859375, -0.028228759765625, 0.00258636474609375, -0.0291748046875, 0.0247039794921875, 0.03057861328125, -0.055572509765625, -0.0309906005859375, -0.050811767578125, -0....
TencentARC/t2iadapter_zoedepth_sd15v1
2023-07-31T10:48:46.000Z
[ "diffusers", "art", "t2i-adapter", "controlnet", "stable-diffusion", "image-to-image", "arxiv:2302.08453", "license:apache-2.0", "diffusers:T2IAdapter", "region:us" ]
image-to-image
TencentARC
null
null
TencentARC/t2iadapter_zoedepth_sd15v1
1
1,215
diffusers
2023-07-14T19:02:00
--- license: apache-2.0 base_model: runwayml/stable-diffusion-v1-5 tags: - art - t2i-adapter - controlnet - stable-diffusion - image-to-image --- # T2I Adapter - Zoedepth T2I Adapter is a network providing additional conditioning to stable diffusion. Each t2i checkpoint takes a different type of conditioning as input and is used with a specific base stable diffusion checkpoint. This checkpoint provides conditioning on zoedepth depth estimation for the stable diffusion 1.5 checkpoint. ## Model Details - **Developed by:** T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** Apache 2.0 - **Resources for more information:** [GitHub Repository](https://github.com/TencentARC/T2I-Adapter), [Paper](https://arxiv.org/abs/2302.08453). - **Cite as:** @misc{ title={T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models}, author={Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie}, year={2023}, eprint={2302.08453}, archivePrefix={arXiv}, primaryClass={cs.CV} } ### Checkpoints | Model Name | Control Image Overview| Control Image Example | Generated Image Example | |---|---|---|---| |[TencentARC/t2iadapter_color_sd14v1](https://huggingface.co/TencentARC/t2iadapter_color_sd14v1)<br/> *Trained with spatial color palette* | A image with 8x8 color palette.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"><img width="64" 
src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"/></a>| |[TencentARC/t2iadapter_canny_sd14v1](https://huggingface.co/TencentARC/t2iadapter_canny_sd14v1)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"/></a>| |[TencentARC/t2iadapter_sketch_sd14v1](https://huggingface.co/TencentARC/t2iadapter_sketch_sd14v1)<br/> *Trained with [PidiNet](https://github.com/zhuoinoulu/pidinet) edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"/></a>| |[TencentARC/t2iadapter_depth_sd14v1](https://huggingface.co/TencentARC/t2iadapter_depth_sd14v1)<br/> *Trained with Midas depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"><img width="64" 
src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"/></a>| |[TencentARC/t2iadapter_openpose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_openpose_sd14v1)<br/> *Trained with OpenPose bone image* | A [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"/></a>| |[TencentARC/t2iadapter_keypose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_keypose_sd14v1)<br/> *Trained with mmpose skeleton image* | A [mmpose skeleton](https://github.com/open-mmlab/mmpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"/></a>| |[TencentARC/t2iadapter_seg_sd14v1](https://huggingface.co/TencentARC/t2iadapter_seg_sd14v1)<br/>*Trained with semantic segmentation* | An [custom](https://github.com/TencentARC/T2I-Adapter/discussions/25) 
segmentation protocol image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"/></a> | |[TencentARC/t2iadapter_canny_sd15v2](https://huggingface.co/TencentARC/t2iadapter_canny_sd15v2)|| |[TencentARC/t2iadapter_depth_sd15v2](https://huggingface.co/TencentARC/t2iadapter_depth_sd15v2)|| |[TencentARC/t2iadapter_sketch_sd15v2](https://huggingface.co/TencentARC/t2iadapter_sketch_sd15v2)|| |[TencentARC/t2iadapter_zoedepth_sd15v1](https://huggingface.co/TencentARC/t2iadapter_zoedepth_sd15v1)|| ## Example 1. Dependencies ```sh pip install diffusers transformers matplotlib ``` 2. Run code: ```python from PIL import Image import torch import numpy as np import matplotlib from diffusers import T2IAdapter, StableDiffusionAdapterPipeline def colorize(value, vmin=None, vmax=None, cmap='gray_r', invalid_val=-99, invalid_mask=None, background_color=(128, 128, 128, 255), gamma_corrected=False, value_transform=None): """Converts a depth map to a color image. Args: value (torch.Tensor, numpy.ndarry): Input depth map. Shape: (H, W) or (1, H, W) or (1, 1, H, W). All singular dimensions are squeezed vmin (float, optional): vmin-valued entries are mapped to start color of cmap. If None, value.min() is used. Defaults to None. vmax (float, optional): vmax-valued entries are mapped to end color of cmap. If None, value.max() is used. Defaults to None. cmap (str, optional): matplotlib colormap to use. Defaults to 'magma_r'. invalid_val (int, optional): Specifies value of invalid pixels that should be colored as 'background_color'. Defaults to -99. 
invalid_mask (numpy.ndarray, optional): Boolean mask for invalid regions. Defaults to None. background_color (tuple[int], optional): 4-tuple RGB color to give to invalid pixels. Defaults to (128, 128, 128, 255). gamma_corrected (bool, optional): Apply gamma correction to colored image. Defaults to False. value_transform (Callable, optional): Apply transform function to valid pixels before coloring. Defaults to None. Returns: numpy.ndarray, dtype - uint8: Colored depth map. Shape: (H, W, 4) """ if isinstance(value, torch.Tensor): value = value.detach().cpu().numpy() value = value.squeeze() if invalid_mask is None: invalid_mask = value == invalid_val mask = np.logical_not(invalid_mask) # normalize vmin = np.percentile(value[mask],2) if vmin is None else vmin vmax = np.percentile(value[mask],85) if vmax is None else vmax if vmin != vmax: value = (value - vmin) / (vmax - vmin) # vmin..vmax else: # Avoid 0-division value = value * 0. # squeeze last dim if it exists # grey out the invalid values value[invalid_mask] = np.nan cmapper = matplotlib.cm.get_cmap(cmap) if value_transform: value = value_transform(value) # value = value / value.max() value = cmapper(value, bytes=True) # (nxmx4) img = value[...] 
img[invalid_mask] = background_color if gamma_corrected: img = img / 255 img = np.power(img, 2.2) img = img * 255 img = img.astype(np.uint8) return img model = torch.hub.load("isl-org/ZoeDepth", "ZoeD_N", pretrained=True) img = Image.open('./images/zoedepth_in.png') out = model.infer_pil(img) zoedepth_image = Image.fromarray(colorize(out)).convert('RGB') zoedepth_image.save('images/zoedepth.png') adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_zoedepth_sd15v1", torch_dtype=torch.float16) pipe = StableDiffusionAdapterPipeline.from_pretrained( "runwayml/stable-diffusion-v1-5", adapter=adapter, safety_checker=None, torch_dtype=torch.float16, variant="fp16" ) pipe.to('cuda') zoedepth_image_out = pipe(prompt="motorcycle", image=zoedepth_image).images[0] zoedepth_image_out.save('images/zoedepth_out.png') ``` ![zoedepth_in](./images/zoedepth_in.png) ![zoedepth](./images/zoedepth.png) ![zoedepth_out](./images/zoedepth_out.png)
10,339
[ [ -0.0187835693359375, -0.00495147705078125, 0.025390625, 0.0252227783203125, -0.035614013671875, -0.01654052734375, 0.0032634735107421875, -0.030364990234375, 0.027496337890625, -0.008056640625, -0.046417236328125, -0.048858642578125, -0.046783447265625, -0.0...
timm/vit_base_patch8_224.augreg_in21k_ft_in1k
2023-05-06T00:00:14.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-21k", "arxiv:2106.10270", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/vit_base_patch8_224.augreg_in21k_ft_in1k
0
1,213
timm
2022-12-22T07:23:26
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k - imagenet-21k --- # Model card for vit_base_patch8_224.augreg_in21k_ft_in1k A Vision Transformer (ViT) image classification model. Trained on ImageNet-21k and fine-tuned on ImageNet-1k (with additional augmentation and regularization) in JAX by paper authors, ported to PyTorch by Ross Wightman. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 86.6 - GMACs: 66.9 - Activations (M): 65.7 - Image size: 224 x 224 - **Papers:** - How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers: https://arxiv.org/abs/2106.10270 - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 - **Dataset:** ImageNet-1k - **Pretrain Dataset:** ImageNet-21k - **Original:** https://github.com/google-research/vision_transformer ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('vit_base_patch8_224.augreg_in21k_ft_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_base_patch8_224.augreg_in21k_ft_in1k', 
pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 785, 768) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @article{steiner2021augreg, title={How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers}, author={Steiner, Andreas and Kolesnikov, Alexander and Zhai, Xiaohua and Wightman, Ross and Uszkoreit, Jakob and Beyer, Lucas}, journal={arXiv preprint arXiv:2106.10270}, year={2021} } ``` ```bibtex @article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
3,903
[ [ -0.038970947265625, -0.029327392578125, -0.003875732421875, 0.007537841796875, -0.02978515625, -0.0250396728515625, -0.0210418701171875, -0.034454345703125, 0.0131988525390625, 0.0240936279296875, -0.040863037109375, -0.03753662109375, -0.048309326171875, 0....
mgoin/all-MiniLM-L6-v2-ONNX
2023-09-21T14:24:40.000Z
[ "transformers", "onnx", "bert", "feature-extraction", "mteb", "deepsparse", "model-index", "endpoints_compatible", "region:us" ]
feature-extraction
mgoin
null
null
mgoin/all-MiniLM-L6-v2-ONNX
0
1,213
transformers
2023-09-20T20:34:45
--- tags: - mteb - deepsparse model-index: - name: all-MiniLM-L6-v2-ONNX results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 61.98507462686567 - type: ap value: 26.55307769885484 - type: f1 value: 56.576554278961936 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 64.75349999999999 - type: ap value: 60.39143292103214 - type: f1 value: 64.04365859718361 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 29.786 - type: f1 value: 29.084451746695827 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 46.191950998304165 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 36.91234422319347 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 63.47681681237331 - type: mrr value: 77.08657608934617 - task: type: Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 82.92207792207792 - type: f1 value: 82.94749339753726 - task: type: Clustering dataset: type: 
mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 38.59023213662521 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 32.322412431760064 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 40.56 - type: f1 value: 36.87680162073889 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 61.7084 - type: ap value: 57.21458607676914 - type: f1 value: 61.02407054427192 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 91.44778841769266 - type: f1 value: 91.13853010701129 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 70.0341997264022 - type: f1 value: 52.81666890250234 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.71553463349025 - type: f1 value: 66.96683401696183 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.38802958977807 - type: f1 
value: 74.71954080631626 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 32.54905871377117 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 29.915756420522765 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 30.79955258185344 - type: mrr value: 31.804908892048367 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 45.62587554038637 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 53.36681782941832 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 87.11941318470207 - type: mrr value: 96.39370705547176 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.78514851485149 - type: cos_sim_ap value: 94.55063045792447 - type: cos_sim_f1 value: 89.01265822784809 - type: cos_sim_precision value: 90.15384615384615 - type: cos_sim_recall value: 87.9 - type: dot_accuracy value: 99.49405940594059 - type: dot_ap value: 80.75914927763819 - type: dot_f1 value: 
73.77605428986914 - type: dot_precision value: 71.5898400752587 - type: dot_recall value: 76.1 - type: euclidean_accuracy value: 99.75247524752476 - type: euclidean_ap value: 92.29488639469919 - type: euclidean_f1 value: 87.00155359917142 - type: euclidean_precision value: 90.22556390977444 - type: euclidean_recall value: 84 - type: manhattan_accuracy value: 99.75247524752476 - type: manhattan_ap value: 92.35450475118803 - type: manhattan_f1 value: 86.98347107438016 - type: manhattan_precision value: 89.95726495726495 - type: manhattan_recall value: 84.2 - type: max_accuracy value: 99.78514851485149 - type: max_ap value: 94.55063045792447 - type: max_f1 value: 89.01265822784809 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 47.27910276403759 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 34.25294402164424 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 50.76324846631252 - type: mrr value: 51.476370851370845 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 65.3314 - type: ap value: 11.750907138159238 - type: f1 value: 50.16451894112558 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 
52.4052065647991 - type: f1 value: 52.62055987764154 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 42.377081093709315 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 84.1151576563152 - type: cos_sim_ap value: 67.85803861885576 - type: cos_sim_f1 value: 64.08006919560113 - type: cos_sim_precision value: 60.260283523123405 - type: cos_sim_recall value: 68.41688654353561 - type: dot_accuracy value: 78.92352625618406 - type: dot_ap value: 48.288660712918215 - type: dot_f1 value: 50.88373919874313 - type: dot_precision value: 40.52236471692211 - type: dot_recall value: 68.3641160949868 - type: euclidean_accuracy value: 83.3581689217381 - type: euclidean_ap value: 65.6113812580966 - type: euclidean_f1 value: 62.50154340041981 - type: euclidean_precision value: 58.737526108145744 - type: euclidean_recall value: 66.78100263852242 - type: manhattan_accuracy value: 83.38797162782382 - type: manhattan_ap value: 65.46092597860742 - type: manhattan_f1 value: 62.46687545169839 - type: manhattan_precision value: 57.468971631205676 - type: manhattan_recall value: 68.41688654353561 - type: max_accuracy value: 84.1151576563152 - type: max_ap value: 67.85803861885576 - type: max_f1 value: 64.08006919560113 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.2504754142896 - type: cos_sim_ap value: 84.70166722958382 - type: cos_sim_f1 value: 76.57057281916886 - type: cos_sim_precision value: 74.5226643346451 - type: cos_sim_recall value: 
78.73421619956883 - type: dot_accuracy value: 85.16125276516475 - type: dot_ap value: 76.68984041722426 - type: dot_f1 value: 71.01665954720207 - type: dot_precision value: 66.04210805084746 - type: dot_recall value: 76.8016630736064 - type: euclidean_accuracy value: 85.97236775720883 - type: euclidean_ap value: 79.0002713617006 - type: euclidean_f1 value: 70.74245510090724 - type: euclidean_precision value: 68.14582292930014 - type: euclidean_recall value: 73.54481059439483 - type: manhattan_accuracy value: 85.94132029339853 - type: manhattan_ap value: 78.94101507696199 - type: manhattan_f1 value: 70.58084540348803 - type: manhattan_precision value: 67.85308326229043 - type: manhattan_recall value: 73.53711117955035 - type: max_accuracy value: 88.2504754142896 - type: max_ap value: 84.70166722958382 - type: max_f1 value: 76.57057281916886 --- # all-MiniLM-L6-v2-ONNX
13,400
[ [ -0.00862884521484375, 0.007564544677734375, 0.042510986328125, 0.0279693603515625, -0.03753662109375, -0.000007450580596923828, 0.055084228515625, 0.0224761962890625, 0.046356201171875, 0.044952392578125, -0.05303955078125, -0.01044464111328125, -0.0274505615234...
snunlp/KR-ELECTRA-discriminator
2022-05-04T06:22:51.000Z
[ "transformers", "pytorch", "electra", "pretraining", "ko", "endpoints_compatible", "region:us" ]
null
snunlp
null
null
snunlp/KR-ELECTRA-discriminator
8
1,212
transformers
2022-03-02T23:29:05
---
language:
- "ko"
---

## KoRean based ELECTRA (KR-ELECTRA)

This is a release of a Korean-specific ELECTRA model with comparable or better performance, developed by the Computational Linguistics Lab at Seoul National University. Our model shows remarkable performance on tasks involving informal text, such as review documents, while still achieving comparable results on other kinds of tasks.

### Released Model
We pre-trained our KR-ELECTRA model following the base-scale model of [ELECTRA](https://github.com/google-research/electra). We trained the model with Tensorflow-v1 on a v3-8 TPU of Google Cloud Platform.

#### Model Details
We followed the training parameters of the base-scale model of [ELECTRA](https://github.com/google-research/electra).

##### Hyperparameters
| model | # of layers | embedding size | hidden size | # of heads |
| ------: | ----------: | -------------: | ----------: | ---------: |
| Discriminator | 12 | 768 | 768 | 12 |
| Generator | 12 | 768 | 256 | 4 |

##### Pretraining
| batch size | train steps | learning rates | max sequence length | generator size |
| ---------: | ----------: | -------------: | ------------------: | -------------: |
| 256 | 700000 | 2e-4 | 128 | 0.33333 |

#### Training Dataset
34GB of Korean text, including Wikipedia documents, news articles, legal texts, news comments, product reviews, and so on. These texts are balanced, consisting of equal ratios of written and spoken data.

#### Vocabulary
vocab size: 30,000

We used morpheme-based unit tokens for our vocabulary, based on the [Mecab-Ko](https://bitbucket.org/eunjeon/mecab-ko-dic/src/master/) morpheme analyzer.

#### Download Link
* Tensorflow-v1 model ([download](https://drive.google.com/file/d/1L_yKEDaXM_yDLwHm5QrXAncQZiMN3BBU/view?usp=sharing))
* PyTorch models on HuggingFace

```python
from transformers import ElectraModel, ElectraTokenizer

model = ElectraModel.from_pretrained("snunlp/KR-ELECTRA-discriminator")
tokenizer = ElectraTokenizer.from_pretrained("snunlp/KR-ELECTRA-discriminator")
```

### Finetuning
We used and slightly edited the finetuning code from [KoELECTRA](https://github.com/monologg/KoELECTRA), with additionally adjusted hyperparameters. You can download the code and config files that we used for our model from our [github](https://github.com/snunlp/KR-ELECTRA).

#### Experimental Results

| | **NSMC**<br/>(acc) | **Naver NER**<br/>(F1) | **PAWS**<br/>(acc) | **KorNLI**<br/>(acc) | **KorSTS**<br/>(spearman) | **Question Pair**<br/>(acc) | **KorQuaD (Dev)**<br/>(EM/F1) | **Korean-Hate-Speech (Dev)**<br/>(F1) |
| :-------------------- | :----------------: | :--------------------: | :----------------: | :------------------: | :-----------------------: | :-------------------------: | :---------------------------: | :-----------------------------------: |
| KoBERT | 89.59 | 87.92 | 81.25 | 79.62 | 81.59 | 94.85 | 51.75 / 79.15 | 66.21 |
| XLM-Roberta-Base | 89.03 | 86.65 | 82.80 | 80.23 | 78.45 | 93.80 | 64.70 / 88.94 | 64.06 |
| HanBERT | 90.06 | 87.70 | 82.95 | 80.32 | 82.73 | 94.72 | 78.74 / 92.02 | 68.32 |
| KoELECTRA-Base | 90.33 | 87.18 | 81.70 | 80.64 | 82.00 | 93.54 | 60.86 / 89.28 | 66.09 |
| KoELECTRA-Base-v2 | 89.56 | 87.16 | 80.70 | 80.72 | 82.30 | 94.85 | 84.01 / 92.40 | 67.45 |
| KoELECTRA-Base-v3 | 90.63 | **88.11** | **84.45** | 82.24 | **85.53** | 95.25 | 84.83 / **93.45** | 67.61 |
| **KR-ELECTRA (ours)** | **91.168** | 87.90 | 82.05 | **82.51** | 85.41 | **95.51** | **84.93** / 93.04 | **74.50** |

The baseline results are taken from [KoELECTRA](https://github.com/monologg/KoELECTRA).
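The discriminator released here is the half of ELECTRA that was pre-trained to flag which tokens in a sentence were swapped in by the generator, i.e. a per-token binary classification read out through a sigmoid. As a library-free sketch of that readout (the tokens and logits below are invented for illustration; with the real checkpoint they would come from the model's per-token discriminator head, which this card does not demonstrate):

```python
import math

def sigmoid(x):
    # map an unbounded logit to a (0, 1) "looks replaced" probability
    return 1.0 / (1.0 + math.exp(-x))

# hypothetical per-token discriminator logits for a 6-token Korean sentence;
# a large positive logit means "this token looks replaced"
tokens = ["나는", "어제", "바나나", "영화", "를", "봤다"]
logits = [-3.1, -2.4, 4.0, -1.9, -2.8, -2.2]

# tokens the discriminator would flag as generator replacements
replaced = [(tok, sigmoid(z)) for tok, z in zip(tokens, logits) if sigmoid(z) > 0.5]
```

This per-token objective is one reason ELECTRA-style models transfer well to noisy, informal text.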
### Citation

```bibtex
@misc{kr-electra,
  author = {Lee, Sangah and Shin, Hyopil},
  title = {KR-ELECTRA: a KoRean-based ELECTRA model},
  year = {2022},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/snunlp/KR-ELECTRA}}
}
```
4,826
[ [ -0.047027587890625, -0.044525146484375, 0.00887298583984375, 0.0009484291076660156, -0.0260009765625, 0.010345458984375, -0.006603240966796875, -0.0195159912109375, 0.051605224609375, 0.0282135009765625, -0.031219482421875, -0.0516357421875, -0.035552978515625, ...
BAAI/bge-base-zh
2023-10-12T03:37:45.000Z
[ "transformers", "pytorch", "safetensors", "bert", "feature-extraction", "zh", "arxiv:2310.07554", "arxiv:2309.07597", "license:mit", "endpoints_compatible", "region:us" ]
feature-extraction
BAAI
null
null
BAAI/bge-base-zh
35
1,212
transformers
2023-08-05T08:02:30
--- license: mit language: - zh --- **Recommend switching to the newest [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5), which has a more reasonable similarity distribution and the same method of usage.** <h1 align="center">FlagEmbedding</h1> <h4 align="center"> <p> <a href=#model-list>Model List</a> | <a href=#frequently-asked-questions>FAQ</a> | <a href=#usage>Usage</a> | <a href="#evaluation">Evaluation</a> | <a href="#train">Train</a> | <a href="#contact">Contact</a> | <a href="#citation">Citation</a> | <a href="#license">License</a> <p> </h4> For more details, please refer to our Github: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding). [English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md) FlagEmbedding can map any text to a low-dimensional dense vector, which can be used for tasks like retrieval, classification, clustering, or semantic search. It can also be used in vector databases for LLMs. ************* 🌟**Updates**🌟 ************* - 10/12/2023: Release [LLM-Embedder](./FlagEmbedding/llm_embedder/README.md), a unified embedding model to support diverse retrieval augmentation needs for LLMs. [Paper](https://arxiv.org/pdf/2310.07554.pdf) :fire: - 09/15/2023: The [technical report](https://arxiv.org/pdf/2309.07597.pdf) of BGE has been released - 09/15/2023: The [massive training data](https://data.baai.ac.cn/details/BAAI-MTP) of BGE has been released - 09/12/2023: New models: - **New reranker model**: release cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than the embedding models. We recommend using/fine-tuning them to re-rank top-k documents returned by embedding models. - **Updated embedding model**: release `bge-*-v1.5` embedding models to alleviate the issue of the similarity distribution and enhance retrieval ability without instruction.
<details> <summary>More</summary> <!-- ### More --> - 09/07/2023: Update [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): Add script to mine hard negatives and support adding instruction during fine-tuning. - 08/09/2023: BGE Models are integrated into **Langchain**, you can use it like [this](#using-langchain); C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard). - 08/05/2023: Release base-scale and small-scale models, **best performance among the models of the same size 🤗** - 08/02/2023: Release `bge-large-*`(short for BAAI General Embedding) Models, **rank 1st on MTEB and C-MTEB benchmark!** :tada: :tada: - 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test dataset. </details> ## Model List `bge` is short for `BAAI general embedding`. | Model | Language | | Description | query instruction for retrieval [1] | |:-------------------------------|:--------:| :--------:| :--------:|:--------:| | [BAAI/llm-embedder](https://huggingface.co/BAAI/llm-embedder) | English | [Inference](./FlagEmbedding/llm_embedder/README.md) [Fine-tune](./FlagEmbedding/llm_embedder/README.md) | a unified embedding model to support diverse retrieval augmentation needs for LLMs | See [README](./FlagEmbedding/llm_embedder/README.md) | | [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | | | [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more 
accurate but less efficient [2] | | | [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | 
[Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-en` | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) |a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` | [1\]: If you need to search the relevant passages to a query, we suggest to add the instruction to the query; in other cases, no instruction is needed, just use the 
original query directly. In all cases, **no instruction** needs to be added to passages. [2\]: Different from embedding model, reranker uses question and document as input and directly output similarity instead of embedding. To balance the accuracy and time cost, cross-encoder is widely used to re-rank top-k documents retrieved by other simple models. For examples, use bge embedding model to retrieve top 100 relevant documents, and then use bge reranker to re-rank the top 100 document to get the final top-3 results. All models have been uploaded to Huggingface Hub, and you can see them at https://huggingface.co/BAAI. If you cannot open the Huggingface Hub, you also can download the models at https://model.baai.ac.cn/models . ## Frequently asked questions <details> <summary>1. How to fine-tune bge embedding model?</summary> <!-- ### How to fine-tune bge embedding model? --> Following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) to prepare data and fine-tune your model. Some suggestions: - Mine hard negatives following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune#hard-negatives), which can improve the retrieval performance. - If you pre-train bge on your data, the pre-trained model cannot be directly used to calculate similarity, and it must be fine-tuned with contrastive learning before computing similarity. - If the accuracy of the fine-tuned model is still not high, it is recommended to use/fine-tune the cross-encoder model (bge-reranker) to re-rank top-k results. Hard negatives also are needed to fine-tune reranker. </details> <details> <summary>2. 
The similarity score between two dissimilar sentences is higher than 0.5</summary> <!-- ### The similarity score between two dissimilar sentences is higher than 0.5 --> **Suggest to use bge v1.5, which alleviates the issue of the similarity distribution.** Since we finetune the models by contrastive learning with a temperature of 0.01, the similarity distribution of the current BGE model is about in the interval \[0.6, 1\]. So a similarity score greater than 0.5 does not indicate that the two sentences are similar. For downstream tasks, such as passage retrieval or semantic similarity, **what matters is the relative order of the scores, not the absolute value.** If you need to filter similar sentences based on a similarity threshold, please select an appropriate similarity threshold based on the similarity distribution on your data (such as 0.8, 0.85, or even 0.9). </details> <details> <summary>3. When does the query instruction need to be used</summary> <!-- ### When does the query instruction need to be used --> For the `bge-*-v1.5`, we improve its retrieval ability when not using instruction. No instruction only has a slight degradation in retrieval performance compared with using instruction. So you can generate embedding without instruction in all cases for convenience. For a retrieval task that uses short queries to find long related documents, it is recommended to add instructions for these short queries. **The best method to decide whether to add instructions for queries is choosing the setting that achieves better performance on your task.** In all cases, the documents/passages do not need to add the instruction. </details> ## Usage ### Usage for Embedding Model Here are some examples for using `bge` models with [FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [Langchain](#using-langchain), or [Huggingface Transformers](#using-huggingface-transformers). 
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
If it doesn't work for you, you can see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for more installation methods.
```python
from FlagEmbedding import FlagModel
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = FlagModel('BAAI/bge-large-zh-v1.5',
                  query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:",
                  use_fp16=True)  # Setting use_fp16 to True speeds up computation with a slight performance degradation
embeddings_1 = model.encode(sentences_1)
embeddings_2 = model.encode(sentences_2)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)

# for s2p (short query to long passage) retrieval tasks, suggest to use encode_queries() which will automatically add the instruction to each query
# the corpus in a retrieval task can still use encode() or encode_corpus(), since passages don't need the instruction
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
q_embeddings = model.encode_queries(queries)
p_embeddings = model.encode(passages)
scores = q_embeddings @ p_embeddings.T
```
For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list).

By default, FlagModel uses all available GPUs when encoding. Set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs, or set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable.
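The `embeddings_1 @ embeddings_2.T` score above is just an inner product of unit-normalized vectors, i.e. cosine similarity, and `encode_queries` amounts to prepending the retrieval instruction before encoding. A stdlib-only sketch of those two steps, with tiny made-up 4-d vectors standing in for real BGE embeddings:

```python
import math

INSTRUCTION = "为这个句子生成表示以用于检索相关文章:"

def add_instruction(queries):
    # what encode_queries effectively does before embedding each query
    return [INSTRUCTION + q for q in queries]

def normalize(v):
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def cosine(a, b):
    # dot product of unit vectors == cosine similarity
    a, b = normalize(a), normalize(b)
    return sum(x * y for x, y in zip(a, b))

# invented stand-ins for a query embedding and two passage embeddings
q = [0.2, 0.9, 0.1, 0.4]
p_pos = [0.25, 0.85, 0.05, 0.5]  # relevant passage: points the same way
p_neg = [-0.7, 0.1, 0.6, -0.2]   # unrelated passage

scores = [cosine(q, p_pos), cosine(q, p_neg)]
```

Because the real model normalizes its outputs, ranking by dot product and ranking by cosine similarity are the same operation.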
#### Using Sentence-Transformers

You can also use the `bge` models with [sentence-transformers](https://www.SBERT.net):
```
pip install -U sentence-transformers
```
```python
from sentence_transformers import SentenceTransformer
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
embeddings_1 = model.encode(sentences_1, normalize_embeddings=True)
embeddings_2 = model.encode(sentences_2, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
For s2p (short query to long passage) retrieval tasks, each short query should start with an instruction (for instructions, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list)). The instruction is not needed for passages.
```python
from sentence_transformers import SentenceTransformer
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
instruction = "为这个句子生成表示以用于检索相关文章:"

model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
q_embeddings = model.encode([instruction + q for q in queries], normalize_embeddings=True)
p_embeddings = model.encode(passages, normalize_embeddings=True)
scores = q_embeddings @ p_embeddings.T
```

#### Using Langchain

You can use `bge` in langchain like this:
```python
from langchain.embeddings import HuggingFaceBgeEmbeddings
model_name = "BAAI/bge-large-en-v1.5"
model_kwargs = {'device': 'cuda'}
encode_kwargs = {'normalize_embeddings': True}  # set True to compute cosine similarity
model = HuggingFaceBgeEmbeddings(
    model_name=model_name,
    model_kwargs=model_kwargs,
    encode_kwargs=encode_kwargs,
    query_instruction="为这个句子生成表示以用于检索相关文章:"
)
model.query_instruction = "为这个句子生成表示以用于检索相关文章:"
```

#### Using HuggingFace Transformers

With the transformers package, you can use the model like this: first, pass your input through the transformer model, then select the last hidden state of the first token (i.e., [CLS]) as the sentence embedding.
```python
from transformers import AutoTokenizer, AutoModel
import torch

# Sentences we want sentence embeddings for
sentences = ["样例数据-1", "样例数据-2"]

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh-v1.5')
model = AutoModel.from_pretrained('BAAI/bge-large-zh-v1.5')
model.eval()

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# for s2p (short query to long passage) retrieval tasks, add the instruction to each query (do not add it to passages)
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
    # Perform pooling. In this case, cls pooling.
    sentence_embeddings = model_output[0][:, 0]
# normalize embeddings
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:", sentence_embeddings)
```

### Usage for Reranker

Different from the embedding model, the reranker uses a question and a document as input and directly outputs a similarity score instead of an embedding. You can get a relevance score by inputting a query and a passage to the reranker. The reranker is optimized with cross-entropy loss, so the relevance score is not bounded to a specific range.
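Because the raw cross-encoder score is an unbounded logit, a common pattern (not specific to this repo) is to use it only for ordering candidates, or to squash it through a sigmoid when a (0, 1) score is wanted. A dependency-free sketch with invented scores:

```python
import math

# hypothetical raw reranker logits for three (query, passage) candidates
candidates = [("passage-a", -5.6), ("passage-b", 3.2), ("passage-c", 0.4)]

# ranking needs only the relative order of the raw scores
reranked = sorted(candidates, key=lambda c: c[1], reverse=True)

# optional: map logits into (0, 1) when a probability-like score is needed
probs = {name: 1.0 / (1.0 + math.exp(-s)) for name, s in candidates}
```

The sigmoid step is purely cosmetic here: it is monotonic, so it never changes the ranking produced by the raw scores.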
#### Using FlagEmbedding

```
pip install -U FlagEmbedding
```

Get relevance scores (higher scores indicate more relevance):

```python
from FlagEmbedding import FlagReranker

reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True)  # use_fp16=True speeds up computation with a slight performance degradation

score = reranker.compute_score(['query', 'passage'])
print(score)

scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']])
print(scores)
```

#### Using HuggingFace Transformers

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large')
model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large')
model.eval()

pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]
with torch.no_grad():
    inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
    scores = model(**inputs, return_dict=True).logits.view(-1, ).float()
    print(scores)
```

## Evaluation

`baai-general-embedding` models achieve **state-of-the-art performance on both the MTEB and C-MTEB leaderboards!** For more details and evaluation tools, see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md).
- **MTEB**:

| Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) | Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | **64.23** | **54.29** | 46.08 | 87.12 | 60.03 | 83.11 | 31.61 | 75.97 |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | 63.55 | 53.25 | 45.77 | 86.55 | 58.86 | 82.4 | 31.07 | 75.53 |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | 62.17 | 51.68 | 43.82 | 84.92 | 58.36 | 81.59 | 30.12 | 74.14 |
| [bge-large-en](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | 63.98 | 53.9 | 46.98 | 85.8 | 59.48 | 81.56 | 32.06 | 76.21 |
| [bge-base-en](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 |
| [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 |
| [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024 | 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 |
| [bge-small-en](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 |
| [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 |
| [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 |
| [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 |
| [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 |
| [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 |
| [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 |

- **C-MTEB**:

We create the benchmark C-MTEB for Chinese text embedding, which consists of 31 datasets from 6 tasks. Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction.
| Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| [**BAAI/bge-large-zh-v1.5**](https://huggingface.co/BAAI/bge-large-zh-v1.5) | 1024 | **64.53** | 70.46 | 56.25 | 81.6 | 69.13 | 65.84 | 48.99 |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | 768 | 63.13 | 69.49 | 53.72 | 79.75 | 68.07 | 65.39 | 47.53 |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | 512 | 57.82 | 61.77 | 49.11 | 70.41 | 63.96 | 60.92 | 44.18 |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | 1024 | 64.20 | 71.53 | 54.98 | 78.94 | 68.32 | 65.11 | 48.39 |
| [bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 53 | 76.77 | 68.58 | 64.91 | 50.01 |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 54.12 | 77.5 | 67.07 | 64.91 | 47.63 |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 58.79 | 63.66 | 48.44 | 69.89 | 67.34 | 56.00 | 48.23 |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 49.45 | 70.35 | 63.64 | 61.48 | 45.09 |
| [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 | 56.91 | 50.47 | 63.99 | 67.52 | 59.34 | 47.68 |
| [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 | 54.75 | 50.42 | 64.3 | 68.2 | 59.66 | 48.88 |
| [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 55.48 | 61.63 | 46.49 | 67.07 | 65.35 | 54.35 | 40.68 |
| [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 55.38 | 59.95 | 45.27 | 66.45 | 65.85 | 53.86 | 45.26 |
| [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 43.35 | 69.56 | 64.31 | 54.28 | 45.68 |
| [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 42.78 | 66.62 | 61 | 49.25 | 44.39 |
| [text2vec-base](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 43.41 | 67.41 | 62.19 | 49.45 | 37.66 |
| [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 44.97 | 70.86 | 60.66 | 49.16 | 30.02 |

- **Reranking**:

See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for the evaluation script.

| Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| text2vec-base-multilingual | 64.66 | 62.94 | 62.51 | 14.37 | 48.46 | 48.6 | 50.26 |
| multilingual-e5-small | 65.62 | 60.94 | 56.41 | 29.91 | 67.26 | 66.54 | 57.78 |
| multilingual-e5-large | 64.55 | 61.61 | 54.28 | 28.6 | 67.42 | 67.92 | 57.4 |
| multilingual-e5-base | 64.21 | 62.13 | 54.68 | 29.5 | 66.23 | 66.98 | 57.29 |
| m3e-base | 66.03 | 62.74 | 56.07 | 17.51 | 77.05 | 76.76 | 59.36 |
| m3e-large | 66.13 | 62.72 | 56.1 | 16.46 | 77.76 | 78.27 | 59.57 |
| bge-base-zh-v1.5 | 66.49 | 63.25 | 57.02 | 29.74 | 80.47 | 84.88 | 63.64 |
| bge-large-zh-v1.5 | 65.74 | 63.39 | 57.03 | 28.74 | 83.45 | 85.44 | 63.97 |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 67.28 | 63.95 | 60.45 | 35.46 | 81.26 | 84.1 | 65.42 |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 67.6 | 64.03 | 61.44 | 37.16 | 82.15 | 84.18 | 66.09 |

\* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks.

## Train

### BAAI Embedding

We pre-train the models using [RetroMAE](https://github.com/staoxiao/RetroMAE) and train them on large-scale pair data using contrastive learning.
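The contrastive objective can be sketched as an InfoNCE loss with in-batch negatives: each query's positive is the matching passage in the batch, and every other passage serves as a negative. This is an illustrative sketch with random tensors and a hypothetical temperature value, not the actual training code:

```python
import torch
import torch.nn.functional as F

def in_batch_infonce(q: torch.Tensor, p: torch.Tensor, temperature: float = 0.05) -> torch.Tensor:
    """Contrastive loss over a batch of L2-normalized (query, passage) pairs.

    q, p: (batch, dim); q[i] matches p[i], and every p[j] with j != i
    acts as an in-batch negative for q[i].
    """
    logits = q @ p.T / temperature       # (batch, batch) similarity matrix
    labels = torch.arange(q.size(0))     # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

# Toy batch of random normalized embeddings.
torch.manual_seed(0)
q = F.normalize(torch.randn(4, 8), dim=-1)
p = F.normalize(torch.randn(4, 8), dim=-1)
loss = in_batch_infonce(q, p)
```

Intuitively, perfectly aligned pairs (p identical to q) drive the loss toward zero, while mismatched pairs are penalized; the low temperature sharpens the softmax so near-misses still incur a cost.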
**You can fine-tune the embedding model on your own data following our [examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).**

We also provide a [pre-train example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain). Note that the goal of pre-training is to reconstruct the text; the pre-trained model cannot be used for similarity calculation directly and needs to be fine-tuned. For more training details for bge, see [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md).

### BGE Reranker

A cross-encoder performs full attention over the input pair, which is more accurate than an embedding model (i.e., a bi-encoder) but also more time-consuming. It can therefore be used to re-rank the top-k documents returned by an embedding model. We train the cross-encoder on multilingual pair data; the data format is the same as for the embedding model, so you can fine-tune it easily following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker). For more details, please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker).

## Contact

If you have any questions or suggestions related to this project, feel free to open an issue or pull request. You can also email Shitao Xiao (stxiao@baai.ac.cn) and Zheng Liu (liuzheng@baai.ac.cn).

## Citation

If you find this repository useful, please consider giving a star :star: and a citation:

```
@misc{bge_embedding,
      title={C-Pack: Packaged Resources To Advance General Chinese Embedding},
      author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff},
      year={2023},
      eprint={2309.07597},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## License

FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE).
The released models can be used for commercial purposes free of charge.
jncraton/gte-tiny-ct2-int8
2023-10-14T15:44:06.000Z
[ "sentence-transformers", "feature-extraction", "sentence-similarity", "transformers", "mteb", "model-index", "endpoints_compatible", "region:us", "has_space" ]
sentence-similarity
jncraton
null
null
jncraton/gte-tiny-ct2-int8
0
1,212
sentence-transformers
2023-10-14T15:32:30
--- model-index: - name: gte_tiny results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 71.76119402985076 - type: ap value: 34.63659287952359 - type: f1 value: 65.88939512571113 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 86.61324999999998 - type: ap value: 81.7476302802319 - type: f1 value: 86.5863470912001 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 42.61000000000001 - type: f1 value: 42.2217180000715 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 28.377999999999997 - type: map_at_10 value: 44.565 - type: map_at_100 value: 45.48 - type: map_at_1000 value: 45.487 - type: map_at_3 value: 39.841 - type: map_at_5 value: 42.284 - type: mrr_at_1 value: 29.445 - type: mrr_at_10 value: 44.956 - type: mrr_at_100 value: 45.877 - type: mrr_at_1000 value: 45.884 - type: mrr_at_3 value: 40.209 - type: mrr_at_5 value: 42.719 - type: ndcg_at_1 value: 28.377999999999997 - type: ndcg_at_10 value: 53.638 - type: ndcg_at_100 value: 57.354000000000006 - type: ndcg_at_1000 value: 57.513000000000005 - type: ndcg_at_3 value: 43.701 - type: ndcg_at_5 value: 48.114000000000004 - type: precision_at_1 value: 28.377999999999997 - type: precision_at_10 value: 8.272 - type: precision_at_100 value: 0.984 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 18.303 - type: precision_at_5 value: 13.129 - type: recall_at_1 value: 28.377999999999997 - type: 
recall_at_10 value: 82.717 - type: recall_at_100 value: 98.43499999999999 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 54.908 - type: recall_at_5 value: 65.647 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 46.637318326729876 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 36.01134479855804 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 59.82917555338909 - type: mrr value: 74.7888361254012 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 87.1657730995964 - type: cos_sim_spearman value: 86.62787748941281 - type: euclidean_pearson value: 85.48127914481798 - type: euclidean_spearman value: 86.48148861167424 - type: manhattan_pearson value: 85.07496934780823 - type: manhattan_spearman value: 86.39473964708843 - task: type: Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 81.73051948051948 - type: f1 value: 81.66368364988331 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 39.18623707448217 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 
258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 32.12697757150375 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 29.160000000000004 - type: map_at_10 value: 40.474 - type: map_at_100 value: 41.905 - type: map_at_1000 value: 42.041000000000004 - type: map_at_3 value: 37.147000000000006 - type: map_at_5 value: 38.873999999999995 - type: mrr_at_1 value: 36.91 - type: mrr_at_10 value: 46.495999999999995 - type: mrr_at_100 value: 47.288000000000004 - type: mrr_at_1000 value: 47.339999999999996 - type: mrr_at_3 value: 43.777 - type: mrr_at_5 value: 45.257999999999996 - type: ndcg_at_1 value: 36.91 - type: ndcg_at_10 value: 46.722 - type: ndcg_at_100 value: 51.969 - type: ndcg_at_1000 value: 54.232 - type: ndcg_at_3 value: 41.783 - type: ndcg_at_5 value: 43.797000000000004 - type: precision_at_1 value: 36.91 - type: precision_at_10 value: 9.013 - type: precision_at_100 value: 1.455 - type: precision_at_1000 value: 0.193 - type: precision_at_3 value: 20.124 - type: precision_at_5 value: 14.363000000000001 - type: recall_at_1 value: 29.160000000000004 - type: recall_at_10 value: 58.521 - type: recall_at_100 value: 80.323 - type: recall_at_1000 value: 95.13000000000001 - type: recall_at_3 value: 44.205 - type: recall_at_5 value: 49.97 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 27.750000000000004 - type: map_at_10 value: 36.39 - type: map_at_100 value: 37.5 - type: map_at_1000 value: 37.625 - type: map_at_3 value: 33.853 - type: map_at_5 value: 35.397 - type: mrr_at_1 value: 34.14 - type: mrr_at_10 value: 41.841 - type: mrr_at_100 value: 42.469 - type: mrr_at_1000 value: 42.521 - type: mrr_at_3 value: 39.724 - type: mrr_at_5 value: 40.955999999999996 - type: ndcg_at_1 value: 34.14 - type: ndcg_at_10 
value: 41.409 - type: ndcg_at_100 value: 45.668 - type: ndcg_at_1000 value: 47.916 - type: ndcg_at_3 value: 37.836 - type: ndcg_at_5 value: 39.650999999999996 - type: precision_at_1 value: 34.14 - type: precision_at_10 value: 7.739 - type: precision_at_100 value: 1.2630000000000001 - type: precision_at_1000 value: 0.173 - type: precision_at_3 value: 18.217 - type: precision_at_5 value: 12.854 - type: recall_at_1 value: 27.750000000000004 - type: recall_at_10 value: 49.882 - type: recall_at_100 value: 68.556 - type: recall_at_1000 value: 83.186 - type: recall_at_3 value: 39.047 - type: recall_at_5 value: 44.458 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 36.879 - type: map_at_10 value: 48.878 - type: map_at_100 value: 49.918 - type: map_at_1000 value: 49.978 - type: map_at_3 value: 45.867999999999995 - type: map_at_5 value: 47.637 - type: mrr_at_1 value: 42.696 - type: mrr_at_10 value: 52.342 - type: mrr_at_100 value: 53.044000000000004 - type: mrr_at_1000 value: 53.077 - type: mrr_at_3 value: 50.01 - type: mrr_at_5 value: 51.437 - type: ndcg_at_1 value: 42.696 - type: ndcg_at_10 value: 54.469 - type: ndcg_at_100 value: 58.664 - type: ndcg_at_1000 value: 59.951 - type: ndcg_at_3 value: 49.419999999999995 - type: ndcg_at_5 value: 52.007000000000005 - type: precision_at_1 value: 42.696 - type: precision_at_10 value: 8.734 - type: precision_at_100 value: 1.1769999999999998 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 22.027 - type: precision_at_5 value: 15.135000000000002 - type: recall_at_1 value: 36.879 - type: recall_at_10 value: 67.669 - type: recall_at_100 value: 85.822 - type: recall_at_1000 value: 95.092 - type: recall_at_3 value: 54.157999999999994 - type: recall_at_5 value: 60.436 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None 
metrics: - type: map_at_1 value: 22.942 - type: map_at_10 value: 31.741999999999997 - type: map_at_100 value: 32.721000000000004 - type: map_at_1000 value: 32.809 - type: map_at_3 value: 29.17 - type: map_at_5 value: 30.714000000000002 - type: mrr_at_1 value: 24.746000000000002 - type: mrr_at_10 value: 33.517 - type: mrr_at_100 value: 34.451 - type: mrr_at_1000 value: 34.522000000000006 - type: mrr_at_3 value: 31.148999999999997 - type: mrr_at_5 value: 32.606 - type: ndcg_at_1 value: 24.746000000000002 - type: ndcg_at_10 value: 36.553000000000004 - type: ndcg_at_100 value: 41.53 - type: ndcg_at_1000 value: 43.811 - type: ndcg_at_3 value: 31.674000000000003 - type: ndcg_at_5 value: 34.241 - type: precision_at_1 value: 24.746000000000002 - type: precision_at_10 value: 5.684 - type: precision_at_100 value: 0.859 - type: precision_at_1000 value: 0.109 - type: precision_at_3 value: 13.597000000000001 - type: precision_at_5 value: 9.672 - type: recall_at_1 value: 22.942 - type: recall_at_10 value: 49.58 - type: recall_at_100 value: 72.614 - type: recall_at_1000 value: 89.89200000000001 - type: recall_at_3 value: 36.552 - type: recall_at_5 value: 42.702 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 15.345 - type: map_at_10 value: 22.428 - type: map_at_100 value: 23.756 - type: map_at_1000 value: 23.872 - type: map_at_3 value: 20.212 - type: map_at_5 value: 21.291 - type: mrr_at_1 value: 19.279 - type: mrr_at_10 value: 27.1 - type: mrr_at_100 value: 28.211000000000002 - type: mrr_at_1000 value: 28.279 - type: mrr_at_3 value: 24.813 - type: mrr_at_5 value: 25.889 - type: ndcg_at_1 value: 19.279 - type: ndcg_at_10 value: 27.36 - type: ndcg_at_100 value: 33.499 - type: ndcg_at_1000 value: 36.452 - type: ndcg_at_3 value: 23.233999999999998 - type: ndcg_at_5 value: 24.806 - type: precision_at_1 value: 19.279 - type: precision_at_10 value: 5.149 - 
type: precision_at_100 value: 0.938 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 11.360000000000001 - type: precision_at_5 value: 8.035 - type: recall_at_1 value: 15.345 - type: recall_at_10 value: 37.974999999999994 - type: recall_at_100 value: 64.472 - type: recall_at_1000 value: 85.97200000000001 - type: recall_at_3 value: 26.203 - type: recall_at_5 value: 30.485 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 26.362000000000002 - type: map_at_10 value: 36.406 - type: map_at_100 value: 37.726 - type: map_at_1000 value: 37.84 - type: map_at_3 value: 33.425 - type: map_at_5 value: 35.043 - type: mrr_at_1 value: 32.146 - type: mrr_at_10 value: 41.674 - type: mrr_at_100 value: 42.478 - type: mrr_at_1000 value: 42.524 - type: mrr_at_3 value: 38.948 - type: mrr_at_5 value: 40.415 - type: ndcg_at_1 value: 32.146 - type: ndcg_at_10 value: 42.374 - type: ndcg_at_100 value: 47.919 - type: ndcg_at_1000 value: 50.013 - type: ndcg_at_3 value: 37.29 - type: ndcg_at_5 value: 39.531 - type: precision_at_1 value: 32.146 - type: precision_at_10 value: 7.767 - type: precision_at_100 value: 1.236 - type: precision_at_1000 value: 0.16 - type: precision_at_3 value: 17.965999999999998 - type: precision_at_5 value: 12.742999999999999 - type: recall_at_1 value: 26.362000000000002 - type: recall_at_10 value: 54.98800000000001 - type: recall_at_100 value: 78.50200000000001 - type: recall_at_1000 value: 92.146 - type: recall_at_3 value: 40.486 - type: recall_at_5 value: 46.236 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.417 - type: map_at_10 value: 33.161 - type: map_at_100 value: 34.357 - type: map_at_1000 value: 34.473 - type: map_at_3 value: 30.245 - type: map_at_5 value: 31.541999999999998 - type: mrr_at_1 
value: 29.909000000000002 - type: mrr_at_10 value: 38.211 - type: mrr_at_100 value: 39.056999999999995 - type: mrr_at_1000 value: 39.114 - type: mrr_at_3 value: 35.769 - type: mrr_at_5 value: 36.922 - type: ndcg_at_1 value: 29.909000000000002 - type: ndcg_at_10 value: 38.694 - type: ndcg_at_100 value: 44.057 - type: ndcg_at_1000 value: 46.6 - type: ndcg_at_3 value: 33.822 - type: ndcg_at_5 value: 35.454 - type: precision_at_1 value: 29.909000000000002 - type: precision_at_10 value: 7.180000000000001 - type: precision_at_100 value: 1.153 - type: precision_at_1000 value: 0.155 - type: precision_at_3 value: 16.134 - type: precision_at_5 value: 11.256 - type: recall_at_1 value: 24.417 - type: recall_at_10 value: 50.260000000000005 - type: recall_at_100 value: 73.55699999999999 - type: recall_at_1000 value: 91.216 - type: recall_at_3 value: 35.971 - type: recall_at_5 value: 40.793 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.266916666666663 - type: map_at_10 value: 32.75025 - type: map_at_100 value: 33.91341666666667 - type: map_at_1000 value: 34.031749999999995 - type: map_at_3 value: 30.166416666666674 - type: map_at_5 value: 31.577000000000005 - type: mrr_at_1 value: 28.828166666666664 - type: mrr_at_10 value: 36.80991666666667 - type: mrr_at_100 value: 37.67075 - type: mrr_at_1000 value: 37.733 - type: mrr_at_3 value: 34.513416666666664 - type: mrr_at_5 value: 35.788 - type: ndcg_at_1 value: 28.828166666666664 - type: ndcg_at_10 value: 37.796 - type: ndcg_at_100 value: 42.94783333333333 - type: ndcg_at_1000 value: 45.38908333333333 - type: ndcg_at_3 value: 33.374750000000006 - type: ndcg_at_5 value: 35.379666666666665 - type: precision_at_1 value: 28.828166666666664 - type: precision_at_10 value: 6.615749999999999 - type: precision_at_100 value: 1.0848333333333333 - type: precision_at_1000 value: 0.1484166666666667 - type: precision_at_3 value: 
15.347833333333332 - type: precision_at_5 value: 10.848916666666666 - type: recall_at_1 value: 24.266916666666663 - type: recall_at_10 value: 48.73458333333333 - type: recall_at_100 value: 71.56341666666667 - type: recall_at_1000 value: 88.63091666666668 - type: recall_at_3 value: 36.31208333333333 - type: recall_at_5 value: 41.55633333333333 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.497 - type: map_at_10 value: 30.249 - type: map_at_100 value: 30.947000000000003 - type: map_at_1000 value: 31.049 - type: map_at_3 value: 28.188000000000002 - type: map_at_5 value: 29.332 - type: mrr_at_1 value: 26.687 - type: mrr_at_10 value: 33.182 - type: mrr_at_100 value: 33.794999999999995 - type: mrr_at_1000 value: 33.873 - type: mrr_at_3 value: 31.263 - type: mrr_at_5 value: 32.428000000000004 - type: ndcg_at_1 value: 26.687 - type: ndcg_at_10 value: 34.252 - type: ndcg_at_100 value: 38.083 - type: ndcg_at_1000 value: 40.682 - type: ndcg_at_3 value: 30.464999999999996 - type: ndcg_at_5 value: 32.282 - type: precision_at_1 value: 26.687 - type: precision_at_10 value: 5.2909999999999995 - type: precision_at_100 value: 0.788 - type: precision_at_1000 value: 0.109 - type: precision_at_3 value: 13.037 - type: precision_at_5 value: 9.049 - type: recall_at_1 value: 23.497 - type: recall_at_10 value: 43.813 - type: recall_at_100 value: 61.88399999999999 - type: recall_at_1000 value: 80.926 - type: recall_at_3 value: 33.332 - type: recall_at_5 value: 37.862 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 16.073 - type: map_at_10 value: 22.705000000000002 - type: map_at_100 value: 23.703 - type: map_at_1000 value: 23.833 - type: map_at_3 value: 20.593 - type: map_at_5 value: 21.7 - type: mrr_at_1 value: 19.683 - type: mrr_at_10 value: 26.39 - 
type: mrr_at_100 value: 27.264 - type: mrr_at_1000 value: 27.349 - type: mrr_at_3 value: 24.409 - type: mrr_at_5 value: 25.474000000000004 - type: ndcg_at_1 value: 19.683 - type: ndcg_at_10 value: 27.014 - type: ndcg_at_100 value: 31.948 - type: ndcg_at_1000 value: 35.125 - type: ndcg_at_3 value: 23.225 - type: ndcg_at_5 value: 24.866 - type: precision_at_1 value: 19.683 - type: precision_at_10 value: 4.948 - type: precision_at_100 value: 0.876 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 10.943 - type: precision_at_5 value: 7.86 - type: recall_at_1 value: 16.073 - type: recall_at_10 value: 36.283 - type: recall_at_100 value: 58.745999999999995 - type: recall_at_1000 value: 81.711 - type: recall_at_3 value: 25.637 - type: recall_at_5 value: 29.919 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 25.776 - type: map_at_10 value: 33.317 - type: map_at_100 value: 34.437 - type: map_at_1000 value: 34.54 - type: map_at_3 value: 30.706 - type: map_at_5 value: 32.202999999999996 - type: mrr_at_1 value: 30.224 - type: mrr_at_10 value: 37.34 - type: mrr_at_100 value: 38.268 - type: mrr_at_1000 value: 38.335 - type: mrr_at_3 value: 35.075 - type: mrr_at_5 value: 36.348 - type: ndcg_at_1 value: 30.224 - type: ndcg_at_10 value: 38.083 - type: ndcg_at_100 value: 43.413000000000004 - type: ndcg_at_1000 value: 45.856 - type: ndcg_at_3 value: 33.437 - type: ndcg_at_5 value: 35.661 - type: precision_at_1 value: 30.224 - type: precision_at_10 value: 6.1850000000000005 - type: precision_at_100 value: 1.0030000000000001 - type: precision_at_1000 value: 0.132 - type: precision_at_3 value: 14.646 - type: precision_at_5 value: 10.428999999999998 - type: recall_at_1 value: 25.776 - type: recall_at_10 value: 48.787000000000006 - type: recall_at_100 value: 72.04899999999999 - type: recall_at_1000 value: 89.339 - type: recall_at_3 value: 36.192 - type: 
recall_at_5 value: 41.665 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.156 - type: map_at_10 value: 30.886000000000003 - type: map_at_100 value: 32.551 - type: map_at_1000 value: 32.769 - type: map_at_3 value: 28.584 - type: map_at_5 value: 29.959999999999997 - type: mrr_at_1 value: 28.260999999999996 - type: mrr_at_10 value: 35.555 - type: mrr_at_100 value: 36.687 - type: mrr_at_1000 value: 36.742999999999995 - type: mrr_at_3 value: 33.531 - type: mrr_at_5 value: 34.717 - type: ndcg_at_1 value: 28.260999999999996 - type: ndcg_at_10 value: 36.036 - type: ndcg_at_100 value: 42.675000000000004 - type: ndcg_at_1000 value: 45.303 - type: ndcg_at_3 value: 32.449 - type: ndcg_at_5 value: 34.293 - type: precision_at_1 value: 28.260999999999996 - type: precision_at_10 value: 6.837999999999999 - type: precision_at_100 value: 1.4569999999999999 - type: precision_at_1000 value: 0.23500000000000001 - type: precision_at_3 value: 15.217 - type: precision_at_5 value: 11.028 - type: recall_at_1 value: 23.156 - type: recall_at_10 value: 45.251999999999995 - type: recall_at_100 value: 75.339 - type: recall_at_1000 value: 91.56 - type: recall_at_3 value: 34.701 - type: recall_at_5 value: 39.922999999999995 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 19.846 - type: map_at_10 value: 26.367 - type: map_at_100 value: 27.439999999999998 - type: map_at_1000 value: 27.552 - type: map_at_3 value: 24.006 - type: map_at_5 value: 25.230999999999998 - type: mrr_at_1 value: 21.257 - type: mrr_at_10 value: 28.071 - type: mrr_at_100 value: 29.037000000000003 - type: mrr_at_1000 value: 29.119 - type: mrr_at_3 value: 25.692999999999998 - type: mrr_at_5 value: 27.006000000000004 - type: ndcg_at_1 value: 21.257 - type: ndcg_at_10 value: 
30.586000000000002 - type: ndcg_at_100 value: 35.949 - type: ndcg_at_1000 value: 38.728 - type: ndcg_at_3 value: 25.862000000000002 - type: ndcg_at_5 value: 27.967 - type: precision_at_1 value: 21.257 - type: precision_at_10 value: 4.861 - type: precision_at_100 value: 0.8130000000000001 - type: precision_at_1000 value: 0.116 - type: precision_at_3 value: 10.906 - type: precision_at_5 value: 7.763000000000001 - type: recall_at_1 value: 19.846 - type: recall_at_10 value: 41.805 - type: recall_at_100 value: 66.89699999999999 - type: recall_at_1000 value: 87.401 - type: recall_at_3 value: 29.261 - type: recall_at_5 value: 34.227000000000004 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 10.333 - type: map_at_10 value: 17.14 - type: map_at_100 value: 18.878 - type: map_at_1000 value: 19.067 - type: map_at_3 value: 14.123 - type: map_at_5 value: 15.699 - type: mrr_at_1 value: 23.192 - type: mrr_at_10 value: 33.553 - type: mrr_at_100 value: 34.553 - type: mrr_at_1000 value: 34.603 - type: mrr_at_3 value: 29.848000000000003 - type: mrr_at_5 value: 32.18 - type: ndcg_at_1 value: 23.192 - type: ndcg_at_10 value: 24.707 - type: ndcg_at_100 value: 31.701 - type: ndcg_at_1000 value: 35.260999999999996 - type: ndcg_at_3 value: 19.492 - type: ndcg_at_5 value: 21.543 - type: precision_at_1 value: 23.192 - type: precision_at_10 value: 7.824000000000001 - type: precision_at_100 value: 1.52 - type: precision_at_1000 value: 0.218 - type: precision_at_3 value: 14.180000000000001 - type: precision_at_5 value: 11.530999999999999 - type: recall_at_1 value: 10.333 - type: recall_at_10 value: 30.142999999999997 - type: recall_at_100 value: 54.298 - type: recall_at_1000 value: 74.337 - type: recall_at_3 value: 17.602999999999998 - type: recall_at_5 value: 22.938 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - 
type: map_at_1 value: 8.03 - type: map_at_10 value: 17.345 - type: map_at_100 value: 23.462 - type: map_at_1000 value: 24.77 - type: map_at_3 value: 12.714 - type: map_at_5 value: 14.722 - type: mrr_at_1 value: 61.0 - type: mrr_at_10 value: 69.245 - type: mrr_at_100 value: 69.715 - type: mrr_at_1000 value: 69.719 - type: mrr_at_3 value: 67.583 - type: mrr_at_5 value: 68.521 - type: ndcg_at_1 value: 47.625 - type: ndcg_at_10 value: 35.973 - type: ndcg_at_100 value: 39.875 - type: ndcg_at_1000 value: 46.922000000000004 - type: ndcg_at_3 value: 40.574 - type: ndcg_at_5 value: 38.18 - type: precision_at_1 value: 61.0 - type: precision_at_10 value: 29.049999999999997 - type: precision_at_100 value: 8.828 - type: precision_at_1000 value: 1.8290000000000002 - type: precision_at_3 value: 45.333 - type: precision_at_5 value: 37.9 - type: recall_at_1 value: 8.03 - type: recall_at_10 value: 22.334 - type: recall_at_100 value: 45.919 - type: recall_at_1000 value: 68.822 - type: recall_at_3 value: 14.038999999999998 - type: recall_at_5 value: 17.118 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 44.714999999999996 - type: f1 value: 39.83929362259356 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 52.242999999999995 - type: map_at_10 value: 64.087 - type: map_at_100 value: 64.549 - type: map_at_1000 value: 64.567 - type: map_at_3 value: 61.667 - type: map_at_5 value: 63.266 - type: mrr_at_1 value: 56.271 - type: mrr_at_10 value: 68.146 - type: mrr_at_100 value: 68.524 - type: mrr_at_1000 value: 68.53200000000001 - type: mrr_at_3 value: 65.869 - type: mrr_at_5 value: 67.37100000000001 - type: ndcg_at_1 value: 56.271 - type: ndcg_at_10 value: 70.109 - type: ndcg_at_100 value: 72.09 - type: ndcg_at_1000 value: 72.479 - type: ndcg_at_3 value: 
65.559 - type: ndcg_at_5 value: 68.242 - type: precision_at_1 value: 56.271 - type: precision_at_10 value: 9.286999999999999 - type: precision_at_100 value: 1.039 - type: precision_at_1000 value: 0.109 - type: precision_at_3 value: 26.308 - type: precision_at_5 value: 17.291 - type: recall_at_1 value: 52.242999999999995 - type: recall_at_10 value: 84.71 - type: recall_at_100 value: 93.309 - type: recall_at_1000 value: 96.013 - type: recall_at_3 value: 72.554 - type: recall_at_5 value: 79.069 - task: type: Retrieval dataset: type: fiqa name: MTEB FiQA2018 config: default split: test revision: None metrics: - type: map_at_1 value: 14.346 - type: map_at_10 value: 24.552 - type: map_at_100 value: 26.161 - type: map_at_1000 value: 26.345000000000002 - type: map_at_3 value: 21.208 - type: map_at_5 value: 22.959 - type: mrr_at_1 value: 29.166999999999998 - type: mrr_at_10 value: 38.182 - type: mrr_at_100 value: 39.22 - type: mrr_at_1000 value: 39.263 - type: mrr_at_3 value: 35.983 - type: mrr_at_5 value: 37.14 - type: ndcg_at_1 value: 29.166999999999998 - type: ndcg_at_10 value: 31.421 - type: ndcg_at_100 value: 38.129999999999995 - type: ndcg_at_1000 value: 41.569 - type: ndcg_at_3 value: 28.172000000000004 - type: ndcg_at_5 value: 29.029 - type: precision_at_1 value: 29.166999999999998 - type: precision_at_10 value: 8.997 - type: precision_at_100 value: 1.5709999999999997 - type: precision_at_1000 value: 0.22 - type: precision_at_3 value: 19.187 - type: precision_at_5 value: 13.980999999999998 - type: recall_at_1 value: 14.346 - type: recall_at_10 value: 37.963 - type: recall_at_100 value: 63.43299999999999 - type: recall_at_1000 value: 84.057 - type: recall_at_3 value: 26.119999999999997 - type: recall_at_5 value: 30.988 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 33.059 - type: map_at_10 value: 46.421 - type: map_at_100 value: 47.323 - type: map_at_1000 value: 47.403 - 
type: map_at_3 value: 43.553999999999995 - type: map_at_5 value: 45.283 - type: mrr_at_1 value: 66.117 - type: mrr_at_10 value: 73.10900000000001 - type: mrr_at_100 value: 73.444 - type: mrr_at_1000 value: 73.46000000000001 - type: mrr_at_3 value: 71.70400000000001 - type: mrr_at_5 value: 72.58099999999999 - type: ndcg_at_1 value: 66.117 - type: ndcg_at_10 value: 55.696999999999996 - type: ndcg_at_100 value: 59.167 - type: ndcg_at_1000 value: 60.809000000000005 - type: ndcg_at_3 value: 51.243 - type: ndcg_at_5 value: 53.627 - type: precision_at_1 value: 66.117 - type: precision_at_10 value: 11.538 - type: precision_at_100 value: 1.429 - type: precision_at_1000 value: 0.165 - type: precision_at_3 value: 31.861 - type: precision_at_5 value: 20.997 - type: recall_at_1 value: 33.059 - type: recall_at_10 value: 57.691 - type: recall_at_100 value: 71.458 - type: recall_at_1000 value: 82.35 - type: recall_at_3 value: 47.792 - type: recall_at_5 value: 52.492000000000004 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 80.544 - type: ap value: 74.69592367984956 - type: f1 value: 80.51138138449883 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 17.095 - type: map_at_10 value: 28.038999999999998 - type: map_at_100 value: 29.246 - type: map_at_1000 value: 29.311 - type: map_at_3 value: 24.253 - type: map_at_5 value: 26.442 - type: mrr_at_1 value: 17.535999999999998 - type: mrr_at_10 value: 28.53 - type: mrr_at_100 value: 29.697000000000003 - type: mrr_at_1000 value: 29.755 - type: mrr_at_3 value: 24.779999999999998 - type: mrr_at_5 value: 26.942 - type: ndcg_at_1 value: 17.549999999999997 - type: ndcg_at_10 value: 34.514 - type: ndcg_at_100 value: 40.497 - type: ndcg_at_1000 value: 42.17 - type: ndcg_at_3 value: 26.764 - type: ndcg_at_5 
value: 30.678 - type: precision_at_1 value: 17.549999999999997 - type: precision_at_10 value: 5.692 - type: precision_at_100 value: 0.8699999999999999 - type: precision_at_1000 value: 0.101 - type: precision_at_3 value: 11.562 - type: precision_at_5 value: 8.917 - type: recall_at_1 value: 17.095 - type: recall_at_10 value: 54.642 - type: recall_at_100 value: 82.652 - type: recall_at_1000 value: 95.555 - type: recall_at_3 value: 33.504 - type: recall_at_5 value: 42.925000000000004 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 91.75558595531236 - type: f1 value: 91.25979279648296 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 69.90424076607387 - type: f1 value: 52.067408707562244 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.13449899125757 - type: f1 value: 67.62456762910598 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.862138533961 - type: f1 value: 74.66457222091381 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 34.10761942610792 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: 
v_measure value: 31.673172170578408 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 32.058704977250315 - type: mrr value: 33.24327760839221 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.163 - type: map_at_10 value: 11.652999999999999 - type: map_at_100 value: 14.849 - type: map_at_1000 value: 16.253999999999998 - type: map_at_3 value: 8.616999999999999 - type: map_at_5 value: 10.100000000000001 - type: mrr_at_1 value: 44.272 - type: mrr_at_10 value: 52.25 - type: mrr_at_100 value: 52.761 - type: mrr_at_1000 value: 52.811 - type: mrr_at_3 value: 50.31 - type: mrr_at_5 value: 51.347 - type: ndcg_at_1 value: 42.105 - type: ndcg_at_10 value: 32.044 - type: ndcg_at_100 value: 29.763 - type: ndcg_at_1000 value: 38.585 - type: ndcg_at_3 value: 36.868 - type: ndcg_at_5 value: 35.154999999999994 - type: precision_at_1 value: 43.653 - type: precision_at_10 value: 23.622 - type: precision_at_100 value: 7.7490000000000006 - type: precision_at_1000 value: 2.054 - type: precision_at_3 value: 34.262 - type: precision_at_5 value: 30.154999999999998 - type: recall_at_1 value: 5.163 - type: recall_at_10 value: 15.478 - type: recall_at_100 value: 30.424 - type: recall_at_1000 value: 62.67 - type: recall_at_3 value: 9.615 - type: recall_at_5 value: 12.369 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 21.618000000000002 - type: map_at_10 value: 35.465 - type: map_at_100 value: 36.712 - type: map_at_1000 value: 36.757 - type: map_at_3 value: 31.189 - type: map_at_5 value: 33.537 - type: mrr_at_1 value: 24.305 - type: mrr_at_10 value: 37.653 - type: mrr_at_100 value: 38.662 - type: mrr_at_1000 value: 38.694 - type: mrr_at_3 value: 33.889 - type: mrr_at_5 value: 
35.979 - type: ndcg_at_1 value: 24.305 - type: ndcg_at_10 value: 43.028 - type: ndcg_at_100 value: 48.653999999999996 - type: ndcg_at_1000 value: 49.733 - type: ndcg_at_3 value: 34.768 - type: ndcg_at_5 value: 38.753 - type: precision_at_1 value: 24.305 - type: precision_at_10 value: 7.59 - type: precision_at_100 value: 1.076 - type: precision_at_1000 value: 0.11800000000000001 - type: precision_at_3 value: 16.271 - type: precision_at_5 value: 12.068 - type: recall_at_1 value: 21.618000000000002 - type: recall_at_10 value: 63.977 - type: recall_at_100 value: 89.03999999999999 - type: recall_at_1000 value: 97.10600000000001 - type: recall_at_3 value: 42.422 - type: recall_at_5 value: 51.629000000000005 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 69.405 - type: map_at_10 value: 83.05 - type: map_at_100 value: 83.684 - type: map_at_1000 value: 83.70400000000001 - type: map_at_3 value: 80.08800000000001 - type: map_at_5 value: 81.937 - type: mrr_at_1 value: 79.85 - type: mrr_at_10 value: 86.369 - type: mrr_at_100 value: 86.48599999999999 - type: mrr_at_1000 value: 86.48700000000001 - type: mrr_at_3 value: 85.315 - type: mrr_at_5 value: 86.044 - type: ndcg_at_1 value: 79.86999999999999 - type: ndcg_at_10 value: 87.04499999999999 - type: ndcg_at_100 value: 88.373 - type: ndcg_at_1000 value: 88.531 - type: ndcg_at_3 value: 84.04 - type: ndcg_at_5 value: 85.684 - type: precision_at_1 value: 79.86999999999999 - type: precision_at_10 value: 13.183 - type: precision_at_100 value: 1.51 - type: precision_at_1000 value: 0.156 - type: precision_at_3 value: 36.67 - type: precision_at_5 value: 24.12 - type: recall_at_1 value: 69.405 - type: recall_at_10 value: 94.634 - type: recall_at_100 value: 99.214 - type: recall_at_1000 value: 99.958 - type: recall_at_3 value: 85.992 - type: recall_at_5 value: 90.656 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB 
RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 50.191676323145465 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 56.4874020363744 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 4.228 - type: map_at_10 value: 11.245 - type: map_at_100 value: 13.353000000000002 - type: map_at_1000 value: 13.665 - type: map_at_3 value: 7.779999999999999 - type: map_at_5 value: 9.405 - type: mrr_at_1 value: 20.9 - type: mrr_at_10 value: 31.657999999999998 - type: mrr_at_100 value: 32.769999999999996 - type: mrr_at_1000 value: 32.833 - type: mrr_at_3 value: 28.333000000000002 - type: mrr_at_5 value: 30.043 - type: ndcg_at_1 value: 20.9 - type: ndcg_at_10 value: 19.073 - type: ndcg_at_100 value: 27.055 - type: ndcg_at_1000 value: 32.641 - type: ndcg_at_3 value: 17.483999999999998 - type: ndcg_at_5 value: 15.42 - type: precision_at_1 value: 20.9 - type: precision_at_10 value: 10.17 - type: precision_at_100 value: 2.162 - type: precision_at_1000 value: 0.35100000000000003 - type: precision_at_3 value: 16.467000000000002 - type: precision_at_5 value: 13.68 - type: recall_at_1 value: 4.228 - type: recall_at_10 value: 20.573 - type: recall_at_100 value: 43.887 - type: recall_at_1000 value: 71.22 - type: recall_at_3 value: 10.023 - type: recall_at_5 value: 13.873 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 82.77965135067481 - type: cos_sim_spearman value: 75.85121335808076 - type: euclidean_pearson value: 80.09115175262697 - type: euclidean_spearman value: 75.72249155647123 - type: manhattan_pearson value: 
79.89723577351782 - type: manhattan_spearman value: 75.49855259442387 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 80.46084116030949 - type: cos_sim_spearman value: 72.57579204392951 - type: euclidean_pearson value: 76.39020830763684 - type: euclidean_spearman value: 72.3718627025895 - type: manhattan_pearson value: 76.6148833027359 - type: manhattan_spearman value: 72.57570008442319 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 80.43678068337017 - type: cos_sim_spearman value: 82.38941154076062 - type: euclidean_pearson value: 81.59260573633661 - type: euclidean_spearman value: 82.31144262574114 - type: manhattan_pearson value: 81.43266909137056 - type: manhattan_spearman value: 82.14704293004861 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 80.73713431763163 - type: cos_sim_spearman value: 77.97860512809388 - type: euclidean_pearson value: 80.35755041527027 - type: euclidean_spearman value: 78.021703511412 - type: manhattan_pearson value: 80.24440317109162 - type: manhattan_spearman value: 77.93165415697575 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 85.15111852351204 - type: cos_sim_spearman value: 86.54032447238258 - type: euclidean_pearson value: 86.14157021537433 - type: euclidean_spearman value: 86.67537291929713 - type: manhattan_pearson value: 86.081041854808 - type: manhattan_spearman value: 86.61561701560558 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 
4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 81.34532445104026 - type: cos_sim_spearman value: 83.31325001474116 - type: euclidean_pearson value: 82.81892375201032 - type: euclidean_spearman value: 83.4521695148055 - type: manhattan_pearson value: 82.72503790526163 - type: manhattan_spearman value: 83.37833652941349 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.25463453839801 - type: cos_sim_spearman value: 88.27655263515948 - type: euclidean_pearson value: 88.0248334411439 - type: euclidean_spearman value: 88.18141448876868 - type: manhattan_pearson value: 87.8080451127279 - type: manhattan_spearman value: 88.01028114423058 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 63.57551045355218 - type: cos_sim_spearman value: 66.67614095126629 - type: euclidean_pearson value: 66.0787243112528 - type: euclidean_spearman value: 66.83660560636939 - type: manhattan_pearson value: 66.74684019662031 - type: manhattan_spearman value: 67.11761598074368 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 83.70881496766829 - type: cos_sim_spearman value: 84.37803542941634 - type: euclidean_pearson value: 84.84501245857096 - type: euclidean_spearman value: 84.47088079741476 - type: manhattan_pearson value: 84.77244090794765 - type: manhattan_spearman value: 84.43307343706205 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 81.53946254759089 - type: mrr value: 
94.68259953554072 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 51.817 - type: map_at_10 value: 62.339999999999996 - type: map_at_100 value: 62.88 - type: map_at_1000 value: 62.909000000000006 - type: map_at_3 value: 59.004 - type: map_at_5 value: 60.906000000000006 - type: mrr_at_1 value: 54.333 - type: mrr_at_10 value: 63.649 - type: mrr_at_100 value: 64.01 - type: mrr_at_1000 value: 64.039 - type: mrr_at_3 value: 61.056 - type: mrr_at_5 value: 62.639 - type: ndcg_at_1 value: 54.333 - type: ndcg_at_10 value: 67.509 - type: ndcg_at_100 value: 69.69999999999999 - type: ndcg_at_1000 value: 70.613 - type: ndcg_at_3 value: 61.729 - type: ndcg_at_5 value: 64.696 - type: precision_at_1 value: 54.333 - type: precision_at_10 value: 9.2 - type: precision_at_100 value: 1.043 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 24.0 - type: precision_at_5 value: 16.2 - type: recall_at_1 value: 51.817 - type: recall_at_10 value: 82.056 - type: recall_at_100 value: 91.667 - type: recall_at_1000 value: 99.0 - type: recall_at_3 value: 66.717 - type: recall_at_5 value: 74.17200000000001 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.82475247524752 - type: cos_sim_ap value: 95.4781199603258 - type: cos_sim_f1 value: 91.16186693147964 - type: cos_sim_precision value: 90.53254437869822 - type: cos_sim_recall value: 91.8 - type: dot_accuracy value: 99.75049504950495 - type: dot_ap value: 93.05183539809457 - type: dot_f1 value: 87.31117824773412 - type: dot_precision value: 87.93103448275862 - type: dot_recall value: 86.7 - type: euclidean_accuracy value: 99.82475247524752 - type: euclidean_ap value: 95.38547978154382 - type: euclidean_f1 value: 
91.16325511732403 - type: euclidean_precision value: 91.02691924227318 - type: euclidean_recall value: 91.3 - type: manhattan_accuracy value: 99.82574257425742 - type: manhattan_ap value: 95.47237521890308 - type: manhattan_f1 value: 91.27849355797821 - type: manhattan_precision value: 90.47151277013754 - type: manhattan_recall value: 92.10000000000001 - type: max_accuracy value: 99.82574257425742 - type: max_ap value: 95.4781199603258 - type: max_f1 value: 91.27849355797821 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 57.542169376331245 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 35.74399302634387 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 49.65076347632749 - type: mrr value: 50.418099057804945 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 29.73997756592847 - type: cos_sim_spearman value: 29.465208011593308 - type: dot_pearson value: 24.83735342474541 - type: dot_spearman value: 26.005180528584855 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.208 - type: map_at_10 value: 1.434 - type: map_at_100 value: 7.829 - type: map_at_1000 value: 19.807 - type: map_at_3 value: 0.549 - type: map_at_5 value: 0.8330000000000001 - type: mrr_at_1 value: 78.0 - type: mrr_at_10 value: 85.35199999999999 - type: 
mrr_at_100 value: 85.673 - type: mrr_at_1000 value: 85.673 - type: mrr_at_3 value: 84.667 - type: mrr_at_5 value: 85.06700000000001 - type: ndcg_at_1 value: 72.0 - type: ndcg_at_10 value: 59.214999999999996 - type: ndcg_at_100 value: 44.681 - type: ndcg_at_1000 value: 43.035000000000004 - type: ndcg_at_3 value: 66.53099999999999 - type: ndcg_at_5 value: 63.23 - type: precision_at_1 value: 78.0 - type: precision_at_10 value: 62.4 - type: precision_at_100 value: 45.76 - type: precision_at_1000 value: 19.05 - type: precision_at_3 value: 71.333 - type: precision_at_5 value: 67.2 - type: recall_at_1 value: 0.208 - type: recall_at_10 value: 1.6580000000000001 - type: recall_at_100 value: 11.324 - type: recall_at_1000 value: 41.537 - type: recall_at_3 value: 0.579 - type: recall_at_5 value: 0.8959999999999999 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.442 - type: map_at_10 value: 8.863 - type: map_at_100 value: 14.606 - type: map_at_1000 value: 16.258 - type: map_at_3 value: 4.396 - type: map_at_5 value: 6.199000000000001 - type: mrr_at_1 value: 30.612000000000002 - type: mrr_at_10 value: 43.492 - type: mrr_at_100 value: 44.557 - type: mrr_at_1000 value: 44.557 - type: mrr_at_3 value: 40.816 - type: mrr_at_5 value: 42.143 - type: ndcg_at_1 value: 25.509999999999998 - type: ndcg_at_10 value: 22.076 - type: ndcg_at_100 value: 34.098 - type: ndcg_at_1000 value: 46.265 - type: ndcg_at_3 value: 24.19 - type: ndcg_at_5 value: 23.474 - type: precision_at_1 value: 30.612000000000002 - type: precision_at_10 value: 19.796 - type: precision_at_100 value: 7.286 - type: precision_at_1000 value: 1.5310000000000001 - type: precision_at_3 value: 25.85 - type: precision_at_5 value: 24.490000000000002 - type: recall_at_1 value: 2.442 - type: recall_at_10 value: 15.012 - type: recall_at_100 value: 45.865 - type: recall_at_1000 value: 82.958 - type: recall_at_3 value: 5.731 - 
type: recall_at_5 value: 9.301 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 70.974 - type: ap value: 14.534996211286682 - type: f1 value: 54.785946183399005 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 58.56819468024901 - type: f1 value: 58.92391487111204 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 43.273202335218194 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 84.37742146986946 - type: cos_sim_ap value: 68.1684129575579 - type: cos_sim_f1 value: 64.93475108748189 - type: cos_sim_precision value: 59.89745876058849 - type: cos_sim_recall value: 70.89709762532982 - type: dot_accuracy value: 80.49710913750968 - type: dot_ap value: 54.699790073944186 - type: dot_f1 value: 54.45130013221684 - type: dot_precision value: 46.74612183125236 - type: dot_recall value: 65.19788918205805 - type: euclidean_accuracy value: 84.5085533766466 - type: euclidean_ap value: 68.38835695236224 - type: euclidean_f1 value: 65.3391121002694 - type: euclidean_precision value: 58.75289656625237 - type: euclidean_recall value: 73.58839050131925 - type: manhattan_accuracy value: 84.40126363473803 - type: manhattan_ap value: 68.09539181555348 - type: manhattan_f1 value: 64.99028182701653 - type: manhattan_precision value: 60.22062134173795 - type: 
manhattan_recall value: 70.58047493403694 - type: max_accuracy value: 84.5085533766466 - type: max_ap value: 68.38835695236224 - type: max_f1 value: 65.3391121002694 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.34167733923235 - type: cos_sim_ap value: 84.84136381147736 - type: cos_sim_f1 value: 77.01434980904001 - type: cos_sim_precision value: 74.27937915742794 - type: cos_sim_recall value: 79.95842315983985 - type: dot_accuracy value: 85.06422944075756 - type: dot_ap value: 76.49446747522325 - type: dot_f1 value: 71.11606520830432 - type: dot_precision value: 64.93638676844785 - type: dot_recall value: 78.59562673236834 - type: euclidean_accuracy value: 88.45810532852097 - type: euclidean_ap value: 84.91526721863501 - type: euclidean_f1 value: 77.04399001750662 - type: euclidean_precision value: 74.62298867162133 - type: euclidean_recall value: 79.62734832152756 - type: manhattan_accuracy value: 88.46004579500912 - type: manhattan_ap value: 84.81590026238194 - type: manhattan_f1 value: 76.97804626491822 - type: manhattan_precision value: 73.79237288135593 - type: manhattan_recall value: 80.45118570988605 - type: max_accuracy value: 88.46004579500912 - type: max_ap value: 84.91526721863501 - type: max_f1 value: 77.04399001750662 pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers - mteb ---

# gte-tiny

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search. It is distilled from `thenlper/gte-small`, with comparable (slightly worse) performance at around half the size.
## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors
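The sentence embeddings this model produces are meant to be compared with cosine similarity for semantic search or clustering. As a dependency-light illustration, here is a sketch with numpy on hypothetical 4-dimensional vectors standing in for the model's 384-dimensional output (real embeddings would come from `model.encode(...)`):

```python
import numpy as np

# Hypothetical toy embeddings; the real model outputs 384-dim vectors.
query = np.array([0.1, 0.9, 0.2, 0.0])
corpus = np.array([
    [0.1, 0.8, 0.3, 0.1],   # semantically close to the query
    [0.9, 0.0, 0.1, 0.7],   # unrelated
])

def cos_sim(query_vec, corpus_mat):
    # Cosine similarity = dot product of L2-normalized vectors.
    q = query_vec / np.linalg.norm(query_vec)
    c = corpus_mat / np.linalg.norm(corpus_mat, axis=-1, keepdims=True)
    return c @ q

scores = cos_sim(query, corpus)
best = int(np.argmax(scores))
print(scores, best)  # the first corpus entry ranks highest
```

The same ranking logic is what libraries like sentence-transformers wrap for you; normalizing first means the dot product and cosine similarity coincide.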
65,446
[ [ -0.0233001708984375, -0.055419921875, 0.02374267578125, 0.023590087890625, -0.0230255126953125, -0.031280517578125, -0.021820068359375, -0.0052337646484375, 0.01444244384765625, 0.0217132568359375, -0.040374755859375, -0.03863525390625, -0.05291748046875, -0...
facebook/wav2vec2-large-robust-ft-libri-960h
2023-06-23T16:47:23.000Z
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "speech", "audio", "en", "dataset:libri_light", "dataset:common_voice", "dataset:switchboard", "dataset:fisher", "dataset:librispeech_asr", "arxiv:2104.01027", "license:apache-2.0", "endpoints_compati...
automatic-speech-recognition
facebook
null
null
facebook/wav2vec2-large-robust-ft-libri-960h
8
1,211
transformers
2022-03-02T23:29:05
---
language: en
datasets:
- libri_light
- common_voice
- switchboard
- fisher
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
widget:
- example_title: Librispeech sample 1
  src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
  src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
license: apache-2.0
---

# Wav2Vec2-Large-Robust finetuned on Librispeech

[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/).

This model is a fine-tuned version of the [wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) model. It has been pretrained on:

- [Libri-Light](https://github.com/facebookresearch/libri-light): open-source audio books from the LibriVox project; clean, read-out audio data
- [CommonVoice](https://huggingface.co/datasets/common_voice): crowd-sourced audio data; read-out text snippets
- [Switchboard](https://catalog.ldc.upenn.edu/LDC97S62): telephone speech corpus; noisy telephone data
- [Fisher](https://catalog.ldc.upenn.edu/LDC2004T19): conversational telephone speech; noisy telephone data

and subsequently fine-tuned on 960 hours of

- [Librispeech](https://huggingface.co/datasets/librispeech_asr): open-source read-out audio data.

When using the model, make sure that your speech input is also sampled at 16 kHz.

[Paper: Robust Wav2Vec2](https://arxiv.org/abs/2104.01027)

Authors: Wei-Ning Hsu, Anuroop Sriram, Alexei Baevski, Tatiana Likhomanenko, Qiantong Xu, Vineel Pratap, Jacob Kahn, Ann Lee, Ronan Collobert, Gabriel Synnaeve, Michael Auli

**Abstract**

Self-supervised learning of speech representations has been a very active research area but most work is focused on a single domain such as read audio books for which there exist large quantities of labeled and unlabeled data.
In this paper, we explore more general setups where the domain of the unlabeled data used for pre-training differs from the domain of the labeled data used for fine-tuning, which in turn may differ from the test data domain. Our experiments show that using target domain data during pre-training leads to large performance improvements across a variety of setups. On a large-scale competitive setup, we show that pre-training on unlabeled in-domain data reduces the gap between models trained on in-domain and out-of-domain labeled data by 66%-73%. This has obvious practical implications since it is much easier to obtain unlabeled target domain data than labeled data. Moreover, we find that pre-training on multiple domains improves generalization performance on domains not seen during training. Code and models will be made available at this https URL. The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20. # Usage To transcribe audio files, the model can be used as a standalone acoustic model as follows: ```python from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC from datasets import load_dataset import soundfile as sf import torch # load model and processor processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-robust-ft-libri-960h") model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-robust-ft-libri-960h") # define function to read in sound file def map_to_array(batch): speech, _ = sf.read(batch["file"]) batch["speech"] = speech return batch # load dummy dataset and read soundfiles ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") ds = ds.map(map_to_array) # tokenize input_values = processor(ds["speech"][:2], return_tensors="pt", padding="longest").input_values # Batch size 2 # retrieve logits logits = model(input_values).logits # take argmax and decode predicted_ids = torch.argmax(logits, dim=-1) transcription = 
processor.batch_decode(predicted_ids) ```
3,964
[ [ -0.02276611328125, -0.052581787109375, 0.0041961669921875, 0.006710052490234375, -0.01555633544921875, -0.012664794921875, -0.03826904296875, -0.045867919921875, -0.01099395751953125, 0.0254669189453125, -0.044769287109375, -0.04425048828125, -0.046356201171875,...
tugstugi/bert-large-mongolian-uncased
2021-05-20T08:19:28.000Z
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "mongolian", "uncased", "mn", "arxiv:1810.04805", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
fill-mask
tugstugi
null
null
tugstugi/bert-large-mongolian-uncased
5
1,211
transformers
2022-03-02T23:29:05
--- language: "mn" tags: - bert - mongolian - uncased --- # BERT-LARGE-MONGOLIAN-UNCASED [Link to Official Mongolian-BERT repo](https://github.com/tugstugi/mongolian-bert) ## Model description This repository contains pre-trained Mongolian [BERT](https://arxiv.org/abs/1810.04805) models trained by [tugstugi](https://github.com/tugstugi), [enod](https://github.com/enod) and [sharavsambuu](https://github.com/sharavsambuu). Special thanks to [nabar](https://github.com/nabar) who provided 5x TPUs. This repository is based on the following open source projects: [google-research/bert](https://github.com/google-research/bert/), [huggingface/pytorch-pretrained-BERT](https://github.com/huggingface/pytorch-pretrained-BERT) and [yoheikikuta/bert-japanese](https://github.com/yoheikikuta/bert-japanese). #### How to use ```python from transformers import pipeline, AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained('tugstugi/bert-large-mongolian-uncased', use_fast=False) model = AutoModelForMaskedLM.from_pretrained('tugstugi/bert-large-mongolian-uncased') ## declare task ## pipe = pipeline(task="fill-mask", model=model, tokenizer=tokenizer) ## example ## input_ = 'Монгол улсын [MASK] Улаанбаатар хотоос ярьж байна.' 
output_ = pipe(input_) for i in range(len(output_)): print(output_[i]) ## output ## # {'sequence': 'монгол улсын нийслэл улаанбаатар хотоос ярьж байна.', 'score': 0.7867621183395386, 'token': 849, 'token_str': 'нийслэл'} # {'sequence': 'монгол улсын ерөнхийлөгч улаанбаатар хотоос ярьж байна.', 'score': 0.14303277432918549, 'token': 244, 'token_str': 'ерөнхийлөгч'} # {'sequence': 'монгол улсын ерөнхийлөгчийг улаанбаатар хотоос ярьж байна.', 'score': 0.011642335914075375, 'token': 8373, 'token_str': 'ерөнхийлөгчийг'} # {'sequence': 'монгол улсын иргэд улаанбаатар хотоос ярьж байна.', 'score': 0.006592822726815939, 'token': 247, 'token_str': 'иргэд'} # {'sequence': 'монгол улсын нийслэлийг улаанбаатар хотоос ярьж байна.', 'score': 0.006165097933262587, 'token': 15501, 'token_str': 'нийслэлийг'} ``` ## Training data Mongolian Wikipedia and the 700 million word Mongolian news data set [[Pretraining Procedure](https://github.com/tugstugi/mongolian-bert#pre-training)] ### BibTeX entry and citation info ```bibtex @misc{mongolian-bert, author = {Tuguldur, Erdene-Ochir and Gunchinish, Sharavsambuu and Bataa, Enkhbold}, title = {BERT Pretrained Models on Mongolian Datasets}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/tugstugi/mongolian-bert/}} } ```
2,603
[ [ -0.029632568359375, -0.033050537109375, -0.0041351318359375, 0.0185089111328125, -0.040924072265625, 0.0017757415771484375, -0.0242156982421875, -0.00949859619140625, 0.025543212890625, 0.007801055908203125, -0.046478271484375, -0.04693603515625, -0.049438476562...
alvations/autotrain-ara-transliterate-1259548205
2023-03-16T00:44:52.000Z
[ "transformers", "pytorch", "marian", "text2text-generation", "autotrain", "translation", "en", "ar", "multilingual", "dataset:alvations/autotrain-data-ara-transliterate", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
alvations
null
null
alvations/autotrain-ara-transliterate-1259548205
0
1,211
transformers
2022-08-15T12:23:51
--- language: - en - ar - multilingual tags: - autotrain - translation datasets: - alvations/autotrain-data-ara-transliterate co2_eq_emissions: emissions: 1938.877077145461 --- # Model Trained Using AutoTrain - Problem type: Translation - Model ID: 1259548205 - CO2 Emissions (in grams): 1938.8771 ## Validation Metrics - Loss: 0.685 - SacreBLEU: 57.231 - Gen len: 6.943
376
[ [ -0.010223388671875, -0.023529052734375, 0.036407470703125, 0.0063018798828125, -0.004306793212890625, -0.01690673828125, 0.0092926025390625, 0.0019350051879882812, -0.0298004150390625, 0.023345947265625, -0.043792724609375, -0.032958984375, -0.0538330078125, ...
timm/eva02_enormous_patch14_clip_224.laion2b_s4b_b115k
2023-04-11T00:18:32.000Z
[ "open_clip", "zero-shot-image-classification", "clip", "license:mit", "region:us", "has_space" ]
zero-shot-image-classification
timm
null
null
timm/eva02_enormous_patch14_clip_224.laion2b_s4b_b115k
1
1,211
open_clip
2023-04-10T23:15:05
--- tags: - zero-shot-image-classification - clip library_tag: open_clip license: mit --- # Model card for eva02_enormous_patch14_clip_224.laion2b_s4b_b115k
157
[ [ -0.02227783203125, -0.01335906982421875, 0.0285797119140625, 0.048309326171875, -0.0361328125, 0.01195526123046875, 0.038116455078125, 0.0079803466796875, 0.055511474609375, 0.06524658203125, -0.04327392578125, -0.019287109375, -0.0273284912109375, -0.004501...
readerbench/RoBERT-base
2023-06-21T10:08:18.000Z
[ "transformers", "pytorch", "tf", "jax", "bert", "endpoints_compatible", "has_space", "region:us" ]
null
readerbench
null
null
readerbench/RoBERT-base
2
1,210
transformers
2022-03-02T23:29:05
--- language: - ro --- # RoBERT-base ## Pretrained BERT model for Romanian Pretrained model on Romanian language using a masked language modeling (MLM) and next sentence prediction (NSP) objective. It was introduced in this [paper](https://www.aclweb.org/anthology/2020.coling-main.581/). Three BERT models were released: RoBERT-small, **RoBERT-base** and RoBERT-large, all versions uncased. | Model | Weights | L | H | A | MLM accuracy | NSP accuracy | |----------------|:---------:|:------:|:------:|:------:|:--------------:|:--------------:| | RoBERT-small | 19M | 12 | 256 | 8 | 0.5363 | 0.9687 | | *RoBERT-base* | *114M* | *12* | *768* | *12* | *0.6511* | *0.9802* | | RoBERT-large | 341M | 24 | 1024 | 24 | 0.6929 | 0.9843 | All models are available: * [RoBERT-small](https://huggingface.co/readerbench/RoBERT-small) * [RoBERT-base](https://huggingface.co/readerbench/RoBERT-base) * [RoBERT-large](https://huggingface.co/readerbench/RoBERT-large) #### How to use ```python # tensorflow from transformers import AutoTokenizer, TFAutoModel tokenizer = AutoTokenizer.from_pretrained("readerbench/RoBERT-base") model = TFAutoModel.from_pretrained("readerbench/RoBERT-base") inputs = tokenizer("exemplu de propoziție", return_tensors="tf") outputs = model(inputs) # pytorch from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("readerbench/RoBERT-base") model = AutoModel.from_pretrained("readerbench/RoBERT-base") inputs = tokenizer("exemplu de propoziție", return_tensors="pt") outputs = model(**inputs) ``` ## Training data The model is trained on the following compilation of corpora. Note that we present the statistics after the cleaning process. 
| Corpus | Words | Sentences | Size (GB)| |-----------|:---------:|:---------:|:--------:| | Oscar | 1.78B | 87M | 10.8 | | RoTex | 240M | 14M | 1.5 | | RoWiki | 50M | 2M | 0.3 | | **Total** | **2.07B** | **103M** | **12.6** | ## Downstream performance ### Sentiment analysis We report Macro-averaged F1 score (in %) | Model | Dev | Test | |------------------|:--------:|:--------:| | multilingual-BERT| 68.96 | 69.57 | | XLM-R-base | 71.26 | 71.71 | | BERT-base-ro | 70.49 | 71.02 | | RoBERT-small | 66.32 | 66.37 | | *RoBERT-base* | *70.89* | *71.61* | | RoBERT-large | **72.48**| **72.11**| ### Moldavian vs. Romanian Dialect and Cross-dialect Topic identification We report results on [VarDial 2019](https://sites.google.com/view/vardial2019/campaign) Moldavian vs. Romanian Cross-dialect Topic identification Challenge, as Macro-averaged F1 score (in %). | Model | Dialect Classification | MD to RO | RO to MD | |-------------------|:----------------------:|:--------:|:--------:| | 2-CNN + SVM | 93.40 | 65.09 | 75.21 | | Char+Word SVM | 96.20 | 69.08 | 81.93 | | BiGRU | 93.30 | **70.10**| 80.30 | | multilingual-BERT | 95.34 | 68.76 | 78.24 | | XLM-R-base | 96.28 | 69.93 | 82.28 | | BERT-base-ro | 96.20 | 69.93 | 78.79 | | RoBERT-small | 95.67 | 69.01 | 80.40 | | *RoBERT-base* | *97.39* | *68.30* | *81.09* | | RoBERT-large | **97.78** | 69.91 | **83.65**| ### Diacritics Restoration Challenge can be found [here](https://diacritics-challenge.speed.pub.ro/). We report results on the official test set, as accuracies in %. 
| Model | word level | char level | |-----------------------------|:----------:|:----------:| | BiLSTM | 99.42 | - | | CharCNN | 98.40 | 99.65 | | CharCNN + multilingual-BERT | 99.72 | 99.94 | | CharCNN + XLM-R-base | 99.76 | **99.95** | | CharCNN + BERT-base-ro | **99.79** | **99.95** | | CharCNN + RoBERT-small | 99.73 | 99.94 | | *CharCNN + RoBERT-base* | *99.78* | **99.95** | | CharCNN + RoBERT-large | 99.76 | **99.95** | ### BibTeX entry and citation info ```bibtex @inproceedings{masala2020robert, title={RoBERT--A Romanian BERT Model}, author={Masala, Mihai and Ruseti, Stefan and Dascalu, Mihai}, booktitle={Proceedings of the 28th International Conference on Computational Linguistics}, pages={6626--6637}, year={2020} } ```
4,718
[ [ -0.037445068359375, -0.057952880859375, 0.0123443603515625, 0.01309967041015625, -0.02301025390625, 0.008880615234375, -0.03125, -0.0290374755859375, 0.035797119140625, 0.0256500244140625, -0.047576904296875, -0.060394287109375, -0.055694580078125, 0.0160675...
digiplay/2-KWI
2023-06-25T16:25:23.000Z
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
digiplay
null
null
digiplay/2-KWI
3
1,209
diffusers
2023-06-25T05:15:56
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: https://civitai.com/models/93891/2-kwi Original author's demo image: ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/f6adc7b4-8150-4292-9b3b-509ca31ee771/width=1024/02016-4011654912-(masterpiece,%20best%20quality),1girl%20with%20long%20white%20hair%20sitting%20in%20a%20field%20of%20green%20plants%20and%20flowers,%20her%20hand%20under%20her%20chin,.jpeg)
497
[ [ -0.0174102783203125, -0.038970947265625, 0.032012939453125, 0.030548095703125, -0.0244598388671875, -0.018218994140625, 0.0174713134765625, -0.00902557373046875, 0.031585693359375, 0.04083251953125, -0.066650390625, -0.039215087890625, -0.03131103515625, -0....
Yntec/DucHaiten-StyleLikeMeVAE
2023-07-26T09:07:38.000Z
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "DucHaiten", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
Yntec
null
null
Yntec/DucHaiten-StyleLikeMeVAE
1
1,209
diffusers
2023-07-26T08:46:00
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - stable-diffusion - stable-diffusion-diffusers - diffusers - text-to-image - DucHaiten --- # Style Like DucHaiten VAE Ladies and gentlemen, I present what DucHaiten has to say about this model: https://huggingface.co/DucHaiten/DucHaiten-StyleLikeMe And then I baked in the Color 101 VAE.
385
[ [ -0.03155517578125, -0.033111572265625, 0.034454345703125, 0.0263671875, -0.024078369140625, -0.0282135009765625, 0.011077880859375, 0.006744384765625, 0.08355712890625, 0.041259765625, -0.052215576171875, -0.042755126953125, -0.01727294921875, -0.01170349121...
bionlp/bluebert_pubmed_uncased_L-12_H-768_A-12
2021-09-24T07:45:33.000Z
[ "transformers", "pytorch", "bluebert", "en", "dataset:pubmed", "license:cc0-1.0", "endpoints_compatible", "region:us" ]
null
bionlp
null
null
bionlp/bluebert_pubmed_uncased_L-12_H-768_A-12
5
1,208
transformers
2022-03-02T23:29:05
--- language: - en tags: - bluebert license: cc0-1.0 datasets: - pubmed --- # BlueBert-Base, Uncased, PubMed ## Model description A BERT model pre-trained on PubMed abstracts. ## Intended uses & limitations #### How to use Please see https://github.com/ncbi-nlp/bluebert ## Training data We provide [preprocessed PubMed texts](https://ftp.ncbi.nlm.nih.gov/pub/lu/Suppl/NCBI-BERT/pubmed_uncased_sentence_nltk.txt.tar.gz) that were used to pre-train the BlueBERT models. The corpus contains ~4000M words extracted from the [PubMed ASCII code version](https://www.ncbi.nlm.nih.gov/research/bionlp/APIs/BioC-PubMed/). Pre-trained model: https://huggingface.co/bert-base-uncased ## Training procedure * lowercasing the text * removing characters outside the ASCII range `\x00`-`\x7F` * tokenizing the text using the [NLTK Treebank tokenizer](https://www.nltk.org/_modules/nltk/tokenize/treebank.html) Below is a code snippet with more details (imports added for completeness; `value` holds one line of raw text). ```python import re from nltk.tokenize.treebank import TreebankWordTokenizer value = value.lower() value = re.sub(r'[\r\n]+', ' ', value) value = re.sub(r'[^\x00-\x7F]+', ' ', value) tokenized = TreebankWordTokenizer().tokenize(value) sentence = ' '.join(tokenized) sentence = re.sub(r"\s's\b", "'s", sentence) ``` ### BibTeX entry and citation info ```bibtex @InProceedings{peng2019transfer, author = {Yifan Peng and Shankai Yan and Zhiyong Lu}, title = {Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets}, booktitle = {Proceedings of the 2019 Workshop on Biomedical Natural Language Processing (BioNLP 2019)}, year = {2019}, pages = {58--65}, } ```
1,620
[ [ 0.009063720703125, -0.032257080078125, 0.0272979736328125, 0.0284423828125, -0.03778076171875, 0.003170013427734375, -0.0242156982421875, -0.042327880859375, 0.01507568359375, 0.0250244140625, -0.024627685546875, -0.056427001953125, -0.040679931640625, 0.027...
ConvLab/t5-small-nlu-multiwoz21
2022-11-25T11:13:22.000Z
[ "transformers", "pytorch", "t5", "text2text-generation", "t5-small", "natural language understanding", "conversational system", "task-oriented dialog", "en", "dataset:ConvLab/multiwoz21", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generatio...
text2text-generation
ConvLab
null
null
ConvLab/t5-small-nlu-multiwoz21
0
1,208
transformers
2022-11-25T04:25:43
--- language: - en license: apache-2.0 tags: - t5-small - text2text-generation - natural language understanding - conversational system - task-oriented dialog datasets: - ConvLab/multiwoz21 metrics: - Dialog acts Accuracy - Dialog acts F1 model-index: - name: t5-small-nlu-multiwoz21 results: - task: type: text2text-generation name: natural language understanding dataset: type: ConvLab/multiwoz21 name: MultiWOZ 2.1 split: test revision: 5f55375edbfe0270c20bcf770751ad982c0e6614 metrics: - type: Dialog acts Accuracy value: 77.8 name: Accuracy - type: Dialog acts F1 value: 86.5 name: F1 widget: - text: "user: I would like a taxi from Saint John's college to Pizza Hut Fen Ditton." - text: "user: we are staying 6 people for 4 nights starting from Tuesday. i need the reference number" inference: parameters: max_length: 100 --- # t5-small-nlu-multiwoz21 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on [MultiWOZ 2.1](https://huggingface.co/datasets/ConvLab/multiwoz21). Refer to [ConvLab-3](https://github.com/ConvLab/ConvLab-3) for model description and usage. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 128 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 256 - optimizer: Adafactor - lr_scheduler_type: linear - num_epochs: 10.0 ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
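The `total_train_batch_size` in the hyperparameter list above follows from the per-step batch size and gradient accumulation; a one-line sanity check (plain arithmetic copied from the list, no Trainer involved):

```python
# Effective (total) train batch size = per-step batch size * accumulation steps.
# Values taken from the hyperparameter list above.
train_batch_size = 128
gradient_accumulation_steps = 2
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 256
```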
1,640
[ [ -0.05157470703125, -0.043243408203125, 0.01169586181640625, 0.0211029052734375, -0.0188140869140625, -0.02191162109375, -0.007568359375, -0.0203094482421875, 0.00850677490234375, 0.024871826171875, -0.06988525390625, -0.03466796875, -0.035400390625, 0.006046...
consciousAI/cai-stellaris-text-embeddings
2023-06-23T15:27:38.000Z
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "mteb", "model-index", "endpoints_compatible", "region:us" ]
sentence-similarity
consciousAI
null
null
consciousAI/cai-stellaris-text-embeddings
0
1,208
sentence-transformers
2023-06-23T15:01:50
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb model-index: - name: cai-stellaris-text-embeddings results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 64.86567164179104 - type: ap value: 28.30760041689409 - type: f1 value: 59.08589995918376 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 65.168625 - type: ap value: 60.131922961382166 - type: f1 value: 65.02463910192814 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 31.016 - type: f1 value: 30.501226228002924 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 24.609 - type: map_at_10 value: 38.793 - type: map_at_100 value: 40.074 - type: map_at_1000 value: 40.083 - type: map_at_3 value: 33.736 - type: map_at_5 value: 36.642 - type: mrr_at_1 value: 25.533 - type: mrr_at_10 value: 39.129999999999995 - type: mrr_at_100 value: 40.411 - type: mrr_at_1000 value: 40.42 - type: mrr_at_3 value: 34.033 - type: mrr_at_5 value: 36.956 - type: ndcg_at_1 value: 24.609 - type: ndcg_at_10 value: 47.288000000000004 - type: ndcg_at_100 value: 52.654999999999994 - type: ndcg_at_1000 value: 52.88699999999999 - type: ndcg_at_3 value: 36.86 - type: ndcg_at_5 value: 42.085 - type: precision_at_1 value: 24.609 - type: precision_at_10 value: 7.468 - type: precision_at_100 value: 0.979 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 
15.315000000000001 - type: precision_at_5 value: 11.721 - type: recall_at_1 value: 24.609 - type: recall_at_10 value: 74.68 - type: recall_at_100 value: 97.866 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 45.946 - type: recall_at_5 value: 58.606 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 42.014046191286525 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 31.406159641263052 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 60.35266033223575 - type: mrr value: 72.66796376907179 - task: type: Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 74.12337662337661 - type: f1 value: 73.12122145084057 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 34.72513663347855 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 29.280150859689826 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 21.787 - type: map_at_10 value: 30.409000000000002 - type: map_at_100 value: 31.947 - type: map_at_1000 value: 
32.09 - type: map_at_3 value: 27.214 - type: map_at_5 value: 28.810999999999996 - type: mrr_at_1 value: 27.039 - type: mrr_at_10 value: 35.581 - type: mrr_at_100 value: 36.584 - type: mrr_at_1000 value: 36.645 - type: mrr_at_3 value: 32.713 - type: mrr_at_5 value: 34.272999999999996 - type: ndcg_at_1 value: 27.039 - type: ndcg_at_10 value: 36.157000000000004 - type: ndcg_at_100 value: 42.598 - type: ndcg_at_1000 value: 45.207 - type: ndcg_at_3 value: 30.907 - type: ndcg_at_5 value: 33.068 - type: precision_at_1 value: 27.039 - type: precision_at_10 value: 7.295999999999999 - type: precision_at_100 value: 1.303 - type: precision_at_1000 value: 0.186 - type: precision_at_3 value: 14.926 - type: precision_at_5 value: 11.044 - type: recall_at_1 value: 21.787 - type: recall_at_10 value: 47.693999999999996 - type: recall_at_100 value: 75.848 - type: recall_at_1000 value: 92.713 - type: recall_at_3 value: 32.92 - type: recall_at_5 value: 38.794000000000004 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.560000000000002 - type: map_at_10 value: 34.756 - type: map_at_100 value: 36.169000000000004 - type: map_at_1000 value: 36.298 - type: map_at_3 value: 31.592 - type: map_at_5 value: 33.426 - type: mrr_at_1 value: 31.274 - type: mrr_at_10 value: 40.328 - type: mrr_at_100 value: 41.125 - type: mrr_at_1000 value: 41.171 - type: mrr_at_3 value: 37.866 - type: mrr_at_5 value: 39.299 - type: ndcg_at_1 value: 31.338 - type: ndcg_at_10 value: 40.696 - type: ndcg_at_100 value: 45.922000000000004 - type: ndcg_at_1000 value: 47.982 - type: ndcg_at_3 value: 36.116 - type: ndcg_at_5 value: 38.324000000000005 - type: precision_at_1 value: 31.338 - type: precision_at_10 value: 8.083 - type: precision_at_100 value: 1.4040000000000001 - type: precision_at_1000 value: 0.189 - type: precision_at_3 value: 18.089 - type: precision_at_5 value: 13.159 - type: recall_at_1 
value: 24.560000000000002 - type: recall_at_10 value: 51.832 - type: recall_at_100 value: 74.26899999999999 - type: recall_at_1000 value: 87.331 - type: recall_at_3 value: 38.086999999999996 - type: recall_at_5 value: 44.294 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 27.256999999999998 - type: map_at_10 value: 38.805 - type: map_at_100 value: 40.04 - type: map_at_1000 value: 40.117000000000004 - type: map_at_3 value: 35.425000000000004 - type: map_at_5 value: 37.317 - type: mrr_at_1 value: 31.912000000000003 - type: mrr_at_10 value: 42.045 - type: mrr_at_100 value: 42.956 - type: mrr_at_1000 value: 43.004 - type: mrr_at_3 value: 39.195 - type: mrr_at_5 value: 40.866 - type: ndcg_at_1 value: 31.912000000000003 - type: ndcg_at_10 value: 44.826 - type: ndcg_at_100 value: 49.85 - type: ndcg_at_1000 value: 51.562 - type: ndcg_at_3 value: 38.845 - type: ndcg_at_5 value: 41.719 - type: precision_at_1 value: 31.912000000000003 - type: precision_at_10 value: 7.768 - type: precision_at_100 value: 1.115 - type: precision_at_1000 value: 0.131 - type: precision_at_3 value: 18.015 - type: precision_at_5 value: 12.814999999999998 - type: recall_at_1 value: 27.256999999999998 - type: recall_at_10 value: 59.611999999999995 - type: recall_at_100 value: 81.324 - type: recall_at_1000 value: 93.801 - type: recall_at_3 value: 43.589 - type: recall_at_5 value: 50.589 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 15.588 - type: map_at_10 value: 22.936999999999998 - type: map_at_100 value: 24.015 - type: map_at_1000 value: 24.127000000000002 - type: map_at_3 value: 20.47 - type: map_at_5 value: 21.799 - type: mrr_at_1 value: 16.723 - type: mrr_at_10 value: 24.448 - type: mrr_at_100 value: 25.482 - type: mrr_at_1000 value: 25.568999999999996 - 
type: mrr_at_3 value: 21.94 - type: mrr_at_5 value: 23.386000000000003 - type: ndcg_at_1 value: 16.723 - type: ndcg_at_10 value: 27.451999999999998 - type: ndcg_at_100 value: 33.182 - type: ndcg_at_1000 value: 36.193999999999996 - type: ndcg_at_3 value: 22.545 - type: ndcg_at_5 value: 24.837 - type: precision_at_1 value: 16.723 - type: precision_at_10 value: 4.5760000000000005 - type: precision_at_100 value: 0.7929999999999999 - type: precision_at_1000 value: 0.11 - type: precision_at_3 value: 9.944 - type: precision_at_5 value: 7.321999999999999 - type: recall_at_1 value: 15.588 - type: recall_at_10 value: 40.039 - type: recall_at_100 value: 67.17699999999999 - type: recall_at_1000 value: 90.181 - type: recall_at_3 value: 26.663999999999998 - type: recall_at_5 value: 32.144 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 12.142999999999999 - type: map_at_10 value: 18.355 - type: map_at_100 value: 19.611 - type: map_at_1000 value: 19.750999999999998 - type: map_at_3 value: 16.073999999999998 - type: map_at_5 value: 17.187 - type: mrr_at_1 value: 15.547 - type: mrr_at_10 value: 22.615 - type: mrr_at_100 value: 23.671 - type: mrr_at_1000 value: 23.759 - type: mrr_at_3 value: 20.149 - type: mrr_at_5 value: 21.437 - type: ndcg_at_1 value: 15.547 - type: ndcg_at_10 value: 22.985 - type: ndcg_at_100 value: 29.192 - type: ndcg_at_1000 value: 32.448 - type: ndcg_at_3 value: 18.503 - type: ndcg_at_5 value: 20.322000000000003 - type: precision_at_1 value: 15.547 - type: precision_at_10 value: 4.49 - type: precision_at_100 value: 0.8840000000000001 - type: precision_at_1000 value: 0.129 - type: precision_at_3 value: 8.872 - type: precision_at_5 value: 6.741 - type: recall_at_1 value: 12.142999999999999 - type: recall_at_10 value: 33.271 - type: recall_at_100 value: 60.95399999999999 - type: recall_at_1000 value: 83.963 - type: recall_at_3 value: 
20.645 - type: recall_at_5 value: 25.34 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 22.09 - type: map_at_10 value: 30.220000000000002 - type: map_at_100 value: 31.741999999999997 - type: map_at_1000 value: 31.878 - type: map_at_3 value: 27.455000000000002 - type: map_at_5 value: 28.808 - type: mrr_at_1 value: 27.718999999999998 - type: mrr_at_10 value: 35.476 - type: mrr_at_100 value: 36.53 - type: mrr_at_1000 value: 36.602000000000004 - type: mrr_at_3 value: 33.157 - type: mrr_at_5 value: 34.36 - type: ndcg_at_1 value: 27.718999999999998 - type: ndcg_at_10 value: 35.547000000000004 - type: ndcg_at_100 value: 42.079 - type: ndcg_at_1000 value: 44.861000000000004 - type: ndcg_at_3 value: 30.932 - type: ndcg_at_5 value: 32.748 - type: precision_at_1 value: 27.718999999999998 - type: precision_at_10 value: 6.795 - type: precision_at_100 value: 1.194 - type: precision_at_1000 value: 0.163 - type: precision_at_3 value: 14.758 - type: precision_at_5 value: 10.549 - type: recall_at_1 value: 22.09 - type: recall_at_10 value: 46.357 - type: recall_at_100 value: 74.002 - type: recall_at_1000 value: 92.99199999999999 - type: recall_at_3 value: 33.138 - type: recall_at_5 value: 38.034 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 16.904 - type: map_at_10 value: 25.075999999999997 - type: map_at_100 value: 26.400000000000002 - type: map_at_1000 value: 26.525 - type: map_at_3 value: 22.191 - type: map_at_5 value: 23.947 - type: mrr_at_1 value: 21.461 - type: mrr_at_10 value: 29.614 - type: mrr_at_100 value: 30.602 - type: mrr_at_1000 value: 30.677 - type: mrr_at_3 value: 27.017000000000003 - type: mrr_at_5 value: 28.626 - type: ndcg_at_1 value: 21.461 - type: ndcg_at_10 value: 30.304 - type: ndcg_at_100 value: 36.521 - type: 
ndcg_at_1000 value: 39.366 - type: ndcg_at_3 value: 25.267 - type: ndcg_at_5 value: 27.918 - type: precision_at_1 value: 21.461 - type: precision_at_10 value: 5.868 - type: precision_at_100 value: 1.072 - type: precision_at_1000 value: 0.151 - type: precision_at_3 value: 12.291 - type: precision_at_5 value: 9.429 - type: recall_at_1 value: 16.904 - type: recall_at_10 value: 41.521 - type: recall_at_100 value: 68.919 - type: recall_at_1000 value: 88.852 - type: recall_at_3 value: 27.733999999999998 - type: recall_at_5 value: 34.439 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 18.327916666666667 - type: map_at_10 value: 26.068 - type: map_at_100 value: 27.358833333333333 - type: map_at_1000 value: 27.491583333333335 - type: map_at_3 value: 23.45508333333333 - type: map_at_5 value: 24.857916666666664 - type: mrr_at_1 value: 22.05066666666667 - type: mrr_at_10 value: 29.805083333333332 - type: mrr_at_100 value: 30.80283333333333 - type: mrr_at_1000 value: 30.876166666666666 - type: mrr_at_3 value: 27.381083333333333 - type: mrr_at_5 value: 28.72441666666667 - type: ndcg_at_1 value: 22.056000000000004 - type: ndcg_at_10 value: 31.029416666666666 - type: ndcg_at_100 value: 36.90174999999999 - type: ndcg_at_1000 value: 39.716249999999995 - type: ndcg_at_3 value: 26.35533333333333 - type: ndcg_at_5 value: 28.471500000000006 - type: precision_at_1 value: 22.056000000000004 - type: precision_at_10 value: 5.7645833333333325 - type: precision_at_100 value: 1.0406666666666666 - type: precision_at_1000 value: 0.14850000000000002 - type: precision_at_3 value: 12.391416666666666 - type: precision_at_5 value: 9.112499999999999 - type: recall_at_1 value: 18.327916666666667 - type: recall_at_10 value: 42.15083333333333 - type: recall_at_100 value: 68.38666666666666 - type: recall_at_1000 value: 88.24183333333333 - type: recall_at_3 value: 29.094416666666667 - type: 
recall_at_5 value: 34.48716666666666 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 15.009 - type: map_at_10 value: 21.251 - type: map_at_100 value: 22.337 - type: map_at_1000 value: 22.455 - type: map_at_3 value: 19.241 - type: map_at_5 value: 20.381 - type: mrr_at_1 value: 17.638 - type: mrr_at_10 value: 24.184 - type: mrr_at_100 value: 25.156 - type: mrr_at_1000 value: 25.239 - type: mrr_at_3 value: 22.29 - type: mrr_at_5 value: 23.363999999999997 - type: ndcg_at_1 value: 17.638 - type: ndcg_at_10 value: 25.269000000000002 - type: ndcg_at_100 value: 30.781999999999996 - type: ndcg_at_1000 value: 33.757 - type: ndcg_at_3 value: 21.457 - type: ndcg_at_5 value: 23.293 - type: precision_at_1 value: 17.638 - type: precision_at_10 value: 4.294 - type: precision_at_100 value: 0.771 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 9.815999999999999 - type: precision_at_5 value: 7.086 - type: recall_at_1 value: 15.009 - type: recall_at_10 value: 35.014 - type: recall_at_100 value: 60.45399999999999 - type: recall_at_1000 value: 82.416 - type: recall_at_3 value: 24.131 - type: recall_at_5 value: 28.846 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 12.518 - type: map_at_10 value: 18.226 - type: map_at_100 value: 19.355 - type: map_at_1000 value: 19.496 - type: map_at_3 value: 16.243 - type: map_at_5 value: 17.288999999999998 - type: mrr_at_1 value: 15.382000000000001 - type: mrr_at_10 value: 21.559 - type: mrr_at_100 value: 22.587 - type: mrr_at_1000 value: 22.677 - type: mrr_at_3 value: 19.597 - type: mrr_at_5 value: 20.585 - type: ndcg_at_1 value: 15.382000000000001 - type: ndcg_at_10 value: 22.198 - type: ndcg_at_100 value: 27.860000000000003 - type: ndcg_at_1000 value: 31.302999999999997 - type: 
ndcg_at_3 value: 18.541 - type: ndcg_at_5 value: 20.089000000000002 - type: precision_at_1 value: 15.382000000000001 - type: precision_at_10 value: 4.178 - type: precision_at_100 value: 0.8380000000000001 - type: precision_at_1000 value: 0.132 - type: precision_at_3 value: 8.866999999999999 - type: precision_at_5 value: 6.476 - type: recall_at_1 value: 12.518 - type: recall_at_10 value: 31.036 - type: recall_at_100 value: 56.727000000000004 - type: recall_at_1000 value: 81.66799999999999 - type: recall_at_3 value: 20.610999999999997 - type: recall_at_5 value: 24.744 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 18.357 - type: map_at_10 value: 25.384 - type: map_at_100 value: 26.640000000000004 - type: map_at_1000 value: 26.762999999999998 - type: map_at_3 value: 22.863 - type: map_at_5 value: 24.197 - type: mrr_at_1 value: 21.735 - type: mrr_at_10 value: 29.069 - type: mrr_at_100 value: 30.119 - type: mrr_at_1000 value: 30.194 - type: mrr_at_3 value: 26.663999999999998 - type: mrr_at_5 value: 27.904 - type: ndcg_at_1 value: 21.735 - type: ndcg_at_10 value: 30.153999999999996 - type: ndcg_at_100 value: 36.262 - type: ndcg_at_1000 value: 39.206 - type: ndcg_at_3 value: 25.365 - type: ndcg_at_5 value: 27.403 - type: precision_at_1 value: 21.735 - type: precision_at_10 value: 5.354 - type: precision_at_100 value: 0.958 - type: precision_at_1000 value: 0.134 - type: precision_at_3 value: 11.567 - type: precision_at_5 value: 8.469999999999999 - type: recall_at_1 value: 18.357 - type: recall_at_10 value: 41.205000000000005 - type: recall_at_100 value: 68.30000000000001 - type: recall_at_1000 value: 89.294 - type: recall_at_3 value: 27.969 - type: recall_at_5 value: 32.989000000000004 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 
value: 18.226 - type: map_at_10 value: 25.766 - type: map_at_100 value: 27.345000000000002 - type: map_at_1000 value: 27.575 - type: map_at_3 value: 22.945999999999998 - type: map_at_5 value: 24.383 - type: mrr_at_1 value: 21.542 - type: mrr_at_10 value: 29.448 - type: mrr_at_100 value: 30.509999999999998 - type: mrr_at_1000 value: 30.575000000000003 - type: mrr_at_3 value: 26.482 - type: mrr_at_5 value: 28.072999999999997 - type: ndcg_at_1 value: 21.542 - type: ndcg_at_10 value: 31.392999999999997 - type: ndcg_at_100 value: 37.589 - type: ndcg_at_1000 value: 40.717 - type: ndcg_at_3 value: 26.179000000000002 - type: ndcg_at_5 value: 28.557 - type: precision_at_1 value: 21.542 - type: precision_at_10 value: 6.462 - type: precision_at_100 value: 1.415 - type: precision_at_1000 value: 0.234 - type: precision_at_3 value: 12.187000000000001 - type: precision_at_5 value: 9.605 - type: recall_at_1 value: 18.226 - type: recall_at_10 value: 42.853 - type: recall_at_100 value: 70.97200000000001 - type: recall_at_1000 value: 91.662 - type: recall_at_3 value: 28.555999999999997 - type: recall_at_5 value: 34.203 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 15.495999999999999 - type: map_at_10 value: 21.631 - type: map_at_100 value: 22.705000000000002 - type: map_at_1000 value: 22.823999999999998 - type: map_at_3 value: 19.747 - type: map_at_5 value: 20.75 - type: mrr_at_1 value: 16.636 - type: mrr_at_10 value: 23.294 - type: mrr_at_100 value: 24.312 - type: mrr_at_1000 value: 24.401999999999997 - type: mrr_at_3 value: 21.503 - type: mrr_at_5 value: 22.52 - type: ndcg_at_1 value: 16.636 - type: ndcg_at_10 value: 25.372 - type: ndcg_at_100 value: 30.984 - type: ndcg_at_1000 value: 33.992 - type: ndcg_at_3 value: 21.607000000000003 - type: ndcg_at_5 value: 23.380000000000003 - type: precision_at_1 value: 16.636 - type: precision_at_10 value: 4.011 - 
type: precision_at_100 value: 0.741 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 9.365 - type: precision_at_5 value: 6.654 - type: recall_at_1 value: 15.495999999999999 - type: recall_at_10 value: 35.376000000000005 - type: recall_at_100 value: 61.694 - type: recall_at_1000 value: 84.029 - type: recall_at_3 value: 25.089 - type: recall_at_5 value: 29.43 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 4.662 - type: map_at_10 value: 8.638 - type: map_at_100 value: 9.86 - type: map_at_1000 value: 10.032 - type: map_at_3 value: 6.793 - type: map_at_5 value: 7.761 - type: mrr_at_1 value: 10.684000000000001 - type: mrr_at_10 value: 17.982 - type: mrr_at_100 value: 19.152 - type: mrr_at_1000 value: 19.231 - type: mrr_at_3 value: 15.113999999999999 - type: mrr_at_5 value: 16.658 - type: ndcg_at_1 value: 10.684000000000001 - type: ndcg_at_10 value: 13.483 - type: ndcg_at_100 value: 19.48 - type: ndcg_at_1000 value: 23.232 - type: ndcg_at_3 value: 9.75 - type: ndcg_at_5 value: 11.208 - type: precision_at_1 value: 10.684000000000001 - type: precision_at_10 value: 4.573 - type: precision_at_100 value: 1.085 - type: precision_at_1000 value: 0.17600000000000002 - type: precision_at_3 value: 7.514 - type: precision_at_5 value: 6.241 - type: recall_at_1 value: 4.662 - type: recall_at_10 value: 18.125 - type: recall_at_100 value: 39.675 - type: recall_at_1000 value: 61.332 - type: recall_at_3 value: 9.239 - type: recall_at_5 value: 12.863 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 3.869 - type: map_at_10 value: 8.701 - type: map_at_100 value: 11.806999999999999 - type: map_at_1000 value: 12.676000000000002 - type: map_at_3 value: 6.3100000000000005 - type: map_at_5 value: 7.471 - type: mrr_at_1 value: 38.5 - type: mrr_at_10 value: 48.754 
- type: mrr_at_100 value: 49.544 - type: mrr_at_1000 value: 49.568 - type: mrr_at_3 value: 46.167 - type: mrr_at_5 value: 47.679 - type: ndcg_at_1 value: 30.5 - type: ndcg_at_10 value: 22.454 - type: ndcg_at_100 value: 25.380999999999997 - type: ndcg_at_1000 value: 31.582 - type: ndcg_at_3 value: 25.617 - type: ndcg_at_5 value: 24.254 - type: precision_at_1 value: 38.5 - type: precision_at_10 value: 18.4 - type: precision_at_100 value: 6.02 - type: precision_at_1000 value: 1.34 - type: precision_at_3 value: 29.083 - type: precision_at_5 value: 24.85 - type: recall_at_1 value: 3.869 - type: recall_at_10 value: 12.902 - type: recall_at_100 value: 30.496000000000002 - type: recall_at_1000 value: 51.066 - type: recall_at_3 value: 7.396 - type: recall_at_5 value: 9.852 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 36.705000000000005 - type: f1 value: 32.72625967901387 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 66.89840000000001 - type: ap value: 61.43175045563333 - type: f1 value: 66.67945656405962 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 89.12676698586411 - type: f1 value: 88.48426641357668 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 62.61513907888736 - type: f1 value: 40.96251281624023 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 
31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 61.95359784801614 - type: f1 value: 58.85654625260125 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.1983860121049 - type: f1 value: 68.73455379435487 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 31.772017072895846 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 30.944581802089044 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 30.977328237697133 - type: mrr value: 32.02612207306447 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 43.08588418858767 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 56.53785276450797 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 81.44882719207659 - type: mrr value: 94.71082022552609 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: 
d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.77821782178218 - type: cos_sim_ap value: 93.22909989796688 - type: cos_sim_f1 value: 88.41778697001035 - type: cos_sim_precision value: 91.54175588865097 - type: cos_sim_recall value: 85.5 - type: dot_accuracy value: 99.77821782178218 - type: dot_ap value: 93.2290998979669 - type: dot_f1 value: 88.41778697001035 - type: dot_precision value: 91.54175588865097 - type: dot_recall value: 85.5 - type: euclidean_accuracy value: 99.77821782178218 - type: euclidean_ap value: 93.2290998979669 - type: euclidean_f1 value: 88.41778697001035 - type: euclidean_precision value: 91.54175588865097 - type: euclidean_recall value: 85.5 - type: manhattan_accuracy value: 99.77524752475247 - type: manhattan_ap value: 93.18492132451668 - type: manhattan_f1 value: 88.19552782111285 - type: manhattan_precision value: 91.87432286023835 - type: manhattan_recall value: 84.8 - type: max_accuracy value: 99.77821782178218 - type: max_ap value: 93.2290998979669 - type: max_f1 value: 88.41778697001035 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 48.225188905490285 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 34.76195959924048 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 48.16986372261003 - type: mrr value: 48.7718837535014 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: 
d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 63.567200000000014 - type: ap value: 11.412292644030266 - type: f1 value: 49.102043399207716 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 51.04414261460101 - type: f1 value: 51.22880449155832 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 34.35595440606073 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 84.6754485307266 - type: cos_sim_ap value: 69.6007143804539 - type: cos_sim_f1 value: 65.99822312476202 - type: cos_sim_precision value: 63.58522866226461 - type: cos_sim_recall value: 68.60158311345647 - type: dot_accuracy value: 84.6754485307266 - type: dot_ap value: 69.60070881520775 - type: dot_f1 value: 65.99822312476202 - type: dot_precision value: 63.58522866226461 - type: dot_recall value: 68.60158311345647 - type: euclidean_accuracy value: 84.6754485307266 - type: euclidean_ap value: 69.60071394457518 - type: euclidean_f1 value: 65.99822312476202 - type: euclidean_precision value: 63.58522866226461 - type: euclidean_recall value: 68.60158311345647 - type: manhattan_accuracy value: 84.6754485307266 - type: manhattan_ap value: 69.57324451019119 - type: manhattan_f1 value: 65.7235045917101 - type: manhattan_precision value: 62.04311152764761 - type: manhattan_recall value: 69.86807387862797 - type: max_accuracy value: 84.6754485307266 - type: max_ap value: 69.6007143804539 - type: max_f1 value: 65.99822312476202 - task: type: 
PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 87.63922847052432 - type: cos_sim_ap value: 83.48934190421085 - type: cos_sim_f1 value: 75.42265503384861 - type: cos_sim_precision value: 71.17868124359413 - type: cos_sim_recall value: 80.20480443486295 - type: dot_accuracy value: 87.63922847052432 - type: dot_ap value: 83.4893468701264 - type: dot_f1 value: 75.42265503384861 - type: dot_precision value: 71.17868124359413 - type: dot_recall value: 80.20480443486295 - type: euclidean_accuracy value: 87.63922847052432 - type: euclidean_ap value: 83.48934073168017 - type: euclidean_f1 value: 75.42265503384861 - type: euclidean_precision value: 71.17868124359413 - type: euclidean_recall value: 80.20480443486295 - type: manhattan_accuracy value: 87.66251406838204 - type: manhattan_ap value: 83.46319621504654 - type: manhattan_f1 value: 75.41883304448297 - type: manhattan_precision value: 71.0089747076421 - type: manhattan_recall value: 80.41268863566368 - type: max_accuracy value: 87.66251406838204 - type: max_ap value: 83.4893468701264 - type: max_f1 value: 75.42265503384861 --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. 
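As a rough illustration of the semantic-search use case mentioned above, the retrieval step reduces to ranking documents by cosine similarity between embeddings. A minimal sketch with toy 3-dimensional vectors standing in for the model's 768-dimensional output (illustrative only; real usage would embed text with the model first):

```python
import math

def cos_sim(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query_vec, doc_vecs):
    """Return document indices ranked by similarity to the query embedding."""
    return sorted(range(len(doc_vecs)),
                  key=lambda i: cos_sim(query_vec, doc_vecs[i]),
                  reverse=True)

# Toy 3-d embeddings standing in for 768-d model output.
docs = [[0.1, 0.9, 0.0], [0.8, 0.2, 0.0], [0.0, 0.1, 0.9]]
query = [0.9, 0.1, 0.0]
order = search(query, docs)  # document 1 is the closest match
```

Since the model ends with a `Normalize()` module, its embeddings are unit-length, so ranking by plain dot product gives the same result as cosine similarity.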
<!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 15607 with parameters: ``` {'batch_size': 48, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
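For intuition about the training objective above: `MultipleNegativesRankingLoss` scores each anchor against every positive in the batch (the other positives act as in-batch negatives) and applies cross-entropy over the scaled cosine similarities. A minimal pure-Python sketch of that computation — illustrative only, the actual training used the `sentence_transformers` implementation with `scale=20.0` and `similarity_fct='cos_sim'`:

```python
import math

def cos_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def multiple_negatives_ranking_loss(anchors, positives, scale=20.0):
    """For anchor i, positives[i] is the true match; every other positive in
    the batch is an in-batch negative. Cross-entropy over scaled scores."""
    losses = []
    for i, a in enumerate(anchors):
        scores = [scale * cos_sim(a, p) for p in positives]
        log_denom = math.log(sum(math.exp(s) for s in scores))
        losses.append(log_denom - scores[i])  # -log softmax of the true pair
    return sum(losses) / len(losses)

# Well-aligned toy batch: each anchor is close to its own positive.
anchors = [[1.0, 0.0], [0.0, 1.0]]
positives = [[0.9, 0.1], [0.1, 0.9]]
loss = multiple_negatives_ranking_loss(anchors, positives)  # near zero
```

Swapping the positives (so each anchor points at the wrong passage) drives the loss up sharply, which is what pushes matched pairs together during training.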
41,485
[ [ -0.0234375, -0.05303955078125, 0.024627685546875, 0.020477294921875, -0.0152130126953125, -0.03546142578125, -0.013336181640625, 0.0127716064453125, 0.01215362548828125, 0.0352783203125, -0.05291748046875, -0.04632568359375, -0.044708251953125, -0.0030632019...
GAI-LLM/ko-en-llama2-13b-mixed-v3
2023-10-27T00:43:02.000Z
[ "transformers", "pytorch", "llama", "text-generation", "ko", "license:cc-by-nc-2.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
GAI-LLM
null
null
GAI-LLM/ko-en-llama2-13b-mixed-v3
0
1,208
transformers
2023-10-22T22:36:08
--- license: cc-by-nc-2.0 language: - ko library_name: transformers pipeline_tag: text-generation --- **The license is `cc-by-nc-2.0`.** # **GAI-LLM/ko-en-llama2-13b-mixed-v3** ## Model Details **Model Developers** Donghoon Oh, Hanmin Myung, Eunyoung Kim (SK C&C G.AI Eng) **Input** Models input text only. **Output** Models generate text only. **Model Architecture** ko-en-llama2-13b-mixed-v3 is an auto-regressive language model based on the LLaMA2 transformer architecture. **Base Model** [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b) **Training Dataset** - We combined open Korean datasets using a mixed strategy. - Kopen-platypus + kaist_cot_deepL - We used 8 × A100 80GB GPUs for training. # **Model Benchmark** ## KO-LLM leaderboard - Results are reported on the [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard). # Implementation Code ```python ### GAI-LLM/ko-en-llama2-13b-mixed-v3 from transformers import AutoModelForCausalLM, AutoTokenizer import torch repo = "GAI-LLM/ko-en-llama2-13b-mixed-v3" model = AutoModelForCausalLM.from_pretrained( repo, return_dict=True, torch_dtype=torch.float16, device_map='auto' ) tokenizer = AutoTokenizer.from_pretrained(repo) ```
1,299
[ [ -0.0180816650390625, -0.057586669921875, 0.0283966064453125, 0.052490234375, -0.03851318359375, 0.0118865966796875, -0.006847381591796875, -0.031494140625, -0.00026226043701171875, 0.0290374755859375, -0.054107666015625, -0.047271728515625, -0.04595947265625, ...
TencentARC/t2iadapter_sketch_sd15v2
2023-08-01T09:42:42.000Z
[ "diffusers", "art", "t2i-adapter", "controlnet", "stable-diffusion", "image-to-image", "arxiv:2302.08453", "license:apache-2.0", "diffusers:T2IAdapter", "region:us" ]
image-to-image
TencentARC
null
null
TencentARC/t2iadapter_sketch_sd15v2
2
1,207
diffusers
2023-07-14T19:01:49
--- license: apache-2.0 base_model: runwayml/stable-diffusion-v1-5 tags: - art - t2i-adapter - controlnet - stable-diffusion - image-to-image --- # T2I Adapter - Sketch T2I Adapter is a network providing additional conditioning to stable diffusion. Each t2i checkpoint takes a different type of conditioning as input and is used with a specific base stable diffusion checkpoint. This checkpoint provides conditioning on sketches for the stable diffusion 1.5 checkpoint. ## Model Details - **Developed by:** T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** Apache 2.0 - **Resources for more information:** [GitHub Repository](https://github.com/TencentARC/T2I-Adapter), [Paper](https://arxiv.org/abs/2302.08453). - **Cite as:** @misc{ title={T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models}, author={Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie}, year={2023}, eprint={2302.08453}, archivePrefix={arXiv}, primaryClass={cs.CV} } ### Checkpoints | Model Name | Control Image Overview| Control Image Example | Generated Image Example | |---|---|---|---| |[TencentARC/t2iadapter_color_sd14v1](https://huggingface.co/TencentARC/t2iadapter_color_sd14v1)<br/> *Trained with spatial color palette* | A image with 8x8 color palette.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"><img width="64" 
src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"/></a>| |[TencentARC/t2iadapter_canny_sd14v1](https://huggingface.co/TencentARC/t2iadapter_canny_sd14v1)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"/></a>| |[TencentARC/t2iadapter_sketch_sd14v1](https://huggingface.co/TencentARC/t2iadapter_sketch_sd14v1)<br/> *Trained with [PidiNet](https://github.com/zhuoinoulu/pidinet) edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"/></a>| |[TencentARC/t2iadapter_depth_sd14v1](https://huggingface.co/TencentARC/t2iadapter_depth_sd14v1)<br/> *Trained with Midas depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"><img width="64" 
src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"/></a>| |[TencentARC/t2iadapter_openpose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_openpose_sd14v1)<br/> *Trained with OpenPose bone image* | A [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"/></a>| |[TencentARC/t2iadapter_keypose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_keypose_sd14v1)<br/> *Trained with mmpose skeleton image* | A [mmpose skeleton](https://github.com/open-mmlab/mmpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"/></a>| |[TencentARC/t2iadapter_seg_sd14v1](https://huggingface.co/TencentARC/t2iadapter_seg_sd14v1)<br/>*Trained with semantic segmentation* | An [custom](https://github.com/TencentARC/T2I-Adapter/discussions/25) 
segmentation protocol image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"/></a> | |[TencentARC/t2iadapter_canny_sd15v2](https://huggingface.co/TencentARC/t2iadapter_canny_sd15v2)|| |[TencentARC/t2iadapter_depth_sd15v2](https://huggingface.co/TencentARC/t2iadapter_depth_sd15v2)|| |[TencentARC/t2iadapter_sketch_sd15v2](https://huggingface.co/TencentARC/t2iadapter_sketch_sd15v2)|| |[TencentARC/t2iadapter_zoedepth_sd15v1](https://huggingface.co/TencentARC/t2iadapter_zoedepth_sd15v1)|| ## Example 1. Dependencies ```sh pip install diffusers transformers controlnet_aux ``` 2. Run code: ```python import torch from PIL import Image from controlnet_aux import PidiNetDetector from diffusers import ( T2IAdapter, StableDiffusionAdapterPipeline ) image = Image.open('./images/sketch_in.png') processor = PidiNetDetector.from_pretrained('lllyasviel/Annotators') sketch_image = processor(image) sketch_image.save('./images/sketch.png') adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_sketch_sd15v2", torch_dtype=torch.float16) pipe = StableDiffusionAdapterPipeline.from_pretrained( "runwayml/stable-diffusion-v1-5", adapter=adapter, safety_checker=None, torch_dtype=torch.float16, variant="fp16" ) pipe.to('cuda') generator = torch.Generator().manual_seed(0) sketch_image_out = pipe(prompt="royal chamber with fancy bed", image=sketch_image, generator=generator).images[0] sketch_image_out.save('./images/sketch_image_out.png') ``` ![sketch_in](./images/sketch_in.png) ![sketch](./images/sketch.png) ![sketch_image_out](./images/sketch_image_out.png)
7,915
[ [ -0.0167388916015625, -0.01010894775390625, 0.0202178955078125, 0.0293121337890625, -0.03466796875, -0.01496124267578125, 0.003292083740234375, -0.031829833984375, 0.0258941650390625, -0.005504608154296875, -0.044921875, -0.04180908203125, -0.0487060546875, -...
flax-community/alberti-bert-base-multilingual-cased
2023-08-07T12:10:54.000Z
[ "transformers", "pytorch", "jax", "joblib", "safetensors", "bert", "fill-mask", "multilingual", "es", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
fill-mask
flax-community
null
null
flax-community/alberti-bert-base-multilingual-cased
5
1,206
transformers
2022-03-02T23:29:05
--- language: es license: cc-by-4.0 tags: - multilingual - bert pipeline_tag: fill-mask widget: - text: ¿Qué es la vida? Un [MASK]. --- <div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"> <p><b>Update:</b> This model has been moved to <a href="https://huggingface.co/linhd-postdata/alberti-bert-base-multilingual-cased">linhd-postdata/alberti-bert-base-multilingual-cased</a>, where it will be maintained and updated. </p> </div> # ALBERTI ALBERTI is a set of two BERT-based multilingual models for poetry: one for verses and one for stanzas. This model, the verse model, was further trained on the PULPO corpus using [Flax](https://github.com/google/flax); training scripts are included. This work is part of the [Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organised by [HuggingFace](https://huggingface.co/), with TPU usage sponsored by Google. ## PULPO PULPO, the Prodigious Unannotated Literary Poetry Corpus, is a set of multilingual corpora of verses and stanzas with over 95M words.
The following corpora have been downloaded using the [Averell](https://github.com/linhd-postdata/averell/) tool, developed by the [POSTDATA](https://postdata.linhd.uned.es/) team: ### Spanish - [Disco v3](https://github.com/pruizf/disco) - [Corpus of Spanish Golden-Age Sonnets](https://github.com/bncolorado/CorpusSonetosSigloDeOro) - [Corpus general de poesía lírica castellana del Siglo de Oro](https://github.com/bncolorado/CorpusGeneralPoesiaLiricaCastellanaDelSigloDeOro) - [Gongocorpus](https://github.com/linhd-postdata/gongocorpus) - [source](http://obvil.sorbonne-universite.site/corpus/gongora/gongora_obra-poetica) ### English - [Eighteenth-Century Poetry Archive (ECPA)](https://github.com/alhuber1502/ECPA) - [For better for verse](https://github.com/waynegraham/for_better_for_verse) ### French - [Métrique en Ligne](https://crisco2.unicaen.fr/verlaine/index.php?navigation=accueil) - [source](https://github.com/linhd-postdata/metrique-en-ligne) ### Italian - [Biblioteca italiana](https://github.com/linhd-postdata/biblioteca_italiana) - [source](http://www.bibliotecaitaliana.it/) ### Czech - [Corpus of Czech Verse](https://github.com/versotym/corpusCzechVerse) ### Portuguese - [Stichotheque](https://gitlab.com/stichotheque/stichotheque-pt) Also, we obtained the following corpora from these sources: ### Spanish - [Poesi.as](https://github.com/linhd-postdata/poesi.as) - [source](http://www.poesi.as/) ### English - [A Gutenberg Poetry Corpus](https://github.com/aparrish/gutenberg-poetry-corpus) ### Arabic - [Arabic Poetry dataset](https://www.kaggle.com/ahmedabelal/arabic-poetry) ### Chinese - [THU Chinese Classical Poetry Corpus](https://github.com/THUNLP-AIPoet/Datasets/tree/master/CCPC) ### Finnish - [SKVR](https://github.com/sks190/SKVR) ### German - [TextGrid Poetry Corpus](https://github.com/linhd-postdata/textgrid-poetry) - [source](https://textgrid.de/en/digitale-bibliothek) - [German Rhyme Corpus](https://github.com/tnhaider/german-rhyme-corpus) ###
Hungarian - [verskorpusz](https://github.com/ELTE-DH/verskorpusz) ### Portuguese - [Poems in Portuguese](https://www.kaggle.com/oliveirasp6/poems-in-portuguese) ### Russian - [19 000 Russian poems](https://www.kaggle.com/grafstor/19-000-russian-poems) ## Team members - Álvaro Pérez ([alvp](https://huggingface.co/alvp)) - Javier de la Rosa ([versae](https://huggingface.co/versae)) - Aitor Díaz ([aitordiaz](https://huggingface.co/aitordiaz)) - Elena González-Blanco - Salvador Ros ([salva](https://huggingface.co/salva)) ## Useful links - [Community Week timeline](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104#summary-timeline-calendar-6) - [Community Week README](https://github.com/huggingface/transformers/blob/master/examples/research_projects/jax-projects/README.md) - [Community Week thread](https://discuss.huggingface.co/t/bertin-pretrain-roberta-large-from-scratch-in-spanish/7125) - [Community Week channel](https://discord.com/channels/858019234139602994/859113060068229190) - [Masked Language Modelling example scripts](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling) - [Model Repository](https://huggingface.co/flax-community/alberti-bert-base-multilingual-cased/) ## Acknowledgments This project would not have been possible without the infrastructure and resources provided by HuggingFace and Google Cloud. Moreover, we want to thank POSTDATA Project (ERC-StG-679528) and the Computational Literary Studies Infrastructure (CLS INFRA No. 101004984) of the European Union's Horizon 2020 research and innovation programme for their support and time allowance.
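Since the card exposes ALBERTI as a fill-mask model, a widget prompt such as `¿Qué es la vida? Un [MASK].` is answered by ranking candidate tokens with a softmax over the vocabulary logits at the masked position. A minimal offline sketch of that scoring step — the four-word vocabulary and the logit values below are made up for illustration, not taken from the model:

```python
import math

def top_fills(logits, vocab, k=2):
    """Rank candidate tokens for a [MASK] position by softmax probability."""
    m = max(logits)  # subtract the max logit for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # sort (token, probability) pairs from most to least likely
    ranked = sorted(zip(vocab, probs), key=lambda p: p[1], reverse=True)
    return ranked[:k]

# Toy vocabulary and logits for "¿Qué es la vida? Un [MASK]."
vocab = ["sueño", "frenesí", "ilusión", "gato"]
logits = [3.1, 1.2, 2.0, -0.5]
print(top_fills(logits, vocab))
```

With the real checkpoint, this ranking over the full vocabulary is what a `fill-mask` pipeline returns for each candidate token.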
4,961
Yntec/SamaritanDoesArt
2023-08-09T10:15:40.000Z
[ "diffusers", "art", "anime", "style", "3D", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "jinofcoolnes", "PromptSharingSamaritan", "en", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
Yntec
null
null
Yntec/SamaritanDoesArt
3
1,206
diffusers
2023-08-07T14:37:21
--- license: creativeml-openrail-m language: - en library_name: diffusers pipeline_tag: text-to-image tags: - art - anime - style - 3D - stable-diffusion - stable-diffusion-diffusers - text-to-image - jinofcoolnes - PromptSharingSamaritan inference: true --- # samaritanDoesArt Samples and prompts: ![sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/sM-AJRsWCQObWw5hcIkgV.png) ![sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/i87SDgcbdTQstVPNOXAi9.png) tiny baby girl. chibi. A mix of SamDoesArtUltimerge with Samaritan 3D Cartoon v2. Haha, if you think the only reason I mixed them was so I could name the model like this, you're right! Still, the results speak for themselves. Original pages: https://huggingface.co/jinofcoolnes/sammod https://civitai.com/models/81270?modelVersionId=113299
878
doyoungkim/bert-base-uncased-finetuned-sst2
2022-10-12T13:09:42.000Z
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
doyoungkim
null
null
doyoungkim/bert-base-uncased-finetuned-sst2
2
1,205
transformers
2022-03-02T23:29:05
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model_index: - name: bert-base-uncased-finetuned-sst2 results: - dataset: name: glue type: glue args: sst2 metric: name: Accuracy type: accuracy value: 0.926605504587156 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-sst2 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.2716 - Accuracy: 0.9266 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.1666 | 1.0 | 2105 | 0.2403 | 0.9232 | | 0.1122 | 2.0 | 4210 | 0.2716 | 0.9266 | | 0.0852 | 3.0 | 6315 | 0.3150 | 0.9232 | | 0.056 | 4.0 | 8420 | 0.3209 | 0.9163 | | 0.0344 | 5.0 | 10525 | 0.3740 | 0.9243 | ### Framework versions - Transformers 4.9.2 - Pytorch 1.8.1 - Datasets 1.11.0 - Tokenizers 0.10.1
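For inference, the fine-tuned head produces two logits per sentence, which become a label via softmax and argmax. A self-contained sketch of that final step, with illustrative logits and an assumed SST-2 `id2label` mapping (not read from this checkpoint's config):

```python
import math

id2label = {0: "negative", 1: "positive"}  # assumed GLUE/SST-2 label order

def classify(logits):
    """Softmax over the two logits, then pick the argmax label."""
    m = max(logits)  # stabilize the exponentials
    exps = [math.exp(x - m) for x in logits]
    probs = [e / sum(exps) for e in exps]
    idx = max(range(len(probs)), key=probs.__getitem__)
    return id2label[idx], probs[idx]

label, score = classify([-2.3, 3.1])  # illustrative logits, not model output
print(label, round(score, 4))
```

The same computation runs inside a `text-classification` pipeline after the model's forward pass.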
1,767
lanwuwei/GigaBERT-v4-Arabic-and-English
2021-05-19T21:19:13.000Z
[ "transformers", "pytorch", "jax", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
feature-extraction
lanwuwei
null
null
lanwuwei/GigaBERT-v4-Arabic-and-English
3
1,205
transformers
2022-03-02T23:29:05
## GigaBERT-v4 GigaBERT-v4 is a continued pre-training of [GigaBERT-v3](https://huggingface.co/lanwuwei/GigaBERT-v3-Arabic-and-English) on code-switched data, showing improved zero-shot transfer performance from English to Arabic on information extraction (IE) tasks. More details can be found in the following paper: @inproceedings{lan2020gigabert, author = {Lan, Wuwei and Chen, Yang and Xu, Wei and Ritter, Alan}, title = {GigaBERT: Zero-shot Transfer Learning from English to Arabic}, booktitle = {Proceedings of The 2020 Conference on Empirical Methods on Natural Language Processing (EMNLP)}, year = {2020} } ## Download ``` from transformers import BertTokenizer, BertForTokenClassification tokenizer = BertTokenizer.from_pretrained("lanwuwei/GigaBERT-v4-Arabic-and-English", do_lower_case=True) model = BertForTokenClassification.from_pretrained("lanwuwei/GigaBERT-v4-Arabic-and-English") ``` The model is also available as a direct download: [GigaBERT-v4](https://drive.google.com/drive/u/1/folders/1uFGzMuTOD7iNsmKQYp_zVuvsJwOaIdar).
1,022
TheRaphael0000/mixes0000
2023-04-10T10:41:49.000Z
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "en", "region:us" ]
text-to-image
TheRaphael0000
null
null
TheRaphael0000/mixes0000
15
1,205
diffusers
2023-02-25T19:22:42
--- language: - en thumbnail: >- https://huggingface.co/TheRaphael0000/mixes0000/resolve/main/showcase/soul0000-banner.png tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers --- # Direct Download Links - [soul0000](https://huggingface.co/TheRaphael0000/mixes0000/resolve/main/soul0000.ckpt) - [underline0000](https://huggingface.co/TheRaphael0000/mixes0000/resolve/main/underline0000.ckpt) # underline0000 Based on my first model soul0000 and expmixLine [DDL underline0000](https://huggingface.co/TheRaphael0000/mixes0000/resolve/main/underline0000.ckpt) ## Examples coming soon ## VAE No VAE baked in. ## Proportions ``` underline0000 [a3d9892a00] = (17/20 * soul0000 [61bb0b1362]) + (3/20 * expmixLine_v20_fp16 [4d651c7638]) ``` # soul0000 My first mix based on models I like. [DDL soul0000](https://huggingface.co/TheRaphael0000/mixes0000/resolve/main/soul0000.ckpt) ## Examples ![](showcase/soul0000-banner.png) ``` soul, 1girl, white hair, blue eyes, empty eyes, long hair, parted lips, head tilt, bare shoulders, Negative prompt: (worst quality:1.4), (low quality:1.4) , (monochrome:1.1) Steps: 24, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 0, Size: 960x320, Model hash: 61bb0b1362, Denoising strength: 0.35, ENSD: 31337, Hires upscale: 1.8, Hires steps: 5, Hires upscaler: R-ESRGAN 4x+ Anime6B ``` --- ![](showcase/soul0000-1girl.png) ``` 1girl Steps: 20, Sampler: Euler a, CFG scale: 6.5, Seed: 2988847432, Size: 640x960, Model hash: 61bb0b1362, ENSD: 31337 ``` --- ![](showcase/soul0000-NUROISEA.png) ``` 1girl, blonde hair, short hair, wavy hair, blue eyes, hair ribbon, blue ribbon, sleeveless shirt, portrait Negative prompt: (worst quality:1.4), (low quality:1.4) , (monochrome:1.1) Steps: 24, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 223073768, Size: 512x768, Model hash: 61bb0b1362, ENSD: 31337 ``` ## VAE I recommend using a VAE, since I did not include any in the mix. 
For example: ![](showcase/soul0000-vae.png) From left to right: *(no vae)*, (Anything-V3.0.vae), (pastel-waifu-diffusion.vae) ``` (incredibly_absurdres, absurdres, highres, best quality, detailed:1.05), (high contrast, 2D, stylized art, watercolor (medium)), (painterly:0.9), looking to the left, police, uniform, facing away, looking away, AND (1girl:1.2), leaning forward, turquoise, blue eyes, black hair, short hair, red AND android, Edge of Tomorrow, Elysium, mechanical parts, Negative prompt: (bad-artist, bad_prompt_version2, bad-image-v2-39000:0.7), lowres, bad proportions, bad anatomy, bad hands, bad leg, bad feet, extra digits, wrong hand, wrong feet, speech bubble, translation request, error, cropped, deformed, jpeg artifacts, lossy-lossless, third-party edit, ugly, disgust, poorly drawn, mangled, guro, signature, watermark, * username, artist name, ai-assisted loli, mature female, nsfw Steps: 18, Sampler: DPM++ 2S a Karras, CFG scale: 6.5, Seed: 1463463247, Size: 768x576, Model hash: 61bb0b1362, ENSD: 31337, Latent Couple: "divisions=1:1,1:2,1:2 positions=0:0,0:0,0:1 weights=0.3,0.9,0.8 end at step=20" ``` ## Recipe ``` # Recipe = ProtoGen_X3.4 + AbyssOrangeMix2_nsfw, 0.5, ckpt = Midnight Melt + __O1__, 0.4, ckpt = Anything-V3.0 + __O2__, 0.625, ckpt = ligne_claire_anime_diffusion_v1 + __O3__, 0.8, ckpt soul0000 = pastelmix + __O4__, 0.625, ckpt # Variables(System) __O1__ __O2__ __O3__ __O4__ # Variables(User defined) ``` Recipe usable in: https://github.com/bbc-mc/sdweb-merge-board ## Proportions If you expand the weighted sums, you obtain the following proportions ``` soul0000 [61bb0b1362] = (6/16 * pastelmix) + (2/16 * ligne_claire_anime_diffusion_v1) + (3/16 * Anything-V3.0) + (3/16 * Midnight Melt) + (1/16 * ProtoGen_X3.4) + (1/16 * AbyssOrangeMix2_nsfw) ``` ## Model sources - pastelmix : https://huggingface.co/andite/pastel-mix/resolve/main/pastelmix.ckpt - ligne_claire_anime_diffusion_v1 : 
https://huggingface.co/breakcore2/ligne_claire_anime_diffusion/resolve/main/ligne_claire_anime_diffusion_v1.safetensors - Anything-V3.0 : https://huggingface.co/Linaqruf/anything-v3.0/blob/main/anything-v3-full.safetensors - Midnight Melt : https://huggingface.co/DrBob2142/Midnight_Mixes/resolve/main/Midnight%20Melt.safetensors - ProtoGen_X3.4 : https://huggingface.co/darkstorm2150/Protogen_x3.4_Official_Release/resolve/main/ProtoGen_X3.4.ckpt - AbyssOrangeMix2_nsfw : https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix2/AbyssOrangeMix2_nsfw.safetensors
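The recipe above chains pairwise weighted sums. A short sketch that tracks each source model's coefficient through the chain, assuming the merge board's `A + B, alpha` operation computes `(1 - alpha) * A + alpha * B` (an assumption about the tool, but it reproduces the /16 fractions listed above exactly):

```python
def wsum(a, b, alpha):
    """Weighted sum of two coefficient dicts: (1 - alpha) * a + alpha * b."""
    keys = set(a) | set(b)
    return {k: (1 - alpha) * a.get(k, 0.0) + alpha * b.get(k, 0.0) for k in keys}

def unit(name):
    """A 'model' represented purely by its coefficient on itself."""
    return {name: 1.0}

# Replay the merge-board recipe symbolically
o1 = wsum(unit("ProtoGen_X3.4"), unit("AbyssOrangeMix2_nsfw"), 0.5)
o2 = wsum(unit("Midnight Melt"), o1, 0.4)
o3 = wsum(unit("Anything-V3.0"), o2, 0.625)
o4 = wsum(unit("ligne_claire_anime_diffusion_v1"), o3, 0.8)
soul0000 = wsum(unit("pastelmix"), o4, 0.625)

for name, coeff in sorted(soul0000.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {coeff * 16:g}/16")
```

Running this prints pastelmix at 6/16, ligne_claire at 2/16, Anything-V3.0 and Midnight Melt at 3/16 each, and ProtoGen/AbyssOrangeMix2 at 1/16 each, matching the distributed proportions above.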
4,529
macrmaw/rc-car-ftl
2023-08-11T19:03:41.000Z
[ "diffusers", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
macrmaw
null
null
macrmaw/rc-car-ftl
0
1,205
diffusers
2023-08-11T18:59:26
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### RC-Car-[-ftl-] Dreambooth model trained by macrmaw following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: VJCET538 Sample pictures of this concept: ![0](https://huggingface.co/macrmaw/rc-car-ftl/resolve/main/sample_images/ftl%284%29.jpg)
388
mdarhri00/named-entity-recognition
2023-10-08T17:02:56.000Z
[ "transformers", "pytorch", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
mdarhri00
null
null
mdarhri00/named-entity-recognition
3
1,205
transformers
2023-09-18T16:43:47
--- pipeline_tag: token-classification --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> The NER model developed using BERT is designed to recognize named entities in text for multiple languages, including Arabic, French, and English. It is adaptable to new labels, allowing users to extend its capabilities beyond the initial set of 11 predefined labels, which are: 'Person_Name', 'Brand_vehicule', 'Model_vehicule', 'Organization_Name', 'location', 'phone_number', 'IBAN', 'credit_card', 'date_time', 'email', 'Identification_Number' ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** yahya mdarhri - **Model type:** TOKEN CLASSIFICATION - **Finetuned from model:** bert-base-multilingual-cased - **License:** OPEN SOURCE ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> Named Entity Recognition (NER): The primary purpose of this model is to perform Named Entity Recognition (NER) in text data. It identifies and categorizes entities such as names of people, organizations, locations, dates, and more. Multilingual Support: The model is designed to support multiple languages, including Arabic, French, and English. It can be used by NLP practitioners, researchers, and developers working with text data in these languages. Adaptability: Users can adapt the model to recognize new entity labels by providing labeled training data for the desired categories. This feature makes it versatile for various NER tasks. ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> Bias and Fairness: Users and affected parties should be aware of potential biases in entity recognition, especially when it comes to personal names or other sensitive categories. 
Efforts should be made to minimize bias and ensure fairness in entity recognition. Privacy: The model should be used responsibly to protect the privacy of individuals and organizations. When handling personally identifiable information (PII), data protection laws and privacy guidelines should be followed. Transparency: Transparency in how the model operates, including its training data and evaluation metrics, is crucial to build trust with users and affected parties. User Consent: If the model is used in applications where user data is processed, obtaining informed consent from users for data processing is essential. We value your feedback! Please share your thoughts on this model. Thank you! ## Model Card Contact yahyamdarhri00@gmail.com
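Once the model emits one label per token, consecutive same-label tokens are typically merged into entity spans. A minimal, framework-free sketch of that grouping step — the tag scheme here (one plain label per token, `'O'` for non-entities) and the example tokens are assumptions for illustration; the actual checkpoint may use B-/I- prefixed tags instead:

```python
def group_entities(tokens, labels):
    """Group consecutive identically-labelled tokens into (text, label) spans.

    Assumes one plain label per token with 'O' marking non-entity tokens.
    """
    entities, current = [], None
    for tok, lab in zip(tokens, labels):
        if lab == "O":
            if current:
                entities.append(current)
                current = None
        elif current and current[1] == lab:
            # extend the running span with this token
            current = (current[0] + " " + tok, lab)
        else:
            if current:
                entities.append(current)
            current = (tok, lab)
    if current:
        entities.append(current)
    return entities

tokens = ["John", "Smith", "drives", "a", "Toyota", "Corolla"]
labels = ["Person_Name", "Person_Name", "O", "O", "Brand_vehicule", "Model_vehicule"]
print(group_entities(tokens, labels))
```

The `token-classification` pipeline performs a similar aggregation when an aggregation strategy is enabled.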
2,706
[ [ -0.036529541015625, -0.03912353515625, 0.0179290771484375, 0.02099609375, -0.015228271484375, -0.01302337646484375, -0.00537872314453125, -0.0743408203125, 0.0070343017578125, 0.038330078125, -0.0234832763671875, -0.037628173828125, -0.03985595703125, 0.0051...
TencentARC/t2iadapter_depth_sd15v2
2023-07-31T11:10:44.000Z
[ "diffusers", "art", "t2i-adapter", "controlnet", "stable-diffusion", "image-to-image", "arxiv:2302.08453", "license:apache-2.0", "diffusers:T2IAdapter", "region:us" ]
image-to-image
TencentARC
null
null
TencentARC/t2iadapter_depth_sd15v2
0
1,204
diffusers
2023-07-14T19:01:05
--- license: apache-2.0 base_model: runwayml/stable-diffusion-v1-5 tags: - art - t2i-adapter - controlnet - stable-diffusion - image-to-image --- # T2I Adapter - Depth T2I Adapter is a network providing additional conditioning to stable diffusion. Each t2i checkpoint takes a different type of conditioning as input and is used with a specific base stable diffusion checkpoint. This checkpoint provides conditioning on depth for the stable diffusion 1.5 checkpoint. ## Model Details - **Developed by:** T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** Apache 2.0 - **Resources for more information:** [GitHub Repository](https://github.com/TencentARC/T2I-Adapter), [Paper](https://arxiv.org/abs/2302.08453). - **Cite as:** @misc{ title={T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models}, author={Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie}, year={2023}, eprint={2302.08453}, archivePrefix={arXiv}, primaryClass={cs.CV} } ### Checkpoints | Model Name | Control Image Overview| Control Image Example | Generated Image Example | |---|---|---|---| |[TencentARC/t2iadapter_color_sd14v1](https://huggingface.co/TencentARC/t2iadapter_color_sd14v1)<br/> *Trained with spatial color palette* | A image with 8x8 color palette.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"/></a>| 
|[TencentARC/t2iadapter_canny_sd14v1](https://huggingface.co/TencentARC/t2iadapter_canny_sd14v1)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"/></a>| |[TencentARC/t2iadapter_sketch_sd14v1](https://huggingface.co/TencentARC/t2iadapter_sketch_sd14v1)<br/> *Trained with [PidiNet](https://github.com/zhuoinoulu/pidinet) edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"/></a>| |[TencentARC/t2iadapter_depth_sd14v1](https://huggingface.co/TencentARC/t2iadapter_depth_sd14v1)<br/> *Trained with Midas depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"/></a>|<a 
href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"/></a>| |[TencentARC/t2iadapter_openpose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_openpose_sd14v1)<br/> *Trained with OpenPose bone image* | A [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"/></a>| |[TencentARC/t2iadapter_keypose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_keypose_sd14v1)<br/> *Trained with mmpose skeleton image* | A [mmpose skeleton](https://github.com/open-mmlab/mmpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"/></a>| |[TencentARC/t2iadapter_seg_sd14v1](https://huggingface.co/TencentARC/t2iadapter_seg_sd14v1)<br/>*Trained with semantic segmentation* | An [custom](https://github.com/TencentARC/T2I-Adapter/discussions/25) segmentation protocol image.|<a 
href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"/></a> | |[TencentARC/t2iadapter_canny_sd15v2](https://huggingface.co/TencentARC/t2iadapter_canny_sd15v2)|| |[TencentARC/t2iadapter_depth_sd15v2](https://huggingface.co/TencentARC/t2iadapter_depth_sd15v2)|| |[TencentARC/t2iadapter_sketch_sd15v2](https://huggingface.co/TencentARC/t2iadapter_sketch_sd15v2)|| |[TencentARC/t2iadapter_zoedepth_sd15v1](https://huggingface.co/TencentARC/t2iadapter_zoedepth_sd15v1)|| ## Example 1. Dependencies ```sh pip install diffusers transformers controlnet_aux ``` 2. Run code: ```python from controlnet_aux import MidasDetector from PIL import Image from diffusers import T2IAdapter, StableDiffusionAdapterPipeline import torch midas = MidasDetector.from_pretrained("lllyasviel/Annotators") image = Image.open('./images/depth_input.png') image = midas(image) image.save('./images/depth.png') adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_depth_sd15v2", torch_dtype=torch.float16) pipe = StableDiffusionAdapterPipeline.from_pretrained( "runwayml/stable-diffusion-v1-5", adapter=adapter, safety_checker=None, torch_dtype=torch.float16, variant="fp16" ) pipe.to('cuda') generator = torch.Generator().manual_seed(1) depth_out = pipe(prompt="storm trooper giving a speech", image=image, generator=generator).images[0] depth_out.save('./images/depth_output.png') ``` ![depth_input](./images/depth_input.png) ![depth](./images/depth.png) ![depth_output](./images/depth_output.png)
7,840
EnD-Diffusers/Porcelain-Dusk-Mix-V1
2023-04-17T07:23:49.000Z
[ "diffusers", "stable diffusion", "merge", "anime", "digital art", "text-to-image", "en", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
EnD-Diffusers
null
null
EnD-Diffusers/Porcelain-Dusk-Mix-V1
2
1,203
diffusers
2023-04-16T00:14:04
--- license: creativeml-openrail-m language: - en library_name: diffusers pipeline_tag: text-to-image tags: - stable diffusion - merge - anime - digital art --- # Duskfall's Digital Fantasy MERGE model by Duskfallcrew with Anything 3.0 and Something 2.2 in the Mix You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Original model here: https://huggingface.co/Duskfallcrew/duskfall-s-digital-fantasy COMING SOON: DIFFUSERS MODEL AND DEMO SPACE More samples and info are here: https://civitai.com/user/duskfallcrew If you want to donate towards costs and don't want to subscribe: https://ko-fi.com/DUSKFALLcrew If you want to support the EARTH & DUSK media projects monthly, and not just AI: https://www.patreon.com/earthndusk digidsk1 (use that in your prompt)
947
hanifnoerr/Fine-tuned-Indonesian-Sentiment-Classifier
2023-04-15T06:40:02.000Z
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "id", "dataset:indonlu", "license:mit", "model-index", "endpoints_compatible", "region:us" ]
text-classification
hanifnoerr
null
null
hanifnoerr/Fine-tuned-Indonesian-Sentiment-Classifier
1
1,202
transformers
2023-04-07T01:10:10
--- license: mit tags: - generated_from_trainer datasets: - indonlu metrics: - accuracy - f1 model-index: - name: Fine-tuned-Indonesian-Sentiment-Classifier results: - task: name: Text Classification type: text-classification dataset: name: indonlu type: indonlu config: smsa split: validation args: smsa metrics: - name: Accuracy type: accuracy value: 0.9317460317460318 - name: F1 type: f1 value: 0.9034223843742829 language: - id pipeline_tag: text-classification widget: - text: "Kalo kamu WFH emang kerja?" - text: "buku ini kurang bagus isinya" --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Fine-tuned-Indonesian-Sentiment-Classifier This model is a fine-tuned version of [indobenchmark/indobert-base-p1](https://huggingface.co/indobenchmark/indobert-base-p1) on the [IndoNLU's SmSA](https://huggingface.co/datasets/indonlp/indonlu) dataset. It achieves the following results on the evaluation dataset: - Loss: 0.3233 - Accuracy: 0.9317 - F1: 0.9034 And the results of the test dataset: - Accuracy: 0.928 - F1 macro: 0.9113470780757361 - F1 micro: 0.928 - F1 weighted: 0.9261959965604815 ## Model description This model can be used to determine the sentiment of a text, with three possible outputs [positive, negative, or neutral] ## How to use ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification pretrained_name = "hanifnoerr/Fine-tuned-Indonesian-Sentiment-Classifier" tokenizer = AutoTokenizer.from_pretrained(pretrained_name) model = AutoModelForSequenceClassification.from_pretrained(pretrained_name) ``` ### make classification ```python from transformers import pipeline pretrained_name = "hanifnoerr/Fine-tuned-Indonesian-Sentiment-Classifier" sentimen = pipeline(tokenizer=pretrained_name, model=pretrained_name) kalimat = "buku ini jelek sekali" sentimen(kalimat) ``` output: [{'label': 'negative', 'score': 0.9996247291564941}] ### Training results 
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:| | 0.08 | 1.0 | 688 | 0.3532 | 0.9310 | 0.9053 | | 0.0523 | 2.0 | 1376 | 0.3233 | 0.9317 | 0.9034 | | 0.045 | 3.0 | 2064 | 0.3949 | 0.9286 | 0.8995 | | 0.0252 | 4.0 | 2752 | 0.4662 | 0.9310 | 0.9049 | | 0.0149 | 5.0 | 3440 | 0.6251 | 0.9246 | 0.8899 | | 0.0091 | 6.0 | 4128 | 0.6148 | 0.9254 | 0.8928 | | 0.0111 | 7.0 | 4816 | 0.6259 | 0.9222 | 0.8902 | | 0.0106 | 8.0 | 5504 | 0.6123 | 0.9238 | 0.8882 | | 0.0092 | 9.0 | 6192 | 0.6353 | 0.9230 | 0.8928 | | 0.0085 | 10.0 | 6880 | 0.6733 | 0.9254 | 0.8989 | | 0.0062 | 11.0 | 7568 | 0.6666 | 0.9302 | 0.9027 | | 0.0036 | 12.0 | 8256 | 0.7578 | 0.9230 | 0.8962 | | 0.0055 | 13.0 | 8944 | 0.7378 | 0.9270 | 0.8947 | | 0.0023 | 14.0 | 9632 | 0.7758 | 0.9230 | 0.8978 | | 0.0009 | 15.0 | 10320 | 0.7051 | 0.9278 | 0.9006 | | 0.0033 | 16.0 | 11008 | 0.7442 | 0.9214 | 0.8902 | | 0.0 | 17.0 | 11696 | 0.7513 | 0.9254 | 0.8974 | | 0.0 | 18.0 | 12384 | 0.7554 | 0.9270 | 0.8999 | Although trained for 18 epochs, this model uses the best checkpoint (epoch 2) ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
3,720
nota-ai/bk-sdm-base
2023-08-19T12:13:35.000Z
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dataset:ChristophSchuhmann/improved_aesthetics_6.5plus", "arxiv:2305.15798", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
nota-ai
null
null
nota-ai/bk-sdm-base
11
1,202
diffusers
2023-07-10T06:50:32
--- license: creativeml-openrail-m tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image datasets: - ChristophSchuhmann/improved_aesthetics_6.5plus library_name: diffusers pipeline_tag: text-to-image extra_gated_prompt: >- This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license extra_gated_heading: Please read the LICENSE to access this model --- # BK-SDM Model Card Block-removed Knowledge-distilled Stable Diffusion Model (BK-SDM) is an architecturally compressed SDM for efficient general-purpose text-to-image synthesis. This model is built by (i) removing several residual and attention blocks from the U-Net of [Stable Diffusion v1.4]( https://huggingface.co/CompVis/stable-diffusion-v1-4) and (ii) distillation pretraining on only 0.22M LAION pairs (fewer than 0.1% of the full training set). Despite being trained with very limited resources, our compact model can imitate the original SDM by benefiting from transferred knowledge. - **Resources for more information**: [Paper](https://arxiv.org/abs/2305.15798), [GitHub](https://github.com/Nota-NetsPresso/BK-SDM), [Demo]( https://huggingface.co/spaces/nota-ai/compressed-stable-diffusion). 
## Examples with 🤗[Diffusers library](https://github.com/huggingface/diffusers). An inference code with the default PNDM scheduler and 50 denoising steps is as follows. ```python import torch from diffusers import StableDiffusionPipeline pipe = StableDiffusionPipeline.from_pretrained("nota-ai/bk-sdm-base", torch_dtype=torch.float16) pipe = pipe.to("cuda") prompt = "a tropical bird sitting on a branch of a tree" image = pipe(prompt).images[0] image.save("example.png") ``` The following code is also runnable, because we compressed the U-Net of [Stable Diffusion v1.4]( https://huggingface.co/CompVis/stable-diffusion-v1-4) while keeping the other parts (i.e., Text Encoder and Image Decoder) unchanged: ```python import torch from diffusers import StableDiffusionPipeline, UNet2DConditionModel pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16) pipe.unet = UNet2DConditionModel.from_pretrained("nota-ai/bk-sdm-base", subfolder="unet", torch_dtype=torch.float16) pipe = pipe.to("cuda") prompt = "a tropical bird sitting on a branch of a tree" image = pipe(prompt).images[0] image.save("example.png") ``` ## Compression Method ### U-Net Architecture Certain residual and attention blocks were eliminated from the U-Net of SDM-v1.4: - 1.04B-param [SDM-v1.4](https://huggingface.co/CompVis/stable-diffusion-v1-4) (0.86B-param U-Net): the original source model. - 0.76B-param [**BK-SDM-Base**](https://huggingface.co/nota-ai/bk-sdm-base) (0.58B-param U-Net): obtained with ① fewer blocks in outer stages. - 0.66B-param [**BK-SDM-Small**](https://huggingface.co/nota-ai/bk-sdm-small) (0.49B-param U-Net): obtained with ① and ② mid-stage removal. - 0.50B-param [**BK-SDM-Tiny**](https://huggingface.co/nota-ai/bk-sdm-tiny) (0.33B-param U-Net): obtained with ①, ②, and ③ further inner-stage removal. 
<center> <img alt="U-Net architectures" img src="https://netspresso-research-code-release.s3.us-east-2.amazonaws.com/assets-bk-sdm/fig_arch.png" width="100%"> </center> ### Distillation Pretraining The compact U-Net was trained to mimic the behavior of the original U-Net. We leveraged feature-level and output-level distillation, along with the denoising task loss. <center> <img alt="KD-based pretraining" img src="https://netspresso-research-code-release.s3.us-east-2.amazonaws.com/assets-bk-sdm/fig_kd_bksdm.png" width="100%"> </center> <br/> - **Training Data**: 212,776 image-text pairs (i.e., 0.22M pairs) from [LAION-Aesthetics V2 6.5+](https://laion.ai/blog/laion-aesthetics/). - **Hardware:** A single NVIDIA A100 80GB GPU - **Gradient Accumulations**: 4 - **Batch:** 256 (=4×64) - **Optimizer:** AdamW - **Learning Rate:** a constant learning rate of 5e-5 for 50K-iteration pretraining ## Experimental Results The following table shows the zero-shot results on 30K samples from the MS-COCO validation split. After generating 512×512 images with the PNDM scheduler and 25 denoising steps, we downsampled them to 256×256 for evaluating generation scores. Our models were drawn at the 50K-th training iteration. | Model | FID↓ | IS↑ | CLIP Score↑<br>(ViT-g/14) | # Params,<br>U-Net | # Params,<br>Whole SDM | |---|:---:|:---:|:---:|:---:|:---:| | [Stable Diffusion v1.4](https://huggingface.co/CompVis/stable-diffusion-v1-4) | 13.05 | 36.76 | 0.2958 | 0.86B | 1.04B | | [BK-SDM-Base](https://huggingface.co/nota-ai/bk-sdm-base) (Ours) | 15.76 | 33.79 | 0.2878 | 0.58B | 0.76B | | [BK-SDM-Small](https://huggingface.co/nota-ai/bk-sdm-small) (Ours) | 16.98 | 31.68 | 0.2677 | 0.49B | 0.66B | | [BK-SDM-Tiny](https://huggingface.co/nota-ai/bk-sdm-tiny) (Ours) | 17.12 | 30.09 | 0.2653 | 0.33B | 0.50B | <br/> The following figure depicts synthesized images with some MS-COCO captions. 
<center> <img alt="Visual results" img src="https://netspresso-research-code-release.s3.us-east-2.amazonaws.com/assets-bk-sdm/fig_results.png" width="100%"> </center> <br/> # Uses _Note: This section is taken from the [Stable Diffusion v1 model card]( https://huggingface.co/CompVis/stable-diffusion-v1-4) (which was based on the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini)) and applies in the same way to BK-SDMs_. ## Direct Use The model is intended for research purposes only. Possible research areas and tasks include - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models. - Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. - Research on generative models. Excluded uses are described below. ### Misuse, Malicious Use, and Out-of-Scope Use The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. #### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. #### Misuse and Malicious Use Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to: - Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc. - Intentionally promoting or propagating discriminatory content or harmful stereotypes. - Impersonating individuals without their consent. - Sexual content without consent of the people who might see it. 
- Mis- and disinformation - Representations of egregious violence and gore - Sharing of copyrighted or licensed material in violation of its terms of use. - Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use. ## Limitations and Bias ### Limitations - The model does not achieve perfect photorealism - The model cannot render legible text - The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere” - Faces and people in general may not be generated properly. - The model was trained mainly with English captions and will not work as well in other languages. - The autoencoding part of the model is lossy - The model was trained on a large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material and is not fit for product use without additional safety mechanisms and considerations. - No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data. The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images. ### Bias While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), which consists of images that are primarily limited to English descriptions. Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model, as white and western cultures are often set as the default. Further, the ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts. 
### Safety Module The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers. This checker works by checking model outputs against known hard-coded NSFW concepts. The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter. Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images. The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept. # Acknowledgments - We express our gratitude to [Microsoft for Startups Founders Hub](https://www.microsoft.com/en-us/startups) for generously providing the Azure credits used during pretraining. - We deeply appreciate the pioneering research on Latent/Stable Diffusion conducted by [CompVis](https://github.com/CompVis/latent-diffusion), [Runway](https://runwayml.com/), and [Stability AI](https://stability.ai/). - Special thanks to the contributors to [LAION](https://laion.ai/), [Diffusers](https://github.com/huggingface/diffusers), and [Gradio](https://www.gradio.app/) for their valuable support. 
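# Distillation Objective (Illustrative Sketch)

To recap the Compression Method section: the compact U-Net is pretrained with a denoising task loss plus output-level and feature-level distillation against the frozen original U-Net. The sketch below is an illustration only, not the official training code (which lives in the GitHub repository); the function and loss-weight names are assumptions.

```python
import torch
import torch.nn.functional as F

def bksdm_distill_loss(student_eps, teacher_eps, student_feats, teacher_feats,
                       true_eps, w_out=1.0, w_feat=1.0):
    """Combined objective: denoising task loss + output-level KD + feature-level KD.

    Names (bksdm_distill_loss, w_out, w_feat) are illustrative, not from the repo.
    """
    # Standard diffusion objective: the student predicts the noise added to the latent.
    task_loss = F.mse_loss(student_eps, true_eps)
    # Output-level KD: the student mimics the frozen teacher's noise prediction.
    output_kd = F.mse_loss(student_eps, teacher_eps)
    # Feature-level KD: match intermediate U-Net activations at paired stages
    # (block removal preserves the channel dims at matched stages).
    feature_kd = sum(F.mse_loss(s, t) for s, t in zip(student_feats, teacher_feats))
    return task_loss + w_out * output_kd + w_feat * feature_kd

# Toy shapes: a batch of 2 latents (4 channels, 8x8) and one paired feature map.
eps_s = torch.randn(2, 4, 8, 8)
eps_t = torch.randn(2, 4, 8, 8)
feats_s = [torch.randn(2, 320, 8, 8)]
feats_t = [torch.randn(2, 320, 8, 8)]
noise = torch.randn(2, 4, 8, 8)
print(bksdm_distill_loss(eps_s, eps_t, feats_s, feats_t, noise))
```

In a real training loop, `student_eps`/`teacher_eps` would come from forwarding the same noised latent through both U-Nets, with the teacher under `torch.no_grad()`.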
# Citation ```bibtex @article{kim2023architectural, title={On Architectural Compression of Text-to-Image Diffusion Models}, author={Kim, Bo-Kyeong and Song, Hyoung-Kyu and Castells, Thibault and Choi, Shinkook}, journal={arXiv preprint arXiv:2305.15798}, year={2023}, url={https://arxiv.org/abs/2305.15798} } ``` ```bibtex @article{Kim_2023_ICMLW, title={BK-SDM: Architecturally Compressed Stable Diffusion for Efficient Text-to-Image Generation}, author={Kim, Bo-Kyeong and Song, Hyoung-Kyu and Castells, Thibault and Choi, Shinkook}, journal={ICML Workshop on Efficient Systems for Foundation Models (ES-FoMo)}, year={2023}, url={https://openreview.net/forum?id=bOVydU0XKC} } ``` *This model card was written by Bo-Kyeong Kim and is based on the [Stable Diffusion v1 model card]( https://huggingface.co/CompVis/stable-diffusion-v1-4).*
11,928
[ [ -0.042816162109375, -0.043670654296875, 0.01201629638671875, 0.0198822021484375, -0.035400390625, -0.011962890625, -0.003284454345703125, -0.0218353271484375, 0.02191162109375, 0.03399658203125, -0.03350830078125, -0.036346435546875, -0.046478271484375, 0.00...
macrmaw/ucl-match-ball
2023-08-11T16:24:26.000Z
[ "diffusers", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
macrmaw
null
null
macrmaw/ucl-match-ball
0
1,202
diffusers
2023-08-11T16:11:50
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### UCL-match-ball Dreambooth model trained by macrmaw following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: VJCET538 Sample pictures of this concept:
293
[ [ -0.034637451171875, -0.0210723876953125, 0.01495361328125, 0.0024166107177734375, -0.0009732246398925781, 0.047119140625, 0.043304443359375, -0.038543701171875, 0.026275634765625, 0.03851318359375, -0.0604248046875, -0.01910400390625, -0.0022029876708984375, ...
uer/gpt2-chinese-poem
2023-10-17T15:14:25.000Z
[ "transformers", "pytorch", "tf", "jax", "gpt2", "text-generation", "zh", "arxiv:1909.05658", "arxiv:2212.06385", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
uer
null
null
uer/gpt2-chinese-poem
30
1,201
transformers
2022-03-02T23:29:05
--- language: zh widget: - text: "[CLS] 万 叠 春 山 积 雨 晴 ," - text: "[CLS] 大 漠" --- # Chinese Poem GPT2 Model ## Model description The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658). Besides, the model could also be pre-trained by [TencentPretrain](https://github.com/Tencent/TencentPretrain) introduced in [this paper](https://arxiv.org/abs/2212.06385), which inherits UER-py to support models with parameters above one billion, and extends it to a multimodal pre-training framework. The model is used to generate ancient Chinese poems. You can download the model from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), the [GPT2-Chinese GitHub page](https://github.com/Morizeyao/GPT2-Chinese), or via HuggingFace from the link [gpt2-chinese-poem](https://huggingface.co/uer/gpt2-chinese-poem). Since the parameter skip_special_tokens is used in pipelines.py, special tokens such as [SEP] and [UNK] will be deleted, so the output of the Hosted Inference API (on the right) may not be displayed properly.
## How to use You can use the model directly with a pipeline for text generation: When the parameter skip_special_tokens is True: ```python >>> from transformers import BertTokenizer, GPT2LMHeadModel,TextGenerationPipeline >>> tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-poem") >>> model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-poem") >>> text_generator = TextGenerationPipeline(model, tokenizer) >>> text_generator("[CLS]梅 山 如 积 翠 ,", max_length=50, do_sample=True) [{'generated_text': '[CLS]梅 山 如 积 翠 , 丛 竹 隠 疏 花 。 水 影 落 寒 濑 , 竹 声 随 暮 鸦 。 茅 茨 数 间 屋 , 烟 火 两 三 家 。 安 得 携 琴 酒 , 相 逢 烟 雨 赊 。 向 湖 边 过 , 偏 怜 雪 里 看 。 浮 峦 如 画 出 , 远 树 与 天 连 。 月 上 僧 房 静 , 风 回 萤 火 寒 。 幽 情 何 可 写 , 赖 有 子 期 弹 。 棠 真'}] ``` When the parameter skip_special_tokens is False: ```python >>> from transformers import BertTokenizer, GPT2LMHeadModel,TextGenerationPipeline >>> tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-poem") >>> model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-poem") >>> text_generator = TextGenerationPipeline(model, tokenizer) >>> text_generator("[CLS]梅 山 如 积 翠 ,", max_length=100, do_sample=True) [{'generated_text': '[CLS]梅 山 如 积 翠 , 秀 出 何 其 雄 。 矫 矫 云 间 质 , 映 日 生 玲 珑 。 根 大 乱 石 结 , 枝 高 青 云 蒙 。 常 因 风 露 晚 , 隠 映 瑶 台 中 。 忽 闻 山 石 裂 , 万 里 吹 天 风 。 又 觉 此 身 高 , 迥 出 凡 境 空 。 清 影 落 潭 水 , 暗 香 来 逈 峰 。 却 寻 白 太 白 , 月 影 摇 江 东 。 [SEP] 而 非'}] ``` ## Training data Training data contains 800,000 Chinese ancient poems which are collected by [chinese-poetry](https://github.com/chinese-poetry/chinese-poetry) and [Poetry](https://github.com/Werneror/Poetry) projects. ## Training procedure The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 200,000 steps with a sequence length of 128. We use extended vocabulary to handle out-of-vocabulary words. The Chinese character that occurs greater than or equal to 100 in poem corpus is added to the vocabulary. 
``` python3 preprocess.py --corpus_path corpora/poem.txt \ --vocab_path models/google_zh_poem_vocab.txt \ --dataset_path poem_dataset.pt --processes_num 16 \ --seq_length 128 --data_processor lm ``` ``` python3 pretrain.py --dataset_path poem_dataset.pt \ --vocab_path models/google_zh_poem_vocab.txt \ --config_path models/gpt2/config.json \ --output_model_path models/poem_gpt2_model.bin \ --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \ --total_steps 200000 --save_checkpoint_steps 50000 --report_steps 1000 \ --learning_rate 5e-4 --batch_size 64 ``` Finally, we convert the pre-trained model into Huggingface's format: ``` python3 scripts/convert_gpt2_from_uer_to_huggingface.py --input_model_path models/poem_gpt2_model.bin-200000 \ --output_model_path pytorch_model.bin \ --layers_num 12 ``` ### BibTeX entry and citation info ``` @article{radford2019language, title={Language Models are Unsupervised Multitask Learners}, author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya}, year={2019} } @article{zhao2019uer, title={UER: An Open-Source Toolkit for Pre-training Models}, author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong}, journal={EMNLP-IJCNLP 2019}, pages={241}, year={2019} } @article{zhao2023tencentpretrain, title={TencentPretrain: A Scalable and Flexible Toolkit for Pre-training Models of Different Modalities}, author={Zhao, Zhe and Li, Yudong and Hou, Cheng and Zhao, Jing and others}, journal={ACL 2023}, pages={217}, year={2023} } ```
5,037
[ [ -0.006500244140625, -0.040740966796875, 0.02490234375, 0.034149169921875, -0.041961669921875, -0.028564453125, -0.0161590576171875, -0.01898193359375, 0.00110626220703125, 0.0312347412109375, -0.04278564453125, -0.0297698974609375, -0.050567626953125, -0.008...
EleutherAI/pythia-2.8b-deduped-v0
2023-07-10T01:32:13.000Z
[ "transformers", "pytorch", "safetensors", "gpt_neox", "text-generation", "causal-lm", "pythia", "pythia_v0", "en", "dataset:EleutherAI/the_pile_deduplicated", "arxiv:2101.00027", "arxiv:2201.07311", "license:apache-2.0", "endpoints_compatible", "has_space", "text-generation-inference",...
text-generation
EleutherAI
null
null
EleutherAI/pythia-2.8b-deduped-v0
5
1,201
transformers
2022-11-23T17:41:01
--- language: - en tags: - pytorch - causal-lm - pythia - pythia_v0 license: apache-2.0 datasets: - EleutherAI/the_pile_deduplicated --- The *Pythia Scaling Suite* is a collection of models developed to facilitate interpretability research. It contains two sets of eight models of sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two models: one trained on the Pile, and one trained on the Pile after the dataset has been globally deduplicated. All 8 model sizes are trained on the exact same data, in the exact same order. All Pythia models are available [on Hugging Face](https://huggingface.co/models?other=pythia). The Pythia model suite was deliberately designed to promote scientific research on large language models, especially interpretability research. Despite not centering downstream performance as a design goal, we find the models <a href="#evaluations">match or exceed</a> the performance of similar and same-sized models, such as those in the OPT and GPT-Neo suites. Please note that all models in the *Pythia* suite were renamed in January 2023. For clarity, a <a href="#naming-convention-and-parameter-count">table comparing the old and new names</a> is provided in this model card, together with exact parameter counts. ## Pythia-2.8B-deduped ### Model Details - Developed by: [EleutherAI](http://eleuther.ai) - Model type: Transformer-based Language Model - Language: English - Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia) for training procedure, config files, and details on how to use. - Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) - License: Apache 2.0 - Contact: to ask questions about this model, join the [EleutherAI Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`. Please read the existing *Pythia* documentation before asking about it in the EleutherAI Discord. For general correspondence: [contact@eleuther.ai](mailto:contact@eleuther.ai).
<figure> | Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models | | -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: | | 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — | | 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M | | 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M | | 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — | | 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B | | 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B | | 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B | | 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — | <figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and non-deduped models of a given size have the same hyperparameters. “Equivalent” models have <b>exactly</b> the same architecture, and the same number of non-embedding parameters.</figcaption> </figure> ### Uses and Limitations #### Intended Use The primary intended use of Pythia is research on the behavior, functionality, and limitations of large language models. This suite is intended to provide a controlled setting for performing scientific experiments. To enable the study of how language models change in the course of training, we provide 143 evenly spaced intermediate checkpoints per model. These checkpoints are hosted on Hugging Face as branches. Note that branch `143000` corresponds exactly to the model checkpoint on the `main` branch of each model. You may also further fine-tune and adapt Pythia-2.8B-deduped for deployment, as long as your use is in accordance with the Apache 2.0 license. 
Pythia models work with the Hugging Face [Transformers Library](https://huggingface.co/docs/transformers/index). If you decide to use pre-trained Pythia-2.8B-deduped as a basis for your fine-tuned model, please conduct your own risk and bias assessment. #### Out-of-scope use The Pythia Suite is **not** intended for deployment. It is not in itself a product and cannot be used for human-facing interactions. Pythia models are English-language only, and are not suitable for translation or generating text in other languages. Pythia-2.8B-deduped has not been fine-tuned for downstream contexts in which language models are commonly deployed, such as writing genre prose, or commercial chatbots. This means Pythia-2.8B-deduped will **not** respond to a given prompt the way a product like ChatGPT does. This is because, unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human Feedback (RLHF) to better “understand” human instructions. #### Limitations and biases The core functionality of a large language model is to take a string of text and predict the next token. The token deemed statistically most likely by the model need not produce the most “accurate” text. Never rely on Pythia-2.8B-deduped to produce factually accurate output. This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset known to contain profanity and texts that are lewd or otherwise offensive. See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a discussion of documented biases with regards to gender, religion, and race. Pythia-2.8B-deduped may produce socially unacceptable or undesirable text, *even if* the prompt itself does not include anything explicitly offensive. If you plan on using text generated through, for example, the Hosted Inference API, we recommend having a human curate the outputs of this language model before presenting it to other people.
Please inform your audience that the text was generated by Pythia-2.8B-deduped. ### Quickstart Pythia models can be loaded and used via the following code, demonstrated here for the third `pythia-70m-deduped` checkpoint: ```python from transformers import GPTNeoXForCausalLM, AutoTokenizer model = GPTNeoXForCausalLM.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) tokenizer = AutoTokenizer.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) inputs = tokenizer("Hello, I am", return_tensors="pt") tokens = model.generate(**inputs) tokenizer.decode(tokens[0]) ``` Revision/branch `step143000` corresponds exactly to the model checkpoint on the `main` branch of each model.<br> For more information on how to use all Pythia models, see [documentation on GitHub](https://github.com/EleutherAI/pythia). ### Training #### Training data Pythia-2.8B-deduped was trained on the Pile **after the dataset has been globally deduplicated**.<br> [The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in English. It was created by EleutherAI specifically for training large language models. It contains texts from 22 diverse sources, roughly broken down into five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub, Enron Emails). See [the Pile paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources, methodology, and a discussion of ethical implications. Consult [the datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation about the Pile and its component datasets. The Pile can be downloaded from the [official website](https://pile.eleuther.ai/), or from a [community mirror](https://the-eye.eu/public/AI/pile/). 
#### Training procedure All models were trained on the exact same data, in the exact same order. Each model saw 299,892,736,000 tokens during training, and 143 checkpoints for each model are saved every 2,097,152,000 tokens, spaced evenly throughout training. This corresponds to training for just under 1 epoch on the Pile for non-deduplicated models, and about 1.5 epochs on the deduplicated Pile. All *Pythia* models were trained for the equivalent of 143000 steps at a batch size of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models listed with a batch size of 4M tokens were originally trained for 71500 steps instead, with checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for consistency with all 2M batch models, so `step1000` is the first checkpoint for `pythia-1.4b` that was saved (corresponding to step 500 in training), and `step1000` is likewise the first `pythia-6.9b` checkpoint that was saved (corresponding to 1000 “actual” steps).<br> See [GitHub](https://github.com/EleutherAI/pythia) for more details on training procedure, including [how to reproduce it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br> Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b). ### Evaluations All 16 *Pythia* models were evaluated using the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access the results by model and step at `results/json/*` in the [GitHub repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br> Expand the sections below to see plots of evaluation results for all Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details> <summary>LAMBADA – OpenAI</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/> </details> <details> <summary>Physical Interaction: Question Answering (PIQA)</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/> </details> <details> <summary>WinoGrande</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/> </details> <details> <summary>AI2 Reasoning Challenge – Challenge Set</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/> </details> <details> <summary>SciQ</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/> </details> ### Naming convention and parameter count *Pythia* models were renamed in January 2023. It is possible that the old naming convention still persists in some documentation by accident. The current naming convention (70M, 160M, etc.) is based on total parameter count. <figure style="width:32em"> | current Pythia suffix | old suffix | total params | non-embedding params | | --------------------: | ---------: | -------------: | -------------------: | | 70M | 19M | 70,426,624 | 18,915,328 | | 160M | 125M | 162,322,944 | 85,056,000 | | 410M | 350M | 405,334,016 | 302,311,424 | | 1B | 800M | 1,011,781,632 | 805,736,448 | | 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 | | 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 | | 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 | | 12B | 13B | 11,846,072,320 | 11,327,027,200 | </figure>
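### Checkpoint arithmetic (sketch)

The checkpoint bookkeeping described in the Training procedure section is internally consistent, which is easy to verify. The short sketch below re-derives the numbers stated in this card (the variable names are illustrative):

```python
tokens_per_step = 2_097_152                 # 2M-token batch size
total_steps = 143_000
checkpoint_interval_tokens = 2_097_152_000  # tokens between saved checkpoints
num_checkpoints = 143

total_tokens = tokens_per_step * total_steps
print(f"{total_tokens:,} tokens seen")      # 299,892,736,000 tokens seen, as stated

# 143 evenly spaced checkpoints, each 2,097,152,000 tokens (1,000 steps) apart:
assert total_tokens == num_checkpoints * checkpoint_interval_tokens
assert checkpoint_interval_tokens // tokens_per_step == 1_000

# For 4M-batch models renamed into 2M-step numbering, branch "step1000"
# corresponds to 500 actual optimizer steps:
actual_steps_4m = 1_000 * tokens_per_step // (2 * tokens_per_step)
print(actual_steps_4m)                      # 500
```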
11,894
[ [ -0.0240478515625, -0.0640869140625, 0.019866943359375, 0.0034160614013671875, -0.0175628662109375, -0.01184844970703125, -0.015869140625, -0.035430908203125, 0.01404571533203125, 0.0158538818359375, -0.024139404296875, -0.023284912109375, -0.03546142578125, ...
microsoft/swin-large-patch4-window12-384-in22k
2022-05-16T18:40:51.000Z
[ "transformers", "pytorch", "tf", "swin", "image-classification", "vision", "dataset:imagenet-21k", "arxiv:2103.14030", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
image-classification
microsoft
null
null
microsoft/swin-large-patch4-window12-384-in22k
3
1,200
transformers
2022-03-02T23:29:05
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-21k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # Swin Transformer (large-sized model) Swin Transformer model pre-trained on ImageNet-21k (14 million images, 21,841 classes) at resolution 384x384. It was introduced in the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Liu et al. and first released in [this repository](https://github.com/microsoft/Swin-Transformer). Disclaimer: The team releasing Swin Transformer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The Swin Transformer is a type of Vision Transformer. It builds hierarchical feature maps by merging image patches (shown in gray) in deeper layers and has linear computation complexity to input image size due to computation of self-attention only within each local window (shown in red). It can thus serve as a general-purpose backbone for both image classification and dense recognition tasks. In contrast, previous vision Transformers produce feature maps of a single low resolution and have quadratic computation complexity to input image size due to computation of self-attention globally. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/swin_transformer_architecture.png) [Source](https://paperswithcode.com/method/swin-transformer) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=swin) to look for fine-tuned versions on a task that interests you. 
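### Complexity of windowed attention

The linear-vs-quadratic complexity claim in the Model description can be checked numerically with the per-block FLOP formulas from the Swin paper. In the sketch below, the stage-1 embedding dimension of 192 for the large model is an assumption for illustration; the window size 12 and 384×384/patch-4 geometry come from this checkpoint's name:

```python
# FLOP counts for one attention block, following Eq. (1)-(2) of the Swin paper:
#   global MSA:    4*h*w*C^2 + 2*(h*w)^2 * C     -- quadratic in token count h*w
#   windowed MSA:  4*h*w*C^2 + 2*M^2 * h*w * C   -- linear in token count h*w
def msa_flops(h, w, C):
    return 4 * h * w * C**2 + 2 * (h * w) ** 2 * C

def w_msa_flops(h, w, C, M):
    return 4 * h * w * C**2 + 2 * M**2 * h * w * C

h = w = 384 // 4   # stage-1 feature map for a 384x384 input at patch size 4: 96x96 tokens
C = 192            # assumed stage-1 embedding dim of the large model
M = 12             # window size of this checkpoint

print(f"global MSA:   {msa_flops(h, w, C):,} FLOPs")
print(f"windowed MSA: {w_msa_flops(h, w, C, M):,} FLOPs")
# The quadratic attention term shrinks by a factor of (h*w)/M^2 = 9216/144 = 64.
```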
### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 21,841 ImageNet-22k classes: ```python from transformers import AutoFeatureExtractor, SwinForImageClassification from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/swin-large-patch4-window12-384-in22k") model = SwinForImageClassification.from_pretrained("microsoft/swin-large-patch4-window12-384-in22k") inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 21,841 ImageNet-22k classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/swin.html#). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2103-14030, author = {Ze Liu and Yutong Lin and Yue Cao and Han Hu and Yixuan Wei and Zheng Zhang and Stephen Lin and Baining Guo}, title = {Swin Transformer: Hierarchical Vision Transformer using Shifted Windows}, journal = {CoRR}, volume = {abs/2103.14030}, year = {2021}, url = {https://arxiv.org/abs/2103.14030}, eprinttype = {arXiv}, eprint = {2103.14030}, timestamp = {Thu, 08 Apr 2021 07:53:26 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2103-14030.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
3,753
[ [ -0.0496826171875, -0.0274810791015625, -0.00989532470703125, 0.01194000244140625, -0.005237579345703125, -0.021759033203125, -0.00318145751953125, -0.061767578125, 0.0052337646484375, 0.023590087890625, -0.040283203125, -0.0137176513671875, -0.043792724609375, ...
xlm-roberta-large-finetuned-conll03-german
2022-07-22T08:06:55.000Z
[ "transformers", "pytorch", "rust", "xlm-roberta", "token-classification", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", ...
token-classification
null
null
null
xlm-roberta-large-finetuned-conll03-german
3
1,199
transformers
2022-03-02T23:29:04
--- language: - multilingual - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - no - om - or - pa - pl - ps - pt - ro - ru - sa - sd - si - sk - sl - so - sq - sr - su - sv - sw - ta - te - th - tl - tr - ug - uk - ur - uz - vi - xh - yi - zh --- # xlm-roberta-large-finetuned-conll03-german # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 4. [Training](#training) 5. [Evaluation](#evaluation) 6. [Environmental Impact](#environmental-impact) 7. [Technical Specifications](#technical-specifications) 8. [Citation](#citation) 9. [Model Card Authors](#model-card-authors) 10. [How To Get Started With the Model](#how-to-get-started-with-the-model) # Model Details ## Model Description The XLM-RoBERTa model was proposed in [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This model is [XLM-RoBERTa-large](https://huggingface.co/xlm-roberta-large) fine-tuned with the [conll2003](https://huggingface.co/datasets/conll2003) dataset in German. 
- **Developed by:** See [associated paper](https://arxiv.org/abs/1911.02116) - **Model type:** Multi-lingual language model - **Language(s) (NLP):** XLM-RoBERTa is a multilingual model trained on 100 different languages; see [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/xlmr) for full list; model is fine-tuned on a dataset in German - **License:** More information needed - **Related Models:** [RoBERTa](https://huggingface.co/roberta-base), [XLM](https://huggingface.co/docs/transformers/model_doc/xlm) - **Parent Model:** [XLM-RoBERTa-large](https://huggingface.co/xlm-roberta-large) - **Resources for more information:** - [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/xlmr) - [Associated Paper](https://arxiv.org/abs/1911.02116) # Uses ## Direct Use The model is a language model. The model can be used for token classification, a natural language understanding task in which a label is assigned to some tokens in a text. ## Downstream Use Potential downstream use cases include Named Entity Recognition (NER) and Part-of-Speech (PoS) tagging. To learn more about token classification and other potential downstream use cases, see the Hugging Face [token classification docs](https://huggingface.co/tasks/token-classification). ## Out-of-Scope Use The model should not be used to intentionally create hostile or alienating environments for people. # Bias, Risks, and Limitations **CONTENT WARNING: Readers should be made aware that language generated by this model may be disturbing or offensive to some and may propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). ## Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training See the following resources for training data and training procedure details: - [XLM-RoBERTa-large model card](https://huggingface.co/xlm-roberta-large) - [CoNLL-2003 data card](https://huggingface.co/datasets/conll2003) - [Associated paper](https://arxiv.org/pdf/1911.02116.pdf) # Evaluation See the [associated paper](https://arxiv.org/pdf/1911.02116.pdf) for evaluation details. # Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** 500 32GB Nvidia V100 GPUs (from the [associated paper](https://arxiv.org/pdf/1911.02116.pdf)) - **Hours used:** More information needed - **Cloud Provider:** More information needed - **Compute Region:** More information needed - **Carbon Emitted:** More information needed # Technical Specifications See the [associated paper](https://arxiv.org/pdf/1911.02116.pdf) for further details. # Citation **BibTeX:** ```bibtex @article{conneau2019unsupervised, title={Unsupervised Cross-lingual Representation Learning at Scale}, author={Conneau, Alexis and Khandelwal, Kartikay and Goyal, Naman and Chaudhary, Vishrav and Wenzek, Guillaume and Guzm{\'a}n, Francisco and Grave, Edouard and Ott, Myle and Zettlemoyer, Luke and Stoyanov, Veselin}, journal={arXiv preprint arXiv:1911.02116}, year={2019} } ``` **APA:** - Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., ... & Stoyanov, V. (2019). Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116. # Model Card Authors This model card was written by the team at Hugging Face. # How to Get Started with the Model Use the code below to get started with the model. You can use this model directly within a pipeline for NER. 
<details> <summary> Click to expand </summary> ```python >>> from transformers import AutoTokenizer, AutoModelForTokenClassification >>> from transformers import pipeline >>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large-finetuned-conll03-german") >>> model = AutoModelForTokenClassification.from_pretrained("xlm-roberta-large-finetuned-conll03-german") >>> classifier = pipeline("ner", model=model, tokenizer=tokenizer) >>> classifier("Bayern München ist wieder alleiniger Top-Favorit auf den Gewinn der deutschen Fußball-Meisterschaft.") [{'end': 6, 'entity': 'I-ORG', 'index': 1, 'score': 0.99999166, 'start': 0, 'word': '▁Bayern'}, {'end': 14, 'entity': 'I-ORG', 'index': 2, 'score': 0.999987, 'start': 7, 'word': '▁München'}, {'end': 77, 'entity': 'I-MISC', 'index': 16, 'score': 0.9999728, 'start': 68, 'word': '▁deutschen'}] ``` </details>
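The pipeline output above is per-token: '▁Bayern' and '▁München' are two I-ORG pieces of the single entity "Bayern München". Recent transformers versions can merge these for you via `pipeline("ner", ..., aggregation_strategy="simple")`. As a hedged illustration of what that grouping does (a minimal sketch, not the library's exact implementation — it skips score averaging), consecutive tokens sharing an entity type can be merged like this:

```python
# Minimal sketch: group consecutive token-level predictions that share an
# entity type into entity spans (roughly what aggregation_strategy="simple"
# does, minus the score-averaging subtleties).

def merge_entities(token_preds):
    entities = []
    prev_index = None
    for p in token_preds:
        etype = p["entity"].split("-")[-1]     # "I-ORG" -> "ORG"
        word = p["word"].lstrip("\u2581")      # drop the SentencePiece "▁" marker
        if (entities and entities[-1]["type"] == etype
                and prev_index is not None and p["index"] == prev_index + 1):
            entities[-1]["word"] += " " + word
            entities[-1]["end"] = p["end"]
        else:
            entities.append({"type": etype, "word": word,
                             "start": p["start"], "end": p["end"]})
        prev_index = p["index"]
    return entities

# The pipeline output from the example above:
preds = [
    {"end": 6, "entity": "I-ORG", "index": 1, "score": 0.99999166, "start": 0, "word": "\u2581Bayern"},
    {"end": 14, "entity": "I-ORG", "index": 2, "score": 0.999987, "start": 7, "word": "\u2581München"},
    {"end": 77, "entity": "I-MISC", "index": 16, "score": 0.9999728, "start": 68, "word": "\u2581deutschen"},
]
print(merge_entities(preds))
# [{'type': 'ORG', 'word': 'Bayern München', 'start': 0, 'end': 14},
#  {'type': 'MISC', 'word': 'deutschen', 'start': 68, 'end': 77}]
```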
6,463
[ [ -0.036865234375, -0.0428466796875, 0.0125274658203125, 0.0092010498046875, -0.01447296142578125, -0.01142120361328125, -0.03192138671875, -0.04083251953125, 0.010955810546875, 0.028228759765625, -0.03277587890625, -0.043609619140625, -0.061126708984375, 0.00...
KoalaAI/Text-Moderation
2023-10-06T16:26:01.000Z
[ "transformers", "pytorch", "safetensors", "deberta", "text-classification", "autotrain", "en", "dataset:mmathys/openai-moderation-api-evaluation", "dataset:DarwinAnim8or/autotrain-data-text-moderation-v2-small", "license:openrail", "co2_eq_emissions", "endpoints_compatible", "has_space", "...
text-classification
KoalaAI
null
null
KoalaAI/Text-Moderation
1
1,198
transformers
2023-10-05T18:14:54
--- tags: - autotrain - text-classification language: - en widget: - text: I love AutoTrain - text: I absolutely hate those people - text: I love cake! - text: >- lets build the wall and deport illegals "they walk across the border like this is Central park" - text: EU offers to pay countries 6,000 euros per person to take in migrants datasets: - mmathys/openai-moderation-api-evaluation - DarwinAnim8or/autotrain-data-text-moderation-v2-small co2_eq_emissions: emissions: 0.03967468113268738 license: openrail --- # Text Moderation This model is a text classification model based on Deberta-v3 that predicts whether a text contains text that could be considered offensive. It is split up in the following labels: | Category | Label | Definition | | -------- | ----- | ---------- | | sexual | `S` | Content meant to arouse sexual excitement, such as the description of sexual activity, or that promotes sexual services (excluding sex education and wellness). | | hate | `H` | Content that expresses, incites, or promotes hate based on race, gender, ethnicity, religion, nationality, sexual orientation, disability status, or caste. | | violence | `V` | Content that promotes or glorifies violence or celebrates the suffering or humiliation of others. | | harassment | `HR` | Content that may be used to torment or annoy individuals in real life, or make harassment more likely to occur. | | self-harm | `SH` | Content that promotes, encourages, or depicts acts of self-harm, such as suicide, cutting, and eating disorders. | | sexual/minors | `S3` | Sexual content that includes an individual who is under 18 years old. | | hate/threatening | `H2` | Hateful content that also includes violence or serious harm towards the targeted group. | | violence/graphic | `V2` | Violent content that depicts death, violence, or serious physical injury in extreme graphic detail. 
| | OK | `OK` | Not offensive. | It's important to remember that this model was only trained on English texts, and may not perform well on non-English inputs. ## Ethical Considerations This is a model that deals with sensitive and potentially harmful language. Users should consider the ethical implications and potential risks of using or deploying this model in their applications or contexts. Some of the ethical issues that may arise are: - The model may reinforce or amplify existing biases or stereotypes in the data or in the society. For example, the model may associate certain words or topics with offensive language based on the frequency or co-occurrence in the data, without considering the meaning or intent behind them. This may result in unfair or inaccurate predictions for some groups or individuals. Users should carefully consider the purpose, context, and impact of using this model, and take appropriate measures to prevent or mitigate any potential harm. Users should also respect the privacy and consent of the data subjects, and adhere to the relevant laws and regulations in their jurisdictions. ## License This model is licensed under the CodeML OpenRAIL-M 0.1 license, which is a variant of the BigCode OpenRAIL-M license. This license allows you to freely access, use, modify, and distribute this model and its derivatives, for research, commercial or non-commercial purposes, as long as you comply with the following conditions: - You must include a copy of the license and the original source of the model in any copies or derivatives of the model that you distribute. - You must not use the model or its derivatives for any unlawful, harmful, abusive, discriminatory, or offensive purposes, or to cause or contribute to any social or environmental harm. - You must respect the privacy and consent of the data subjects whose data was used to train or evaluate the model, and adhere to the relevant laws and regulations in your jurisdiction.
- You must acknowledge that the model and its derivatives are provided "as is", without any warranties or guarantees of any kind, and that the licensor is not liable for any damages or losses arising from your use of the model or its derivatives. By accessing or using this model, you agree to be bound by the terms of this license. If you do not agree with the terms of this license, you must not access or use this model. ## Training Details - Problem type: Multi-class Classification - CO2 Emissions (in grams): 0.0397 ## Validation Metrics - Loss: 0.848 - Accuracy: 0.749 (75%) - Macro F1: 0.326 - Micro F1: 0.749 - Weighted F1: 0.703 - Macro Precision: 0.321 - Micro Precision: 0.749 - Weighted Precision: 0.671 - Macro Recall: 0.349 - Micro Recall: 0.749 - Weighted Recall: 0.749 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/KoalaAI/Text-Moderation ``` Or Python API: ```python from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("KoalaAI/Text-Moderation", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("KoalaAI/Text-Moderation", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
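The Python snippet above stops at raw `outputs`; the logits still need a softmax and a lookup into the model's label map to become the categories in the table above. A hedged sketch of that post-processing step — the label order below is illustrative only, and in practice should be read from `model.config.id2label`:

```python
import math

# Illustrative label order; the real mapping must come from model.config.id2label.
id2label = {0: "S", 1: "H", 2: "V", 3: "HR", 4: "SH", 5: "S3", 6: "H2", 7: "V2", 8: "OK"}

def softmax(logits):
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def label_scores(logits, id2label):
    # Pair each probability with its label, highest first.
    probs = softmax(logits)
    return sorted(((id2label[i], p) for i, p in enumerate(probs)),
                  key=lambda pair: pair[1], reverse=True)

# Toy logits standing in for outputs.logits[0].tolist()
logits = [-2.0, -1.5, -3.0, -1.0, -4.0, -5.0, -4.5, -3.5, 4.0]
scores = label_scores(logits, id2label)
print(scores[0])   # highest-probability label first; here 'OK'
```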
5,333
[ [ -0.019866943359375, -0.06243896484375, 0.0173492431640625, 0.0193634033203125, -0.027984619140625, -0.0233306884765625, 0.0020599365234375, -0.034576416015625, -0.00510406494140625, 0.042877197265625, -0.042083740234375, -0.05206298828125, -0.057037353515625, ...
microsoft/deberta-v2-xxlarge-mnli
2021-05-21T20:08:40.000Z
[ "transformers", "pytorch", "deberta-v2", "text-classification", "deberta", "deberta-mnli", "en", "arxiv:2006.03654", "license:mit", "endpoints_compatible", "has_space", "region:us" ]
text-classification
microsoft
null
null
microsoft/deberta-v2-xxlarge-mnli
7
1,193
transformers
2022-03-02T23:29:05
--- language: en tags: - deberta - deberta-mnli tasks: mnli thumbnail: https://huggingface.co/front/thumbnails/microsoft.png license: mit widget: - text: "[CLS] I love you. [SEP] I like you. [SEP]" --- ## DeBERTa: Decoding-enhanced BERT with Disentangled Attention [DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on a majority of NLU tasks with 80GB training data. Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates. This is the DeBERTa V2 XXLarge model fine-tuned on the MNLI task, with 48 layers and a hidden size of 1536. Total parameters: 1.5B. ### Fine-tuning on NLU tasks We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks. | Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B | |---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------| | | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S | | BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- | | RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- | | XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- | | [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 | | [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7| | [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9|
|**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** | -------- #### Notes. - <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results of SST-2/QQP/QNLI/SQuADv2 will also be slightly improved when starting from MNLI fine-tuned models; however, we only report the numbers fine-tuned from pretrained base models for those 4 tasks. - <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, we recommend using **deepspeed** as it's faster and saves memory.
Run with `deepspeed`: ```bash pip install datasets pip install deepspeed # Download the deepspeed config file wget https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli/resolve/main/ds_config.json -O ds_config.json export TASK_NAME=rte output_dir="ds_results" num_gpus=8 batch_size=4 python -m torch.distributed.launch --nproc_per_node=${num_gpus} \ run_glue.py \ --model_name_or_path microsoft/deberta-v2-xxlarge-mnli \ --task_name $TASK_NAME \ --do_train \ --do_eval \ --max_seq_length 256 \ --per_device_train_batch_size ${batch_size} \ --learning_rate 3e-6 \ --num_train_epochs 3 \ --output_dir $output_dir \ --overwrite_output_dir \ --logging_steps 10 \ --logging_dir $output_dir \ --deepspeed ds_config.json ``` You can also run with `--sharded_ddp` ```bash cd transformers/examples/text-classification/ export TASK_NAME=rte python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge-mnli \ --task_name $TASK_NAME --do_train --do_eval --max_seq_length 256 --per_device_train_batch_size 4 \ --learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16 ``` ### Citation If you find DeBERTa useful for your work, please cite the following paper: ```latex @inproceedings{ he2021deberta, title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION}, author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen}, booktitle={International Conference on Learning Representations}, year={2021}, url={https://openreview.net/forum?id=XPZIaotutsD} } ```
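MNLI-finetuned checkpoints like this one are frequently reused for zero-shot classification: each candidate label is phrased as a hypothesis (e.g. "This text is about {label}.") and the entailment logit decides the winner. As a hedged, pure-Python sketch of just the scoring arithmetic — it assumes you already have per-hypothesis (contradiction, neutral, entailment) logits from the model; the toy values below are made up for illustration:

```python
import math

def zero_shot_scores(per_label_logits):
    """per_label_logits: {label: (contradiction, neutral, entailment) logits}.
    Returns a probability per label from the softmaxed entailment logits."""
    ent = {label: logits[2] for label, logits in per_label_logits.items()}
    m = max(ent.values())                           # for numerical stability
    exps = {label: math.exp(x - m) for label, x in ent.items()}
    total = sum(exps.values())
    return {label: e / total for label, e in exps.items()}

# Toy logits standing in for model(premise, hypothesis) outputs per candidate label.
logits = {
    "sports":   (-2.1, -0.3,  3.2),
    "politics": ( 1.8,  0.4, -2.5),
    "cooking":  ( 2.2,  0.1, -3.0),
}
scores = zero_shot_scores(logits)
print(max(scores, key=scores.get))  # sports
```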
4,795
[ [ -0.037078857421875, -0.049468994140625, 0.0230560302734375, 0.033660888671875, -0.01294708251953125, 0.01068878173828125, -0.0022029876708984375, -0.046173095703125, 0.0200042724609375, 0.01154327392578125, -0.0634765625, -0.0260772705078125, -0.0672607421875, ...
Yntec/ClassicEra
2023-09-11T03:06:34.000Z
[ "diffusers", "Anime", "3D", "Classic Animation", "DucHaiten", "text-to-image", "stable-diffusion", "stable-diffusion-diffusers", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
Yntec
null
null
Yntec/ClassicEra
2
1,193
diffusers
2023-09-11T00:27:30
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - Anime - 3D - Classic Animation - DucHaiten - text-to-image - stable-diffusion - stable-diffusion-diffusers - diffusers --- # Classic Era A mix of DucHaiten-ClassicAnime with DucHaiten-GoldenLife so GoldenLife can use ClassicAnime's great style! It has the MoistMixV2 VAE baked in. Comparison: ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/r0B-NHkb9erCMXdtc5I7i.png) (Click for larger) Sample and prompt: ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/ft0a4DVF-du2K7oYQFLFl.png) DETAILED CHIBI EYES, Cartoon Pretty CUTE LITTLE Girl playing flute in the middle of a birch forest during a storm at night while lightning dragons race down toward her, sharp focus. stanley artgerm lau highly detailed By rossdraws and ross tran and ilya kuvshinov Original pages: https://civitai.com/models/8542?modelVersionId=16168 (ClassicAnime) https://tensor.art/models/628276277415133426 (GoldenLife) # Recipes (SuperMerger Train Difference) - Add Difference Model A: DucHaitenClassicAnime (https://huggingface.co/Yntec/DucHaitenClassicAnime768/resolve/main/DH_ClassicAnime_mini.safetensors) Model B: DucHaitenClassicAnime Model C: v1-5-pruned-fp16-no-ema (https://huggingface.co/Yntec/DreamLikeRemix/resolve/main/v1-5-pruned-fp16-no-ema.safetensors) Output: ClassicAnimeEssense. - Weight Sum 0.70 Model A: ClassicAnimeEssense. Model B: DucHaitenGoldenLife Output: GoldenLife-ClassicAnime - Weight Sum MBW 0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1 Model A: DucHaitenClassicAnime Model B: GoldenLife-ClassicAnime Output: ClassicEra - Fp16-no-ema Output: ClassicEra-mini - Merge MoistMixV2 VAE Output: ClassicEraVAE
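The merge operations in the recipe are simple element-wise arithmetic over the checkpoints' weight tensors. A hedged toy sketch of the two formulas (plain Python lists standing in for the actual tensors; SuperMerger applies the same arithmetic per weight, and MBW simply picks a different alpha per UNet block — the 0/1 pattern above swaps whole halves of the UNet):

```python
# Toy per-weight values standing in for full checkpoint tensors.
A = [0.2, 0.5, 0.9]
B = [0.1, 0.7, 0.4]
C = [0.3, 0.3, 0.3]

def add_difference(a, b, c):
    # "Add Difference": out = A + (B - C), i.e. graft B's delta over C onto A.
    return [x + (y - z) for x, y, z in zip(a, b, c)]

def weight_sum(a, b, alpha):
    # "Weight Sum" at alpha (the recipe uses 0.70): out = (1 - alpha) * A + alpha * B
    return [(1 - alpha) * x + alpha * y for x, y in zip(a, b)]

print(add_difference(A, B, C))  # element-wise A + (B - C)
print(weight_sum(A, B, 0.70))   # 30% of A, 70% of B
```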
1,831
[ [ -0.03253173828125, -0.0251312255859375, 0.0084381103515625, 0.016082763671875, -0.01314544677734375, -0.0174102783203125, 0.01453399658203125, -0.017913818359375, 0.047027587890625, 0.06842041015625, -0.0670166015625, -0.029144287109375, -0.0258941650390625, ...
TheBloke/Phind-CodeLlama-34B-v2-AWQ
2023-09-27T12:51:22.000Z
[ "transformers", "safetensors", "llama", "text-generation", "code llama", "license:llama2", "model-index", "text-generation-inference", "region:us" ]
text-generation
TheBloke
null
null
TheBloke/Phind-CodeLlama-34B-v2-AWQ
18
1,193
transformers
2023-09-19T10:08:38
--- license: llama2 tags: - code llama base_model: Phind/Phind-CodeLlama-34B-v2 inference: false model_creator: Phind model_type: llama prompt_template: '### System Prompt {system_message} ### User Message {prompt} ### Assistant ' quantized_by: TheBloke model-index: - name: Phind-CodeLlama-34B-v1 results: - task: type: text-generation dataset: name: HumanEval type: openai_humaneval metrics: - type: pass@1 value: 73.8% name: pass@1 verified: false --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # CodeLlama 34B v2 - AWQ - Model creator: [Phind](https://huggingface.co/Phind) - Original model: [CodeLlama 34B v2](https://huggingface.co/Phind/Phind-CodeLlama-34B-v2) <!-- description start --> ## Description This repo contains AWQ model files for [Phind's CodeLlama 34B v2](https://huggingface.co/Phind/Phind-CodeLlama-34B-v2). 
### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference. It is also now supported by continuous batching server [vLLM](https://github.com/vllm-project/vllm), allowing use of AWQ models for high-throughput concurrent inference in multi-user server scenarios. Note that, at the time of writing, overall throughput is still lower than running vLLM with unquantised models, however using AWQ enables using much smaller GPUs which can lead to easier deployment and overall cost savings. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GGUF) * [Phind's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Phind/Phind-CodeLlama-34B-v2) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Phind ``` ### System Prompt {system_message} ### User Message {prompt} ### Assistant ``` <!-- prompt-template end --> <!-- README_AWQ.md-provided-files start --> ## Provided files and AWQ parameters For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM. Models are released as sharded safetensors files. 
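For intuition about what the "4-bit, group size 128" parameters mean, here is a hedged toy sketch of plain round-to-nearest group quantization: each group of weights shares one scale and zero-point, and each weight is stored as an integer in 0..15. Note this is *not* the actual AWQ algorithm, which additionally rescales salient channels using activation statistics before quantizing — it is only meant to illustrate the storage scheme:

```python
# Toy 4-bit group quantization: one scale/zero-point per group of weights.
# Real AWQ uses group size 128 and applies activation-aware channel scaling first.

def quantize_group(weights):
    # Map floats to integers 0..15 with a single affine scale per group.
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 15 or 1.0        # guard against a constant group
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo

def dequantize_group(q, scale, lo):
    return [x * scale + lo for x in q]

weights = [0.31, -0.12, 0.05, 0.44, -0.27, 0.18, 0.02, -0.09]
q, scale, lo = quantize_group(weights)
assert all(0 <= x <= 15 for x in q)      # every value fits in 4 bits

restored = dequantize_group(q, scale, lo)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
assert max_err <= scale / 2 + 1e-12      # round-to-nearest error bound
```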
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size | | ------ | ---- | -- | ----------- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-AWQ/tree/main) | 4 | 128 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 18.31 GB | <!-- README_AWQ.md-provided-files end --> <!-- README_AWQ.md-use-from-vllm start --> ## Serving this model from vLLM Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/). When using vLLM as a server, pass the `--quantization awq` parameter, for example: ```shell python3 -m vllm.entrypoints.api_server --model TheBloke/Phind-CodeLlama-34B-v2-AWQ --quantization awq ``` When using vLLM from Python code, pass the `quantization=awq` parameter, for example: ```python from vllm import LLM, SamplingParams prompts = [ "Hello, my name is", "The president of the United States is", "The capital of France is", "The future of AI is", ] sampling_params = SamplingParams(temperature=0.8, top_p=0.95) llm = LLM(model="TheBloke/Phind-CodeLlama-34B-v2-AWQ", quantization="awq") outputs = llm.generate(prompts, sampling_params) # Print the outputs. for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}") ``` <!-- README_AWQ.md-use-from-vllm end --> <!-- README_AWQ.md-use-from-python start --> ## How to use this AWQ model from Python code ### Install the necessary packages Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.0.2 or later ```shell pip3 install autoawq ``` If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y autoawq git clone https://github.com/casper-hansen/AutoAWQ cd AutoAWQ pip3 install .
``` ### You can then try the following example code ```python from awq import AutoAWQForCausalLM from transformers import AutoTokenizer model_name_or_path = "TheBloke/Phind-CodeLlama-34B-v2-AWQ" # Load model model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True, trust_remote_code=False, safetensors=True) tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False) system_message = "You are an intelligent programming assistant." prompt = "Tell me about AI" prompt_template = f'''### System Prompt {system_message} ### User Message {prompt} ### Assistant ''' print("\n\n*** Generate:") tokens = tokenizer( prompt_template, return_tensors='pt' ).input_ids.cuda() # Generate output generation_output = model.generate( tokens, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, max_new_tokens=512 ) print("Output: ", tokenizer.decode(generation_output[0])) # Inference can also be done using transformers' pipeline from transformers import pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_AWQ.md-use-from-python end --> <!-- README_AWQ.md-compatibility start --> ## Compatibility The files provided are tested to work with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), and [vLLM](https://github.com/vllm-project/vllm). [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is not yet compatible with AWQ, but a PR is open which should bring support soon: [TGI PR #781](https://github.com/huggingface/text-generation-inference/issues/781).
<!-- README_AWQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J.
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Phind's CodeLlama 34B v2 # **Phind-CodeLlama-34B-v2** We've fine-tuned Phind-CodeLlama-34B-v1 on an additional 1.5B tokens of high-quality programming-related data, achieving **73.8% pass@1** on HumanEval. It's the current state-of-the-art amongst open-source models. Furthermore, this model is **instruction-tuned** on the Alpaca/Vicuna format to be steerable and easy-to-use. More details can be found on our [blog post](https://www.phind.com/blog/code-llama-beats-gpt4). ## Model Details This model is fine-tuned from Phind-CodeLlama-34B-v1 and achieves **73.8% pass@1** on HumanEval. Phind-CodeLlama-34B-v2 is **multi-lingual** and is proficient in Python, C/C++, TypeScript, Java, and more. ## Dataset Details We fine-tuned on a proprietary dataset of 1.5B tokens of high-quality programming problems and solutions. This dataset consists of instruction-answer pairs instead of code completion examples, making it structurally different from HumanEval. LoRA was not used -- both models are native finetunes. We used DeepSpeed ZeRO 3 and Flash Attention 2 to train these models in 15 hours on 32 A100-80GB GPUs. We used a sequence length of 4096 tokens.
## How to Get Started with the Model Make sure to install Transformers from the main git branch: ```bash pip install git+https://github.com/huggingface/transformers.git ``` ## How to Prompt the Model This model accepts the Alpaca/Vicuna instruction format. For example: ``` ### System Prompt You are an intelligent programming assistant. ### User Message Implement a linked list in C++ ### Assistant ... ``` ## How to reproduce HumanEval Results To reproduce our results: ```python from transformers import AutoTokenizer, LlamaForCausalLM from human_eval.data import write_jsonl, read_problems from tqdm import tqdm # initialize the model model_path = "Phind/Phind-CodeLlama-34B-v2" model = LlamaForCausalLM.from_pretrained(model_path, device_map="auto") tokenizer = AutoTokenizer.from_pretrained(model_path) # HumanEval helper def generate_one_completion(prompt: str): tokenizer.pad_token = tokenizer.eos_token inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=4096) # Generate generate_ids = model.generate(inputs.input_ids.to("cuda"), max_new_tokens=384, do_sample=True, top_p=0.75, top_k=40, temperature=0.1) completion = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0] completion = completion.replace(prompt, "").split("\n\n\n")[0] return completion # perform HumanEval problems = read_problems() num_samples_per_task = 1 samples = [ dict(task_id=task_id, completion=generate_one_completion(problems[task_id]["prompt"])) for task_id in tqdm(problems) for _ in range(num_samples_per_task) ] write_jsonl("samples.jsonl", samples) # run `evaluate_functional_correctness samples.jsonl` in your HumanEval code sandbox ``` ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> This model has undergone very limited testing. Additional safety testing should be performed before any real-world deployments. 
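The Alpaca/Vicuna format above is plain text, so it can be assembled with a small helper. A minimal sketch (the `build_prompt` name and its default system prompt are illustrative, not part of the official Phind repository):

```python
def build_prompt(user_message: str,
                 system_prompt: str = "You are an intelligent programming assistant.") -> str:
    # Assemble the Alpaca/Vicuna-style prompt shown above; the model's
    # reply is generated as a continuation after the "### Assistant" header.
    return (
        f"### System Prompt\n{system_prompt}\n\n"
        f"### User Message\n{user_message}\n\n"
        "### Assistant\n"
    )

print(build_prompt("Implement a linked list in C++"))
```

The resulting string can be passed directly to the tokenizer in the HumanEval snippet above.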
## Training details <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> - **Hardware Type:** 32x A100-80GB - **Hours used:** 480 GPU-hours - **Cloud Provider:** AWS - **Compute Region:** us-east-1
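The ZeRO 3 setup mentioned above is driven by a DeepSpeed JSON config. A minimal illustrative sketch (the actual training config is not published; the bf16 setting and batch-size values here are assumptions):

```json
{
  "zero_optimization": {
    "stage": 3,
    "overlap_comm": true,
    "contiguous_gradients": true,
    "stage3_gather_16bit_weights_on_model_save": true
  },
  "bf16": { "enabled": true },
  "gradient_accumulation_steps": "auto",
  "train_micro_batch_size_per_gpu": "auto"
}
```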
13,947
[ [ -0.039276123046875, -0.056365966796875, 0.02520751953125, 0.0013723373413085938, -0.01702880859375, -0.01447296142578125, 0.00707244873046875, -0.036468505859375, -0.00658416748046875, 0.0256500244140625, -0.0465087890625, -0.035186767578125, -0.0196685791015625...
microsoft/speecht5_vc
2023-03-22T20:10:32.000Z
[ "transformers", "pytorch", "speecht5", "audio", "audio-to-audio", "dataset:cmu-arctic", "arxiv:2110.07205", "license:mit", "endpoints_compatible", "has_space", "region:us" ]
audio-to-audio
microsoft
null
null
microsoft/speecht5_vc
38
1,192
transformers
2023-02-02T13:01:55
--- license: mit tags: - audio - audio-to-audio datasets: - cmu-arctic --- # SpeechT5 (voice conversion task) SpeechT5 model fine-tuned for voice conversion (speech-to-speech) on CMU ARCTIC. This model was introduced in [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205) by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei. SpeechT5 was first released in [this repository](https://github.com/microsoft/SpeechT5/), [original weights](https://huggingface.co/mechanicalsea/speecht5-vc). The license used is [MIT](https://github.com/microsoft/SpeechT5/blob/main/LICENSE). Disclaimer: The team releasing SpeechT5 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model Description Motivated by the success of T5 (Text-To-Text Transfer Transformer) in pre-trained natural language processing models, we propose a unified-modal SpeechT5 framework that explores the encoder-decoder pre-training for self-supervised speech/text representation learning. The SpeechT5 framework consists of a shared encoder-decoder network and six modal-specific (speech/text) pre/post-nets. After preprocessing the input speech/text through the pre-nets, the shared encoder-decoder network models the sequence-to-sequence transformation, and then the post-nets generate the output in the speech/text modality based on the output of the decoder. Leveraging large-scale unlabeled speech and text data, we pre-train SpeechT5 to learn a unified-modal representation, hoping to improve the modeling capability for both speech and text. To align the textual and speech information into this unified semantic space, we propose a cross-modal vector quantization approach that randomly mixes up speech/text states with latent units as the interface between encoder and decoder. 
Extensive evaluations show the superiority of the proposed SpeechT5 framework on a wide variety of spoken language processing tasks, including automatic speech recognition, speech synthesis, speech translation, voice conversion, speech enhancement, and speaker identification. ## Intended Uses & Limitations You can use this model for speech conversion. See the [model hub](https://huggingface.co/models?search=speecht5) to look for fine-tuned versions on a task that interests you. Currently, both the feature extractor and model support PyTorch. ## Citation **BibTeX:** ```bibtex @inproceedings{ao-etal-2022-speecht5, title = {{S}peech{T}5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing}, author = {Ao, Junyi and Wang, Rui and Zhou, Long and Wang, Chengyi and Ren, Shuo and Wu, Yu and Liu, Shujie and Ko, Tom and Li, Qing and Zhang, Yu and Wei, Zhihua and Qian, Yao and Li, Jinyu and Wei, Furu}, booktitle = {Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)}, month = {May}, year = {2022}, pages={5723--5738}, } ``` ## How to Get Started With the Model Use the code below to convert a mono 16 kHz speech waveform into another. 
```python from transformers import SpeechT5Processor, SpeechT5ForSpeechToSpeech, SpeechT5HifiGan from datasets import load_dataset dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") dataset = dataset.sort("id") sampling_rate = dataset.features["audio"].sampling_rate example_speech = dataset[0]["audio"]["array"] processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_vc") model = SpeechT5ForSpeechToSpeech.from_pretrained("microsoft/speecht5_vc") vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan") inputs = processor(audio=example_speech, sampling_rate=sampling_rate, return_tensors="pt") # load xvector containing speaker's voice characteristics from a file import numpy as np import torch speaker_embeddings = np.load("xvector_speaker_embedding.npy") speaker_embeddings = torch.tensor(speaker_embeddings).unsqueeze(0) speech = model.generate_speech(inputs["input_values"], speaker_embeddings, vocoder=vocoder) import soundfile as sf sf.write("speech.wav", speech.numpy(), samplerate=16000) ```
4,303
[ [ -0.03216552734375, -0.0372314453125, 0.010040283203125, 0.00650787353515625, -0.025909423828125, -0.01568603515625, -0.030548095703125, -0.025909423828125, -0.01183319091796875, 0.02447509765625, -0.042877197265625, -0.0443115234375, -0.0281219482421875, 0.0...
stablediffusionapi/amireal
2023-04-25T20:27:26.000Z
[ "diffusers", "stablediffusionapi.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
stablediffusionapi
null
null
stablediffusionapi/amireal
2
1,192
diffusers
2023-04-25T20:26:38
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---

# amireal API Inference

![generated from stablediffusionapi.com](https://pub-8b49af329fae499aa563997f5d4068a4.r2.dev/generations/3388458241682454332.png)

## Get API Key

Get an API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment needed. Replace the key in the code below and set **model_id** to "amireal".

Coding in PHP/Node/Java etc? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs)

Model link: [View model](https://stablediffusionapi.com/models/amireal)

Credits: [View credits](https://civitai.com/?query=amireal)

View all models: [View Models](https://stablediffusionapi.com/models)

```python
import requests
import json

url = "https://stablediffusionapi.com/api/v3/dreambooth"

payload = json.dumps({
  "key": "",
  "model_id": "amireal",
  "prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
  "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
  "width": "512",
  "height": "512",
  "samples": "1",
  "num_inference_steps": "30",
  "safety_checker": "no",
  "enhance_prompt": "yes",
  "seed": None,
  "guidance_scale": 7.5,
  "multi_lingual": "no",
  "panorama": "no",
  "self_attention": "no",
  "upscale": "no",
  "embeddings": "embeddings_model_id",
  "lora": "lora_model_id",
  "webhook": None,
  "track_id": None
})

headers = {
  'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```

> Use this coupon code to get 25% off **DMGG0RBN**
2,388
[ [ -0.0288238525390625, -0.062347412109375, 0.0394287109375, 0.019805908203125, -0.0286712646484375, 0.004444122314453125, 0.026031494140625, -0.031280517578125, 0.03558349609375, 0.044647216796875, -0.0601806640625, -0.0633544921875, -0.02593994140625, -0.0040...
julien-c/flair-ner
2020-11-26T22:01:14.000Z
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "en", "dataset:conll2003", "region:us" ]
token-classification
julien-c
null
null
julien-c/flair-ner
0
1,191
flair
2022-03-02T23:29:05
--- tags: - flair - token-classification - sequence-tagger-model language: en datasets: - conll2003 inference: false --- ## Flair NER model `en-ner-conll03-v0.4.pt` Imported from https://nlp.informatik.hu-berlin.de/resources/models/ner/ ### Demo: How to use in Flair ```python from flair.data import Sentence from flair.models import SequenceTagger sentence = Sentence( "My name is Julien, I currently live in Paris, I work at Hugging Face, Inc." ) tagger = SequenceTagger.load("julien-c/flair-ner") # predict NER tags tagger.predict(sentence) # print sentence with predicted tags print(sentence.to_tagged_string()) ``` yields the following output: > `My name is Julien <S-PER> , I currently live in Paris <S-LOC> , I work at Hugging <B-LOC> Face <E-LOC> .` ### Thanks [@stefan-it](https://huggingface.co/stefan-it) for the Flair integration ❤️ 🔥
864
[ [ -0.0242462158203125, -0.041473388671875, 0.0032901763916015625, 0.0246124267578125, -0.016265869140625, -0.01078033447265625, -0.00864410400390625, -0.0328369140625, 0.053985595703125, 0.0279693603515625, -0.047332763671875, -0.0251312255859375, -0.031494140625,...
timm/tf_efficientnet_l2.ns_jft_in1k_475
2023-04-27T21:38:06.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:1905.11946", "arxiv:1911.04252", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/tf_efficientnet_l2.ns_jft_in1k_475
1
1,191
timm
2022-12-13T00:11:28
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for tf_efficientnet_l2.ns_jft_in1k_475 An EfficientNet image classification model. Trained on ImageNet-1k and unlabeled JFT-300m using Noisy Student semi-supervised learning in TensorFlow by paper authors, ported to PyTorch by Ross Wightman. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 480.3 - GMACs: 172.1 - Activations (M): 609.9 - Image size: 475 x 475 - **Papers:** - EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks: https://arxiv.org/abs/1905.11946 - Self-training with Noisy Student improves ImageNet classification: https://arxiv.org/abs/1911.04252 - **Dataset:** ImageNet-1k - **Original:** https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('tf_efficientnet_l2.ns_jft_in1k_475', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'tf_efficientnet_l2.ns_jft_in1k_475', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms
(normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 72, 238, 238]) # torch.Size([1, 104, 119, 119]) # torch.Size([1, 176, 60, 60]) # torch.Size([1, 480, 30, 30]) # torch.Size([1, 1376, 15, 15]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'tf_efficientnet_l2.ns_jft_in1k_475', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 5504, 15, 15) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). 
## Citation ```bibtex @inproceedings{tan2019efficientnet, title={Efficientnet: Rethinking model scaling for convolutional neural networks}, author={Tan, Mingxing and Le, Quoc}, booktitle={International conference on machine learning}, pages={6105--6114}, year={2019}, organization={PMLR} } ``` ```bibtex @article{Xie2019SelfTrainingWN, title={Self-Training With Noisy Student Improves ImageNet Classification}, author={Qizhe Xie and Eduard H. Hovy and Minh-Thang Luong and Quoc V. Le}, journal={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, year={2019}, pages={10684-10695} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
4,628
[ [ -0.02886962890625, -0.0408935546875, -0.0075225830078125, 0.00841522216796875, -0.0178070068359375, -0.02838134765625, -0.02557373046875, -0.03302001953125, 0.0112762451171875, 0.0261077880859375, -0.0240020751953125, -0.0406494140625, -0.05517578125, -0.011...
olgaduchovny/t5-base-ner-mit-movie
2022-08-08T18:38:54.000Z
[ "transformers", "pytorch", "t5", "text2text-generation", "ner", "text generation", "seq2seq", "en", "dataset:conll2003", "arxiv:2203.03903", "license:mit", "autotrain_compatible", "text-generation-inference", "region:us" ]
text2text-generation
olgaduchovny
null
null
olgaduchovny/t5-base-ner-mit-movie
0
1,190
transformers
2022-08-08T18:21:54
--- language: - en tags: - pytorch - ner - text generation - seq2seq inference: false license: mit datasets: - conll2003 metrics: - f1 --- # t5-base-ner-mit-movie Unofficial implementation of [InstructionNER](https://arxiv.org/pdf/2203.03903v1.pdf). t5-base model tuned on the MIT Movie dataset. https://github.com/ovbystrova/InstructionNER ## Inference ```shell git clone https://github.com/ovbystrova/InstructionNER cd InstructionNER ``` ```python from instruction_ner.model import Model model = Model( model_path_or_name="olgaduchovny/t5-base-ner-mit-movie", tokenizer_path_or_name="olgaduchovny/t5-base-ner-mit-movie" ) options = [ "ACTOR", "AWARD", "CHARACTER", "DIRECTOR", "GENRE", "OPINION", "ORIGIN", "PLOT", "QUOTE", "RELATIONSHIP", "SOUNDTRACK", "YEAR" ] instruction = "please extract entities and their types from the input sentence, " \ "all entity types are in options" text = "are there any good romantic comedies out right now" generation_kwargs = { "num_beams": 2, "max_length": 128 } pred_spans = model.predict( text=text, generation_kwargs=generation_kwargs, instruction=instruction, options=options ) >>> [(19, 36, 'GENRE'), (41, 50, 'YEAR')] ```
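The predicted spans are character offsets into the input text. A small helper (hypothetical, not part of the InstructionNER package) turns them back into entity strings:

```python
def spans_to_entities(text, spans):
    # Each span is (start, end, label) with `end` exclusive,
    # matching the offsets returned by model.predict above.
    return [(text[start:end], label) for start, end, label in spans]

text = "are there any good romantic comedies out right now"
print(spans_to_entities(text, [(19, 36, 'GENRE'), (41, 50, 'YEAR')]))
# [('romantic comedies', 'GENRE'), ('right now', 'YEAR')]
```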
1,321
[ [ -0.0008730888366699219, -0.061248779296875, 0.0288238525390625, 0.01317596435546875, -0.00905609130859375, -0.0263824462890625, -0.009521484375, 0.0090179443359375, -0.0215911865234375, 0.034393310546875, -0.050506591796875, -0.06195068359375, -0.038330078125, ...
csebuetnlp/banglat5_banglaparaphrase
2022-11-05T17:14:38.000Z
[ "transformers", "pytorch", "t5", "text2text-generation", "bn", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
csebuetnlp
null
null
csebuetnlp/banglat5_banglaparaphrase
0
1,189
transformers
2022-10-15T04:19:58
--- language: - bn license: cc-by-nc-sa-4.0 --- # banglat5_banglaparaphrase This repository contains the pretrained checkpoint of the model **BanglaT5** finetuned on the [BanglaParaphrase](https://huggingface.co/datasets/csebuetnlp/BanglaParaphrase) dataset. This is a sequence-to-sequence transformer model pretrained with the "Span Corruption" objective. Finetuned models using this checkpoint achieve competitive results on the dataset. For finetuning and inference, refer to the scripts in the official GitHub repository of [BanglaNLG](https://github.com/csebuetnlp/BanglaNLG). **Note**: This model was pretrained using a specific normalization pipeline available [here](https://github.com/csebuetnlp/normalizer). All finetuning scripts in the official GitHub repository use this normalization by default. If you need to adapt the pretrained model for a different task, make sure the text units are normalized using this pipeline before tokenizing to get the best results. A basic example is given below: ## Using this model in `transformers` ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer from normalizer import normalize # pip install git+https://github.com/csebuetnlp/normalizer model = AutoModelForSeq2SeqLM.from_pretrained("csebuetnlp/banglat5_banglaparaphrase") tokenizer = AutoTokenizer.from_pretrained("csebuetnlp/banglat5_banglaparaphrase", use_fast=False) input_sentence = "" input_ids = tokenizer(normalize(input_sentence), return_tensors="pt").input_ids generated_tokens = model.generate(input_ids) decoded_tokens = tokenizer.batch_decode(generated_tokens)[0] print(decoded_tokens) ``` ## Benchmarks * Supervised fine-tuning | Test Set | Model | sacreBLEU | ROUGE-L | PINC | BERTScore | BERT-iBLEU | | -------- | ----- | --------- | ------- | ---- | --------- | ---------- | | [BanglaParaphrase](https://huggingface.co/datasets/csebuetnlp/BanglaParaphrase) | 
[BanglaT5](https://huggingface.co/csebuetnlp/banglat5)<br>[IndicBART](https://huggingface.co/ai4bharat/IndicBART)<br>[IndicBARTSS](https://huggingface.co/ai4bharat/IndicBARTSS)| 32.8<br>5.60<br>4.90 | 63.58<br>35.61<br>33.66 | 74.40<br>80.26<br>82.10 | 94.80<br>91.50<br>91.10 | 92.18<br>91.16<br>90.95 | | [IndicParaphrase](https://huggingface.co/datasets/ai4bharat/IndicParaphrase) |BanglaT5<br>IndicBART<br>IndicBARTSS| 11.0<br>12.0<br>10.7| 19.99<br>21.58<br>20.59| 74.50<br>76.83<br>77.60| 94.80<br>93.30<br>93.10 | 87.738<br>90.65<br>90.54| The dataset can be found in the link below: * **[BanglaParaphrase](https://huggingface.co/datasets/csebuetnlp/BanglaParaphrase)** ## Citation If you use this model, please cite the following paper: ``` @article{akil2022banglaparaphrase, title={BanglaParaphrase: A High-Quality Bangla Paraphrase Dataset}, author={Akil, Ajwad and Sultana, Najrin and Bhattacharjee, Abhik and Shahriyar, Rifat}, journal={arXiv preprint arXiv:2210.05109}, year={2022} } ```
2,927
[ [ -0.01495361328125, -0.04656982421875, 0.003093719482421875, 0.045166015625, -0.01255035400390625, -0.004657745361328125, -0.011322021484375, -0.00524139404296875, 0.0014247894287109375, 0.017578125, -0.026824951171875, -0.04351806640625, -0.048126220703125, ...
hackathon-pln-es/paraphrase-spanish-distilroberta
2022-04-02T18:33:17.000Z
[ "sentence-transformers", "pytorch", "roberta", "feature-extraction", "sentence-similarity", "transformers", "es", "dataset:hackathon-pln-es/parallel-sentences", "arxiv:2004.09813", "endpoints_compatible", "has_space", "region:us" ]
sentence-similarity
hackathon-pln-es
null
null
hackathon-pln-es/paraphrase-spanish-distilroberta
9
1,188
sentence-transformers
2022-03-30T17:58:23
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers language: - es datasets: - hackathon-pln-es/parallel-sentences widget: - text: "A ver si nos tenemos que poner todos en huelga hasta cobrar lo que queramos." - text: "La huelga es el método de lucha más eficaz para conseguir mejoras en el salario." - text: "Tendremos que optar por hacer una huelga para cobrar lo que queremos." - text: "Queda descartada la huelga aunque no cobremos lo que queramos." --- # paraphrase-spanish-distilroberta This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. We follow a **teacher-student** transfer learning approach to train a `bertin-roberta-base-spanish` model using parallel EN-ES sentence pairs. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["Este es un ejemplo", "Cada oración es transformada"] model = SentenceTransformer('hackathon-pln-es/paraphrase-spanish-distilroberta') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings. 
```python from transformers import AutoTokenizer, AutoModel import torch import torch.nn.functional as F #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ["Este es un ejemplo", "Cada oración es transformada"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('hackathon-pln-es/paraphrase-spanish-distilroberta') model = AutoModel.from_pretrained('hackathon-pln-es/paraphrase-spanish-distilroberta') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Evaluation Results Similarity Evaluation on STS-2017.es-en.txt and STS-2017.es-es.txt (translated manually for evaluation purposes) We measure the semantic textual similarity (STS) between sentence pairs in different languages: ### ES-ES | cosine_pearson | cosine_spearman | manhattan_pearson | manhattan_spearman | euclidean_pearson | 
euclidean_spearman | dot_pearson | dot_spearman | | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | 0.8495 | 0.8579 | 0.8675 | 0.8474 | 0.8676 | 0.8478 | 0.8277 | 0.8258 | ### ES-EN | cosine_pearson | cosine_spearman | manhattan_pearson | manhattan_spearman | euclidean_pearson | euclidean_spearman | dot_pearson | dot_spearman | | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | 0.8344 | 0.8448 | 0.8279 | 0.8168 | 0.8282 | 0.8159 | 0.8083 | 0.8145 | ------ ## Intended uses Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. ## Background This model is a bilingual Spanish-English model trained according to instructions in the paper [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/pdf/2004.09813.pdf) and the [documentation](https://www.sbert.net/examples/training/multilingual/README.html) accompanying its companion python package. We have used the strongest available pretrained English Bi-Encoder ([paraphrase-mpnet-base-v2](https://www.sbert.net/docs/pretrained_models.html#sentence-embedding-models)) as a teacher model, and the pretrained Spanish [BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) as the student model. We developed this model during the [Hackathon 2022 NLP - Spanish](https://somosnlp.org/hackathon), organized by the hackathon-pln-es organization. ### Training data We used a concatenation of multiple datasets with sentence pairs (EN-ES). 
You can check out the dataset used during training: [parallel-sentences](https://huggingface.co/datasets/hackathon-pln-es/parallel-sentences) | Dataset | |--------------------------------------------------------| | AllNLI - ES (SNLI + MultiNLI)| | EuroParl | | JW300 | | News Commentary | | Open Subtitles | | TED 2020 | | Tatoeba | | WikiMatrix | ## Authors - [Anibal Pérez](https://huggingface.co/Anarpego), - [Emilio Tomás Ariza](https://huggingface.co/medardodt), - [Lautaro Gesuelli Pinto](https://huggingface.co/lautaro) - [Mauricio Mazuecos](https://huggingface.co/mmazuecos)
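The cosine_* columns in the evaluation tables above are correlations of human similarity judgments with cosine similarity between sentence embeddings. As a minimal sketch of the underlying score (the `cosine_similarity_matrix` helper is illustrative, not part of sentence-transformers):

```python
import torch
import torch.nn.functional as F

def cosine_similarity_matrix(embeddings: torch.Tensor) -> torch.Tensor:
    # embeddings: (n, d). L2-normalize the rows, then a matrix product
    # yields the (n, n) pairwise cosine similarities used in STS evaluation.
    normalized = F.normalize(embeddings, p=2, dim=1)
    return normalized @ normalized.T

emb = torch.tensor([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(cosine_similarity_matrix(emb))
```

The same computation applies to the embeddings produced by either usage snippet above.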
6,150
[ [ -0.0203857421875, -0.0592041015625, 0.02801513671875, 0.03509521484375, -0.02532958984375, -0.01605224609375, -0.024200439453125, -0.00740814208984375, 0.019683837890625, 0.017333984375, -0.03509521484375, -0.046142578125, -0.052001953125, 0.0224151611328125...
MaartenGr/BERTopic_Wikipedia
2023-05-31T17:58:03.000Z
[ "bertopic", "text-classification", "region:us" ]
text-classification
MaartenGr
null
null
MaartenGr/BERTopic_Wikipedia
13
1,188
bertopic
2023-05-30T08:03:05
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---

# Wikipedia

This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model. BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.

* Trained on ~1_000_000 Wikipedia pages (first paragraph of each page).
* Data was retrieved from: https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings

## Usage

To use this model, please install BERTopic:

```
pip install -U bertopic
pip install -U safetensors
```

You can use the model as follows:

```python
from bertopic import BERTopic
topic_model = BERTopic.load("MaartenGr/BERTopic_Wikipedia")
topic_model.get_topic_info()
```

## Topics 2D

The top 50 topics visualized and reduced to 2-dimensional space using cuML's UMAP:

!["visualization.png"](visualization.png)

To generate this image, you can follow along with this tutorial: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1W7aEdDPxC29jP99GGZphUlqjMFFVKtBC?usp=sharing)

## Topic overview

* Number of topics: 2377
* Number of training documents: 1000001

<details>
<summary>Click here for an overview of all topics.</summary>

| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | cast - films - film - movie - 2020 | 30 | -1_cast_films_film_movie |
| 0 | goalscorer - scored - goals - goal - goalkeeper | 633881 | 0_goalscorer_scored_goals_goal |
| 1 | khan - actor - raj - shah - crore | 18441 | 1_khan_actor_raj_shah |
| 2 | married - divorced - couple - remarried - engaged | 8518 | 2_married_divorced_couple_remarried |
| 3 | cast - actress - starred - actor - actors | 7521 | 3_cast_actress_starred_actor |
| 4 | wrestle - reigns - wrestled - rumble - feud | 6765 | 4_wrestle_reigns_wrestled_rumble |
| 5 | cuisine - cuisines - foods - culinary - meal | 5785 | 5_cuisine_cuisines_foods_culinary |
| 6 | rebounds - harden - assists - scoring - wade | 5001 | 6_rebounds_harden_assists_scoring |
| 7 | touchdowns - interceptions - quarterback - touchdown - fumble | 4238 | 7_touchdowns_interceptions_quarterback_touchdown |
| 8 | goddesses - goddess - mythology - deities - gods | 3597 | 8_goddesses_goddess_mythology_deities |
| 9 | reelection - election - republican - elections - electoral | 3354 | 9_reelection_election_republican_elections |
| 10 | middleweight - fights - punches - welterweight - heavyweight | 3133 | 10_middleweight_fights_punches_welterweight |
| 11 | hitter - hitters - inning - outfielder - batting | 2951 | 11_hitter_hitters_inning_outfielder |
| 12 | yoga - sutra - sutras - meditation - dharma | 2768 | 12_yoga_sutra_sutras_meditation |
| 13 | missile - missiles - aircraft - squadrons - fighter | 2686 | 13_missile_missiles_aircraft_squadrons |
| 14 | chassis - vehicle - wheelbase - gearbox - sedan | 2653 | 14_chassis_vehicle_wheelbase_gearbox |
| 15 | grace - rob - nick - anna - house | 2421 | 15_grace_rob_nick_anna |
| 16 | chlorine - chloride - hydrochloric - hydroxide - corrosion | 2362 | 16_chlorine_chloride_hydrochloric_hydroxide |
| 17 | planets - galaxies - planetary - astronomers - dwarf | 2354 | 17_planets_galaxies_planetary_astronomers |
| 18 | matrices - matrix - transpose - eigenvector - multiplication | 2234 | 18_matrices_matrix_transpose_eigenvector |
| 19 | rifle - rifles - firearm - firearms - ammunition | 2191 | 19_rifle_rifles_firearm_firearms |
| 20 | campuses - colleges - campus - undergraduates - college | 2172 | 20_campuses_colleges_campus_undergraduates |
| 21 | renewed - seasons - season - airing - 2023 | 2020 | 21_renewed_seasons_season_airing |
| 22 | climates - climate - weather - temperatures - temperature | 1992 | 22_climates_climate_weather_temperatures |
| 23 | benzodiazepines - benzodiazepine - antidepressants - antidepressant - diazepam | 1900 | 23_benzodiazepines_benzodiazepine_antidepressants_antidepressant |
| 24 | consoles - console - gaming - platform - gamepad | 1765 | 24_consoles_console_gaming_platform |
| 25 | vowel - vowels - consonants - consonant - diacritics | 1716 | 25_vowel_vowels_consonants_consonant |
| 26 | heir - throne - nobility - eldest - 1536 | 1660 | 26_heir_throne_nobility_eldest |
| 27 | 737 - airline - airliner - airlines - airliners | 1592 | 27_737_airline_airliner_airlines |
| 28 | thermodynamic - thermodynamics - entropy - equilibrium - temperature | 1552 | 28_thermodynamic_thermodynamics_entropy_equilibrium |
| 29 | venom - marvel - spider - doom - carnage | 1550 | 29_venom_marvel_spider_doom |
| 30 | tales - folktales - tale - fairy - stories | 1541 | 30_tales_folktales_tale_fairy |
| 31 | caesar - augustus - roman - consul - consuls | 1540 | 31_caesar_augustus_roman_consul |
| 32 | gospels - testament - disciples - apostle - apostles | 1496 | 32_gospels_testament_disciples_apostle |
| 33 | banks - banking - bank - mortgages - finance | 1368 | 33_banks_banking_bank_mortgages |
| 34 | variance - variances - distribution - distributions - statistics | 1360 | 34_variance_variances_distribution_distributions |
| 35 | prix - motorsport - raced - racing - qualifying | 1358 | 35_prix_motorsport_raced_racing |
| 36 | filmed - filming - footage - photography - cinematographer | 1331 | 36_filmed_filming_footage_photography |
| 37 | reactors - reactor - nuclear - fission - fissions | 1324 | 37_reactors_reactor_nuclear_fission |
| 38 | mixtape - rapper - thug - mixtapes - rap | 1307 | 38_mixtape_rapper_thug_mixtapes |
| 39 | khan - sheikh - maharaja - minister - appointed | 1271 | 39_khan_sheikh_maharaja_minister |
| 40 | symphonies - concertos - symphonic - composers - concerto | 1255 | 40_symphonies_concertos_symphonic_composers |
| 41 | lightsaber - lightsabers - prequels - prequel - han | 1222 | 41_lightsaber_lightsabers_prequels_prequel |
| 42 | ants - insects - arachnids - arthropods - spiders | 1215 | 42_ants_insects_arachnids_arthropods |
| 43 | psychiatric - psychosis - disorders - schizophrenia - disorder | 1198 | 43_psychiatric_psychosis_disorders_schizophrenia |
| 44 | unionists - unionist - nationalists - loyalist - protestant | 1130 | 44_unionists_unionist_nationalists_loyalist |
| 45 | renewable - renewables - hydroelectricity - hydroelectric - energy | 1077 | 45_renewable_renewables_hydroelectricity_hydroelectric |
| 46 | eruptions - volcano - volcanoes - eruption - volcanic | 1076 | 46_eruptions_volcano_volcanoes_eruption |
| 47 | 9million - 6million - 5million - 8million - 2million | 1048 | 47_9million_6million_5million_8million |
| 48 | albums - songs - rapper - album - hyun | 1046 | 48_albums_songs_rapper_album |
| 49 | nazi - ss - 1934 - 1938 - 1930 | 1021 | 49_nazi_ss_1934_1938 |
| 50 | broadcasters - broadcasting - broadcasts - televised - broadcast | 1018 | 50_broadcasters_broadcasting_broadcasts_televised |
| 51 | rpg - gaming - games - gamer - fantasy | 997 | 51_rpg_gaming_games_gamer |
| 52 | vogue - magazine - glamour - magazines - playboy | 927 | 52_vogue_magazine_glamour_magazines |
| 53 | comedian - primetime - night - comedians - podcast | 920 | 53_comedian_primetime_night_comedians |
| 54 | collegiate - colleges - conferences - conference - intercollegiate | 908 | 54_collegiate_colleges_conferences_conference |
| 55 | candidacy - candidate - candidates - presidential - presidency | 901 | 55_candidacy_candidate_candidates_presidential |
| 56 | bond - royale - spectre - pierce - spy | 901 | 56_bond_royale_spectre_pierce |
| 57 | band - frontman - bassist - vocalist - toured | 894 | 57_band_frontman_bassist_vocalist |
| 58 | joker - superman - superhero - comics - comic | 891 | 58_joker_superman_superhero_comics |
| 59 | airport - airports - airlines - airline - terminals | 878 | 59_airport_airports_airlines_airline |
| 60 | communists - yuan - yang - communist - politburo | 875 | 60_communists_yuan_yang_communist |
| 61 | titanic - lifeboat - lifeboats - sank - ships | 839 | 61_titanic_lifeboat_lifeboats_sank |
| 62 | dynasty - emperor - dynasties - yang - yuan | 838 | 62_dynasty_emperor_dynasties_yang |
| 63 | breeds - terrier - terriers - shepherd - dachshunds | 833 | 63_breeds_terrier_terriers_shepherd |
| 64 | rating - rated - cinematography - film - screenplay | 824 | 64_rating_rated_cinematography_film |
| 65 | protestant - catholic - churches - evangelicals - denominational | 818 | 65_protestant_catholic_churches_evangelicals |
| 66 | interstates - interstate - highways - freeway - turnpike | 802 | 66_interstates_interstate_highways_freeway |
| 67 | nationalists - secession - separatism - republics - nationalism | 780 | 67_nationalists_secession_separatism_republics |
| 68 | yoon - hyun - jung - jae - hyung | 771 | 68_yoon_hyun_jung_jae |
| 69 | confederation - 1867 - 1814 - 1871 - emperor | 770 | 69_confederation_1867_1814_1871 |
| 70 | shah - khan - dynasty - dynasties - sultanate | 764 | 70_shah_khan_dynasty_dynasties |
| 71 | airlines - airline - alliance - airways - flights | 763 | 71_airlines_airline_alliance_airways |
| 72 | flash - storage - memory - gigabyte - devices | 763 | 72_flash_storage_memory_gigabyte |
| 73 | constituencies - constituency - election - elections - candidates | 724 | 73_constituencies_constituency_election_elections |
| 74 | constituencies - constituency - elections - election - candidates | 701 | 74_constituencies_constituency_elections_election |
| 75 | boxer - heavyweight - middleweight - heavyweights - foreman | 695 | 75_boxer_heavyweight_middleweight_heavyweights |
| 76 | programming - python - compilers - compiler - languages | 686 | 76_programming_python_compilers_compiler |
| 77 | mafia - gangster - indictment - gangsters - indicted | 684 | 77_mafia_gangster_indictment_gangsters |
| 78 | caliph - ibn - caliphs - caliphate - caliphates | 676 | 78_caliph_ibn_caliphs_caliphate |
| 79 | manga - shonen - shōnen - anime - chapters | 676 | 79_manga_shonen_shōnen_anime |
| 80 | warships - frigates - warship - frigate - battleships | 651 | 80_warships_frigates_warship_frigate |
| 81 | heterosexuality - bisexuality - homosexual - heterosexual - heterosexuals | 647 | 81_heterosexuality_bisexuality_homosexual_heterosexual |
| 82 | released - 2021 - releases - 20th - 2022 | 642 | 82_released_2021_releases_20th |
| 83 | marvel - spider - marvels - avenger - superhero | 629 | 83_marvel_spider_marvels_avenger |
| 84 | tennis - quarterfinal - semifinals - tournaments - quarterfinals | 622 | 84_tennis_quarterfinal_semifinals_tournaments |
| 85 | heir - throne - kingdom - kingdoms - king | 615 | 85_heir_throne_kingdom_kingdoms |
| 86 | poker - betting - gambling - casino - bets | 598 | 86_poker_betting_gambling_casino |
| 87 | soundtrack - soundtracks - musical - symphony - instrumental | 596 | 87_soundtrack_soundtracks_musical_symphony |
| 88 | consent - rape - minors - statutory - age | 592 | 88_consent_rape_minors_statutory |
| 89 | 1860 - 1852 - 1832 - reelection - confederate | 591 | 89_1860_1852_1832_reelection |
| 90 | trek - showrunner - starship - showrunners - starships | 587 | 90_trek_showrunner_starship_showrunners |
| 91 | wickets - batsman - cricketer - wicket - cricket | 585 | 91_wickets_batsman_cricketer_wicket |
| 92 | heir - duchess - eldest - prince - royal | 578 | 92_heir_duchess_eldest_prince |
| 93 | goaltender - puck - goalie - hockey - ahl | 576 | 93_goaltender_puck_goalie_hockey |
| 94 | keyboards - keyboard - keypad - diacritics - alphabet | 560 | 94_keyboards_keyboard_keypad_diacritics |
| 95 | cartel - cartels - narco - trafficking - los | 558 | 95_cartel_cartels_narco_trafficking |
| 96 | yang - yin - rituals - religions - shamanism | 540 | 96_yang_yin_rituals_religions |
| 97 | astrology - astrological - zodiac - zodiacal - astrologers | 535 | 97_astrology_astrological_zodiac_zodiacal |
| 98 | goddesses - stanzas - mythology - stanza - valkyries | 528 | 98_goddesses_stanzas_mythology_stanza |
| 99 | rating - critics - reviews - review - rotten | 528 | 99_rating_critics_reviews_review |
| 100 | dynasties - subcontinent - dharma - dynasty - ancient | 524 | 100_dynasties_subcontinent_dharma_dynasty |
| 101 | dinosaur - fossil - dinosaurs - fossils - tyrannosaurids | 523 | 101_dinosaur_fossil_dinosaurs_fossils |
| 102 | folkloric - folk - genres - traditional - folklore | 514 | 102_folkloric_folk_genres_traditional |
| 103 | climber - climbers - mountaineering - climb - climbed | 511 | 103_climber_climbers_mountaineering_climb |
| 104 | contestant - contestants - finalists - idol - finalist | 511 | 104_contestant_contestants_finalists_idol |
| 105 | proteins - amino - protein - peptide - enzymes | 508 | 105_proteins_amino_protein_peptide |
| 106 | battleships - battleship - naval - torpedoes - warships | 506 | 106_battleships_battleship_naval_torpedoes |
| 107 | anthrax - slayer - thrash - bands - band | 505 | 107_anthrax_slayer_thrash_bands |
| 108 | swift - songwriting - songwriter - songwriters - songs | 504 | 108_swift_songwriting_songwriter_songwriters |
| 109 | airplane - airlines - flight - aircraft - aviation | 498 | 109_airplane_airlines_flight_aircraft |
| 110 | paintings - painters - painter - cubism - cubist | 496 | 110_paintings_painters_painter_cubism |
| 111 | flags - flag - flagpole - commonwealth - emblem | 493 | 111_flags_flag_flagpole_commonwealth |
| 112 | cult - cruise - organizations - founder - organization | 481 | 112_cult_cruise_organizations_founder |
| 113 | calendar - calendars - dates - calendrical - equinoxes | 481 | 113_calendar_calendars_dates_calendrical |
| 114 | counties - county - population - populous - cities | 474 | 114_counties_county_population_populous |
| 115 | degree - bachelor - diplomas - doctorates - diploma | 474 | 115_degree_bachelor_diplomas_doctorates |
| 116 | spying - espionage - surveillance - spied - disclosures | 472 | 116_spying_espionage_surveillance_spied |
| 117 | schooling - education - educational - kindergarten - curriculum | 471 | 117_schooling_education_educational_kindergarten |
| 118 | railway - railways - autobahns - autobahn - trains | 470 | 118_railway_railways_autobahns_autobahn |
| 119 | laden - jihadi - mujahideen - jihadis - al | 451 | 119_laden_jihadi_mujahideen_jihadis |
| 120 | theatre - venue - venues - theater - orchestras | 450 | 120_theatre_venue_venues_theater |
| 121 | earthquake - earthquakes - tsunami - tsunamis - quakes | 450 | 121_earthquake_earthquakes_tsunami_tsunamis |
| 122 | superman - superhero - comics - sequels - joker | 446 | 122_superman_superhero_comics_sequels |
| 123 | dodge - automakers - truck - automotive - trucks | 431 | 123_dodge_automakers_truck_automotive |
| 124 | election - elections - candidates - candidate - voters | 431 | 124_election_elections_candidates_candidate |
| 125 | broadway - musicals - musical - theatre - theater | 422 | 125_broadway_musicals_musical_theatre |
| 126 | whales - whale - whaling - cetaceans - cetacean | 422 | 126_whales_whale_whaling_cetaceans |
| 127 | potter - potters - wizard - wizardry - wizarding | 419 | 127_potter_potters_wizard_wizardry |
| 128 | starship - spaceflight - spacecraft - shuttle - astronauts | 417 | 128_starship_spaceflight_spacecraft_shuttle |
| 129 | pol - communists - rouge - soviet - communist | 412 | 129_pol_communists_rouge_soviet |
| 130 | tombstone - corral - stagecoach - outlaw - outlaws | 403 | 130_tombstone_corral_stagecoach_outlaw |
| 131 | tennis - competed - doubles - slams - finalist | 401 | 131_tennis_competed_doubles_slams |
| 132 | lunar - moon - astronaut - astronauts - spacecraft | 399 | 132_lunar_moon_astronaut_astronauts |
| 133 | hamlet - playwright - actor - cast - acting | 391 | 133_hamlet_playwright_actor_cast |
| 134 | angels - archangels - archangel - angelic - angel | 384 | 134_angels_archangels_archangel_angelic |
| 135 | labia - labial - lips - clitoris - vulval | 378 | 135_labia_labial_lips_clitoris |
| 136 | jerseys - uniforms - 49ers - colors - helmets | 376 | 136_jerseys_uniforms_49ers_colors |
| 137 | linguistics - languages - linguist - linguistic - language | 376 | 137_linguistics_languages_linguist_linguistic |
| 138 | foxes - coyotes - coyote - mammals - fox | 376 | 138_foxes_coyotes_coyote_mammals |
| 139 | tiger - tigers - species - lion - wildlife | 374 | 139_tiger_tigers_species_lion |
| 140 | panzer - soviets - infantry - 1944 - artillery | 371 | 140_panzer_soviets_infantry_1944 |
| 141 | hamlet - playwright - playwrights - tempest - soliloquy | 370 | 141_hamlet_playwright_playwrights_tempest |
| 142 | potter - sorcerer - wizard - screenwriter - cast | 366 | 142_potter_sorcerer_wizard_screenwriter |
| 143 | rating - critics - reviews - review - rotten | 366 | 143_rating_critics_reviews_review |
| 144 | pepper - concert - albums - songs - album | 362 | 144_pepper_concert_albums_songs |
| 145 | pope - papal - papacy - pontifical - popes | 358 | 145_pope_papal_papacy_pontifical |
| 146 | unions - union - unionism - unionized - unionization | 356 | 146_unions_union_unionism_unionized |
| 147 | cardiovascular - cardiomyopathy - cardiac - hypertension - myocardial | 355 | 147_cardiovascular_cardiomyopathy_cardiac_hypertension |
| 148 | helicopters - missiles - helicopter - squadrons - insurgents | 354 | 148_helicopters_missiles_helicopter_squadrons |
| 149 | shah - khan - dynasty - deposed - dictator | 352 | 149_shah_khan_dynasty_deposed |
| 150 | waters - concert - tour - pink - wall | 351 | 150_waters_concert_tour_pink |
| 151 | voyages - voyage - 1493 - explorers - expeditions | 345 | 151_voyages_voyage_1493_explorers |
| 152 | spartan - rebelled - battle - besieged - victories | 343 | 152_spartan_rebelled_battle_besieged |
| 153 | kanji - hiragana - pinyin - kun - katakana | 343 | 153_kanji_hiragana_pinyin_kun |
| 154 | rings - ring - shire - hobbit - elves | 341 | 154_rings_ring_shire_hobbit |
| 155 | confederates - confederate - confederacy - 1863 - 1861 | 339 | 155_confederates_confederate_confederacy_1863 |
| 156 | mafia - gangs - cartels - cartel - syndicate | 336 | 156_mafia_gangs_cartels_cartel |
| 157 | apartheid - decolonisation - 1979 - smith - nationalists | 332 | 157_apartheid_decolonisation_1979_smith |
| 158 | fascism - fascist - italiana - fascists - nationalist | 330 | 158_fascism_fascist_italiana_fascists |
| 159 | windows - vista - os - pc - versions | 329 | 159_windows_vista_os_pc |
| 160 | chrome - browser - browsers - chromium - safari | 328 | 160_chrome_browser_browsers_chromium |
| 161 | literacy - population - castes - literate - census | 323 | 161_literacy_population_castes_literate |
| 162 | pip - miss - orphan - carol - protagonist | 321 | 162_pip_miss_orphan_carol |
| 163 | ruby - assassination - assassinated - assassinate - warren | 319 | 163_ruby_assassination_assassinated_assassinate |
| 164 | soviets - revolutionaries - soviet - 1917 - socialists | 316 | 164_soviets_revolutionaries_soviet_1917 |
| 165 | twitter - tweets - tweet - microblogging - retweet | 315 | 165_twitter_tweets_tweet_microblogging |
| 166 | sai - shakti - marries - revenge - pooja | 315 | 166_sai_shakti_marries_revenge |
| 167 | quarks - quark - particles - protons - bosons | 314 | 167_quarks_quark_particles_protons |
| 168 | polypropylene - polymers - polymer - polyethylene - polymerization | 314 | 168_polypropylene_polymers_polymer_polyethylene |
| 169 | bourbon - 1685 - 1643 - heir - 1598 | 313 | 169_bourbon_1685_1643_heir |
| 170 | cartoons - goofy - cartoon - bunny - hare | 313 | 170_cartoons_goofy_cartoon_bunny |
| 171 | mountains - mountain - plains - topography - southwestern | 312 | 171_mountains_mountain_plains_topography |
| 172 | epic - developers - studio - studios - blizzard | 311 | 172_epic_developers_studio_studios |
| 173 | sergeant - lieutenants - sergeants - lieutenant - ranks | 309 | 173_sergeant_lieutenants_sergeants_lieutenant |
| 174 | yoon - jong - hyun - jae - jung | 307 | 174_yoon_jong_hyun_jae |
| 175 | villa - rebelled - barrios - rebellion - generals | 304 | 175_villa_rebelled_barrios_rebellion |
| 176 | animator - animators - animation - animating - animated | 303 | 176_animator_animators_animation_animating |
| 177 | dementia - dementias - neurodegenerative - parkinsonism - impairment | 303 | 177_dementia_dementias_neurodegenerative_parkinsonism |
| 178 | doctor - doctors - dr - actor - tenth | 302 | 178_doctor_doctors_dr_actor |
| 179 | counties - midlands - county - boroughs - district | 301 | 179_counties_midlands_county_boroughs |
| 180 | philosopher - philosophy - platonic - philosophers - stoicism | 300 | 180_philosopher_philosophy_platonic_philosophers |
| 181 | neural - neuron - neurons - convolutions - backpropagation | 299 | 181_neural_neuron_neurons_convolutions |
| 182 | vaccines - vaccine - vaccination - vaccinated - vaccinate | 298 | 182_vaccines_vaccine_vaccination_vaccinated |
| 183 | kickboxing - sparring - boxing - jitsu - karate | 293 | 183_kickboxing_sparring_boxing_jitsu |
| 184 | payments - card - payment - cardholder - cardholders | 287 | 184_payments_card_payment_cardholder |
| 185 | cathedrals - cathedral - arches - arched - vaults | 282 | 185_cathedrals_cathedral_arches_arched |
| 186 | visual - studios - animation - filming - actors | 282 | 186_visual_studios_animation_filming |
| 187 | psychoanalytical - psychoanalysts - psychoanalysis - psychoanalytic - psychoanalyst | 281 | 187_psychoanalytical_psychoanalysts_psychoanalysis_psychoanalytic |
| 188 | novels - novelists - novelist - sensibility - 1818 | 280 | 188_novels_novelists_novelist_sensibility |
| 189 | medieval - grail - knights - tales - knight | 278 | 189_medieval_grail_knights_tales |
| 190 | uniforms - jerseys - uniform - logos - blazers | 277 | 190_uniforms_jerseys_uniform_logos |
| 191 | cookies - cookie - http - session - browsers | 277 | 191_cookies_cookie_http_session |
| 192 | polygamous - polygamy - polyamory - polygamists - monogamous | 277 | 192_polygamous_polygamy_polyamory_polygamists |
| 193 | languages - speak - dialects - language - linguists | 275 | 193_languages_speak_dialects_language |
| 194 | 1830s - tribe - tribes - confederate - natives | 274 | 194_1830s_tribe_tribes_confederate |
| 195 | equilibria - equilibrium - strategic - strategies - strategy | 269 | 195_equilibria_equilibrium_strategic_strategies |
| 196 | firearm - firearms - handgun - handguns - guns | 268 | 196_firearm_firearms_handgun_handguns |
| 197 | kong - monster - monsters - franchise - sequel | 266 | 197_kong_monster_monsters_franchise |
| 198 | murders - murdered - murderers - convicted - defendants | 264 | 198_murders_murdered_murderers_convicted |
| 199 | homer - sitcom - cartoon - sitcoms - showrunner | 263 | 199_homer_sitcom_cartoon_sitcoms |
| 200 | alleging - accused - alleged - defamation - allegations | 262 | 200_alleging_accused_alleged_defamation |
| 201 | delegates - presidential - nominee - primaries - presidency | 261 | 201_delegates_presidential_nominee_primaries |
| 202 | probation - misdemeanor - arrested - arrest - jail | 258 | 202_probation_misdemeanor_arrested_arrest |
| 203 | fender - guitars - guitar - acoustic - amplifiers | 258 | 203_fender_guitars_guitar_acoustic |
| 204 | trafficking - prostitution - prostitutes - prostitute - brothels | 257 | 204_trafficking_prostitution_prostitutes_prostitute |
| 205 | isotopes - isotope - elements - neutron - neutrons | 257 | 205_isotopes_isotope_elements_neutron |
| 206 | meth - methamphetamine - cocaine - drug - methylamine | 256 | 206_meth_methamphetamine_cocaine_drug |
| 207 | channel - channels - tv - simulcast - streaming | 256 | 207_channel_channels_tv_simulcast |
| 208 | frescoes - chapel - pope - basilica - sculptor | 252 | 208_frescoes_chapel_pope_basilica |
| 209 | armistice - 1944 - française - 1945 - 1940 | 251 | 209_armistice_1944_française_1945 |
| 210 | novelist - novels - novel - novelists - literature | 251 | 210_novelist_novels_novel_novelists |
| 211 | 1936 - fascism - fascist - nationalists - nationalist | 251 | 211_1936_fascism_fascist_nationalists |
| 212 | apple - 6s - smartphones - smartphone - phones | 248 | 212_apple_6s_smartphones_smartphone |
| 213 | showrunner - episode - showrunners - episodes - primetime | 248 | 213_showrunner_episode_showrunners_episodes |
| 214 | gemstones - gemstone - sapphires - sapphire - diamond | 247 | 214_gemstones_gemstone_sapphires_sapphire |
| 215 | emperors - emperor - roman - empire - augustus | 246 | 215_emperors_emperor_roman_empire |
| 216 | cavalry - legions - armies - battle - battles | 246 | 216_cavalry_legions_armies_battle |
| 217 | 1649 - royalist - 1685 - royalists - 1640 | 246 | 217_1649_royalist_1685_royalists |
| 218 | orgasms - orgasm - clitoris - clitoral - stimulation | 245 | 218_orgasms_orgasm_clitoris_clitoral |
| 219 | glaucoma - retinopathy - blindness - retinal - cataract | 242 | 219_glaucoma_retinopathy_blindness_retinal |
| 220 | novels - novelist - novel - literature - literary | 241 | 220_novels_novelist_novel_literature |
| 221 | artillery - trenches - fortifications - bombardment - bombardments | 240 | 221_artillery_trenches_fortifications_bombardment |
| 222 | beach - drums - albums - songs - drumming | 239 | 222_beach_drums_albums_songs |
| 223 | nouveau - paintings - designers - façades - facades | 237 | 223_nouveau_paintings_designers_façades |
| 224 | maya - civilizations - archaeological - archeological - civilization | 236 | 224_maya_civilizations_archaeological_archeological |
| 225 | taekwondo - tae - karate - jitsu - martial | 235 | 225_taekwondo_tae_karate_jitsu |
| 226 | rocky - creed - sequel - boxer - film | 233 | 226_rocky_creed_sequel_boxer |
| 227 | assassins - creed - assassin - brotherhood - gameplay | 231 | 227_assassins_creed_assassin_brotherhood |
| 228 | bp - petroleum - refinery - offshore - companies | 231 | 228_bp_petroleum_refinery_offshore |
| 229 | minorities - ethnicity - ethnic - ethnically - census | 231 | 229_minorities_ethnicity_ethnic_ethnically |
| 230 | baptism - baptisms - baptismal - baptized - baptised | 230 | 230_baptism_baptisms_baptismal_baptized |
| 231 | bighorn - 1876 - bull - elk - tribes | 229 | 231_bighorn_1876_bull_elk |
| 232 | psychotic - psychiatric - schizophrenia - psychiatry - sane | 227 | 232_psychotic_psychiatric_schizophrenia_psychiatry |
| 233 | mexicana - latin - salsa - vida - una | 227 | 233_mexicana_latin_salsa_vida |
| 234 | abortion - abortions - roe - unconstitutional - overturned | 225 | 234_abortion_abortions_roe_unconstitutional |
| 235 | toy - toys - sequels - sequel - animator | 225 | 235_toy_toys_sequels_sequel |
| 236 | euthanasia - suicide - legalised - suicides - suicidal | 225 | 236_euthanasia_suicide_legalised_suicides |
| 237 | chan - kung - chang - kong - karate | 221 | 237_chan_kung_chang_kong |
| 238 | protesting - activism - protests - protest - rallies | 220 | 238_protesting_activism_protests_protest |
| 239 | tribes - tribe - natives - upstate - tribal | 219 | 239_tribes_tribe_natives_upstate |
| 240 | toured - concert - concerts - drums - vocals | 219 | 240_toured_concert_concerts_drums |
| 241 | nam - communists - insurgency - guerrilla - troops | 219 | 241_nam_communists_insurgency_guerrilla |
| 242 | election - conservatives - liberal - liberals - partisanship | 219 | 242_election_conservatives_liberal_liberals |
| 243 | chess - grandmaster - grandmasters - blitz - tournament | 219 | 243_chess_grandmaster_grandmasters_blitz |
| 244 | radio - fm - stations - station - simulcasts | 218 | 244_radio_fm_stations_station |
| 245 | awards - nominated - nominations - screenplay - cinematography | 218 | 245_awards_nominated_nominations_screenplay |
| 246 | bombing - bomber - bombers - bombed - bombs | 218 | 246_bombing_bomber_bombers_bombed |
| 247 | diesel - fuels - engines - combustion - petrol | 218 | 247_diesel_fuels_engines_combustion |
| 248 | species - wildlife - fauna - birds - endangered | 217 | 248_species_wildlife_fauna_birds |
| 249 | extraterrestrial - sightings - aliens - sighting - hoaxes | 216 | 249_extraterrestrial_sightings_aliens_sighting |
| 250 | tick - ticks - burgdorferi - pathogens - infected | 215 | 250_tick_ticks_burgdorferi_pathogens |
| 251 | congregational - denominational - congregations - evangelicalism - denomination | 215 | 251_congregational_denominational_congregations_evangelicalism |
| 252 | lymphatic - lymph - gallbladder - organs - capillaries | 215 | 252_lymphatic_lymph_gallbladder_organs |
| 253 | chemotherapy - treatments - cancer - cancers - radiotherapy | 215 | 253_chemotherapy_treatments_cancer_cancers |
| 254 | creole - creoles - lingua - bilingual - dialects | 214 | 254_creole_creoles_lingua_bilingual |
| 255 | princess - duchess - prince - countess - royal | 211 | 255_princess_duchess_prince_countess |
| 256 | insurrection - revolt - 1821 - 1829 - uprising | 210 | 256_insurrection_revolt_1821_1829 |
| 257 | charities - charity - donations - philanthropist - fundraising | 209 | 257_charities_charity_donations_philanthropist |
| 258 | alien - predator - aliens - sequels - extraterrestrial | 209 | 258_alien_predator_aliens_sequels |
| 259 | condor - dictators - declassified - dictatorships - chile | 208 | 259_condor_dictators_declassified_dictatorships |
| 260 | inflation - inflationary - macroeconomics - macroeconomic - recessions | 207 | 260_inflation_inflationary_macroeconomics_macroeconomic |
| 261 | warlock - infinity - eternity - gems - marvel | 206 | 261_warlock_infinity_eternity_gems |
| 262 | bbc - channel - simulcast - channels - broadcasting | 205 | 262_bbc_channel_simulcast_channels |
| 263 | eu - eurozone - euro - countries - borders | 205 | 263_eu_eurozone_euro_countries |
| 264 | sonic - hedgehog - hedgehogs - tails - knuckles | 205 | 264_sonic_hedgehog_hedgehogs_tails |
| 265 | battleships - torpedoed - torpedoes - torpedo - battleship | 203 | 265_battleships_torpedoed_torpedoes_torpedo |
| 266 | hurricane - hurricanes - storms - cyclones - cyclone | 200 | 266_hurricane_hurricanes_storms_cyclones |
| 267 | concert - concerts - tour - albums - toured | 200 | 267_concert_concerts_tour_albums |
| 268 | shōgun - shogun - samurai - daimyō - daimyo | 199 | 268_shōgun_shogun_samurai_daimyō |
| 269 | electrodes - electroluminescent - electrode - phosphors - displays | 199 | 269_electrodes_electroluminescent_electrode_phosphors |
| 270 | brigades - soldiers - reinforcements - troops - casualties | 199 | 270_brigades_soldiers_reinforcements_troops |
| 271 | presidency - populist - presidential - candidate - candidates | 199 | 271_presidency_populist_presidential_candidate |
| 272 | heraldic - heraldry - gules - arms - garter | 198 | 272_heraldic_heraldry_gules_arms |
| 273 | refrigerants - refrigeration - refrigerant - refrigerator - condenser | 198 | 273_refrigerants_refrigeration_refrigerant_refrigerator |
| 274 | bee - sang - singer - songwriter - artists | 198 | 274_bee_sang_singer_songwriter |
| 275 | thrones - novels - dragons - paperback - novel | 198 | 275_thrones_novels_dragons_paperback |
| 276 | festivals - festival - celebrated - celebrations - festivities | 198 | 276_festivals_festival_celebrated_celebrations |
| 277 | branch - fires - fired - deaths - wounded | 197 | 277_branch_fires_fired_deaths |
| 278 | pasha - turkey - sultan - sultanate - nationalists | 197 | 278_pasha_turkey_sultan_sultanate |
| 279 | neanderthalensis - paleolithic - sapiens - erectus - ancestor | 196 | 279_neanderthalensis_paleolithic_sapiens_erectus |
| 280 | mujahideen - laden - militants - insurgency - jihad | 195 | 280_mujahideen_laden_militants_insurgency |
| 281 | shogun - shōgun - shogunate - samurai - daimyō | 194 | 281_shogun_shōgun_shogunate_samurai |
| 282 | hypothyroidism - hyperthyroidism - thyroid - thyroiditis - thyroidectomy | 194 | 282_hypothyroidism_hyperthyroidism_thyroid_thyroiditis |
| 283 | mythos - tales - author - authors - writer | 193 | 283_mythos_tales_author_authors |
| 284 | contest - contests - qualifying - winners - competed | 192 | 284_contest_contests_qualifying_winners |
| 285 | impeachment - prosecutor - prosecutors - trump - prosecutorial | 192 | 285_impeachment_prosecutor_prosecutors_trump |
| 286 | intelligence - personality - traits - trait - psychometric | 192 | 286_intelligence_personality_traits_trait |
| 287 | terminator - sequels - sequel - prequel - trilogy | 191 | 287_terminator_sequels_sequel_prequel |
| 288 | spacetime - relativity - relativistic - gravitation - geodesic | 191 | 288_spacetime_relativity_relativistic_gravitation |
| 289 | dictatorships - dictatorship - regimes - dictators - authoritarianism | 191 | 289_dictatorships_dictatorship_regimes_dictators |
| 290 | daft - punk - techno - toured - bands | 190 | 290_daft_punk_techno_toured |
| 291 | peppers - chili - funk - flea - band | 190 | 291_peppers_chili_funk_flea |
| 292 | dinosaurs - dinosaur - rex - prehistoric - sequels | 188 | 292_dinosaurs_dinosaur_rex_prehistoric |
| 293 | surnames - surname - naming - names - suffixes | 188 | 293_surnames_surname_naming_names |
| 294 | philosopher - 1765 - philosophers - writings - enlightenment | 187 | 294_philosopher_1765_philosophers_writings |
| 295 | novels - novelist - 1925 - novel - 1920s | 187 | 295_novels_novelist_1925_novel |
| 296 | depot - retailer - retailers - warehouses - stores | 186 | 296_depot_retailer_retailers_warehouses |
| 297 | copyright - copyrights - copyrighted - royalties - infringement | 186 | 297_copyright_copyrights_copyrighted_royalties |
| 298 | eastern - daylight - clocks - noon - clock | 184 | 298_eastern_daylight_clocks_noon |
| 299 | numerals - numeral - numbers - numerology - digits | 184 | 299_numerals_numeral_numbers_numerology |
| 300 | armament - armoured - turret - tanks - tank | 182 | 300_armament_armoured_turret_tanks |
| 301 | vaccines - vaccine - vaccination - vaccinations - vaccinated | 182 | 301_vaccines_vaccine_vaccination_vaccinations |
| 302 | cola - coca - coke - soda - bottled | 181 | 302_cola_coca_coke_soda |
| 303 | fleet - 1797 - sailed - fleets - captains | 181 | 303_fleet_1797_sailed_fleets |
| 304 | tsarina - empress - tsar - maria - princesses | 181 | 304_tsarina_empress_tsar_maria |
| 305 | metalcore - thrash - deathcore - metal - hardcore | 179 | 305_metalcore_thrash_deathcore_metal |
| 306 | medals - medal - commendation - gallantry - badge | 179 | 306_medals_medal_commendation_gallantry |
| 307 | smith - prophets - revelations - revelation - scriptures | 179 | 307_smith_prophets_revelations_revelation |
| 308 | newspaper - newspapers - gazette - news - magazine | 179 | 308_newspaper_newspapers_gazette_news |
| 309 | philosopher - philosophers - philosophy - hermeneutics - philosophical | 179 | 309_philosopher_philosophers_philosophy_hermeneutics |
| 310 | protocols - protocol - packet - packets - layers | 179 | 310_protocols_protocol_packet_packets |
| 311 | coronation - airing - episodes - bbc - aired | 178 | 311_coronation_airing_episodes_bbc |
| 312 | song - songs - singles - singer - billboard | 178 | 312_song_songs_singles_singer |
| 313 | thylacines - thylacine - fauna - mammals - carnivorous | 178 | 313_thylacines_thylacine_fauna_mammals |
| 314 | hearings - communists - subcommittee - committee - committees | 177 | 314_hearings_communists_subcommittee_committee |
| 315 | 1776 - 1781 - 1775 - 1782 - 1778 | 177 | 315_1776_1781_1775_1782 |
| 316 | comedian - circus - comedians - pythons - comedy | 177 | 316_comedian_circus_comedians_pythons |
| 317 | railways - railway - trains - rail - train | 177 | 317_railways_railway_trains_rail |
| 318 | nudity - naturism - naturists - naturist - nude | 175 | 318_nudity_naturism_naturists_naturist |
| 319 | coalition - elections - populist - election - coalitions | 175 | 319_coalition_elections_populist_election |
| 320 | jihad - coup - overthrow - militias - ba | 172 | 320_jihad_coup_overthrow_militias |
| 321 | cement - cements - concretes - concrete - mortar | 171 | 321_cement_cements_concretes_concrete |
| 322 | jeopardy - prizes - contestant - contestants - competed | 170 | 322_jeopardy_prizes_contestant_contestants |
| 323 | panzer - commanders - blitzkrieg - commanded - 1944 | 169 | 323_panzer_commanders_blitzkrieg_commanded |
| 324 | mushroom - mushrooms - sprites - sprite - super | 169 | 324_mushroom_mushrooms_sprites_sprite |
| 325 | cossacks - tsar - tsarist - soviet - republics | 169 | 325_cossacks_tsar_tsarist_soviet |
| 326 | apes - ape - sequels - gorilla - prequel | 169 | 326_apes_ape_sequels_gorilla |
| 327 | graphene - graphite - nanotubes - carbon - conductivity | 168 | 327_graphene_graphite_nanotubes_carbon |
| 328 | nicotine - tobacco - cigarettes - cigarette - smoking | 168 | 328_nicotine_tobacco_cigarettes_cigarette |
| 329 | keyboardist - toured - guitarist - vocalist - bassist | 167 | 329_keyboardist_toured_guitarist_vocalist |
| 330 | museums - museum - exhibitions - galleries - exhibits | 167 | 330_museums_museum_exhibitions_galleries |
| 331 | motors - rotors - rotor - motor - rotary | 166 | 331_motors_rotors_rotor_motor |
| 332 | tabby - cat - feline - cats - coloration | 165 | 332_tabby_cat_feline_cats |
| 333 | handmaid - novels - novel - writers - tale | 163 | 333_handmaid_novels_novel_writers |
| 334 | boulevard - celebrity - fame - celebrities - walk | 163 | 334_boulevard_celebrity_fame_celebrities |
| 335 | trilogy - remastered - gods - editions - war | 162 | 335_trilogy_remastered_gods_editions |
| 336 | genocide - peacekeeping - massacres - assassinated - killings | 162 | 336_genocide_peacekeeping_massacres_assassinated |
| 337 | leopard - leopards - armament - refit - tanks | 162 | 337_leopard_leopards_armament_refit |
| 338 | homicides - homicide - murders - crime - crimes | 162 | 338_homicides_homicide_murders_crime |
| 339 | mercury - queen - bohemian - singer - musically | 162 | 339_mercury_queen_bohemian_singer |
| 340 | tennis - tournaments - tournament - badminton - slams | 161 | 340_tennis_tournaments_tournament_badminton |
| 341 | confederate - confederacy - confederates - slavery - 1861 | 160 | 341_confederate_confederacy_confederates_slavery |
| 342 | scrum - agile - sprints - sprint - development | 159 | 342_scrum_agile_sprints_sprint |
| 343 | museums - museum - galleries - exhibitions - exhibits | 159 | 343_museums_museum_galleries_exhibitions |
| 344 | transformers - transformer - sequels - bumblebee - sequel | 158 | 344_transformers_transformer_sequels_bumblebee |
| 345 | languages - dialects - language - bilingual - dialect | 158 | 345_languages_dialects_language_bilingual |
| 346 | sponge - sponges - cartoon - cartoons - plankton | 158 | 346_sponge_sponges_cartoon_cartoons |
| 347 | telescope - telescopes - observatory - astronomy - astronomical | 157 | 347_telescope_telescopes_observatory_astronomy |
| 348 | mandarin - dialects - languages - lingua - china | 157 | 348_mandarin_dialects_languages_lingua |
| 349 | kiss - toured - concerts - tour - lip | 156 | 349_kiss_toured_concerts_tour |
| 350 | holiday - celebrates - holidays - celebrated - celebrations | 156 | 350_holiday_celebrates_holidays_celebrated |
| 351 | conquered - empires - ancient - kingdoms - dynasty | 155 | 351_conquered_empires_ancient_kingdoms |
| 352 | legionnaires - legion - regiments - guerrillas - regiment | 155 | 352_legionnaires_legion_regiments_guerrillas |
| 353 | evolution - evolutionary - creationist - naturalist - biologist | 155 | 353_evolution_evolutionary_creationist_naturalist |
| 354 | tennis - slams - quarterfinal - racquet - doubles | 155 | 354_tennis_slams_quarterfinal_racquet |
| 355 | wikipedia - encyclopedia - encyclopedias - wikis - articles | 155 | 355_wikipedia_encyclopedia_encyclopedias_wikis |
| 356 | detainees - inmates - prisoners - detention - prisons | 155 | 356_detainees_inmates_prisoners_detention |
| 357 | operatic - opera
- soprano - operas - arias | 155 | 357_operatic_opera_soprano_operas | | 358 | coalition - chancellors - chancellor - chancellorship - democrats | 154 | 358_coalition_chancellors_chancellor_chancellorship | | 359 | pixels - encoding - compression - pixel - bitmap | 154 | 359_pixels_encoding_compression_pixel | | 360 | augmented - oculus - vision - ar - virtual | 154 | 360_augmented_oculus_vision_ar | | 361 | flash - comics - episodes - storylines - showrunner | 154 | 361_flash_comics_episodes_storylines | | 362 | presidency - presidential - fascism - president - dictatorship | 153 | 362_presidency_presidential_fascism_president | | 363 | soil - soils - fertilizers - fertilizer - nutrient | 153 | 363_soil_soils_fertilizers_fertilizer | | 364 | novels - 1876 - 1881 - 1880 - writer | 153 | 364_novels_1876_1881_1880 | | 365 | critics - rankings - ranking - decade - films | 152 | 365_critics_rankings_ranking_decade | | 366 | dos - defendants - trafficking - alleged - recruited | 152 | 366_dos_defendants_trafficking_alleged | | 367 | abused - abuse - assaults - maltreatment - abusive | 152 | 367_abused_abuse_assaults_maltreatment | | 368 | masks - mask - pandemic - vaccine - vaccinated | 151 | 368_masks_mask_pandemic_vaccine | | 369 | novel - scout - rye - nonfiction - narrator | 151 | 369_novel_scout_rye_nonfiction | | 370 | tennis - doubles - competed - tournaments - tournament | 150 | 370_tennis_doubles_competed_tournaments | | 371 | macron - presidential - candidate - candidates - pen | 149 | 371_macron_presidential_candidate_candidates | | 372 | rose - roses - frontman - revolver - toured | 149 | 372_rose_roses_frontman_revolver | | 373 | satyagraha - revolt - rebellion - salt - protest | 148 | 373_satyagraha_revolt_rebellion_salt | | 374 | 1945 - allied - soviets - allies - reunification | 148 | 374_1945_allied_soviets_allies | | 375 | princes - prince - ambition - prudence - nobles | 148 | 375_princes_prince_ambition_prudence | | 376 | railways - railway - 
locomotives - trains - train | 148 | 376_railways_railway_locomotives_trains | | 377 | murdered - murders - convicted - sentenced - suspicion | 148 | 377_murdered_murders_convicted_sentenced | | 378 | syndrome - disorders - polycystic - diagnosed - ovarian | 148 | 378_syndrome_disorders_polycystic_diagnosed | | 379 | dune - dunes - novels - trilogy - novel | 148 | 379_dune_dunes_novels_trilogy | | 380 | temple - cult - peoples - disciples - teachings | 147 | 380_temple_cult_peoples_disciples | | 381 | 1963 - assassinated - 1964 - mosque - assassination | 147 | 381_1963_assassinated_1964_mosque | | 382 | chess - rook - grandmasters - grandmaster - tournaments | 147 | 382_chess_rook_grandmasters_grandmaster | | 383 | lithium - batteries - battery - rechargeable - electrochemical | 146 | 383_lithium_batteries_battery_rechargeable | | 384 | genocide - detainees - persecution - internment - holocaust | 146 | 384_genocide_detainees_persecution_internment | | 385 | neurons - neuronal - neuron - neurotransmitters - neurotransmitter | 146 | 385_neurons_neuronal_neuron_neurotransmitters | | 386 | poles - casualties - massacres - massacre - polish | 145 | 386_poles_casualties_massacres_massacre | | 387 | dialects - accents - isles - dialect - pronunciation | 145 | 387_dialects_accents_isles_dialect | | 388 | racing - speedway - raced - laps - motorsports | 145 | 388_racing_speedway_raced_laps | | 389 | rand - nonfiction - subjectivism - philosophers - philosopher | 145 | 389_rand_nonfiction_subjectivism_philosophers | | 390 | lee - pap - chairman - election - leaders | 145 | 390_lee_pap_chairman_election | | 391 | kernels - kernel - processors - processes - processor | 145 | 391_kernels_kernel_processors_processes | | 392 | nightmare - nightmares - elm - horror - supernatural | 144 | 392_nightmare_nightmares_elm_horror | | 393 | newspaper - newspapers - tabloid - newsprint - journalism | 144 | 393_newspaper_newspapers_tabloid_newsprint | | 394 | interrogation - interrogations 
- arrest - incrimination - defendant | 144 | 394_interrogation_interrogations_arrest_incrimination | | 395 | millennials - millennial - generations - generation - generational | 144 | 395_millennials_millennial_generations_generation | | 396 | hobbit - hobbits - shire - literature - publishers | 144 | 396_hobbit_hobbits_shire_literature | | 397 | pollution - pollutants - polluting - pollutant - polluted | 143 | 397_pollution_pollutants_polluting_pollutant | | 398 | sins - sin - sinfulness - theology - sinned | 143 | 398_sins_sin_sinfulness_theology | | 399 | nursing - nurse - nurses - hospitals - compassion | 143 | 399_nursing_nurse_nurses_hospitals | | 400 | aeronautical - aeronautics - aircraft - flew - airplanes | 143 | 400_aeronautical_aeronautics_aircraft_flew | | 401 | congregations - congregation - churches - denominations - denomination | 142 | 401_congregations_congregation_churches_denominations | | 402 | skyscraper - tallest - skyscrapers - towers - tower | 142 | 402_skyscraper_tallest_skyscrapers_towers | | 403 | consulate - embassy - suspects - assassination - consul | 142 | 403_consulate_embassy_suspects_assassination | | 404 | blu - disc - discs - codecs - digital | 142 | 404_blu_disc_discs_codecs | | 405 | pyramid - pyramids - pyramidion - excavations - tombs | 141 | 405_pyramid_pyramids_pyramidion_excavations | | 406 | antibiotics - antibiotic - amoxicillin - penicillin - ampicillin | 140 | 406_antibiotics_antibiotic_amoxicillin_penicillin | | 407 | activism - protest - protests - activist - marches | 140 | 407_activism_protest_protests_activist | | 408 | bbc - broadcasting - channel - al - simulcast | 140 | 408_bbc_broadcasting_channel_al | | 409 | pharaoh - pharaohs - throne - heir - tombs | 139 | 409_pharaoh_pharaohs_throne_heir | | 410 | bombing - troops - pentagon - war - troop | 139 | 410_bombing_troops_pentagon_war | | 411 | municipality - megacity - located - niger - town | 139 | 411_municipality_megacity_located_niger | | 412 | addresses - 
subnet - subnets - addressing - address | 138 | 412_addresses_subnet_subnets_addressing | | 413 | tom - cruise - screenwriter - tall - jack | 138 | 413_tom_cruise_screenwriter_tall | | 414 | motivation - motivations - motivational - motivate - motivates | 137 | 414_motivation_motivations_motivational_motivate | | 415 | deforestation - reforestation - forestry - forests - forested | 137 | 415_deforestation_reforestation_forestry_forests | | 416 | anesthesiologist - anatomy - neurosurgery - surgeon - cast | 137 | 416_anesthesiologist_anatomy_neurosurgery_surgeon | | 417 | pharaoh - prophets - prophet - messiah - patriarch | 136 | 417_pharaoh_prophets_prophet_messiah | | 418 | battlefield - warfare - modern - gameplay - remastered | 136 | 418_battlefield_warfare_modern_gameplay | | 419 | ancestry - mestizo - ethnic - ethnicity - natives | 136 | 419_ancestry_mestizo_ethnic_ethnicity | | 420 | telegram - messenger - messaging - chat - apps | 136 | 420_telegram_messenger_messaging_chat | | 421 | penalty - penalties - fouls - foul - goaltending | 136 | 421_penalty_penalties_fouls_foul | | 422 | miss - pageant - pageants - pageantry - finalist | 135 | 422_miss_pageant_pageants_pageantry | | 423 | throne - rebelled - heir - king - castles | 135 | 423_throne_rebelled_heir_king | | 424 | territory - airspace - blockade - sanctions - borders | 135 | 424_territory_airspace_blockade_sanctions | | 425 | jazz - saxophonist - trumpeter - saxophone - musicians | 135 | 425_jazz_saxophonist_trumpeter_saxophone | | 426 | stooge - moe - curly - comedies - comedians | 135 | 426_stooge_moe_curly_comedies | | 427 | lichens - lichen - fungi - fungal - fungus | 135 | 427_lichens_lichen_fungi_fungal | | 428 | rebels - overthrowing - generals - overthrow - coup | 134 | 428_rebels_overthrowing_generals_overthrow | | 429 | races - race - racial - anthropologist - anthropologists | 134 | 429_races_race_racial_anthropologist | | 430 | channel - channels - broadcasting - broadcasters - simulcast | 
134 | 430_channel_channels_broadcasting_broadcasters | | 431 | prosecution - accused - bordereau - acquitted - investigation | 133 | 431_prosecution_accused_bordereau_acquitted | | 432 | missiles - soviets - missile - soviet - nuclear | 133 | 432_missiles_soviets_missile_soviet | | 433 | 1945 - armistice - surrender - surrendered - soviets | 133 | 433_1945_armistice_surrender_surrendered | | 434 | monastic - monastics - samadhi - monks - monastery | 133 | 434_monastic_monastics_samadhi_monks | | 435 | colors - colours - colour - magenta - pigment | 133 | 435_colors_colours_colour_magenta | | 436 | pipeline - pipelines - keystone - refinery - pipe | 133 | 436_pipeline_pipelines_keystone_refinery | | 437 | institutes - institute - universities - polytechnic - polytechnics | 133 | 437_institutes_institute_universities_polytechnic | | 438 | deepest - depths - oceanographic - oceanography - challenger | 132 | 438_deepest_depths_oceanographic_oceanography | | 439 | postcodes - postcode - zip - postal - addresses | 132 | 439_postcodes_postcode_zip_postal | | 440 | rockstar - grand - games - consoles - gameplay | 132 | 440_rockstar_grand_games_consoles | | 441 | woman - wonder - goddess - feminist - goddesses | 132 | 441_woman_wonder_goddess_feminist | | 442 | suffrage - referendum - referendums - women - enfranchised | 131 | 442_suffrage_referendum_referendums_women | | 443 | apartheid - cape - natal - protest - activist | 131 | 443_apartheid_cape_natal_protest | | 444 | barristers - barrister - solicitors - lawyers - solicitor | 131 | 444_barristers_barrister_solicitors_lawyers | | 445 | scrolls - manuscripts - antiquities - archaeology - archaeological | 131 | 445_scrolls_manuscripts_antiquities_archaeology | | 446 | slavery - revolution - slaves - revolt - colonial | 131 | 446_slavery_revolution_slaves_revolt | | 447 | boxer - cop - knockout - fighter - fights | 130 | 447_boxer_cop_knockout_fighter | | 448 | siblings - 1963 - assassinated - senator - youngest | 130 | 
448_siblings_1963_assassinated_senator | | 449 | ku - confederate - activists - 1868 - whites | 130 | 449_ku_confederate_activists_1868 | | 450 | bear - bears - grizzly - predators - species | 130 | 450_bear_bears_grizzly_predators | | 451 | junta - detained - arrest - imprisonment - sentenced | 130 | 451_junta_detained_arrest_imprisonment | | 452 | oasis - albums - concert - songwriter - album | 129 | 452_oasis_albums_concert_songwriter | | 453 | darkness - literature - novelist - postcolonial - colonialism | 129 | 453_darkness_literature_novelist_postcolonial | | 454 | currencies - currency - monetary - dollar - dollars | 129 | 454_currencies_currency_monetary_dollar | | 455 | musically - musician - drums - percussion - composers | 129 | 455_musically_musician_drums_percussion | | 456 | infantry - insurgents - battalion - platoon - reconnaissance | 129 | 456_infantry_insurgents_battalion_platoon | | 457 | sesame - puppets - puppeteer - puppet - puppeteers | 128 | 457_sesame_puppets_puppeteer_puppet | | 458 | crocodiles - crocodile - alligators - alligator - reptiles | 128 | 458_crocodiles_crocodile_alligators_alligator | | 459 | antibiotics - antibiotic - penicillin - antimicrobial - amoxicillin | 128 | 459_antibiotics_antibiotic_penicillin_antimicrobial | | 460 | acropolis - excavations - temples - temple - archaeologists | 128 | 460_acropolis_excavations_temples_temple | | 461 | taxes - tax - taxation - taxable - taxed | 128 | 461_taxes_tax_taxation_taxable | | 462 | manning - arrested - offenses - prosecutors - whistleblower | 128 | 462_manning_arrested_offenses_prosecutors | | 463 | quantum - entanglement - entangled - decoherence - superposition | 128 | 463_quantum_entanglement_entangled_decoherence | | 464 | sang - carpenter - carpenters - billboard - songwriter | 128 | 464_sang_carpenter_carpenters_billboard | | 465 | languages - language - lingua - creole - vernacular | 127 | 465_languages_language_lingua_creole | | 466 | goddesses - mythological - 
goddess - deities - gods | 127 | 466_goddesses_mythological_goddess_deities | | 467 | katana - kata - swords - sword - samurai | 127 | 467_katana_kata_swords_sword | | 468 | haggard - sang - duets - ballads - songs | 127 | 468_haggard_sang_duets_ballads | | 469 | marathon - marathons - runners - runner - triathlon | 127 | 469_marathon_marathons_runners_runner | | 470 | comedian - comedians - sitcom - sitcoms - comedy | 127 | 470_comedian_comedians_sitcom_sitcoms | | 471 | armament - panzer - armoured - tanks - armored | 127 | 471_armament_panzer_armoured_tanks | | 472 | traditional - dhoti - sari - dresses - traditionally | 127 | 472_traditional_dhoti_sari_dresses | | 473 | prohibition - alcoholism - alcoholic - alcohol - liquor | 127 | 473_prohibition_alcoholism_alcoholic_alcohol | | 474 | lightning - thunderstorm - thunderstorms - storms - thunder | 126 | 474_lightning_thunderstorm_thunderstorms_storms | | 475 | militants - temple - terrorists - militant - casualties | 126 | 475_militants_temple_terrorists_militant | | 476 | cartoons - tom - shorts - cartoon - commercials | 125 | 476_cartoons_tom_shorts_cartoon | | 477 | mortality - fertility - expectancy - population - births | 125 | 477_mortality_fertility_expectancy_population | | 478 | lodges - masonic - lodge - masons - masonry | 125 | 478_lodges_masonic_lodge_masons | | 479 | judge - judges - courtroom - court - defendants | 125 | 479_judge_judges_courtroom_court | | 480 | entrepreneurship - entrepreneur - entrepreneurial - entrepreneurs - venture | 125 | 480_entrepreneurship_entrepreneur_entrepreneurial_entrepreneurs | | 481 | burger - burgers - hamburger - franchisees - hamburgers | 124 | 481_burger_burgers_hamburger_franchisees | | 482 | folate - folic - vitamin - vitamins - supplements | 124 | 482_folate_folic_vitamin_vitamins | | 483 | niger - haram - jihad - bombing - insurgency | 124 | 483_niger_haram_jihad_bombing | | 484 | viewership - viewers - subscribers - channel - livestreaming | 124 | 
484_viewership_viewers_subscribers_channel | | 485 | 1080p - resolution - 1080 - 720p - 1080i | 124 | 485_1080p_resolution_1080_720p | | 486 | units - metre - quantities - unit - kilogram | 124 | 486_units_metre_quantities_unit | | 487 | oblast - soviet - yuri - grandmother - grandparents | 124 | 487_oblast_soviet_yuri_grandmother | | 488 | cricket - wickets - matches - umpires - rugby | 123 | 488_cricket_wickets_matches_umpires | | 489 | defendant - testify - prosecution - court - judge | 123 | 489_defendant_testify_prosecution_court | | 490 | inventor - electrical - inventors - inventions - electricity | 123 | 490_inventor_electrical_inventors_inventions | | 491 | apartheid - natal - cape - chairperson - appointed | 123 | 491_apartheid_natal_cape_chairperson | | 492 | ball - sitcom - tv - 1957 - miss | 123 | 492_ball_sitcom_tv_1957 | | 493 | zeppelin - stairway - concert - lyrics - psychedelic | 123 | 493_zeppelin_stairway_concert_lyrics | | 494 | negro - negroes - racial - whites - civilizing | 123 | 494_negro_negroes_racial_whites | | 495 | tornado - tornadoes - storms - thunderstorm - thunderstorms | 123 | 495_tornado_tornadoes_storms_thunderstorm | | 496 | façade - buildings - architect - architects - building | 122 | 496_façade_buildings_architect_architects | | 497 | marvel - superhero - marvels - supervillain - superman | 122 | 497_marvel_superhero_marvels_supervillain | | 498 | murders - homicide - rapist - murderer - suspect | 122 | 498_murders_homicide_rapist_murderer | | 499 | cram - murders - murdered - tortured - detectives | 121 | 499_cram_murders_murdered_tortured | | 500 | tequila - agave - distillation - distillery - liquor | 121 | 500_tequila_agave_distillation_distillery | | 501 | tennis - doubles - tournaments - singles - semifinals | 121 | 501_tennis_doubles_tournaments_singles | | 502 | conspiracies - conspiratorial - conspiracy - trafficking - trump | 121 | 502_conspiracies_conspiratorial_conspiracy_trafficking | | 503 | airship - zeppelin 
- airships - helium - flew | 121 | 503_airship_zeppelin_airships_helium | | 504 | dubbed - dub - dubbing - dubs - castle | 121 | 504_dubbed_dub_dubbing_dubs | | 505 | defamation - libel - defamatory - slander - slanderous | 120 | 505_defamation_libel_defamatory_slander | | 506 | soprano - mafia - joey - carmine - capo | 120 | 506_soprano_mafia_joey_carmine | | 507 | eagle - eagles - vultures - hawk - birds | 120 | 507_eagle_eagles_vultures_hawk | | 508 | households - household - average - families - census | 119 | 508_households_household_average_families | | 509 | taxonomic - genus - taxon - nomenclature - taxonomists | 119 | 509_taxonomic_genus_taxon_nomenclature | | 510 | 1984 - 1945 - 1949 - novelist - 1939 | 119 | 510_1984_1945_1949_novelist | | 511 | philosopher - philosophers - empiricism - philosophy - rationalist | 119 | 511_philosopher_philosophers_empiricism_philosophy | | 512 | women - comfort - geisha - grandmothers - yen | 119 | 512_women_comfort_geisha_grandmothers | | 513 | massacre - massacred - atrocities - victims - 1945 | 119 | 513_massacre_massacred_atrocities_victims | | 514 | internment - camps - detainees - camp - prisoners | 119 | 514_internment_camps_detainees_camp | | 515 | ribbons - ribbon - gallantry - medals - medal | 119 | 515_ribbons_ribbon_gallantry_medals | | 516 | tramp - films - film - cinema - cinematographer | 119 | 516_tramp_films_film_cinema | | 517 | caves - cave - temples - excavation - shrines | 119 | 517_caves_cave_temples_excavation | | 518 | jubilees - jubilee - celebrated - celebrations - celebration | 119 | 518_jubilees_jubilee_celebrated_celebrations | | 519 | chains - albums - album - toured - songs | 118 | 519_chains_albums_album_toured | | 520 | spice - concert - girls - spicy - debut | 118 | 520_spice_concert_girls_spicy | | 521 | malaria - malarial - antimalarial - mosquito - mosquitoes | 117 | 521_malaria_malarial_antimalarial_mosquito | | 522 | fertility - overpopulation - childbearing - adoptions - adoption | 
117 | 522_fertility_overpopulation_childbearing_adoptions | | 523 | eucalyptus - acacia - rainforests - conifers - trees | 117 | 523_eucalyptus_acacia_rainforests_conifers | | 524 | prince - albums - album - duet - songs | 117 | 524_prince_albums_album_duet | | 525 | famine - famines - genocide - starvation - starved | 117 | 525_famine_famines_genocide_starvation | | 526 | 1832 - minister - peerage - constituency - exchequer | 117 | 526_1832_minister_peerage_constituency | | 527 | vertigo - scenes - film - screenplay - films | 116 | 527_vertigo_scenes_film_screenplay | | 528 | stark - thrones - throne - arya - wildlings | 116 | 528_stark_thrones_throne_arya | | 529 | mobile - telecommunications - mobiles - cellular - handsets | 116 | 529_mobile_telecommunications_mobiles_cellular | | 530 | shaggy - voiced - cartoon - cartoons - voice | 115 | 530_shaggy_voiced_cartoon_cartoons | | 531 | bear - bears - zoo - toy - pg | 115 | 531_bear_bears_zoo_toy | | 532 | coffeehouse - coffee - coffees - cafe - café | 115 | 532_coffeehouse_coffee_coffees_cafe | | 533 | segregation - segregationist - segregated - discrimination - unconstitutional | 115 | 533_segregation_segregationist_segregated_discrimination | | 534 | poverty - income - economies - agriculture - subsistence | 115 | 534_poverty_income_economies_agriculture | | 535 | capacitors - dielectrics - capacitor - capacitance - dielectric | 114 | 535_capacitors_dielectrics_capacitor_capacitance | | 536 | islands - archipelagos - archipelago - pacific - island | 114 | 536_islands_archipelagos_archipelago_pacific | | 537 | paramount - studios - corporation - merger - subsidiaries | 114 | 537_paramount_studios_corporation_merger | | 538 | iso - standards - standardization - organizational - stakeholders | 114 | 538_iso_standards_standardization_organizational | | 539 | paintings - painting - painters - art - artistic | 114 | 539_paintings_painting_painters_art | | 540 | mayor - mayors - mayoral - municipal - municipalities | 
114 | 540_mayor_mayors_mayoral_municipal | | 541 | ethnicities - ethnonym - ethnic - ancestry - inhabitants | 114 | 541_ethnicities_ethnonym_ethnic_ancestry | | 542 | repeal - repealing - repealed - healthcare - uninsured | 113 | 542_repeal_repealing_repealed_healthcare | | 543 | watchmen - comics - superhero - superheroes - vendetta | 113 | 543_watchmen_comics_superhero_superheroes | | 544 | hashing - hash - hashes - hashed - tables | 113 | 544_hashing_hash_hashes_hashed | | 545 | pistols - punk - punks - band - pistol | 113 | 545_pistols_punk_punks_band | | 546 | chef - chefs - culinary - kitchens - cook | 113 | 546_chef_chefs_culinary_kitchens | | 547 | realism - surrealism - magical - fiction - imagination | 113 | 547_realism_surrealism_magical_fiction | | 548 | 1793 - 1789 - revolutionaries - revolt - insurrection | 113 | 548_1793_1789_revolutionaries_revolt | | 549 | 451 - writer - literature - writers - author | 113 | 549_451_writer_literature_writers | | 550 | punk - indie - genre - genres - bands | 113 | 550_punk_indie_genre_genres | | 551 | dances - dance - dancers - traditional - rituals | 112 | 551_dances_dance_dancers_traditional | | 552 | gong - qigong - communist - china - adherents | 112 | 552_gong_qigong_communist_china | | 553 | playlists - playlist - music - songs - podcasts | 112 | 553_playlists_playlist_music_songs | | 554 | fabrication - manufacturing - machining - inkjet - prototyping | 111 | 554_fabrication_manufacturing_machining_inkjet | | 555 | elections - election - electoral - polls - voters | 111 | 555_elections_election_electoral_polls | | 556 | steam - valve - platform - publishers - cloud | 111 | 556_steam_valve_platform_publishers | | 557 | orchestra - orchestras - orchestration - symphonies - symphony | 111 | 557_orchestra_orchestras_orchestration_symphonies | | 558 | albums - songs - toured - 1973 - 1974 | 111 | 558_albums_songs_toured_1973 | | 559 | arsenal - goals - scored - footballer - goal | 111 | 
559_arsenal_goals_scored_footballer | | 560 | metro - railway - railways - transit - trains | 111 | 560_metro_railway_railways_transit | | 561 | laundering - banking - trafficking - smuggling - bank | 110 | 561_laundering_banking_trafficking_smuggling | | 562 | complement - binary - complements - unsigned - bitwise | 110 | 562_complement_binary_complements_unsigned | | 563 | piazza - boulevard - della - buildings - baroque | 110 | 563_piazza_boulevard_della_buildings | | 564 | synthesizers - synthesizer - techno - synth - genres | 110 | 564_synthesizers_synthesizer_techno_synth | | 565 | sprinter - bolt - sprinters - olympic - athletics | 109 | 565_sprinter_bolt_sprinters_olympic | | 566 | condoms - condom - contraception - prevention - protection | 108 | 566_condoms_condom_contraception_prevention | | 567 | flags - flag - soviet - flagpole - tricolour | 108 | 567_flags_flag_soviet_flagpole | | 568 | kanji - pinyin - characters - mandarin - character | 108 | 568_kanji_pinyin_characters_mandarin | | 569 | detective - hound - adventure - investigative - novels | 108 | 569_detective_hound_adventure_investigative | | 570 | subcontinent - viceroy - coalition - 1947 - raj | 108 | 570_subcontinent_viceroy_coalition_1947 | | 571 | lion - wardrobe - witch - chronicles - mythical | 107 | 571_lion_wardrobe_witch_chronicles | | 572 | prix - qualifying - podium - laps - overtook | 107 | 572_prix_qualifying_podium_laps | | 573 | soccer - athlete - assists - scoring - olympic | 106 | 573_soccer_athlete_assists_scoring | | 574 | impeachment - testified - indictment - prosecutor - hearings | 106 | 574_impeachment_testified_indictment_prosecutor | | 575 | databases - database - tables - schema - relational | 106 | 575_databases_database_tables_schema | | 576 | paramount - animators - studios - productions - animation | 106 | 576_paramount_animators_studios_productions | | 577 | gear - presenter - presenters - viewers - bbc | 106 | 577_gear_presenter_presenters_viewers | | 578 | 
tricolour - tricolore - tricolor - flags - flag | 105 | 578_tricolour_tricolore_tricolor_flags | | 579 | node - js - developers - frameworks - platform | 105 | 579_node_js_developers_frameworks | | 580 | populism - populists - populist - political - authoritarianism | 105 | 580_populism_populists_populist_political | | 581 | tempo - tempos - rhythmic - rhythm - bpm | 105 | 581_tempo_tempos_rhythmic_rhythm | | 582 | biometric - authentication - citizenship - identity - register | 105 | 582_biometric_authentication_citizenship_identity | | 583 | gambling - gamblers - gambler - casino - casinos | 105 | 583_gambling_gamblers_gambler_casino | | 584 | incompleteness - axiomatization - completeness - provability - consistency | 105 | 584_incompleteness_axiomatization_completeness_provability | | 585 | logics - logicians - logic - semantics - propositional | 105 | 585_logics_logicians_logic_semantics | | 586 | writings - discourses - discourse - theological - theologians | 104 | 586_writings_discourses_discourse_theological | | 587 | censorship - censor - censors - censored - forbidding | 104 | 587_censorship_censor_censors_censored | | 588 | barbarian - serpent - marvel - comics - blacksmith | 104 | 588_barbarian_serpent_marvel_comics | | 589 | uninsured - insurance - insured - healthcare - insurers | 104 | 589_uninsured_insurance_insured_healthcare | | 590 | privateers - pirates - pirate - slaves - enslaved | 103 | 590_privateers_pirates_pirate_slaves | | 591 | papillomavirus - cancers - cervical - warts - cancer | 103 | 591_papillomavirus_cancers_cervical_warts | | 592 | satellites - satellite - constellations - constellation - orbit | 103 | 592_satellites_satellite_constellations_constellation | | 593 | samurai - screenwriter - screenplay - screenplays - filmmaker | 103 | 593_samurai_screenwriter_screenplay_screenplays | | 594 | hammer - rapper - rappers - rap - raps | 103 | 594_hammer_rapper_rappers_rap | | 595 | bitcoin - bitcoins - blockchain - cryptocurrency - 
cryptocurrencies | 103 | 595_bitcoin_bitcoins_blockchain_cryptocurrency | | 596 | electronics - manufacturer - appliances - manufactures - lee | 103 | 596_electronics_manufacturer_appliances_manufactures | | 597 | utilitarianism - utilitarian - consequentialism - consequentialist - morality | 103 | 597_utilitarianism_utilitarian_consequentialism_consequentialist | | 598 | sitcom - woody - cast - primetime - shows | 103 | 598_sitcom_woody_cast_primetime | | 599 | republics - soviet - soviets - oblasts - republic | 103 | 599_republics_soviet_soviets_oblasts | | 600 | monarchy - junta - dictatorship - king - monarch | 102 | 600_monarchy_junta_dictatorship_king | | 601 | apps - app - android - mobile - downloads | 102 | 601_apps_app_android_mobile | | 602 | vampire - vampires - vampirism - vampiric - bloodlust | 102 | 602_vampire_vampires_vampirism_vampiric | | 603 | racism - racialism - prejudice - racial - discrimination | 102 | 603_racism_racialism_prejudice_racial | | 604 | twitch - streaming - stream - viewership - streams | 102 | 604_twitch_streaming_stream_viewership | | 605 | glucose - monosaccharides - monosaccharide - polysaccharides - oligosaccharides | 102 | 605_glucose_monosaccharides_monosaccharide_polysaccharides | | 606 | sponsors - sponsorship - sponsor - sponsorships - sponsored | 102 | 606_sponsors_sponsorship_sponsor_sponsorships | | 607 | minister - ministers - secretary - elected - cabinet | 102 | 607_minister_ministers_secretary_elected | | 608 | booth - assassination - assassinated - confederate - 1864 | 102 | 608_booth_assassination_assassinated_confederate | | 609 | torrents - torrent - peers - peer - downloading | 102 | 609_torrents_torrent_peers_peer | | 610 | coco - boutiques - boutique - designers - cosmetics | 102 | 610_coco_boutiques_boutique_designers | | 611 | crusades - crusade - crusaders - crusader - 1451 | 102 | 611_crusades_crusade_crusaders_crusader | | 612 | psychometric - intelligence - assessment - standardized - scores | 102 
| 612_psychometric_intelligence_assessment_standardized |
| 613 | prophets - prophet - prophethood - prophetic - scriptures | 101 | 613_prophets_prophet_prophethood_prophetic |
| 614 | purge - purges - gulag - soviet - purged | 101 | 614_purge_purges_gulag_soviet |
| 615 | politburo - soviet - perestroika - chairman - secretary | 101 | 615_politburo_soviet_perestroika_chairman |
| 616 | powertrain - musk - cars - motors - drivetrain | 101 | 616_powertrain_musk_cars_motors |
| 617 | pornography - pornographic - prohibits - porn - obscene | 101 | 617_pornography_pornographic_prohibits_porn |
| 618 | bikers - angels - motorcycles - outlaws - motorcyclists | 101 | 618_bikers_angels_motorcycles_outlaws |
| 619 | altruism - ethical - advocated - moral - ethics | 101 | 619_altruism_ethical_advocated_moral |
| 620 | concert - duet - concerts - singer - medley | 101 | 620_concert_duet_concerts_singer |
| 621 | licenses - licensing - license - licensed - proprietary | 101 | 621_licenses_licensing_license_licensed |
| 622 | gentrification - suburbanization - gentrified - urbanization - redevelopment | 101 | 622_gentrification_suburbanization_gentrified_urbanization |
| 623 | spying - spy - espionage - spyware - smartphones | 101 | 623_spying_spy_espionage_spyware |
| 624 | apartheid - activism - blacks - activist - suffrage | 101 | 624_apartheid_activism_blacks_activist |
| 625 | robotics - robot - robots - robotic - manipulators | 101 | 625_robotics_robot_robots_robotic |
| 626 | 1783 - minister - peerage - ministers - 1784 | 100 | 626_1783_minister_peerage_ministers |
| 627 | labour - children - labor - poverty - labourers | 100 | 627_labour_children_labor_poverty |
| 628 | generative - adversarial - generating - generates - generator | 100 | 628_generative_adversarial_generating_generates |
| 629 | concert - sang - scarecrow - vocals - musicians | 100 | 629_concert_sang_scarecrow_vocals |
| 630 | mosque - masjid - mosques - tombs - mausoleum | 100 | 630_mosque_masjid_mosques_tombs |
| 631 | sang - concert - zeppelin - rocker - tour | 100 | 631_sang_concert_zeppelin_rocker |
| 632 | attachments - attachment - adoptions - parenting - infancy | 100 | 632_attachments_attachment_adoptions_parenting |
| 633 | tennis - slams - tournaments - competed - doubles | 100 | 633_tennis_slams_tournaments_competed |
| 634 | witchcraft - coven - covens - witches - paganism | 99 | 634_witchcraft_coven_covens_witches |
| 635 | viruses - viral - virus - coronavirus - coronaviruses | 99 | 635_viruses_viral_virus_coronavirus |
| 636 | demon - yakuza - shinobi - demons - priestess | 99 | 636_demon_yakuza_shinobi_demons |
| 637 | psoriasis - psoriatic - erythematosus - keratinocytes - autoimmune | 99 | 637_psoriasis_psoriatic_erythematosus_keratinocytes |
| 638 | guru - gurus - shakti - scriptures - divinity | 99 | 638_guru_gurus_shakti_scriptures |
| 639 | population - populations - urbanization - china - populous | 99 | 639_population_populations_urbanization_china |
| 640 | defamation - lawsuit - sued - libel - accused | 99 | 640_defamation_lawsuit_sued_libel |
| 641 | rating - ratings - scores - rated - fide | 99 | 641_rating_ratings_scores_rated |
| 642 | albums - singer - singers - songwriter - songs | 98 | 642_albums_singer_singers_songwriter |
| 643 | ebook - ebooks - tablet - touchscreen - devices | 98 | 643_ebook_ebooks_tablet_touchscreen |
| 644 | orthodox - patriarch - principality - rulers - ruled | 98 | 644_orthodox_patriarch_principality_rulers |
| 645 | cyclones - cyclone - typhoon - hurricane - typhoons | 98 | 645_cyclones_cyclone_typhoon_hurricane |
| 646 | boots - sequels - sequel - premiered - movie | 98 | 646_boots_sequels_sequel_premiered |
| 647 | novels - novel - writer - nonfiction - fiction | 98 | 647_novels_novel_writer_nonfiction |
| 648 | kami - rituals - deities - shin - ritual | 98 | 648_kami_rituals_deities_shin |
| 649 | honorary - commencement - doctorate - conferred - degree | 98 | 649_honorary_commencement_doctorate_conferred |
| 650 | evil - virtual - zombies - nemesis - sequel | 98 | 650_evil_virtual_zombies_nemesis |
| 651 | voiced - voice - voices - voiceover - cast | 98 | 651_voiced_voice_voices_voiceover |
| 652 | doom - ark - chronicles - films - sequel | 97 | 652_doom_ark_chronicles_films |
| 653 | botulinum - toxin - toxins - neurotoxin - neurotoxins | 97 | 653_botulinum_toxin_toxins_neurotoxin |
| 654 | tags - tagging - barcodes - transmitters - tag | 97 | 654_tags_tagging_barcodes_transmitters |
| 655 | soviet - politburo - coup - arrest - perestroika | 97 | 655_soviet_politburo_coup_arrest |
| 656 | twitter - tweets - accounts - hoaxes - trolls | 97 | 656_twitter_tweets_accounts_hoaxes |
| 657 | cryptography - encryption - cryptosystems - cryptosystem - cryptographic | 97 | 657_cryptography_encryption_cryptosystems_cryptosystem |
| 658 | lasers - fibers - laser - fiber - optical | 96 | 658_lasers_fibers_laser_fiber |
| 659 | smartphone - smartphones - mobile - cellular - flagship | 96 | 659_smartphone_smartphones_mobile_cellular |
| 660 | vaudeville - brothers - comedian - comedians - broadway | 96 | 660_vaudeville_brothers_comedian_comedians |
| 661 | halo - 343 - consoles - franchise - spartan | 96 | 661_halo_343_consoles_franchise |
| 662 | mosque - masjid - mosques - mecca - caliphate | 96 | 662_mosque_masjid_mosques_mecca |
| 663 | motorsport - racing - prix - raced - cars | 96 | 663_motorsport_racing_prix_raced |
| 664 | punches - featherweight - fighter - fighters - fights | 96 | 664_punches_featherweight_fighter_fighters |
| 665 | herbicides - herbicide - orange - contaminated - chemicals | 96 | 665_herbicides_herbicide_orange_contaminated |
| 666 | nonfiction - bestseller - novelist - autobiography - novels | 96 | 666_nonfiction_bestseller_novelist_autobiography |
| 667 | cannabis - marijuana - sect - sects - cultivates | 96 | 667_cannabis_marijuana_sect_sects |
| 668 | income - poverty - median - households - affluent | 96 | 668_income_poverty_median_households |
| 669 | epistemological - epistemic - epistemology - epistemologists - belief | 96 | 669_epistemological_epistemic_epistemology_epistemologists |
| 670 | genie - mother - abuse - childhood - parents | 95 | 670_genie_mother_abuse_childhood |
| 671 | 802 - wireless - bandwidth - communications - antennas | 95 | 671_802_wireless_bandwidth_communications |
| 672 | han - nam - 1945 - kai - troops | 95 | 672_han_nam_1945_kai |
| 673 | wage - wages - minimum - hourly - raise | 95 | 673_wage_wages_minimum_hourly |
| 674 | lambs - screenplay - thriller - silence - films | 95 | 674_lambs_screenplay_thriller_silence |
| 675 | donation - donated - charity - donations - donating | 95 | 675_donation_donated_charity_donations |
| 676 | wu - tang - rapper - kung - rap | 95 | 676_wu_tang_rapper_kung |
| 677 | influenza - flu - pandemics - pandemic - epidemic | 95 | 677_influenza_flu_pandemics_pandemic |
| 678 | animatronic - animatronics - minigames - nightmare - nights | 95 | 678_animatronic_animatronics_minigames_nightmare |
| 679 | convicts - colonists - 1788 - convict - settlers | 94 | 679_convicts_colonists_1788_convict |
| 680 | displays - monitors - cables - cable - ports | 94 | 680_displays_monitors_cables_cable |
| 681 | trademarks - trademark - infringement - copyrights - copyright | 94 | 681_trademarks_trademark_infringement_copyrights |
| 682 | farmworkers - unions - picketing - protest - laborers | 94 | 682_farmworkers_unions_picketing_protest |
| 683 | libertarianism - libertarians - libertarian - liberalism - anarchists | 94 | 683_libertarianism_libertarians_libertarian_liberalism |
| 684 | temptations - sang - toured - singers - albums | 94 | 684_temptations_sang_toured_singers |
| 685 | 1898 - 1896 - 1902 - dictator - insurgent | 94 | 685_1898_1896_1902_dictator |
| 686 | insurance - insurer - insurers - insured - insure | 94 | 686_insurance_insurer_insurers_insured |
| 687 | shooting - shootings - shooters - shooter - firearm | 94 | 687_shooting_shootings_shooters_shooter |
| 688 | colitis - bowel - gastrointestinal - intestinal - inflammatory | 94 | 688_colitis_bowel_gastrointestinal_intestinal |
| 689 | divorce - peace - adultery - ballad - lyrics | 94 | 689_divorce_peace_adultery_ballad |
| 690 | artillery - howitzers - howitzer - cannons - rifle | 93 | 690_artillery_howitzers_howitzer_cannons |
| 691 | ups - deliveries - logistics - delivery - freight | 93 | 691_ups_deliveries_logistics_delivery |
| 692 | metal - gear - consoles - sequels - franchise | 93 | 692_metal_gear_consoles_sequels |
| 693 | ibn - hadith - imam - ijtihad - khan | 93 | 693_ibn_hadith_imam_ijtihad |
| 694 | industrial - subsidiaries - manufacturer - industries - corporation | 93 | 694_industrial_subsidiaries_manufacturer_industries |
| 695 | motorsport - prix - motorsports - racing - raced | 93 | 695_motorsport_prix_motorsports_racing |
| 696 | 1936 - deposed - 1935 - invaded - 1937 | 93 | 696_1936_deposed_1935_invaded |
| 697 | scotch - whisky - whiskey - distillery - bourbon | 93 | 697_scotch_whisky_whiskey_distillery |
| 698 | premiered - machina - cast - critical - productions | 93 | 698_premiered_machina_cast_critical |
| 699 | psychedelics - psychedelic - ayahuasca - cannabis - psilocybin | 93 | 699_psychedelics_psychedelic_ayahuasca_cannabis |
| 700 | homeless - homelessness - shelters - shelter - housing | 93 | 700_homeless_homelessness_shelters_shelter |
| 701 | newton - gravitation - gravitational - gravity - gravitating | 93 | 701_newton_gravitation_gravitational_gravity |
| 702 | swamp - comics - comic - sting - likeness | 92 | 702_swamp_comics_comic_sting |
| 703 | languages - language - linguists - lingua - linguistics | 92 | 703_languages_language_linguists_lingua |
| 704 | mutilations - mutilation - mutilating - circumcision - clitoridectomy | 92 | 704_mutilations_mutilation_mutilating_circumcision |
| 705 | harassment - harassing - harassed - harass - discrimination | 92 | 705_harassment_harassing_harassed_harass |
| 706 | artistic - art - artwork - paintings - artworks | 92 | 706_artistic_art_artwork_paintings |
| 707 | paintings - painter - painters - painting - portraits | 92 | 707_paintings_painter_painters_painting |
| 708 | piazza - opera - tenor - bohème - arias | 92 | 708_piazza_opera_tenor_bohème |
| 709 | tsar - tsarist - tsars - czar - emperors | 92 | 709_tsar_tsarist_tsars_czar |
| 710 | ai - intelligence - machines - cognitive - intelligent | 92 | 710_ai_intelligence_machines_cognitive |
| 711 | pamphlet - 1789 - revolutionary - 1790 - 1793 | 92 | 711_pamphlet_1789_revolutionary_1790 |
| 712 | murders - detectives - murdered - constable - detective | 92 | 712_murders_detectives_murdered_constable |
| 713 | healthcare - insurance - health - hospitals - insurers | 92 | 713_healthcare_insurance_health_hospitals |
| 714 | plague - plagues - diseases - epidemics - epidemic | 91 | 714_plague_plagues_diseases_epidemics |
| 715 | paleolithic - neolithic - archaeological - prehistory - archaeologists | 91 | 715_paleolithic_neolithic_archaeological_prehistory |
| 716 | theology - faith - teachings - religion - monotheism | 91 | 716_theology_faith_teachings_religion |
| 717 | alderman - mayor - mayoral - candidates - superintendent | 91 | 717_alderman_mayor_mayoral_candidates |
| 718 | nam - chi - southeast - urban - city | 91 | 718_nam_chi_southeast_urban |
| 719 | skating - skaters - skater - skate - competed | 91 | 719_skating_skaters_skater_skate |
| 720 | banking - bank - finances - finance - funds | 91 | 720_banking_bank_finances_finance |
| 721 | asbestos - asbestosis - minerals - mineral - toxicology | 91 | 721_asbestos_asbestosis_minerals_mineral |
| 722 | municipalities - municipality - cities - population - city | 90 | 722_municipalities_municipality_cities_population |
| 723 | headquartered - headquarters - companies - san - industries | 90 | 723_headquartered_headquarters_companies_san |
| 724 | soviets - communists - communist - soviet - communism | 90 | 724_soviets_communists_communist_soviet |
| 725 | tapes - recorder - recorders - recording - cassette | 90 | 725_tapes_recorder_recorders_recording |
| 726 | swastika - swastikas - symbolises - symbol - symbolising | 90 | 726_swastika_swastikas_symbolises_symbol |
| 727 | oblast - oblasts - annexation - annexations - annexed | 90 | 727_oblast_oblasts_annexation_annexations |
| 728 | filmed - filming - premiered - premiere - seasons | 90 | 728_filmed_filming_premiered_premiere |
| 729 | evacuated - evacuation - evacuate - ceasefire - bombed | 90 | 729_evacuated_evacuation_evacuate_ceasefire |
| 730 | quad - quadrilateral - multilateral - alliances - trilateral | 90 | 730_quad_quadrilateral_multilateral_alliances |
| 731 | sake - rice - liquor - brewing - alcohol | 90 | 731_sake_rice_liquor_brewing |
| 732 | enigma - rotor - rotors - cipher - cryptographic | 90 | 732_enigma_rotor_rotors_cipher |
| 733 | anthropology - anthropological - sociocultural - anthropologist - anthropologists | 90 | 733_anthropology_anthropological_sociocultural_anthropologist |
| 734 | executives - stockholders - accounting - shareholders - insiders | 89 | 734_executives_stockholders_accounting_shareholders |
| 735 | psychedelics - psychedelic - psilocybin - hallucinations - psychosis | 89 | 735_psychedelics_psychedelic_psilocybin_hallucinations |
| 736 | quicksort - sorting - sort - sorts - algorithm | 89 | 736_quicksort_sorting_sort_sorts |
| 737 | 1918 - soviets - polish - soviet - battle | 89 | 737_1918_soviets_polish_soviet |
| 738 | barangays - barangay - municipalities - metropolitan - metro | 89 | 738_barangays_barangay_municipalities_metropolitan |
| 739 | assists - rebounds - suns - 76ers - steals | 89 | 739_assists_rebounds_suns_76ers |
| 740 | spaghetti - western - westerns - films - movies | 89 | 740_spaghetti_western_westerns_films |
| 741 | airing - adult - swim - aqua - episodes | 89 | 741_airing_adult_swim_aqua |
| 742 | queer - heterosexuality - heterosexuals - homosexual - homosexuals | 89 | 742_queer_heterosexuality_heterosexuals_homosexual |
| 743 | control - controller - controlled - controllers - disturbances | 89 | 743_control_controller_controlled_controllers |
| 744 | abortion - abortions - pregnancies - pregnancy - fetuses | 89 | 744_abortion_abortions_pregnancies_pregnancy |
| 745 | voyages - voyage - caravel - expeditions - navigator | 89 | 745_voyages_voyage_caravel_expeditions |
| 746 | channel - channels - broadcasting - syndicated - simulcast | 88 | 746_channel_channels_broadcasting_syndicated |
| 747 | sati - castes - widowhood - prohibits - prohibition | 88 | 747_sati_castes_widowhood_prohibits |
| 748 | conquistadors - confederation - tlatoani - provinces - rulers | 88 | 748_conquistadors_confederation_tlatoani_provinces |
| 749 | supermarket - supermarkets - shops - retailer - retailers | 88 | 749_supermarket_supermarkets_shops_retailer |
| 750 | khan - khanate - tsar - khans - khanates | 88 | 750_khan_khanate_tsar_khans |
| 751 | separatists - soviet - militants - ceasefire - guerrillas | 88 | 751_separatists_soviet_militants_ceasefire |
| 752 | magician - occultist - occultism - occultists - mysticism | 88 | 752_magician_occultist_occultism_occultists |
| 753 | swam - swimmer - olympic - swimmers - freestyle | 88 | 753_swam_swimmer_olympic_swimmers |
| 754 | alchemy - alchemists - alchemist - alchemical - al | 88 | 754_alchemy_alchemists_alchemist_alchemical |
| 755 | robin - hood - friar - hoods - knight | 88 | 755_robin_hood_friar_hoods |
| 756 | genders - gender - sexes - gendered - genderqueer | 87 | 756_genders_gender_sexes_gendered |
| 757 | privacy - data - regulations - enforcement - regulation | 87 | 757_privacy_data_regulations_enforcement |
| 758 | chocolate - chocolates - confectionery - brands - manufacturer | 87 | 758_chocolate_chocolates_confectionery_brands |
| 759 | murders - corpse - unconscious - murder - strangled | 87 | 759_murders_corpse_unconscious_murder |
| 760 | ayahuasca - psychedelics - psychedelic - addictions - shamans | 87 | 760_ayahuasca_psychedelics_psychedelic_addictions |
| 761 | audit - audited - auditing - audits - fines | 87 | 761_audit_audited_auditing_audits |
| 762 | dragons - dragon - amulets - carvings - robes | 87 | 762_dragons_dragon_amulets_carvings |
| 763 | murderer - murders - murdered - killings - murder | 87 | 763_murderer_murders_murdered_killings |
| 764 | diamond - sapphire - pearl - games - evolve | 87 | 764_diamond_sapphire_pearl_games |
| 765 | hepatitis - hepatic - cirrhosis - liver - hepatocellular | 87 | 765_hepatitis_hepatic_cirrhosis_liver |
| 766 | ba - antibody - antibodies - vaccines - 2022 | 87 | 766_ba_antibody_antibodies_vaccines |
| 767 | algorithm - algorithms - paths - traversal - nodes | 87 | 767_algorithm_algorithms_paths_traversal |
| 768 | gable - actresses - films - actor - film | 87 | 768_gable_actresses_films_actor |
| 769 | verse - poetry - poet - poems - poem | 87 | 769_verse_poetry_poet_poems |
| 770 | judicial - justices - judiciary - courts - judges | 87 | 770_judicial_justices_judiciary_courts |
| 771 | processors - processor - intel - microarchitecture - cores | 87 | 771_processors_processor_intel_microarchitecture |
| 772 | emperor - emperors - empress - dowager - eunuch | 87 | 772_emperor_emperors_empress_dowager |
| 773 | anthrax - spores - assays - contaminated - microbiologist | 86 | 773_anthrax_spores_assays_contaminated |
| 774 | comics - superhero - superman - superheroes - comic | 86 | 774_comics_superhero_superman_superheroes |
| 775 | seo - searches - webmaster - webmasters - web | 86 | 775_seo_searches_webmaster_webmasters |
| 776 | kabbalah - kabbalistic - esotericism - mysticism - theology | 86 | 776_kabbalah_kabbalistic_esotericism_mysticism |
| 777 | caesarean - cesarean - uterus - pregnancies - uterine | 86 | 777_caesarean_cesarean_uterus_pregnancies |
| 778 | semiconductor - transistors - transistor - gate - circuitry | 86 | 778_semiconductor_transistors_transistor_gate |
| 779 | furniture - stores - store - warehouse - malls | 86 | 779_furniture_stores_store_warehouse |
| 780 | inquisition - persecution - catholic - reformation - heresy | 86 | 780_inquisition_persecution_catholic_reformation |
| 781 | dictator - dictatorship - dictatorial - regime - presidential | 86 | 781_dictator_dictatorship_dictatorial_regime |
| 782 | emoji - emojis - smiley - symbols - glyphs | 86 | 782_emoji_emojis_smiley_symbols |
| 783 | costumes - costume - dressed - dresses - dress | 86 | 783_costumes_costume_dressed_dresses |
| 784 | sexiest - playboy - hottest - glamour - actresses | 86 | 784_sexiest_playboy_hottest_glamour |
| 785 | karate - kung - martial - cobra - tae | 86 | 785_karate_kung_martial_cobra |
| 786 | papacy - pope - papal - catholic - holocaust | 85 | 786_papacy_pope_papal_catholic |
| 787 | tarot - cards - decks - deck - card | 85 | 787_tarot_cards_decks_deck |
| 788 | deities - goddesses - goddess - mythology - underworld | 85 | 788_deities_goddesses_goddess_mythology |
| 789 | waterboarding - waterboarded - torture - interrogations - interrogation | 85 | 789_waterboarding_waterboarded_torture_interrogations |
| 790 | degree - bachelor - diploma - qualification - courses | 85 | 790_degree_bachelor_diploma_qualification |
| 791 | nonprofit - nonprofits - donations - organizations - nongovernmental | 85 | 791_nonprofit_nonprofits_donations_organizations |
| 792 | perjury - misconduct - impeachment - allegations - affair | 85 | 792_perjury_misconduct_impeachment_allegations |
| 793 | retailer - supermarket - stores - supermarkets - shop | 85 | 793_retailer_supermarket_stores_supermarkets |
| 794 | crimes - convicted - assaulted - raped - plea | 85 | 794_crimes_convicted_assaulted_raped |
| 795 | paintings - painter - painting - murals - portraits | 85 | 795_paintings_painter_painting_murals |
| 796 | mansa - throne - rulers - kingdoms - emperor | 85 | 796_mansa_throne_rulers_kingdoms |
| 797 | stripes - jack - bands - band - bandmate | 84 | 797_stripes_jack_bands_band |
| 798 | 1941 - polish - 1939 - nazi - treaty | 84 | 798_1941_polish_1939_nazi |
| 799 | prix - motorsport - racing - motorsports - qualifying | 84 | 799_prix_motorsport_racing_motorsports |
| 800 | buzz - toy - toys - woody - toyline | 84 | 800_buzz_toy_toys_woody |
| 801 | generals - counterinsurgency - military - militias - strategist | 84 | 801_generals_counterinsurgency_military_militias |
| 802 | casino - casinos - gambling - 1960s - hotel | 84 | 802_casino_casinos_gambling_1960s |
| 803 | telecom - telecommunications - telecoms - provider - shareholders | 84 | 803_telecom_telecommunications_telecoms_provider |
| 804 | sitcom - cast - cartoons - cartoon - voiced | 84 | 804_sitcom_cast_cartoons_cartoon |
| 805 | extradition - jailed - convicted - sentenced - detained | 84 | 805_extradition_jailed_convicted_sentenced |
| 806 | yogurt - yogurts - yoghurt - dairy - lactose | 84 | 806_yogurt_yogurts_yoghurt_dairy |
| 807 | junta - loyalist - rebellion - juntas - royalist | 84 | 807_junta_loyalist_rebellion_juntas |
| 808 | golfer - golfers - woods - golf - masters | 84 | 808_golfer_golfers_woods_golf |
| 809 | fitness - gyms - gym - gymnastics - camps | 84 | 809_fitness_gyms_gym_gymnastics |
| 810 | butter - gluten - flour - glutenin - dough | 83 | 810_butter_gluten_flour_glutenin |
| 811 | sizes - paper - sheet - sheets - width | 83 | 811_sizes_paper_sheet_sheets |
| 812 | baker - divorced - remarried - stepfather - divorcing | 83 | 812_baker_divorced_remarried_stepfather |
| 813 | tattoos - tattooing - tattoo - tattooed - markings | 83 | 813_tattoos_tattooing_tattoo_tattooed |
| 814 | castes - caste - discriminated - discrimination - raj | 83 | 814_castes_caste_discriminated_discrimination |
| 815 | dreaming - lucidity - dreams - lucid - dreamer | 83 | 815_dreaming_lucidity_dreams_lucid |
| 816 | mountains - mountainous - tributary - river - elevation | 83 | 816_mountains_mountainous_tributary_river |
| 817 | bombings - murders - suspects - terrorist - homicide | 83 | 817_bombings_murders_suspects_terrorist |
| 818 | conscription - military - enlistment - draftees - draft | 83 | 818_conscription_military_enlistment_draftees |
| 819 | presentations - presentation - slides - keynote - slide | 83 | 819_presentations_presentation_slides_keynote |
| 820 | paraphilia - paraphilias - pedophilia - pedophilic - paraphilic | 83 | 820_paraphilia_paraphilias_pedophilia_pedophilic |
| 821 | bushido - bushidō - samurai - martial - judo | 83 | 821_bushido_bushidō_samurai_martial |
| 822 | fjord - archaeological - meadows - voyages - settlers | 83 | 822_fjord_archaeological_meadows_voyages |
| 823 | tofu - soy - soybean - sesame - vegetarian | 83 | 823_tofu_soy_soybean_sesame |
| 824 | gang - gangs - comedies - productions - roach | 83 | 824_gang_gangs_comedies_productions |
| 825 | accents - accent - dialects - dialect - pronunciation | 82 | 825_accents_accent_dialects_dialect |
| 826 | screenplay - ultimatum - screenwriter - thriller - trilogy | 82 | 826_screenplay_ultimatum_screenwriter_thriller |
| 827 | stamps - stamp - postage - postal - postmaster | 82 | 827_stamps_stamp_postage_postal |
| 828 | typescript - compiler - type - developers - interpreter | 82 | 828_typescript_compiler_type_developers |
| 829 | aspirin - ibuprofen - analgesics - inhibitors - medications | 82 | 829_aspirin_ibuprofen_analgesics_inhibitors |
| 830 | atheist - agnostic - agnosticism - atheism - religious | 82 | 830_atheist_agnostic_agnosticism_atheism |
| 831 | postal - postmaster - postage - deliveries - mail | 82 | 831_postal_postmaster_postage_deliveries |
| 832 | 1914 - 1913 - 1915 - 1918 - 1912 | 82 | 832_1914_1913_1915_1918 |
| 833 | graphite - carbon - steelmaking - mined - pencil | 82 | 833_graphite_carbon_steelmaking_mined |
| 834 | integers - primes - integer - prime - arithmetic | 82 | 834_integers_primes_integer_prime |
| 835 | bloods - gangs - gang - blood - criminals | 82 | 835_bloods_gangs_gang_blood |
| 836 | osmosis - desalination - purification - filtration - membranes | 82 | 836_osmosis_desalination_purification_filtration |
| 837 | guerre - french - 1958 - ceasefire - rebels | 82 | 837_guerre_french_1958_ceasefire |
| 838 | actress - sonata - och - autumn - maid | 82 | 838_actress_sonata_och_autumn |
| 839 | fastest - racing - mph - speed - motorsport | 82 | 839_fastest_racing_mph_speed |
| 840 | airline - airlines - seats - seating - 737 | 82 | 840_airline_airlines_seats_seating |
| 841 | novelist - writer - novels - literature - writers | 82 | 841_novelist_writer_novels_literature |
| 842 | nationalism - nationalist - nationalists - patriotism - nation | 82 | 842_nationalism_nationalist_nationalists_patriotism |
| 843 | celebrations - celebrated - festival - calendar - holidays | 82 | 843_celebrations_celebrated_festival_calendar |
| 844 | guerrillas - guerrilla - rebels - dictator - fled | 82 | 844_guerrillas_guerrilla_rebels_dictator |
| 845 | murdered - strangled - killed - unconscious - murders | 82 | 845_murdered_strangled_killed_unconscious |
| 846 | rated - rating - ratings - pg - films | 81 | 846_rated_rating_ratings_pg |
| 847 | mac - leopard - apple - os - versions | 81 | 847_mac_leopard_apple_os |
| 848 | aboriginal - indigenous - settlers - provincial - prairies | 81 | 848_aboriginal_indigenous_settlers_provincial |
| 849 | maps - map - google - android - street | 81 | 849_maps_map_google_android |
| 850 | airplane - airlines - hijacked - hijackers - hijackings | 81 | 850_airplane_airlines_hijacked_hijackers |
| 851 | bp - spill - spills - damages - negligence | 81 | 851_bp_spill_spills_damages |
| 852 | longitude - latitudes - latitude - geocentric - ellipsoid | 81 | 852_longitude_latitudes_latitude_geocentric |
| 853 | golfer - golfers - golf - masters - tournaments | 81 | 853_golfer_golfers_golf_masters |
| 854 | dean - hunter - actor - biography - acting | 81 | 854_dean_hunter_actor_biography |
| 855 | latching - latch - latches - flops - flip | 81 | 855_latching_latch_latches_flops |
| 856 | honours - honorary - honour - knighted - appointed | 81 | 856_honours_honorary_honour_knighted |
| 857 | clinical - gibbons - investigation - patents - laboratory | 81 | 857_clinical_gibbons_investigation_patents |
| 858 | suffrage - suffragettes - activists - feminist - activist | 81 | 858_suffrage_suffragettes_activists_feminist |
| 859 | toured - concert - début - tour - albums | 81 | 859_toured_concert_début_tour |
| 860 | pastor - pastors - megachurch - evangelical - ministries | 81 | 860_pastor_pastors_megachurch_evangelical |
| 861 | fm - stations - radio - station - broadcasts | 80 | 861_fm_stations_radio_station |
| 862 | filters - filtering - covariance - filter - covariances | 80 | 862_filters_filtering_covariance_filter |
| 863 | conspiracies - conspiratorial - conspiracy - conspiracism - conspiracist | 80 | 863_conspiracies_conspiratorial_conspiracy_conspiracism |
| 864 | soprano - sopranos - actor - cast - actors | 80 | 864_soprano_sopranos_actor_cast |
| 865 | expedition - voyage - whaling - exploration - 1901 | 80 | 865_expedition_voyage_whaling_exploration |
| 866 | actor - hamlet - actors - acting - theatre | 80 | 866_actor_hamlet_actors_acting |
| 867 | designers - designer - boutiques - fashion - makeup | 80 | 867_designers_designer_boutiques_fashion |
| 868 | processors - 1070 - supercomputers - processor - hardware | 80 | 868_processors_1070_supercomputers_processor |
| 869 | primus - toured - tour - praxis - drums | 80 | 869_primus_toured_tour_praxis |
| 870 | roof - prosecution - defendants - sentencing - convicted | 80 | 870_roof_prosecution_defendants_sentencing |
| 871 | strongman - strongest - strongmen - strength - competed | 80 | 871_strongman_strongest_strongmen_strength |
| 872 | parliament - parliamentary - constituencies - legislature - legislatures | 80 | 872_parliament_parliamentary_constituencies_legislature |
| 873 | monkey - monk - monkeys - buddha - tang | 80 | 873_monkey_monk_monkeys_buddha |
| 874 | rap - albums - park - rock - hybrid | 80 | 874_rap_albums_park_rock |
| 875 | coalition - election - minister - elections - 2021 | 80 | 875_coalition_election_minister_elections |
| 876 | smartphone - smartphones - laptop - tablet - sales | 80 | 876_smartphone_smartphones_laptop_tablet |
| 877 | stratosphere - meteorological - stratospheric - climatic - climate | 80 | 877_stratosphere_meteorological_stratospheric_climatic |
| 878 | reformation - protestant - theologian - papacy - 1541 | 80 | 878_reformation_protestant_theologian_papacy |
| 879 | neighbours - episodes - airing - episode - channel | 80 | 879_neighbours_episodes_airing_episode |
| 880 | coca - cocaine - tobacco - cola - leaves | 80 | 880_coca_cocaine_tobacco_cola |
| 881 | inferno - purgatory - sins - torment - theology | 80 | 881_inferno_purgatory_sins_torment |
| 882 | confederate - flags - flag - confederacy - confederates | 80 | 882_confederate_flags_flag_confederacy |
| 883 | dubbed - dub - anime - releases - premiered | 79 | 883_dubbed_dub_anime_releases |
| 884 | baron - comedian - mockumentary - documentary - film | 79 | 884_baron_comedian_mockumentary_documentary |
| 885 | golfer - masters - golf - golfers - tournament | 79 | 885_golfer_masters_golf_golfers |
| 886 | spiritualism - spirituality - paganism - esotericism - religiosity | 79 | 886_spiritualism_spirituality_paganism_esotericism |
| 887 | graffiti - paintings - painting - artworks - paint | 79 | 887_graffiti_paintings_painting_artworks |
| 888 | lakes - lake - shipwrecks - shipwreck - sank | 79 | 888_lakes_lake_shipwrecks_shipwreck |
| 889 | fashion - designers - designer - fashions - boutique | 79 | 889_fashion_designers_designer_fashions |
| 890 | representation - philosophy - philosophical - philosopher - philosophies | 79 | 890_representation_philosophy_philosophical_philosopher |
| 891 | railgun - railguns - rail - projectile - projectiles | 78 | 891_railgun_railguns_rail_projectile |
| 892 | adobe - illustrator - software - mac - graphics | 78 | 892_adobe_illustrator_software_mac |
| 893 | paternal - stepfather - nazi - illegitimate - grandfather | 78 | 893_paternal_stepfather_nazi_illegitimate |
| 894 | helix - nucleic - discoveries - discovered - biophysics | 78 | 894_helix_nucleic_discoveries_discovered |
| 895 | payments - payment - merchant - purchases - merchants | 78 | 895_payments_payment_merchant_purchases |
| 896 | airlines - airline - pan - flights - midway | 78 | 896_airlines_airline_pan_flights |
| 897 | secretariat - racehorse - racetrack - thoroughbred - racehorses | 78 | 897_secretariat_racehorse_racetrack_thoroughbred |
| 898 | sensitivity - specificity - diagnostic - positives - precision | 78 | 898_sensitivity_specificity_diagnostic_positives |
| 899 | pirate - piracy - bay - infringement - infringements | 78 | 899_pirate_piracy_bay_infringement |
| 900 | oyster - oysters - shellfish - crabs - seafood | 78 | 900_oyster_oysters_shellfish_crabs |
| 901 | ethnicities - ethnic - ethnically - ethnicity - population | 78 | 901_ethnicities_ethnic_ethnically_ethnicity |
| 902 | abolitionist - abolitionists - slavery - 1860 - abolition | 78 | 902_abolitionist_abolitionists_slavery_1860 |
| 903 | reefs - corals - coral - reef - aquaculture | 77 | 903_reefs_corals_coral_reef |
| 904 | incomes - income - wealth - disparities - poverty | 77 | 904_incomes_income_wealth_disparities |
| 905 | officers - officer - recruitment - administrative - secretaries | 77 | 905_officers_officer_recruitment_administrative |
| 906 | sabbath - piers - frontman - airing - presenter | 77 | 906_sabbath_piers_frontman_airing |
| 907 | aether - realms - realm - omnipotence - gods | 77 | 907_aether_realms_realm_omnipotence |
| 908 | extinctions - extinction - extinct - dinosaurs - speciation | 77 | 908_extinctions_extinction_extinct_dinosaurs |
| 909 | armistice - 38th - counterinsurgency - soviet - retreated | 77 | 909_armistice_38th_counterinsurgency_soviet |
| 910 | magicians - magician - museum - vaudeville - cemetery | 77 | 910_magicians_magician_museum_vaudeville |
| 911 | sequels - sequel - trilogy - screenplay - remake | 77 | 911_sequels_sequel_trilogy_screenplay |
| 912 | executions - executed - clemency - punishment - inmates | 77 | 912_executions_executed_clemency_punishment |
| 913 | neolithic - archaeological - archaeology - excavations - civilisation | 77 | 913_neolithic_archaeological_archaeology_excavations |
| 914 | dolly - novel - literature - mansion - narrator | 77 | 914_dolly_novel_literature_mansion |
| 915 | sparrow - pirates - pirate - privateer - captained | 77 | 915_sparrow_pirates_pirate_privateer |
| 916 | scurvy - vitamin - supplementation - dietary - supplement | 77 | 916_scurvy_vitamin_supplementation_dietary |
| 917 | holly - finale - office - receptionist - episode | 77 | 917_holly_finale_office_receptionist |
| 918 | hemp - cannabis - textiles - cultivated - textile | 76 | 918_hemp_cannabis_textiles_cultivated |
| 919 | lidar - radar - laser - photogrammetry - sensors | 76 | 919_lidar_radar_laser_photogrammetry |
| 920 | dingoes - dingo - breeding - pets - kangaroos | 76 | 920_dingoes_dingo_breeding_pets |
| 921 | crocodile - zookeeper - zoo - crocodiles - wildlife | 76 | 921_crocodile_zookeeper_zoo_crocodiles |
| 922 | slots - slot - gambling - reels - poker | 76 | 922_slots_slot_gambling_reels |
| 923 | bombs - bomb - 1945 - bombing - detonated | 76 | 923_bombs_bomb_1945_bombing |
| 924 | manufacturer - corporate - corporation - company - brands | 76 | 924_manufacturer_corporate_corporation_company |
| 925 | stones - stone - guitarist - guitarists - drums | 76 | 925_stones_stone_guitarist_guitarists |
| 926 | meiosis - mitosis - chromosomal - chromosomes - chromosome | 76 | 926_meiosis_mitosis_chromosomal_chromosomes |
| 927 | pirate - privateer - bonnet - pirates - privateering | 76 | 927_pirate_privateer_bonnet_pirates |
| 928 | parks - park - attractions - studios - pavilion | 75 | 928_parks_park_attractions_studios |
| 929 | medicine - medicinal - medicines - physicians - herbal | 75 | 929_medicine_medicinal_medicines_physicians |
| 930 | acupuncture - acupuncturists - medicine - practitioners - patients | 75 | 930_acupuncture_acupuncturists_medicine_practitioners |
| 931 | margarine - yeast - extracts - foods - recipe | 75 | 931_margarine_yeast_extracts_foods |
| 932 | chiropractors - chiropractic - chiropractor - osteopathic - practitioners | 75 | 932_chiropractors_chiropractic_chiropractor_osteopathic |
| 933 | negro - activist - behest - 1925 - racism | 75 | 933_negro_activist_behest_1925 |
| 934 | infantry - tanks - soldier - 1944 - troops | 75 | 934_infantry_tanks_soldier_1944 |
| 935 | geography - geographic - geographical - geographer - geographers | 75 | 935_geography_geographic_geographical_geographer |
| 936 | federalism - federations - federation - unitary - sovereignty | 75 | 936_federalism_federations_federation_unitary |
| 937 | braking - transmissions - brakes - automatic - brake | 75 | 937_braking_transmissions_brakes_automatic |
| 938 | ford - presidency - presidential - presidents - wife | 75 | 938_ford_presidency_presidential_presidents |
| 939 | eukaryotes - prokaryotes - eukaryotic - prokaryotic - eukaryote | 75 | 939_eukaryotes_prokaryotes_eukaryotic_prokaryotic |
| 940 | electroconvulsive - antidepressants - antidepressant - anticonvulsant - electrodes | 75 | 940_electroconvulsive_antidepressants_antidepressant_anticonvulsant |
| 941 | bourgeoisie - capitalist - bourgeois - capitalism - socialism | 75 | 941_bourgeoisie_capitalist_bourgeois_capitalism |
| 942 | burger - hamburger - burgers - hamburgers - steak | 75 | 942_burger_hamburger_burgers_hamburgers |
| 943 | stagecoach - ford - cinematography - films - actor | 75 | 943_stagecoach_ford_cinematography_films |
| 944 | comics - cartoonist - adventures - magazine - comic | 75 | 944_comics_cartoonist_adventures_magazine |
| 945 | detective - detectives - novels - murders - obituary | 75 | 945_detective_detectives_novels_murders |
| 946 | laureates - laureate - prizes - prize - awarding | 75 | 946_laureates_laureate_prizes_prize |
| 947 | bombed - troops - insurgency - casualties - tactics | 75 | 947_bombed_troops_insurgency_casualties |
| 948 | allegations - molested - offences - alleged - abused | 74 | 948_allegations_molested_offences_alleged |
| 949 | subreddit - subreddits - banning - censorship - incels | 74 | 949_subreddit_subreddits_banning_censorship |
| 950 | onzz - superman - watchtower - superhero - storyline | 74 | 950_onzz_superman_watchtower_superhero |
| 951 | pronouns - pronoun - plurality - plurals - plural | 74 | 951_pronouns_pronoun_plurality_plurals |
| 952 | gymnast - gymnastics - gymnasts - olympic - competed | 74 | 952_gymnast_gymnastics_gymnasts_olympic |
| 953 | bonobos - chimpanzees - primates - chimpanzee - primate | 74 | 953_bonobos_chimpanzees_primates_chimpanzee |
| 954 | singer - songwriter - albums - musician - bono | 74 | 954_singer_songwriter_albums_musician |
| 955 | pearls - pearl - pearling - oysters - oyster | 74 | 955_pearls_pearl_pearling_oysters |
| 956 | patients - inpatients - physicians - physician - inpatient | 74 | 956_patients_inpatients_physicians_physician |
| 957 | oz - wizard - 1939 - wicked - emerald | 74 | 957_oz_wizard_1939_wicked |
| 958 | pride - flags - flag - rainbow - parade | 74 | 958_pride_flags_flag_rainbow |
| 959 | espionage - spies - spy - spying - soviets | 74 | 959_espionage_spies_spy_spying |
| 960 | chairman - executive - resigned - chief - directors | 74 | 960_chairman_executive_resigned_chief |
| 961 | paramilitary - mercenaries - civilians - mercenary - casualties | 74 | 961_paramilitary_mercenaries_civilians_mercenary |
| 962 | obesity - obese - overweight - underweight - adipose | 74 | 962_obesity_obese_overweight_underweight |
| 963 | deities - polytheism - monotheistic - monotheism - creation | 74 | 963_deities_polytheism_monotheistic_monotheism |
| 964 | housewives - housewife - airing - episodes - renewed | 73 | 964_housewives_housewife_airing_episodes |
| 965 | tariffs - tariff - exports - agreements - economy | 73 | 965_tariffs_tariff_exports_agreements |
| 966 | metric - imperial - units - metre - kilograms | 73 | 966_metric_imperial_units_metre |
| 967 | forested - vegetation - conifers - forests - rainforests | 73 | 967_forested_vegetation_conifers_forests |
| 968 | schemas - schema - metadata - structured - specification | 73 | 968_schemas_schema_metadata_structured |
| 969 | homosexuality - homosexuals - homosexual - homophobia - immoral | 73 | 969_homosexuality_homosexuals_homosexual_homophobia |
| 970 | dome - missiles - missile - protects - protect | 73 | 970_dome_missiles_missile_protects |
| 971 | scramjet - scramjets - turbojet - turbojets - ramjet | 73 | 971_scramjet_scramjets_turbojet_turbojets |
| 972 | esotericists - esotericism - esoteric - occultism - occultists | 73 | 972_esotericists_esotericism_esoteric_occultism |
| 973 | regexes - regex - syntax - parsing - patterns | 73 | 973_regexes_regex_syntax_parsing |
| 974 | auroral - aurora - auroras - magnetosphere - aurorae | 73 | 974_auroral_aurora_auroras_magnetosphere |
| 975 | metamorphosis - literature - literary - writings - writer | 73 | 975_metamorphosis_literature_literary_writings |
| 976 | musician - concert - gravestone - bandmate - backstage | 73 | 976_musician_concert_gravestone_bandmate |
| 977 | dell - manufacturers - manufacturer - vendors - intel | 73 | 977_dell_manufacturers_manufacturer_vendors |
| 978 | soviets - missiles - overflights - reconnaissance - overflight | 73 | 978_soviets_missiles_overflights_reconnaissance |
| 979 | profiles - profile - freelancers - recruiters - resumes | 73 | 979_profiles_profile_freelancers_recruiters |
| 980 | doge - pope - 1571 - mediterranean - duchy | 72 | 980_doge_pope_1571_mediterranean |
| 981 | chess - grandmaster - fide - tournament - championship | 72 | 981_chess_grandmaster_fide_tournament |
| 982 | comet - cometary - comets - meteor - telescope | 72 | 982_comet_cometary_comets_meteor |
| 983 | totalitarianism - holocaust - totalitarian - biography - nazi | 72 | 983_totalitarianism_holocaust_totalitarian_biography |
| 984 | tics - tic - disorders - neuropsychiatric - autism | 72 | 984_tics_tic_disorders_neuropsychiatric |
| 985 | bullying - bullied - bullies - bully - cyberbullying | 72 | 985_bullying_bullied_bullies_bully |
| 986 | psychopathy - psychopathic - psychopaths - psychopath - sociopathy | 72 | 986_psychopathy_psychopathic_psychopaths_psychopath |
| 987 | linguistics - linguistic - linguists - linguist - languages | 72 | 987_linguistics_linguistic_linguists_linguist |
| 988 | literature - writings - author - fictions - literary | 72 | 988_literature_writings_author_fictions |
| 989 | cook - voyage - voyages - sailed - 1788 | 72 | 989_cook_voyage_voyages_sailed |
| 990 | cyberpunk - cybernetics - novelists - novel - fiction | 72 | 990_cyberpunk_cybernetics_novelists_novel |
| 991 | population - census - inhabitants - populous - populated | 72 | 991_population_census_inhabitants_populous |
| 992 | linden - lab - copyright - token - refund | 72 | 992_linden_lab_copyright_token |
| 993 | cartoons - cartoon - spinach - comic - cartoonists | 72 | 993_cartoons_cartoon_spinach_comic |
| 994 | nazi - holocaust - 1941 - 1945 - persecuted | 72 | 994_nazi_holocaust_1941_1945 |
| 995 | indictment - indictments - indicted - prosecutors - convicted | 72 | 995_indictment_indictments_indicted_prosecutors |
| 996 | tributaries - tributary - river - rivers - alluvial | 72 | 996_tributaries_tributary_river_rivers |
| 997 | vocalist - vocals - singers - singer - saxophonist | 72 | 997_vocalist_vocals_singers_singer |
| 998 | esteem - self - ego - psychological - oneself | 72 | 998_esteem_self_ego_psychological | | 999 | rescuers - rescuer - survivors - rescue - camped | 72 | 999_rescuers_rescuer_survivors_rescue | | 1000 | coax - coaxial - cables - cable - antennas | 72 | 1000_coax_coaxial_cables_cable | | 1001 | synesthesia - synesthetic - synesthetes - paresthesia - synesthete | 72 | 1001_synesthesia_synesthetic_synesthetes_paresthesia | | 1002 | annexation - 1938 - annexed - 1945 - annex | 71 | 1002_annexation_1938_annexed_1945 | | 1003 | motocross - motorcycle - stunt - bike - stunts | 71 | 1003_motocross_motorcycle_stunt_bike | | 1004 | chocolate - factory - screenplay - wilder - bucket | 71 | 1004_chocolate_factory_screenplay_wilder | | 1005 | galaxy - smartphone - smartphones - mobile - flagship | 71 | 1005_galaxy_smartphone_smartphones_mobile | | 1006 | runes - rune - runestones - inscriptions - inscription | 71 | 1006_runes_rune_runestones_inscriptions | | 1007 | che - revolutionaries - guerrilla - revolutionary - guerrillas | 71 | 1007_che_revolutionaries_guerrilla_revolutionary | | 1008 | hemorrhage - surgery - surgical - injury - iron | 71 | 1008_hemorrhage_surgery_surgical_injury | | 1009 | referendum - conservative - trump - candidate - resigned | 71 | 1009_referendum_conservative_trump_candidate | | 1010 | sightings - sighting - hoaxes - hoax - skunk | 71 | 1010_sightings_sighting_hoaxes_hoax | | 1011 | sphinx - sphinxes - pharaoh - pyramid - statue | 71 | 1011_sphinx_sphinxes_pharaoh_pyramid | | 1012 | violinist - violin - violins - albums - vinyl | 71 | 1012_violinist_violin_violins_albums | | 1013 | law - jurisprudence - judicial - statutes - jurisdictions | 71 | 1013_law_jurisprudence_judicial_statutes | | 1014 | nails - albums - album - band - artists | 71 | 1014_nails_albums_album_band | | 1015 | apple - mac - microcomputers - microcomputer - computers | 71 | 1015_apple_mac_microcomputers_microcomputer | | 1016 | scream - paintings - painting - painter - 
art | 71 | 1016_scream_paintings_painting_painter | | 1017 | flew - flight - airplane - flying - aviator | 71 | 1017_flew_flight_airplane_flying | | 1018 | ninja - ninjas - anime - kai - cartoon | 71 | 1018_ninja_ninjas_anime_kai | | 1019 | investing - invest - investors - indexes - investment | 71 | 1019_investing_invest_investors_indexes | | 1020 | concord - airlines - flights - airliners - airliner | 71 | 1020_concord_airlines_flights_airliners | | 1021 | dysplasia - breeds - veterinary - shepherd - dystrophy | 71 | 1021_dysplasia_breeds_veterinary_shepherd | | 1022 | doll - dolls - toy - brand - fashion | 71 | 1022_doll_dolls_toy_brand | | 1023 | investments - invested - investor - investors - investment | 70 | 1023_investments_invested_investor_investors | | 1024 | intersectionality - intersectional - feminism - intersection - feminist | 70 | 1024_intersectionality_intersectional_feminism_intersection | | 1025 | festivals - festival - festivities - carnivals - carnival | 70 | 1025_festivals_festival_festivities_carnivals | | 1026 | tennis - racquet - tournament - quarterfinal - doubles | 70 | 1026_tennis_racquet_tournament_quarterfinal | | 1027 | daddy - reggaeton - rapper - rap - mixtape | 70 | 1027_daddy_reggaeton_rapper_rap | | 1028 | probability - probabilities - doors - car - door | 70 | 1028_probability_probabilities_doors_car | | 1029 | radar - radars - signals - doppler - transmitter | 70 | 1029_radar_radars_signals_doppler | | 1030 | blackberry - smartphone - smartphones - android - mobile | 70 | 1030_blackberry_smartphone_smartphones_android | | 1031 | cappuccino - espresso - coffee - capo - latte | 70 | 1031_cappuccino_espresso_coffee_capo | | 1032 | candidates - candidate - election - elections - populist | 70 | 1032_candidates_candidate_election_elections | | 1033 | cud - rapper - mixtape - kid - rap | 70 | 1033_cud_rapper_mixtape_kid | | 1034 | soviets - soviet - treaty - ceded - ceasefire | 70 | 1034_soviets_soviet_treaty_ceded | | 1035 | 
nuclear - disarmament - treaty - uranium - nations | 70 | 1035_nuclear_disarmament_treaty_uranium | | 1036 | ivy - poison - poisons - poisoned - poisoning | 70 | 1036_ivy_poison_poisons_poisoned | | 1037 | tsar - empress - heir - 1762 - mistress | 70 | 1037_tsar_empress_heir_1762 | | 1038 | sexuality - discipline - sociology - homosexuality - behavior | 70 | 1038_sexuality_discipline_sociology_homosexuality | | 1039 | elves - elf - folklore - fairies - dwarves | 69 | 1039_elves_elf_folklore_fairies | | 1040 | peacekeeping - sovereignty - niger - nations - territory | 69 | 1040_peacekeeping_sovereignty_niger_nations | | 1041 | torturing - strangled - stabbing - murdered - victims | 69 | 1041_torturing_strangled_stabbing_murdered | | 1042 | exorcist - exorcism - screenplay - possessed - demonic | 69 | 1042_exorcist_exorcism_screenplay_possessed | | 1043 | cloud - clouds - azure - virtualization - infrastructure | 69 | 1043_cloud_clouds_azure_virtualization | | 1044 | yaoi - manga - hentai - anime - heterosexual | 69 | 1044_yaoi_manga_hentai_anime | | 1045 | doping - athlete - lance - cyclist - steroids | 69 | 1045_doping_athlete_lance_cyclist | | 1046 | wickets - batsman - wicket - bowled - bowler | 69 | 1046_wickets_batsman_wicket_bowled | | 1047 | opus - pontifical - popes - priests - pope | 69 | 1047_opus_pontifical_popes_priests | | 1048 | ancestry - genetic - haplogroup - paleolithic - genes | 69 | 1048_ancestry_genetic_haplogroup_paleolithic | | 1049 | thanksgiving - holiday - holidays - celebrated - celebrations | 69 | 1049_thanksgiving_holiday_holidays_celebrated | | 1050 | joker - skins - superman - comics - knight | 69 | 1050_joker_skins_superman_comics | | 1051 | freeware - proprietary - software - licensing - licenses | 69 | 1051_freeware_proprietary_software_licensing | | 1052 | quantum - qubits - qubit - computational - computing | 69 | 1052_quantum_qubits_qubit_computational | | 1053 | bird - storm - star - rebounds - assists | 69 | 
1053_bird_storm_star_rebounds | | 1054 | ceasefire - peacekeeping - oblast - militias - hostilities | 69 | 1054_ceasefire_peacekeeping_oblast_militias | | 1055 | communists - soviets - protests - demonstrators - communist | 69 | 1055_communists_soviets_protests_demonstrators | | 1056 | palaces - ibn - mosque - palace - excavations | 68 | 1056_palaces_ibn_mosque_palace | | 1057 | nirvana - overdosed - grunge - overdose - died | 68 | 1057_nirvana_overdosed_grunge_overdose | | 1058 | commanders - commander - allied - 1944 - panzer | 68 | 1058_commanders_commander_allied_1944 | | 1059 | blinding - heartless - lights - song - billboard | 68 | 1059_blinding_heartless_lights_song | | 1060 | fort - battle - 1836 - surrender - reinforcements | 68 | 1060_fort_battle_1836_surrender | | 1061 | touchdowns - cousins - interceptions - touchdown - yards | 68 | 1061_touchdowns_cousins_interceptions_touchdown | | 1062 | machines - computable - computational - machine - deterministic | 68 | 1062_machines_computable_computational_machine | | 1063 | creoles - creole - vernaculars - vernacular - lingua | 68 | 1063_creoles_creole_vernaculars_vernacular | | 1064 | endometriosis - endometrial - endometrium - uterus - menstruation | 68 | 1064_endometriosis_endometrial_endometrium_uterus | | 1065 | lin - undrafted - harden - assists - rebounds | 68 | 1065_lin_undrafted_harden_assists | | 1066 | pornography - porn - pornographic - playboy - affiliate | 68 | 1066_pornography_porn_pornographic_playboy | | 1067 | panchayat - panchayats - elections - electoral - election | 68 | 1067_panchayat_panchayats_elections_electoral | | 1068 | stalker - filmmaker - cinematographer - director - cinematography | 68 | 1068_stalker_filmmaker_cinematographer_director | | 1069 | loch - ness - sightings - sighting - folklore | 68 | 1069_loch_ness_sightings_sighting | | 1070 | taco - tacos - restaurants - restaurant - cafe | 68 | 1070_taco_tacos_restaurants_restaurant | | 1071 | absinthe - absinthes - herbs - 
cocktail - distilled | 68 | 1071_absinthe_absinthes_herbs_cocktail | | 1072 | resuscitation - defibrillation - defibrillator - cardiopulmonary - cardiac | 68 | 1072_resuscitation_defibrillation_defibrillator_cardiopulmonary | | 1073 | chancellor - secretary - minister - appointed - resigned | 68 | 1073_chancellor_secretary_minister_appointed | | 1074 | defrauded - fraud - fraudulent - fraudster - whistleblower | 68 | 1074_defrauded_fraud_fraudulent_fraudster | | 1075 | printing - printmaking - printers - printer - print | 68 | 1075_printing_printmaking_printers_printer | | 1076 | ancient - mediterranean - civilizations - archaeological - excavations | 68 | 1076_ancient_mediterranean_civilizations_archaeological | | 1077 | dodo - dodos - fauna - birds - species | 68 | 1077_dodo_dodos_fauna_birds | | 1078 | brave - novel - novels - utopia - utopian | 68 | 1078_brave_novel_novels_utopia | | 1079 | piccolo - dragon - kai - trunks - battle | 68 | 1079_piccolo_dragon_kai_trunks | | 1080 | parachutes - parachute - skydiving - flight - airlines | 68 | 1080_parachutes_parachute_skydiving_flight | | 1081 | autonomy - independence - constituted - nationalism - referendum | 68 | 1081_autonomy_independence_constituted_nationalism | | 1082 | robots - robot - robotic - robotics - ai | 68 | 1082_robots_robot_robotic_robotics | | 1083 | tanks - tank - partisan - ideological - think | 68 | 1083_tanks_tank_partisan_ideological | | 1084 | pharaoh - archaeological - dynasty - sea - dynasties | 67 | 1084_pharaoh_archaeological_dynasty_sea | | 1085 | hippie - hippies - hipster - hippy - counterculture | 67 | 1085_hippie_hippies_hipster_hippy | | 1086 | inscriptions - inscription - epigraphy - taluk - ancient | 67 | 1086_inscriptions_inscription_epigraphy_taluk | | 1087 | filmmaker - filmmaking - cinematographer - filmmakers - films | 67 | 1087_filmmaker_filmmaking_cinematographer_filmmakers | | 1088 | celebrations - festivities - celebrated - traditions - mosque | 67 | 
1088_celebrations_festivities_celebrated_traditions | | 1089 | hawking - physicist - cosmology - sciences - marriage | 67 | 1089_hawking_physicist_cosmology_sciences | | 1090 | albums - songs - album - musical - music | 67 | 1090_albums_songs_album_musical | | 1091 | pound - poet - poetry - poems - literary | 67 | 1091_pound_poet_poetry_poems | | 1092 | embryos - embryo - fertility - infertility - infertile | 67 | 1092_embryos_embryo_fertility_infertility | | 1093 | satanic - satan - theology - devil - atheism | 67 | 1093_satanic_satan_theology_devil | | 1094 | bombing - insurgency - bombings - overthrow - militants | 67 | 1094_bombing_insurgency_bombings_overthrow | | 1095 | tribalism - nationalist - unrest - sovereignty - decolonization | 67 | 1095_tribalism_nationalist_unrest_sovereignty | | 1096 | kibbutz - kibbutzim - kibbutzniks - founders - communities | 67 | 1096_kibbutz_kibbutzim_kibbutzniks_founders | | 1097 | priest - demonic - priestess - demon - demons | 67 | 1097_priest_demonic_priestess_demon | | 1098 | eclampsia - pregnancies - pregnancy - prenatal - gestational | 67 | 1098_eclampsia_pregnancies_pregnancy_prenatal | | 1099 | riots - protests - protest - activism - activists | 67 | 1099_riots_protests_protest_activism | | 1100 | hill - silent - sequel - remake - gameplay | 67 | 1100_hill_silent_sequel_remake | | 1101 | treaty - treaties - covenant - league - nations | 67 | 1101_treaty_treaties_covenant_league | | 1102 | prix - motorsport - racing - qualifying - grand | 67 | 1102_prix_motorsport_racing_qualifying | | 1103 | automotive - ab - automobile - automobiles - vehicle | 67 | 1103_automotive_ab_automobile_automobiles | | 1104 | chamberlain - 1945 - minister - resigned - 1940 | 67 | 1104_chamberlain_1945_minister_resigned | | 1105 | vegetarian - vegetarianism - veganism - vegetarians - vegan | 67 | 1105_vegetarian_vegetarianism_veganism_vegetarians | | 1106 | dictator - dictatorship - fascism - fascist - authoritarian | 67 | 
1106_dictator_dictatorship_fascism_fascist | | 1107 | celiac - gluten - coeliac - wheat - autoimmune | 66 | 1107_celiac_gluten_coeliac_wheat | | 1108 | ford - truck - trucks - chassis - jeep | 66 | 1108_ford_truck_trucks_chassis | | 1109 | inkblots - inkblot - ink - psychometric - psychoanalytic | 66 | 1109_inkblots_inkblot_ink_psychometric | | 1110 | crimson - guitarist - toured - guitars - bands | 66 | 1110_crimson_guitarist_toured_guitars | | 1111 | oblast - oblasts - governorates - province - soviet | 66 | 1111_oblast_oblasts_governorates_province | | 1112 | radio - fm - stations - channels - broadcasts | 66 | 1112_radio_fm_stations_channels | | 1113 | 1803 - 1763 - treaty - ceded - treaties | 66 | 1113_1803_1763_treaty_ceded | | 1114 | nicotine - nicotinic - tobacco - cigarettes - cigarette | 66 | 1114_nicotine_nicotinic_tobacco_cigarettes | | 1115 | flags - flag - sun - swastika - emblem | 66 | 1115_flags_flag_sun_swastika | | 1116 | philosopher - philosophers - philosophy - philosophical - logician | 66 | 1116_philosopher_philosophers_philosophy_philosophical | | 1117 | whataboutism - geopolitical - dissidents - propaganda - propagandists | 66 | 1117_whataboutism_geopolitical_dissidents_propaganda | | 1118 | nirvana - grunge - album - band - bands | 66 | 1118_nirvana_grunge_album_band | | 1119 | proud - boys - protests - protesters - demonstrators | 66 | 1119_proud_boys_protests_protesters | | 1120 | bands - slayer - thrash - band - frontman | 66 | 1120_bands_slayer_thrash_band | | 1121 | scored - scoring - goal - penalty - goals | 66 | 1121_scored_scoring_goal_penalty | | 1122 | turkey - terrorist - militants - terrorism - militant | 66 | 1122_turkey_terrorist_militants_terrorism | | 1123 | shroud - crucified - crucifixion - burial - sculpture | 66 | 1123_shroud_crucified_crucifixion_burial | | 1124 | blink - band - bands - 182 - punk | 66 | 1124_blink_band_bands_182 | | 1125 | poet - poetry - poems - poem - stanzas | 66 | 1125_poet_poetry_poems_poem | | 
1126 | racing - speed - chases - racer - pursuit | 65 | 1126_racing_speed_chases_racer | | 1127 | mansion - bedrooms - mansions - residence - bedroom | 65 | 1127_mansion_bedrooms_mansions_residence | | 1128 | languages - multilingual - language - lingua - creole | 65 | 1128_languages_multilingual_language_lingua | | 1129 | espionage - spying - spy - informant - investigator | 65 | 1129_espionage_spying_spy_informant | | 1130 | yoon - jung - scandal - prosecutors - alleged | 65 | 1130_yoon_jung_scandal_prosecutors | | 1131 | 1451 - pasha - 1477 - 1476 - 1475 | 65 | 1131_1451_pasha_1477_1476 | | 1132 | burning - burners - organizers - attendees - gatherings | 65 | 1132_burning_burners_organizers_attendees | | 1133 | spartan - ancient - battle - invasion - retreated | 65 | 1133_spartan_ancient_battle_invasion | | 1134 | bell - telephone - telephones - inventor - invention | 65 | 1134_bell_telephone_telephones_inventor | | 1135 | mathematician - mathematicians - mathematics - algebra - arithmetical | 65 | 1135_mathematician_mathematicians_mathematics_algebra | | 1136 | restaurants - restaurant - chefs - culinary - cuisines | 65 | 1136_restaurants_restaurant_chefs_culinary | | 1137 | restaurants - customers - restaurant - burger - franchisees | 65 | 1137_restaurants_customers_restaurant_burger | | 1138 | misfits - albums - bands - band - toured | 65 | 1138_misfits_albums_bands_band | | 1139 | rationalism - rationalisation - rationalization - rationality - philosophy | 65 | 1139_rationalism_rationalisation_rationalization_rationality | | 1140 | paintings - artworks - gallery - painting - exhibitions | 65 | 1140_paintings_artworks_gallery_painting | | 1141 | dan - sitcom - cast - spinoff - remarrying | 65 | 1141_dan_sitcom_cast_spinoff | | 1142 | vocals - remixes - albums - chorus - album | 65 | 1142_vocals_remixes_albums_chorus | | 1143 | casualties - fatalities - deaths - mortality - insurgents | 65 | 1143_casualties_fatalities_deaths_mortality | | 1144 | gaming - 
retailers - games - retailer - gamers | 65 | 1144_gaming_retailers_games_retailer | | 1145 | tales - literature - tale - manuscripts - testament | 65 | 1145_tales_literature_tale_manuscripts | | 1146 | deposed - presidency - presidential - ousted - elections | 65 | 1146_deposed_presidency_presidential_ousted | | 1147 | citizenship - passport - territories - residency - sovereign | 64 | 1147_citizenship_passport_territories_residency | | 1148 | optimization - algorithms - optimal - algorithm - optimality | 64 | 1148_optimization_algorithms_optimal_algorithm | | 1149 | sentenced - imprisonment - convicted - pardoned - judiciary | 64 | 1149_sentenced_imprisonment_convicted_pardoned | | 1150 | caterpillar - diesel - manufacturer - manufacturing - tractors | 64 | 1150_caterpillar_diesel_manufacturer_manufacturing | | 1151 | hub - sci - lawsuit - scholarly - plaintiffs | 64 | 1151_hub_sci_lawsuit_scholarly | | 1152 | neolithic - stone - stones - excavations - archaeologists | 64 | 1152_neolithic_stone_stones_excavations | | 1153 | coordinates - coordinate - axes - axis - longitude | 64 | 1153_coordinates_coordinate_axes_axis | | 1154 | lingerie - secret - retailer - apparel - retail | 64 | 1154_lingerie_secret_retailer_apparel | | 1155 | biodiversity - extinction - extinctions - ecosystem - ecological | 64 | 1155_biodiversity_extinction_extinctions_ecosystem | | 1156 | pearl - jam - concert - toured - albums | 64 | 1156_pearl_jam_concert_toured | | 1157 | tesseract - polytopes - hexagonal - squares - cubes | 64 | 1157_tesseract_polytopes_hexagonal_squares | | 1158 | devices - pairing - paired - protocol - device | 64 | 1158_devices_pairing_paired_protocol | | 1159 | tsar - tsarina - empress - 1917 - duchess | 64 | 1159_tsar_tsarina_empress_1917 | | 1160 | neighbourhoods - khan - mosques - urban - municipal | 64 | 1160_neighbourhoods_khan_mosques_urban | | 1161 | assassination - colonel - secessionist - martyr - secession | 64 | 
1161_assassination_colonel_secessionist_martyr | | 1162 | skater - skaters - skating - skate - olympic | 64 | 1162_skater_skaters_skating_skate | | 1163 | durations - duration - decoding - transmissions - milliseconds | 64 | 1163_durations_duration_decoding_transmissions | | 1164 | retailers - retailer - retailing - retail - thanksgiving | 64 | 1164_retailers_retailer_retailing_retail | | 1165 | panther - panthers - activists - activist - antiwar | 64 | 1165_panther_panthers_activists_activist | | 1166 | spironolactone - progesterone - antiandrogenic - aldosterone - antiandrogen | 64 | 1166_spironolactone_progesterone_antiandrogenic_aldosterone | | 1167 | unrest - uprising - protests - overthrow - protesters | 64 | 1167_unrest_uprising_protests_overthrow | | 1168 | tower - survivors - towers - 911 - evacuated | 64 | 1168_tower_survivors_towers_911 | | 1169 | venture - ventures - investors - entrepreneurship - entrepreneurs | 64 | 1169_venture_ventures_investors_entrepreneurship | | 1170 | sentencing - convicted - conviction - prosecution - jurors | 64 | 1170_sentencing_convicted_conviction_prosecution | | 1171 | exotic - tiger - zoo - zookeeper - wildlife | 64 | 1171_exotic_tiger_zoo_zookeeper | | 1172 | attacks - botnet - firewalls - exploits - attackers | 64 | 1172_attacks_botnet_firewalls_exploits | | 1173 | bridges - bridge - infantry - bridged - artillery | 64 | 1173_bridges_bridge_infantry_bridged | | 1174 | paintings - painting - auctioned - auction - painted | 63 | 1174_paintings_painting_auctioned_auction | | 1175 | islands - archipelago - sovereignty - island - atoll | 63 | 1175_islands_archipelago_sovereignty_island | | 1176 | cameo - cast - stunts - castmates - aired | 63 | 1176_cameo_cast_stunts_castmates | | 1177 | stagecoach - outlaw - murderer - marshal - gunfighter | 63 | 1177_stagecoach_outlaw_murderer_marshal | | 1178 | protesting - protests - protest - activism - climate | 63 | 1178_protesting_protests_protest_activism | | 1179 | billing - 
provider - customers - customer - subscribers | 63 | 1179_billing_provider_customers_customer | | 1180 | archipelagos - territories - islands - island - countries | 63 | 1180_archipelagos_territories_islands_island | | 1181 | deer - hunter - filmmaking - screenplay - film | 63 | 1181_deer_hunter_filmmaking_screenplay | | 1182 | apps - apple - app - voice - devices | 63 | 1182_apps_apple_app_voice | | 1183 | paintings - painting - artworks - artist - art | 63 | 1183_paintings_painting_artworks_artist | | 1184 | buses - midlands - railway - railways - trains | 63 | 1184_buses_midlands_railway_railways | | 1185 | sonic - hedgehog - supersonic - tails - voiced | 63 | 1185_sonic_hedgehog_supersonic_tails | | 1186 | memes - meme - 4chan - intertextuality - satirical | 63 | 1186_memes_meme_4chan_intertextuality | | 1187 | khanate - khan - khanates - khans - sultanate | 63 | 1187_khanate_khan_khanates_khans | | 1188 | orthodox - orthodoxy - religiosity - religions - catholic | 63 | 1188_orthodox_orthodoxy_religiosity_religions | | 1189 | shuttle - spacecraft - orbiters - orbiter - astronauts | 63 | 1189_shuttle_spacecraft_orbiters_orbiter | | 1190 | anarchists - anarchist - anarchism - anarchy - socialists | 63 | 1190_anarchists_anarchist_anarchism_anarchy | | 1191 | brands - brand - companies - company - bottled | 63 | 1191_brands_brand_companies_company | | 1192 | shares - invested - stock - investor - holdings | 62 | 1192_shares_invested_stock_investor | | 1193 | cricket - cricketers - stadium - cricketing - stadiums | 62 | 1193_cricket_cricketers_stadium_cricketing | | 1194 | mayor - mayors - mayoral - mayoralty - governor | 62 | 1194_mayor_mayors_mayoral_mayoralty | | 1195 | mac - office - os - versions - version | 62 | 1195_mac_office_os_versions | | 1196 | diary - diaries - manuscript - frank - editions | 62 | 1196_diary_diaries_manuscript_frank | | 1197 | patsy - singer - singing - melody - vocalists | 62 | 1197_patsy_singer_singing_melody | | 1198 | networking - 
packet - network - internetworking - protocols | 62 | 1198_networking_packet_network_internetworking | | 1199 | borscht - recipes - recipe - cuisines - cuisine | 62 | 1199_borscht_recipes_recipe_cuisines | | 1200 | gulag - prisoners - camps - prisons - inmates | 62 | 1200_gulag_prisoners_camps_prisons | | 1201 | philanthropist - philanthropy - philanthropists - philanthropic - financier | 62 | 1201_philanthropist_philanthropy_philanthropists_philanthropic | | 1202 | chapters - chapter - novels - paperback - books | 62 | 1202_chapters_chapter_novels_paperback | | 1203 | hybrids - hybrid - ev - corolla - vehicles | 62 | 1203_hybrids_hybrid_ev_corolla | | 1204 | hospice - hospices - palliative - caregiving - caregivers | 62 | 1204_hospice_hospices_palliative_caregiving | | 1205 | mithraeum - mithraea - rituals - temples - ritual | 62 | 1205_mithraeum_mithraea_rituals_temples | | 1206 | witches - witch - spells - spellbound - comics | 62 | 1206_witches_witch_spells_spellbound | | 1207 | android - smartphone - smartphones - apps - nexus | 62 | 1207_android_smartphone_smartphones_apps | | 1208 | electronics - appliances - manufacturer - subsidiaries - brand | 62 | 1208_electronics_appliances_manufacturer_subsidiaries | | 1209 | chess - tournaments - tournament - grandmaster - grandmasters | 62 | 1209_chess_tournaments_tournament_grandmaster | | 1210 | slaughterhouse - novelist - novels - writer - nonfiction | 62 | 1210_slaughterhouse_novelist_novels_writer | | 1211 | sequels - ash - trilogy - evil - sequel | 62 | 1211_sequels_ash_trilogy_evil | | 1212 | caffeine - caffeinated - drowsiness - coffee - intoxication | 62 | 1212_caffeine_caffeinated_drowsiness_coffee | | 1213 | electors - electoral - elector - elects - elections | 62 | 1213_electors_electoral_elector_elects | | 1214 | newscast - reporters - reporter - journalism - 1963 | 62 | 1214_newscast_reporters_reporter_journalism | | 1215 | caliph - ibn - caliphs - al - caliphate | 62 | 1215_caliph_ibn_caliphs_al | | 
1216 | democrat - democrats - republican - reelection - caucus | 62 | 1216_democrat_democrats_republican_reelection | | 1217 | þáttr - saga - throne - sagas - skaldic | 62 | 1217_þáttr_saga_throne_sagas | | 1218 | dune - screenplay - director - cast - sequels | 62 | 1218_dune_screenplay_director_cast | | 1219 | colonies - niger - guinea - colonial - colonialist | 62 | 1219_colonies_niger_guinea_colonial | | 1220 | turtle - turtles - ninja - mutant - cartoon | 62 | 1220_turtle_turtles_ninja_mutant | | 1221 | pins - pin - pinning - feed - ads | 61 | 1221_pins_pin_pinning_feed | | 1222 | poetry - rhyme - stanzas - poems - rhymes | 61 | 1222_poetry_rhyme_stanzas_poems | | 1223 | automotive - presenter - rover - bbc - driving | 61 | 1223_automotive_presenter_rover_bbc | | 1224 | tennis - doubles - singles - tournaments - quarterfinal | 61 | 1224_tennis_doubles_singles_tournaments | | 1225 | bean - teddy - episodes - sitcom - diary | 61 | 1225_bean_teddy_episodes_sitcom | | 1226 | magnetism - magnetic - electromagnetism - magnetization - magnet | 61 | 1226_magnetism_magnetic_electromagnetism_magnetization | | 1227 | abolitionist - abolitionists - slavery - 1850s - slaves | 61 | 1227_abolitionist_abolitionists_slavery_1850s | | 1228 | 1451 - 1453 - 1456 - 1452 - siege | 61 | 1228_1451_1453_1456_1452 | | 1229 | raider - consoles - uncharted - tomb - tombs | 61 | 1229_raider_consoles_uncharted_tomb | | 1230 | insurgents - insurgency - troops - insurgent - war | 61 | 1230_insurgents_insurgency_troops_insurgent | | 1231 | annexation - annexed - annexing - refugees - 1948 | 61 | 1231_annexation_annexed_annexing_refugees | | 1232 | conferences - talks - presentations - livestreams - conference | 61 | 1232_conferences_talks_presentations_livestreams | | 1233 | awards - idol - nominations - sang - songs | 61 | 1233_awards_idol_nominations_sang | | 1234 | epoch - gong - times - reporters - journalism | 61 | 1234_epoch_gong_times_reporters | | 1235 | goths - gothic - archaeologists 
- ancient - romanized | 61 | 1235_goths_gothic_archaeologists_ancient | | 1236 | warriors - blazers - rockets - 76ers - hawks | 61 | 1236_warriors_blazers_rockets_76ers | | 1237 | milk - milkshake - mayor - foster - 1978 | 61 | 1237_milk_milkshake_mayor_foster | | 1238 | librarian - library - libraries - librarians - congress | 61 | 1238_librarian_library_libraries_librarians | | 1239 | gerrymandering - gerrymander - gerrymandered - redistricting - constituencies | 61 | 1239_gerrymandering_gerrymander_gerrymandered_redistricting | | 1240 | bitcoin - bitcoins - cryptocurrencies - cryptocurrency - currencies | 60 | 1240_bitcoin_bitcoins_cryptocurrencies_cryptocurrency | | 1241 | meditations - ashram - meditation - yoga - buddha | 60 | 1241_meditations_ashram_meditation_yoga | | 1242 | turret - tanks - ammunition - turrets - cupolas | 60 | 1242_turret_tanks_ammunition_turrets | | 1243 | heterochromia - pigmentation - pigment - pigments - coloration | 60 | 1243_heterochromia_pigmentation_pigment_pigments | | 1244 | libraries - library - archives - periodicals - books | 60 | 1244_libraries_library_archives_periodicals | | 1245 | gear - presenter - presenters - motorsport - snowmobile | 60 | 1245_gear_presenter_presenters_motorsport | | 1246 | crusade - crusaders - crusader - 1179 - 1177 | 60 | 1246_crusade_crusaders_crusader_1179 | | 1247 | shamanism - shamans - shaman - shamanistic - shamanic | 60 | 1247_shamanism_shamans_shaman_shamanistic | | 1248 | panther - pink - films - film - thief | 60 | 1248_panther_pink_films_film | | 1249 | ghost - ghosts - haunted - sequels - films | 60 | 1249_ghost_ghosts_haunted_sequels | | 1250 | marketing - advertising - market - consumers - consumer | 60 | 1250_marketing_advertising_market_consumers | | 1251 | 1773 - tea - colonists - colonies - taxation | 60 | 1251_1773_tea_colonists_colonies | | 1252 | eyewitnesses - retraction - biographers - historians - writings | 60 | 1252_eyewitnesses_retraction_biographers_historians | | 1253 | 
| 1253 | cookbook - cookbooks - recipes - chef - recipe | 60 | 1253_cookbook_cookbooks_recipes_chef |
| 1254 | boxer - boxers - martial - rebellion - fought | 60 | 1254_boxer_boxers_martial_rebellion |
| 1255 | pseudonym - masked - jailed - prisoner - imprisoned | 60 | 1255_pseudonym_masked_jailed_prisoner |
| 1256 | slavery - slaves - enslaved - paternity - genealogical | 60 | 1256_slavery_slaves_enslaved_paternity |
| 1257 | hadiths - hadith - ḥadīth - ibn - imam | 60 | 1257_hadiths_hadith_ḥadīth_ibn |
| 1258 | elections - election - electoral - democratic - candidates | 60 | 1258_elections_election_electoral_democratic |
| 1259 | treatises - rabbis - textual - commentaries - rabbinic | 60 | 1259_treatises_rabbis_textual_commentaries |
| 1260 | feminism - feminist - atheism - feminists - gender | 60 | 1260_feminism_feminist_atheism_feminists |
| 1261 | boxing - punches - martial - fights - heavyweight | 60 | 1261_boxing_punches_martial_fights |
| 1262 | modularity - mathematician - conjecture - mathematicians - modular | 60 | 1262_modularity_mathematician_conjecture_mathematicians |
| 1263 | 1775 - 1780 - 1778 - 1779 - militia | 60 | 1263_1775_1780_1778_1779 |
| 1264 | hypothesis - hypotheses - statistic - statistics - tests | 60 | 1264_hypothesis_hypotheses_statistic_statistics |
| 1265 | orphanage - doors - disappearance - door - detectives | 59 | 1265_orphanage_doors_disappearance_door |
| 1266 | fairy - puppet - donkey - snail - puppeteer | 59 | 1266_fairy_puppet_donkey_snail |
| 1267 | doomsday - sequel - gameplay - multiplayer - dawn | 59 | 1267_doomsday_sequel_gameplay_multiplayer |
| 1268 | afar - militias - ceasefire - humanitarian - stationed | 59 | 1268_afar_militias_ceasefire_humanitarian |
| 1269 | tennis - slams - doubles - quarterfinal - tournaments | 59 | 1269_tennis_slams_doubles_quarterfinal |
| 1270 | barricades - barricade - escape - escapes - murderer | 59 | 1270_barricades_barricade_escape_escapes |
| 1271 | jong - heir - eldest - successor - hyun | 59 | 1271_jong_heir_eldest_successor |
| 1272 | firearm - firearms - handgun - guns - gun | 59 | 1272_firearm_firearms_handgun_guns |
| 1273 | colonists - colony - colonies - settlers - voyage | 59 | 1273_colonists_colony_colonies_settlers |
| 1274 | nazi - 1932 - 1938 - triumph - 1934 | 59 | 1274_nazi_1932_1938_triumph |
| 1275 | retailer - groceries - mart - store - closing | 59 | 1275_retailer_groceries_mart_store |
| 1276 | photosynthesis - photosynthetic - respiration - chloroplasts - chlorophyll | 59 | 1276_photosynthesis_photosynthetic_respiration_chloroplasts |
| 1277 | mission - missions - sequel - cruise - fallout | 59 | 1277_mission_missions_sequel_cruise |
| 1278 | rainbow - rainbows - violet - colours - refraction | 59 | 1278_rainbow_rainbows_violet_colours |
| 1279 | hitchhiker - novels - hitchhiking - paperback - hitch | 59 | 1279_hitchhiker_novels_hitchhiking_paperback |
| 1280 | paintings - painter - painting - artists - exhibitions | 59 | 1280_paintings_painter_painting_artists |
| 1281 | tributaries - tributary - headwaters - river - lake | 59 | 1281_tributaries_tributary_headwaters_river |
| 1282 | soccer - football - players - games - leagues | 59 | 1282_soccer_football_players_games |
| 1283 | regiment - cavalry - infantry - battalions - retreated | 59 | 1283_regiment_cavalry_infantry_battalions |
| 1284 | ontological - ontology - ontologically - ontologies - categories | 59 | 1284_ontological_ontology_ontologically_ontologies |
| 1285 | flags - parks - resorts - rebranded - mascot | 59 | 1285_flags_parks_resorts_rebranded |
| 1286 | sentenced - convicted - arson - crimes - arsons | 59 | 1286_sentenced_convicted_arson_crimes |
| 1287 | art - artistic - artists - modernist - surrealists | 59 | 1287_art_artistic_artists_modernist |
| 1288 | shamrock - wrestled - rematch - punches - fighters | 59 | 1288_shamrock_wrestled_rematch_punches |
| 1289 | broadcasting - stations - broadcasts - channels - broadcast | 59 | 1289_broadcasting_stations_broadcasts_channels |
| 1290 | printers - printer - prints - printing - inkjet | 58 | 1290_printers_printer_prints_printing |
| 1291 | traders - colonial - trading - monopolise - 1609 | 58 | 1291_traders_colonial_trading_monopolise |
| 1292 | violin - violins - violinists - violinist - instrument | 58 | 1292_violin_violins_violinists_violinist |
| 1293 | mythological - prophecy - patricide - prophet - oracles | 58 | 1293_mythological_prophecy_patricide_prophet |
| 1294 | offside - officiating - penalty - penalties - opponents | 58 | 1294_offside_officiating_penalty_penalties |
| 1295 | candidates - candidate - minister - election - elected | 58 | 1295_candidates_candidate_minister_election |
| 1296 | cyclists - cyclist - cycling - tour - riders | 58 | 1296_cyclists_cyclist_cycling_tour |
| 1297 | hello - greeting - cat - cuteness - ukiyo | 58 | 1297_hello_greeting_cat_cuteness |
| 1298 | investigation - jury - coroner - tabloid - alleged | 58 | 1298_investigation_jury_coroner_tabloid |
| 1299 | jong - yong - hui - taek - ko | 58 | 1299_jong_yong_hui_taek |
| 1300 | terrorism - terrorist - terrorists - terror - bombings | 58 | 1300_terrorism_terrorist_terrorists_terror |
| 1301 | compass - compasses - magnetometers - geomagnetic - magnetic | 58 | 1301_compass_compasses_magnetometers_geomagnetic |
| 1302 | famine - crops - agrarian - agricultural - farmers | 58 | 1302_famine_crops_agrarian_agricultural |
| 1303 | etymology - isles - conquered - isle - mainland | 58 | 1303_etymology_isles_conquered_isle |
| 1304 | guitarists - band - toured - bands - fronted | 58 | 1304_guitarists_band_toured_bands |
| 1305 | retailers - seven - shops - stores - store | 58 | 1305_retailers_seven_shops_stores |
| 1306 | polygamists - polygamous - polygamist - polygamy - marriages | 58 | 1306_polygamists_polygamous_polygamist_polygamy |
| 1307 | cosmos - astronomers - astronomer - astronomy - astronomical | 58 | 1307_cosmos_astronomers_astronomer_astronomy |
| 1308 | refraction - refractive - optics - wavelengths - reflectivity | 58 | 1308_refraction_refractive_optics_wavelengths |
| 1309 | twilight - episodes - supernatural - zone - syndication | 58 | 1309_twilight_episodes_supernatural_zone |
| 1310 | amazon - cloud - apple - echo - automation | 57 | 1310_amazon_cloud_apple_echo |
| 1311 | diplomacy - geopolitical - secretary - 1972 - statesman | 57 | 1311_diplomacy_geopolitical_secretary_1972 |
| 1312 | trademarked - trademark - brand - della - handbags | 57 | 1312_trademarked_trademark_brand_della |
| 1313 | ceasefire - peacekeeping - refugees - conflict - war | 57 | 1313_ceasefire_peacekeeping_refugees_conflict |
| 1314 | neutrinos - neutrino - antineutrinos - antineutrino - leptons | 57 | 1314_neutrinos_neutrino_antineutrinos_antineutrino |
| 1315 | spaceflight - blue - launches - rocket - starship | 57 | 1315_spaceflight_blue_launches_rocket |
| 1316 | heir - eldest - emperor - empress - grandchild | 57 | 1316_heir_eldest_emperor_empress |
| 1317 | socialist - socialism - socialists - democratic - liberalism | 57 | 1317_socialist_socialism_socialists_democratic |
| 1318 | resolver - resolving - resolve - domains - authoritative | 57 | 1318_resolver_resolving_resolve_domains |
| 1319 | waits - musician - singer - singers - songwriter | 57 | 1319_waits_musician_singer_singers |
| 1320 | aviation - pilots - airplane - pilot - flew | 57 | 1320_aviation_pilots_airplane_pilot |
| 1321 | rating - rated - grades - grade - score | 57 | 1321_rating_rated_grades_grade |
| 1322 | stations - radio - station - broadcasts - broadcasting | 57 | 1322_stations_radio_station_broadcasts |
| 1323 | sheikh - prince - sultan - heir - princes | 57 | 1323_sheikh_prince_sultan_heir |
| 1324 | conditioning - conditioned - stimuli - stimulus - reflex | 57 | 1324_conditioning_conditioned_stimuli_stimulus |
| 1325 | cube - cubes - dodecahedron - puzzles - 3d | 57 | 1325_cube_cubes_dodecahedron_puzzles |
| 1326 | nominations - awards - nominated - award - finales | 57 | 1326_nominations_awards_nominated_award |
| 1327 | bounty - adrift - boatswain - seaman - voyage | 57 | 1327_bounty_adrift_boatswain_seaman |
| 1328 | tectonics - tectonic - mantle - crust - plates | 57 | 1328_tectonics_tectonic_mantle_crust |
| 1329 | jinn - jinni - ibn - demonic - deities | 57 | 1329_jinn_jinni_ibn_demonic |
| 1330 | armada - fleet - fleets - sailed - 1596 | 57 | 1330_armada_fleet_fleets_sailed |
| 1331 | foie - geese - goose - gras - poultry | 57 | 1331_foie_geese_goose_gras |
| 1332 | goalkeeping - premiership - goalkeeper - arsenal - keeper | 57 | 1332_goalkeeping_premiership_goalkeeper_arsenal |
| 1333 | peregrines - peregrine - falcon - falconry - bird | 56 | 1333_peregrines_peregrine_falcon_falconry |
| 1334 | warship - frigate - frigates - naval - sailed | 56 | 1334_warship_frigate_frigates_naval |
| 1335 | 731 - civilians - plague - pathogens - units | 56 | 1335_731_civilians_plague_pathogens |
| 1336 | commodore - hardware - consoles - x86 - emulation | 56 | 1336_commodore_hardware_consoles_x86 |
| 1337 | laurel - hardy - comedies - comedians - comic | 56 | 1337_laurel_hardy_comedies_comedians |
| 1338 | eggs - egg - yolks - yolk - eggshell | 56 | 1338_eggs_egg_yolks_yolk |
| 1339 | toymaker - franchise - toys - company - monopoly | 56 | 1339_toymaker_franchise_toys_company |
| 1340 | vampire - showrunner - vampires - slayer - episodes | 56 | 1340_vampire_showrunner_vampires_slayer |
| 1341 | scattering - sciences - physicists - discovered - wavelengths | 56 | 1341_scattering_sciences_physicists_discovered |
| 1342 | voyages - voyage - literature - novels - journeys | 56 | 1342_voyages_voyage_literature_novels |
| 1343 | besieged - retreating - recaptured - retreated - reinforcements | 56 | 1343_besieged_retreating_recaptured_retreated |
| 1344 | singularity - superintelligence - technological - 2030 - supercomputers | 56 | 1344_singularity_superintelligence_technological_2030 |
| 1345 | coli - bacterial - bacterium - bacteria - microbiota | 56 | 1345_coli_bacterial_bacterium_bacteria |
| 1346 | propofol - midazolam - benzodiazepine - hospitalized - manslaughter | 56 | 1346_propofol_midazolam_benzodiazepine_hospitalized |
| 1347 | peacemaker - suicide - filmmakers - cast - cameo | 56 | 1347_peacemaker_suicide_filmmakers_cast |
| 1348 | coats - mafia - massacre - perpetrators - killers | 56 | 1348_coats_mafia_massacre_perpetrators |
| 1349 | howl - poetry - poet - poems - poem | 56 | 1349_howl_poetry_poet_poems |
| 1350 | 1080p - digital - cable - cables - resolution | 56 | 1350_1080p_digital_cable_cables |
| 1351 | federalist - federalists - confederation - republicanism - federal | 56 | 1351_federalist_federalists_confederation_republicanism |
| 1352 | adobe - formats - document - acrobat - documents | 56 | 1352_adobe_formats_document_acrobat |
| 1353 | cherry - blossom - cherries - blossoms - orchards | 56 | 1353_cherry_blossom_cherries_blossoms |
| 1354 | 1939 - 1942 - 1930s - affair - 1940 | 56 | 1354_1939_1942_1930s_affair |
| 1355 | titans - superheroine - superheroes - superhero - comics | 56 | 1355_titans_superheroine_superheroes_superhero |
| 1356 | likens - jenny - tormented - inflicting - endured | 56 | 1356_likens_jenny_tormented_inflicting |
| 1357 | malls - shops - mall - centres - centre | 56 | 1357_malls_shops_mall_centres |
| 1358 | glucose - insulin - diabetes - gluconeogenesis - pancreas | 56 | 1358_glucose_insulin_diabetes_gluconeogenesis |
| 1359 | niger - inhabitants - migrants - natal - guinea | 56 | 1359_niger_inhabitants_migrants_natal |
| 1360 | unconstitutional - marriages - amendment - marriage - constitutional | 56 | 1360_unconstitutional_marriages_amendment_marriage |
| 1361 | hound - detective - hounds - bbc - episodes | 56 | 1361_hound_detective_hounds_bbc |
| 1362 | blackface - minstrel - minstrels - blackness - performers | 56 | 1362_blackface_minstrel_minstrels_blackness |
| 1363 | diamond - diamonds - gemstone - jeweler - jewelers | 56 | 1363_diamond_diamonds_gemstone_jeweler |
| 1364 | delle - bourgeois - bourgeoisie - piazza - della | 56 | 1364_delle_bourgeois_bourgeoisie_piazza |
| 1365 | hyperloop - musk - pod - pods - 400m | 56 | 1365_hyperloop_musk_pod_pods |
| 1366 | data - datasets - analytics - databases - database | 56 | 1366_data_datasets_analytics_databases |
| 1367 | punk - flag - bands - black - band | 55 | 1367_punk_flag_bands_black |
| 1368 | writer - junkie - naked - writing - lunch | 55 | 1368_writer_junkie_naked_writing |
| 1369 | tennis - semifinal - quarterfinal - semifinals - doubles | 55 | 1369_tennis_semifinal_quarterfinal_semifinals |
| 1370 | pharmaceuticals - pharmaceutical - biotech - stocks - stock | 55 | 1370_pharmaceuticals_pharmaceutical_biotech_stocks |
| 1371 | pixels - resolution - monitors - resolutions - monitor | 55 | 1371_pixels_resolution_monitors_resolutions |
| 1372 | kerosene - fuels - diesel - refinery - fuel | 55 | 1372_kerosene_fuels_diesel_refinery |
| 1373 | bonsai - trees - plantings - cultivation - exhibitions | 55 | 1373_bonsai_trees_plantings_cultivation |
| 1374 | tsarina - tsar - gunmen - gunshots - gunshot | 55 | 1374_tsarina_tsar_gunmen_gunshots |
| 1375 | zoom - privacy - ventures - consulting - phone | 55 | 1375_zoom_privacy_ventures_consulting |
| 1376 | jagged - albums - songwriter - pill - songs | 55 | 1376_jagged_albums_songwriter_pill |
| 1377 | holocaust - nazi - prosecution - prosecutors - extradition | 55 | 1377_holocaust_nazi_prosecution_prosecutors |
| 1378 | grandmaster - grandmasters - fide - titles - tournaments | 55 | 1378_grandmaster_grandmasters_fide_titles |
| 1379 | poet - poetry - poems - poets - stanzas | 55 | 1379_poet_poetry_poems_poets |
| 1380 | colorblindness - blindness - colorblind - blind - trichromatic | 55 | 1380_colorblindness_blindness_colorblind_blind |
| 1381 | guinea - niger - equatorial - equator - bordered | 55 | 1381_guinea_niger_equatorial_equator |
| 1382 | population - municipalities - cities - city - towns | 55 | 1382_population_municipalities_cities_city |
| 1383 | sim - unlocked - carriers - telecommunications - cellular | 55 | 1383_sim_unlocked_carriers_telecommunications |
| 1384 | homeopathic - homeopaths - homeopathy - medicines - medicine | 55 | 1384_homeopathic_homeopaths_homeopathy_medicines |
| 1385 | vampirism - vampire - vampiric - vampires - undead | 55 | 1385_vampirism_vampire_vampiric_vampires |
| 1386 | convicted - airlines - airline - conviction - arrested | 55 | 1386_convicted_airlines_airline_conviction |
| 1387 | albums - album - duets - singer - band | 55 | 1387_albums_album_duets_singer |
| 1388 | rapper - tribe - rap - tip - rapping | 55 | 1388_rapper_tribe_rap_tip |
| 1389 | lee - moody - fallen - songwriting - band | 55 | 1389_lee_moody_fallen_songwriting |
| 1390 | parliamentarian - minister - français - politician - councillor | 55 | 1390_parliamentarian_minister_français_politician |
| 1391 | poet - poems - poem - poets - poetry | 55 | 1391_poet_poems_poem_poets |
| 1392 | gas - soviet - oil - sanctions - supply | 55 | 1392_gas_soviet_oil_sanctions |
| 1393 | eclipse - eclipses - lunar - eclipsed - moon | 55 | 1393_eclipse_eclipses_lunar_eclipsed |
| 1394 | brothers - nick - band - songs - album | 55 | 1394_brothers_nick_band_songs |
| 1395 | twins - twin - twinning - monozygotic - duplications | 55 | 1395_twins_twin_twinning_monozygotic |
| 1396 | biotechnology - pharmaceuticals - biotech - companies - agro | 55 | 1396_biotechnology_pharmaceuticals_biotech_companies |
| 1397 | sim - create - simulation - gameplay - traits | 55 | 1397_sim_create_simulation_gameplay |
| 1398 | duet - duets - sang - song - songs | 55 | 1398_duet_duets_sang_song |
| 1399 | ibn - sheikh - sultanate - mecca - mosque | 54 | 1399_ibn_sheikh_sultanate_mecca |
| 1400 | snaps - snap - messaging - sharing - chat | 54 | 1400_snaps_snap_messaging_sharing |
| 1401 | dietary - diet - diets - cardiovascular - cholesterol | 54 | 1401_dietary_diet_diets_cardiovascular |
| 1402 | stem - disciplines - majors - degree - engineering | 54 | 1402_stem_disciplines_majors_degree |
| 1403 | poverty - deprivation - poor - welfare - income | 54 | 1403_poverty_deprivation_poor_welfare |
| 1404 | simulations - simulation - simulating - simulated - stochastic | 54 | 1404_simulations_simulation_simulating_simulated |
| 1405 | tether - treasuries - exchanges - cryptocurrencies - cryptocurrency | 54 | 1405_tether_treasuries_exchanges_cryptocurrencies |
| 1406 | luxury - brands - brand - valuation - valuable | 54 | 1406_luxury_brands_brand_valuation |
| 1407 | lynch - touchdowns - rushing - touchdown - quarterback | 54 | 1407_lynch_touchdowns_rushing_touchdown |
| 1408 | celebrations - festival - celebrated - festivities - rituals | 54 | 1408_celebrations_festival_celebrated_festivities |
| 1409 | missionaries - charity - nuns - orphanages - hospices | 54 | 1409_missionaries_charity_nuns_orphanages |
| 1410 | languages - language - mandarin - lingua - multilingual | 54 | 1410_languages_language_mandarin_lingua |
| 1411 | apartheid - histories - chieftains - tactics - historians | 54 | 1411_apartheid_histories_chieftains_tactics |
| 1412 | daredevil - marvel - superhero - miniseries - episodes | 54 | 1412_daredevil_marvel_superhero_miniseries |
| 1413 | representatives - elects - voters - congressional - commissioner | 54 | 1413_representatives_elects_voters_congressional |
| 1414 | novelist - novels - biography - writer - literature | 54 | 1414_novelist_novels_biography_writer |
| 1415 | cosmetics - salons - skincare - chemists - products | 54 | 1415_cosmetics_salons_skincare_chemists |
| 1416 | wells - literature - novels - writer - author | 54 | 1416_wells_literature_novels_writer |
| 1417 | elephant - showman - exhibit - exhibited - surgeon | 54 | 1417_elephant_showman_exhibit_exhibited |
| 1418 | rebelled - rulers - kingdoms - recaptured - ruled | 54 | 1418_rebelled_rulers_kingdoms_recaptured |
| 1419 | seeding - clouds - seed - cloud - drought | 54 | 1419_seeding_clouds_seed_cloud |
| 1420 | dashes - hyphens - hyphenated - hyphen - dash | 54 | 1420_dashes_hyphens_hyphenated_hyphen |
| 1421 | panda - pandas - bamboo - zoological - herbivorous | 54 | 1421_panda_pandas_bamboo_zoological |
| 1422 | stations - broadcasts - radio - station - fm | 54 | 1422_stations_broadcasts_radio_station |
| 1423 | sentencing - sentenced - arrest - conviction - judge | 54 | 1423_sentencing_sentenced_arrest_conviction |
| 1424 | knights - duchy - papacy - nobles - feudal | 54 | 1424_knights_duchy_papacy_nobles |
| 1425 | buffalo - cowboy - bison - bull - 1872 | 54 | 1425_buffalo_cowboy_bison_bull |
| 1426 | knight - moon - villain - werewolf - sidekick | 53 | 1426_knight_moon_villain_werewolf |
| 1427 | sg - premiere - spinoff - episodes - starburst | 53 | 1427_sg_premiere_spinoff_episodes |
| 1428 | turkey - annexation - invaded - invasion - enosis | 53 | 1428_turkey_annexation_invaded_invasion |
| 1429 | ketogenic - dietary - diet - diets - carbohydrates | 53 | 1429_ketogenic_dietary_diet_diets |
| 1430 | ray - cinema - filmmaker - films - filmmakers | 53 | 1430_ray_cinema_filmmaker_films |
| 1431 | leprosy - leper - pathogenicity - leprae - disease | 53 | 1431_leprosy_leper_pathogenicity_leprae |
| 1432 | waves - compressional - compression - compressibility - wave | 53 | 1432_waves_compressional_compression_compressibility |
| 1433 | paintings - painting - artwork - cans - artworks | 53 | 1433_paintings_painting_artwork_cans |
| 1434 | ubiquitous - technologies - wireless - wirelessly - internet | 53 | 1434_ubiquitous_technologies_wireless_wirelessly |
| 1435 | rituals - pagans - pagan - paganism - celebrations | 53 | 1435_rituals_pagans_pagan_paganism |
| 1436 | acre - acres - yard - yards - area | 53 | 1436_acre_acres_yard_yards |
| 1437 | touchdowns - receptions - quarterback - yards - touchdown | 53 | 1437_touchdowns_receptions_quarterback_yards |
| 1438 | poet - poems - poetry - poem - shah | 53 | 1438_poet_poems_poetry_poem |
| 1439 | samurai - shogun - mangaka - fictionalization - novelist | 53 | 1439_samurai_shogun_mangaka_fictionalization |
| 1440 | strings - theories - string - theory - superstring | 53 | 1440_strings_theories_string_theory |
| 1441 | fables - fable - tales - poems - proverbs | 53 | 1441_fables_fable_tales_poems |
| 1442 | computing - analytical - mathematician - computation - mathematics | 53 | 1442_computing_analytical_mathematician_computation |
| 1443 | generative - transformer - neural - learning - trained | 53 | 1443_generative_transformer_neural_learning |
| 1444 | guitarist - bassist - instrumentalist - musicians - drummer | 53 | 1444_guitarist_bassist_instrumentalist_musicians |
| 1445 | prions - prion - proteins - protein - amyloidosis | 53 | 1445_prions_prion_proteins_protein |
| 1446 | happiness - wellbeing - unhappiness - happier - satisfaction | 53 | 1446_happiness_wellbeing_unhappiness_happier |
| 1447 | bulbs - bulb - lamps - lamp - incandescent | 53 | 1447_bulbs_bulb_lamps_lamp |
| 1448 | airplay - tv - apple - televisions - streaming | 53 | 1448_airplay_tv_apple_televisions |
| 1449 | bear - robin - bears - teddy - rabbit | 53 | 1449_bear_robin_bears_teddy |
| 1450 | newspapers - newspaper - periodicals - gazette - tabloid | 53 | 1450_newspapers_newspaper_periodicals_gazette |
| 1451 | sepoys - rebellion - uprising - 1857 - uprisings | 53 | 1451_sepoys_rebellion_uprising_1857 |
| 1452 | uncle - sam - relatives - 1886 - 1922 | 52 | 1452_uncle_sam_relatives_1886 |
| 1453 | campaigned - politician - governor - long - impeached | 52 | 1453_campaigned_politician_governor_long |
| 1454 | vertigo - dizziness - vestibular - tinnitus - migraine | 52 | 1454_vertigo_dizziness_vestibular_tinnitus |
| 1455 | dowager - empress - emperors - empresses - emperor | 52 | 1455_dowager_empress_emperors_empresses |
| 1456 | translator - translators - translations - translating - translates | 52 | 1456_translator_translators_translations_translating |
| 1457 | presidents - presidential - presidency - president - polls | 52 | 1457_presidents_presidential_presidency_president |
| 1458 | episodes - files - fox - storylines - comics | 52 | 1458_episodes_files_fox_storylines |
| 1459 | spaghetti - pasta - monster - creationist - creationism | 52 | 1459_spaghetti_pasta_monster_creationist |
| 1460 | aunt - uncle - breakfast - mammy - doll | 52 | 1460_aunt_uncle_breakfast_mammy |
| 1461 | rating - critics - reviews - review - marvel | 52 | 1461_rating_critics_reviews_review |
| 1462 | billionaire - lawsuit - founder - shares - entrepreneur | 52 | 1462_billionaire_lawsuit_founder_shares |
| 1463 | flow - experiences - performance - motivation - psychology | 52 | 1463_flow_experiences_performance_motivation |
| 1464 | valentine - celebrated - holiday - holidays - saint | 52 | 1464_valentine_celebrated_holiday_holidays |
| 1465 | twins - brothers - 1950s - 1960s - biographical | 52 | 1465_twins_brothers_1950s_1960s |
| 1466 | broadway - musical - actress - musicals - audition | 52 | 1466_broadway_musical_actress_musicals |
| 1467 | mouse - mice - cursor - joystick - trackball | 52 | 1467_mouse_mice_cursor_joystick |
| 1468 | hook - jack - crocodile - pan - nursery | 52 | 1468_hook_jack_crocodile_pan |
| 1469 | satellites - satellite - spacecraft - orbit - constellations | 52 | 1469_satellites_satellite_spacecraft_orbit |
| 1470 | golfers - golf - tournaments - golfing - tournament | 52 | 1470_golfers_golf_tournaments_golfing |
| 1471 | legions - heresy - legion - crusade - factions | 52 | 1471_legions_heresy_legion_crusade |
| 1472 | barcodes - barcode - scanners - code - scanner | 52 | 1472_barcodes_barcode_scanners_code |
| 1473 | atoms - atom - atomic - quantum - particles | 52 | 1473_atoms_atom_atomic_quantum |
| 1474 | opium - smuggling - narcotics - drug - addiction | 52 | 1474_opium_smuggling_narcotics_drug |
| 1475 | indigenous - spirit - spirits - natives - aboriginal | 52 | 1475_indigenous_spirit_spirits_natives |
| 1476 | evil - eye - gaze - eyes - glare | 52 | 1476_evil_eye_gaze_eyes |
| 1477 | smartwatch - watches - smartwatches - apple - wrist | 52 | 1477_smartwatch_watches_smartwatches_apple |
| 1478 | glitter - songs - remixes - vocals - punk | 51 | 1478_glitter_songs_remixes_vocals |
| 1479 | paramount - films - 1957 - movies - 1942 | 51 | 1479_paramount_films_1957_movies |
| 1480 | documentaries - bbc - planet - documentary - nature | 51 | 1480_documentaries_bbc_planet_documentary |
| 1481 | 1848 - dictator - rebelled - insurgent - 1846 | 51 | 1481_1848_dictator_rebelled_insurgent |
| 1482 | battalions - regiments - battalion - regiment - platoons | 51 | 1482_battalions_regiments_battalion_regiment |
| 1483 | polytheistic - shamanism - shamanistic - monotheists - monotheistic | 51 | 1483_polytheistic_shamanism_shamanistic_monotheists |
| 1484 | assassination - tortured - raped - incident - fedayeen | 51 | 1484_assassination_tortured_raped_incident |
| 1485 | tsar - tsars - cathedrals - cathedral - palaces | 51 | 1485_tsar_tsars_cathedrals_cathedral |
| 1486 | data - datasets - analytics - statistics - statistician | 51 | 1486_data_datasets_analytics_statistics |
| 1487 | transformer - transformers - coils - windings - inductance | 51 | 1487_transformer_transformers_coils_windings |
| 1488 | cruises - cruise - seas - ships - sailing | 51 | 1488_cruises_cruise_seas_ships |
| 1489 | colonists - settlers - colonist - smith - colony | 51 | 1489_colonists_settlers_colonist_smith |
| 1490 | fascist - fascism - fascists - 1930s - 1930 | 51 | 1490_fascist_fascism_fascists_1930s |
| 1491 | ferry - songwriter - albums - toured - tour | 51 | 1491_ferry_songwriter_albums_toured |
| 1492 | attractiveness - aesthetics - beauty - aesthetic - aesthetically | 51 | 1492_attractiveness_aesthetics_beauty_aesthetic |
| 1493 | tribes - tribe - 1876 - tribal - treaties | 51 | 1493_tribes_tribe_1876_tribal |
| 1494 | 1934 - robbery - robbers - gunfight - shootout | 51 | 1494_1934_robbery_robbers_gunfight |
| 1495 | rosary - devotions - liturgical - prayers - prayer | 51 | 1495_rosary_devotions_liturgical_prayers |
| 1496 | airborne - airfields - soviet - regiments - military | 51 | 1496_airborne_airfields_soviet_regiments |
| 1497 | tenacious - destiny - guitarist - band - bands | 51 | 1497_tenacious_destiny_guitarist_band |
| 1498 | feud - reigns - rumble - wrestling - wrestler | 51 | 1498_feud_reigns_rumble_wrestling |
| 1499 | coronavirus - diagnosed - positive - vaccinated - flu | 51 | 1499_coronavirus_diagnosed_positive_vaccinated |
| 1500 | decapitated - murders - detectives - homicide - murder | 51 | 1500_decapitated_murders_detectives_homicide |
| 1501 | corruption - corrupt - bribes - bribery - bribe | 51 | 1501_corruption_corrupt_bribes_bribery |
| 1502 | rooms - room - palace - furnishings - ballroom | 51 | 1502_rooms_room_palace_furnishings |
| 1503 | lama - lamas - monks - monasteries - monastic | 51 | 1503_lama_lamas_monks_monasteries |
| 1504 | warehouse - warehouses - retailer - retail - wholesale | 51 | 1504_warehouse_warehouses_retailer_retail |
| 1505 | languages - dialects - ethnic - speak - language | 51 | 1505_languages_dialects_ethnic_speak |
| 1506 | scored - goals - goalscorer - scoring - goalscorers | 51 | 1506_scored_goals_goalscorer_scoring |
| 1507 | consciousness - conscious - unconscious - perceive - awareness | 50 | 1507_consciousness_conscious_unconscious_perceive |
| 1508 | mansion - mansions - estate - residence - richest | 50 | 1508_mansion_mansions_estate_residence |
| 1509 | mp3 - audio - formats - codecs - bitrate | 50 | 1509_mp3_audio_formats_codecs |
| 1510 | dragons - evil - demigod - demigods - villains | 50 | 1510_dragons_evil_demigod_demigods |
| 1511 | citizen - citizens - sovereign - sovereigns - sovereignty | 50 | 1511_citizen_citizens_sovereign_sovereigns |
| 1512 | draft - undrafted - deadline - eligibility - early | 50 | 1512_draft_undrafted_deadline_eligibility |
| 1513 | redheads - redhead - reddish - ginger - hair | 50 | 1513_redheads_redhead_reddish_ginger |
| 1514 | measles - vaccines - vaccination - vaccine - vaccinated | 50 | 1514_measles_vaccines_vaccination_vaccine |
| 1515 | literature - novels - novel - peace - novelists | 50 | 1515_literature_novels_novel_peace |
| 1516 | microwaves - microwave - oven - ovens - cooking | 50 | 1516_microwaves_microwave_oven_ovens |
| 1517 | cranberries - concert - albums - album - 1994 | 50 | 1517_cranberries_concert_albums_album |
| 1518 | pope - papal - popes - papacy - della | 50 | 1518_pope_papal_popes_papacy |
| 1519 | voyagers - heliosphere - interstellar - heliocentric - solar | 50 | 1519_voyagers_heliosphere_interstellar_heliocentric |
| 1520 | album - songs - vocals - song - remixes | 50 | 1520_album_songs_vocals_song |
| 1521 | dead - concert - burial - lyricists - psychedelic | 50 | 1521_dead_concert_burial_lyricists |
| 1522 | athlete - olympic - athletic - athletes - decathlon | 50 | 1522_athlete_olympic_athletic_athletes |
| 1523 | motorsport - prix - tyres - racing - qualifying | 50 | 1523_motorsport_prix_tyres_racing |
| 1524 | acquitted - murdered - prosecutors - prosecution - criss | 50 | 1524_acquitted_murdered_prosecutors_prosecution |
| 1525 | disenfranchisement - disenfranchising - disenfranchised - disenfranchise - suffrage | 50 | 1525_disenfranchisement_disenfranchising_disenfranchised_disenfranchise |
| 1526 | graffiti - screenplay - cinematographers - film - godfather | 50 | 1526_graffiti_screenplay_cinematographers_film |
| 1527 | cycling - bicycles - bikes - biking - bicycling | 50 | 1527_cycling_bicycles_bikes_biking |
| 1528 | halo - chief - 343 - master - guardians | 50 | 1528_halo_chief_343_master |
| 1529 | rockstar - acquisitions - owns - gaming - acquire | 50 | 1529_rockstar_acquisitions_owns_gaming |
| 1530 | classroom - classrooms - cho - students - student | 50 | 1530_classroom_classrooms_cho_students |
| 1531 | albums - concert - toured - band - songs | 50 | 1531_albums_concert_toured_band |
| 1532 | golf - scoring - rounds - tournament - championship | 50 | 1532_golf_scoring_rounds_tournament |
| 1533 | hunger - uprisings - rebellion - capitol - rebels | 50 | 1533_hunger_uprisings_rebellion_capitol |
| 1534 | famine - famines - starvation - rice - rations | 50 | 1534_famine_famines_starvation_rice |
| 1535 | anthem - anthems - hymn - stanza - stanzas | 50 | 1535_anthem_anthems_hymn_stanza |
| 1536 | nations - summit - eu - agreements - summits | 50 | 1536_nations_summit_eu_agreements |
| 1537 | commercials - commercial - advertisements - advertisement - advertising | 50 | 1537_commercials_commercial_advertisements_advertisement |
| 1538 | bridges - bridge - viaducts - arches - truss | 50 | 1538_bridges_bridge_viaducts_arches |
| 1539 | vulgar - profanity - slang - intercourse - pejorative | 50 | 1539_vulgar_profanity_slang_intercourse |
| 1540 | mailbox - uploading - cloud - uploads - folders | 50 | 1540_mailbox_uploading_cloud_uploads |
| 1541 | predator - predators - alien - creature - aliens | 50 | 1541_predator_predators_alien_creature |
| 1542 | 1852 - novels - novel - cabin - literature | 50 | 1542_1852_novels_novel_cabin |
| 1543 | hijab - sharia - fashion - veils - dress | 50 | 1543_hijab_sharia_fashion_veils |
| 1544 | capsaicin - capsaicinoids - peppers - chili - spicy | 50 | 1544_capsaicin_capsaicinoids_peppers_chili |
| 1545 | park - episodes - south - studios - spontaneity | 50 | 1545_park_episodes_south_studios |
| 1546 | tornadoes - tornado - storms - thunderstorms - thunderstorm | 49 | 1546_tornadoes_tornado_storms_thunderstorms |
| 1547 | restaurants - restaurant - franchisees - chick - franchise | 49 | 1547_restaurants_restaurant_franchisees_chick |
| 1548 | blockchains - blockchain - ledgers - cryptocurrencies - decentralization | 49 | 1548_blockchains_blockchain_ledgers_cryptocurrencies |
| 1549 | concert - toured - band - concerts - bands | 49 | 1549_concert_toured_band_concerts |
| 1550 | dew - cola - flavors - soda - beverage | 49 | 1550_dew_cola_flavors_soda |
| 1551 | circumcision - circumcise - circumcised - uncircumcised - foreskin | 49 | 1551_circumcision_circumcise_circumcised_uncircumcised |
| 1552 | sultan - shah - khan - sultanate - rocket | 49 | 1552_sultan_shah_khan_sultanate |
| 1553 | priesthood - priest - synagogue - temple - sect | 49 | 1553_priesthood_priest_synagogue_temple |
| 1554 | ape - apes - jungle - gorilla - gorillas | 49 | 1554_ape_apes_jungle_gorilla |
| 1555 | blockbuster - amazon - subscription - subscribers - streaming | 49 | 1555_blockbuster_amazon_subscription_subscribers |
| 1556 | vogue - magazine - magazines - haute - fashion | 49 | 1556_vogue_magazine_magazines_haute |
| 1557 | cocoa - farmers - commodities - chocolate - countries | 49 | 1557_cocoa_farmers_commodities_chocolate |
| 1558 | anime - cartoon - cartoons - airing - samurai | 49 | 1558_anime_cartoon_cartoons_airing |
| 1559 | rockabilly - chorus - songwriter - singing - musicians | 49 | 1559_rockabilly_chorus_songwriter_singing |
| 1560 | brackets - parentheses - bracket - parenthesis - bracketed | 49 | 1560_brackets_parentheses_bracket_parenthesis |
| 1561 | gulag - soviet - memoirs - novel - archipelago | 49 | 1561_gulag_soviet_memoirs_novel |
| 1562 | jong - coma - detained - postmortem - tortured | 49 | 1562_jong_coma_detained_postmortem |
| 1563 | dictator - corruption - unrest - corrupt - za | 49 | 1563_dictator_corruption_unrest_corrupt |
| 1564 | scoliosis - spine - vertebral - vertebra - spinal | 49 | 1564_scoliosis_spine_vertebral_vertebra |
| 1565 | festival - tomorrow - tickets - performers - organizers | 49 | 1565_festival_tomorrow_tickets_performers |
| 1566 | niger - kingdoms - ethnicities - kingdom - ancestor | 49 | 1566_niger_kingdoms_ethnicities_kingdom |
| 1567 | plc - programmable - microcontrollers - controllers - microcontroller | 49 | 1567_plc_programmable_microcontrollers_controllers |
| 1568 | monopoly - monopolies - games - cash - cards | 49 | 1568_monopoly_monopolies_games_cash |
| 1569 | productions - cola - merger - coca - corporation | 49 | 1569_productions_cola_merger_coca |
| 1570 | gambling - gambler - gamble - gamblers - fortune | 49 | 1570_gambling_gambler_gamble_gamblers |
| 1571 | donuts - doughnuts - donut - doughnut - restaurant | 49 | 1571_donuts_doughnuts_donut_doughnut |
| 1572 | billionaires - richest - billionaire - wealthiest - billion | 49 | 1572_billionaires_richest_billionaire_wealthiest |
| 1573 | lent - fasting - easter - liturgy - liturgical | 49 | 1573_lent_fasting_easter_liturgy |
| 1574 | novels - books - readership - bestsellers - readers | 49 | 1574_novels_books_readership_bestsellers |
| 1575 | ibn - emir - mecca - medina - emirate | 48 | 1575_ibn_emir_mecca_medina |
| 1576 | ministers - minister - secretary - peerage - cabinet | 48 | 1576_ministers_minister_secretary_peerage |
| 1577 | kratom - overdose - overdoses - alkaloids - toxicity | 48 | 1577_kratom_overdose_overdoses_alkaloids |
| 1578 | knight - homicide - manslaughter - murderer - bail | 48 | 1578_knight_homicide_manslaughter_murderer |
| 1579 | phase - phases - transformer - electrical - voltages | 48 | 1579_phase_phases_transformer_electrical |
| 1580 | girdle - knights - chivalry - knight - knightly | 48 | 1580_girdle_knights_chivalry_knight |
| 1581 | mix - albums - little - remix - singles | 48 | 1581_mix_albums_little_remix |
| 1582 | shamrock - annals - priest - saint - apostles | 48 | 1582_shamrock_annals_priest_saint |
| 1583 | aneurysms - aneurysm - coronary - prognosis - vasculitis | 48 | 1583_aneurysms_aneurysm_coronary_prognosis |
| 1584 | mirage - 2000 - missile - airframes - aircraft | 48 | 1584_mirage_2000_missile_airframes |
| 1585 | rangers - ranger - mighty - ninja - dubbed | 48 | 1585_rangers_ranger_mighty_ninja |
| 1586 | iso - specifications - transmission - specification - interface | 48 | 1586_iso_specifications_transmission_specification |
| 1587 | moai - statues - statue - archaeologists - archaeological | 48 | 1587_moai_statues_statue_archaeologists |
| 1588 | cameras - camera - shutters - photography - shutter | 48 | 1588_cameras_camera_shutters_photography |
| 1589 | bigamy - waltz - alimony - dancer - famous | 48 | 1589_bigamy_waltz_alimony_dancer |
| 1590 | pussy - riot - activists - protesting - protest | 48 | 1590_pussy_riot_activists_protesting |
| 1591 | musician - songs - singers - album - songwriters | 48 | 1591_musician_songs_singers_album |
| 1592 | chile - poet - poems - poem - poetry | 48 | 1592_chile_poet_poems_poem |
| 1593 | directorate - security - agencies - agency - executive | 48 | 1593_directorate_security_agencies_agency |
| 1594 | steampunk - cyberpunk - steam - technocrats - conventions | 48 | 1594_steampunk_cyberpunk_steam_technocrats |
| 1595 | planets - volcanically - volcanic - planet - craters | 48 | 1595_planets_volcanically_volcanic_planet |
| 1596 | sky - gaming - gameplay - gamer - game | 48 | 1596_sky_gaming_gameplay_gamer |
| 1597 | brewery - beers - breweries - tents - festival | 48 | 1597_brewery_beers_breweries_tents |
| 1598 | drafted - rebounds - basketball - draft - hoop | 48 | 1598_drafted_rebounds_basketball_draft |
| 1599 | ancient - mathematician - philosophers - philosopher - esotericism | 48 | 1599_ancient_mathematician_philosophers_philosopher |
| 1600 | mural - artworks - paintings - murals - exhibitions | 48 | 1600_mural_artworks_paintings_murals |
| 1601 | gamer - gamers - gaming - harassment - misogynistic | 48 | 1601_gamer_gamers_gaming_harassment |
| 1602 | microprocessors - microelectronics - microprocessor - processors - transistors | 48 | 1602_microprocessors_microelectronics_microprocessor_processors |
| 1603 | molested - murders - crimes - murdered - murdering | 48 | 1603_molested_murders_crimes_murdered |
| 1604 | assassination - assassinate - archduke - assassins - conspirators | 48 | 1604_assassination_assassinate_archduke_assassins |
| 1605 | noir - noirs - genre - cinematography - filmmaking | 48 | 1605_noir_noirs_genre_cinematography |
| 1606 | ibn - folktales - tales - literature - nights | 48 | 1606_ibn_folktales_tales_literature |
| 1607 | piracy - pirate - pirates - privateering - maritime | 48 | 1607_piracy_pirate_pirates_privateering |
| 1608 | mysticism - theosophical - spiritual - spirituality - epistemology | 48 | 1608_mysticism_theosophical_spiritual_spirituality |
| 1609 | gaol - literary - prose - biographies - ballad | 48 | 1609_gaol_literary_prose_biographies |
| 1610 | complexity - computational - algorithms - cryptosystems - deterministic | 48 | 1610_complexity_computational_algorithms_cryptosystems |
| 1611 | deepfake - deepfakes - videos - detecting - detection | 48 | 1611_deepfake_deepfakes_videos_detecting |
| 1612 | metadata - semantic - vocabularies - schema - catalog | 48 | 1612_metadata_semantic_vocabularies_schema |
| 1613 | railway - trains - highways - buses - trolleybus | 47 | 1613_railway_trains_highways_buses |
| 1614 | olives - olive - mediterranean - tree - orchards | 47 | 1614_olives_olive_mediterranean_tree |
| 1615 | till - acquitted - lynched - casket - lynching | 47 | 1615_till_acquitted_lynched_casket |
| 1616 | thriller - ballads - usher - albums - songs | 47 | 1616_thriller_ballads_usher_albums |
| 1617 | literature - tales - adventures - books - poems | 47 | 1617_literature_tales_adventures_books |
| 1618 | typhoon - landfall - cyclone - tropical - meteorological | 47 | 1618_typhoon_landfall_cyclone_tropical |
| 1619 | telecom - telecommunications - telecoms - broadband - provider | 47 | 1619_telecom_telecommunications_telecoms_broadband |
| 1620 | sabbath - bands - band - guitarist - bassist | 47 | 1620_sabbath_bands_band_guitarist |
| 1621 | puritan - reformation - protestant - congregational - sermons | 47 | 1621_puritan_reformation_protestant_congregational |
| 1622 |
conductivity - resistivity - resistances - resistance - ohms | 47 | 1622_conductivity_resistivity_resistances_resistance | | 1623 | reliance - shareholders - shareholder - chairman - chairmanship | 47 | 1623_reliance_shareholders_shareholder_chairman | | 1624 | vampires - vampirism - vampire - vampiric - folklore | 47 | 1624_vampires_vampirism_vampire_vampiric | | 1625 | genocide - humanitarian - atrocities - famine - starvation | 47 | 1625_genocide_humanitarian_atrocities_famine | | 1626 | anorexia - bulimia - anorexic - bulimic - disorders | 47 | 1626_anorexia_bulimia_anorexic_bulimic | | 1627 | slash - slashes - slashed - backslash - separator | 47 | 1627_slash_slashes_slashed_backslash | | 1628 | narcissism - narcissistic - psychopathy - traits - trait | 47 | 1628_narcissism_narcissistic_psychopathy_traits | | 1629 | payments - bank - payment - prepaid - banks | 47 | 1629_payments_bank_payment_prepaid | | 1630 | nomadic - deserts - tribes - desert - sheikhs | 47 | 1630_nomadic_deserts_tribes_desert | | 1631 | quarterback - quarterbacks - touchdowns - patriots - eagles | 47 | 1631_quarterback_quarterbacks_touchdowns_patriots | | 1632 | chocolate - cocoa - cacao - sugar - sugars | 47 | 1632_chocolate_cocoa_cacao_sugar | | 1633 | pharmaceuticals - pharmaceutical - oxycodone - opioids - lawsuits | 47 | 1633_pharmaceuticals_pharmaceutical_oxycodone_opioids | | 1634 | novels - literature - fiction - writings - writer | 47 | 1634_novels_literature_fiction_writings | | 1635 | comics - marvel - cartoonist - comic - superhero | 47 | 1635_comics_marvel_cartoonist_comic | | 1636 | rapper - rappers - rap - diva - singer | 47 | 1636_rapper_rappers_rap_diva | | 1637 | donkey - fairy - godmother - dragon - prince | 47 | 1637_donkey_fairy_godmother_dragon | | 1638 | contraception - contraceptive - contraceptives - abortion - abortions | 47 | 1638_contraception_contraceptive_contraceptives_abortion | | 1639 | adjutant - colonel - soldier - lieutenant - brigadier | 47 | 
1639_adjutant_colonel_soldier_lieutenant | | 1640 | pasha - sultan - bey - beylik - beyliks | 47 | 1640_pasha_sultan_bey_beylik | | 1641 | hookah - hookahs - tobacco - smoking - smoked | 47 | 1641_hookah_hookahs_tobacco_smoking | | 1642 | goalscorer - scored - goals - scoring - goal | 47 | 1642_goalscorer_scored_goals_scoring | | 1643 | 172 - aircraft - fuselage - 177 - redesigned | 47 | 1643_172_aircraft_fuselage_177 | | 1644 | gospels - crucifixion - crucified - gospel - executed | 47 | 1644_gospels_crucifixion_crucified_gospel | | 1645 | genomes - genome - mutations - genes - spacer | 47 | 1645_genomes_genome_mutations_genes | | 1646 | catch - 22 - circumstance - novel - spurious | 47 | 1646_catch_22_circumstance_novel | | 1647 | aphasia - impairment - cognitive - dementia - impaired | 46 | 1647_aphasia_impairment_cognitive_dementia | | 1648 | screenwriter - blood - sequels - films - rocky | 46 | 1648_screenwriter_blood_sequels_films | | 1649 | arias - convicted - conviction - testified - convict | 46 | 1649_arias_convicted_conviction_testified | | 1650 | uniforms - uniformed - berets - beret - regiment | 46 | 1650_uniforms_uniformed_berets_beret | | 1651 | poems - poets - poet - poetry - poem | 46 | 1651_poems_poets_poet_poetry | | 1652 | malpractice - appeals - swallowing - upheld - feeding | 46 | 1652_malpractice_appeals_swallowing_upheld | | 1653 | bucket - albums - album - pike - tracks | 46 | 1653_bucket_albums_album_pike | | 1654 | merger - firms - mergers - acquisitions - firm | 46 | 1654_merger_firms_mergers_acquisitions | | 1655 | navy - military - enlisted - regiment - personnel | 46 | 1655_navy_military_enlisted_regiment | | 1656 | peacekeeping - insurgents - ceasefire - insurgency - insurgent | 46 | 1656_peacekeeping_insurgents_ceasefire_insurgency | | 1657 | shamrocks - parades - shamrock - celebrated - celebrations | 46 | 1657_shamrocks_parades_shamrock_celebrated | | 1658 | eternal - eternally - eternity - repetitions - recurrence | 46 | 
1658_eternal_eternally_eternity_repetitions | | 1659 | tower - towers - fortification - moat - castles | 46 | 1659_tower_towers_fortification_moat | | 1660 | treaties - hostilities - wartime - convention - tribunal | 46 | 1660_treaties_hostilities_wartime_convention | | 1661 | khat - banning - misuse - legality - prohibition | 46 | 1661_khat_banning_misuse_legality | | 1662 | invested - investor - investors - funding - financing | 46 | 1662_invested_investor_investors_funding | | 1663 | democrats - parties - elections - election - democratic | 46 | 1663_democrats_parties_elections_election | | 1664 | mini - convertible - redesigned - discontinued - minimalism | 46 | 1664_mini_convertible_redesigned_discontinued | | 1665 | clowns - clown - concert - posse - circus | 46 | 1665_clowns_clown_concert_posse | | 1666 | rankings - ranking - ranks - universities - academics | 46 | 1666_rankings_ranking_ranks_universities | | 1667 | jam - cameo - cartoon - basketball - cameos | 46 | 1667_jam_cameo_cartoon_basketball | | 1668 | saffron - botanical - turmeric - cultivated - phytochemicals | 46 | 1668_saffron_botanical_turmeric_cultivated | | 1669 | mysticism - sharia - spirituality - imam - mystical | 46 | 1669_mysticism_sharia_spirituality_imam | | 1670 | remixes - pet - remixed - remix - duet | 46 | 1670_remixes_pet_remixed_remix | | 1671 | frontiersman - settlers - frontiersmen - wilderness - 1778 | 46 | 1671_frontiersman_settlers_frontiersmen_wilderness | | 1672 | episodes - shows - preschoolers - blue - preschool | 46 | 1672_episodes_shows_preschoolers_blue | | 1673 | municipalities - municipality - metropolitan - cities - populous | 46 | 1673_municipalities_municipality_metropolitan_cities | | 1674 | soccer - goals - assists - goal - goalscorer | 46 | 1674_soccer_goals_assists_goal | | 1675 | birthdays - birthday - probability - anniversaries - 365 | 46 | 1675_birthdays_birthday_probability_anniversaries | | 1676 | siren - sirens - mermaid - mermaids - mythology | 46 | 
1676_siren_sirens_mermaid_mermaids | | 1677 | zombie - finale - survivor - hilltop - walkers | 46 | 1677_zombie_finale_survivor_hilltop | | 1678 | mosque - mosques - imam - imams - prophet | 46 | 1678_mosque_mosques_imam_imams | | 1679 | swan - swans - epistemic - theory - book | 46 | 1679_swan_swans_epistemic_theory | | 1680 | mar - trump - mansion - vacation - presidential | 46 | 1680_mar_trump_mansion_vacation | | 1681 | sequels - avatar - sequel - 3d - filming | 46 | 1681_sequels_avatar_sequel_3d | | 1682 | soldiers - helicopter - surrender - mujahideen - prisoner | 46 | 1682_soldiers_helicopter_surrender_mujahideen | | 1683 | tron - legacy - trailers - disc - trailer | 46 | 1683_tron_legacy_trailers_disc | | 1684 | roguelikes - roguelike - rogue - gameplay - dungeons | 45 | 1684_roguelikes_roguelike_rogue_gameplay | | 1685 | vocals - concert - vocalist - musicians - guitarist | 45 | 1685_vocals_concert_vocalist_musicians | | 1686 | editions - comics - hardcover - edition - miniseries | 45 | 1686_editions_comics_hardcover_edition | | 1687 | playwright - playwrights - theatricality - theatre - dramatize | 45 | 1687_playwright_playwrights_theatricality_theatre | | 1688 | paintings - painting - painters - painter - murals | 45 | 1688_paintings_painting_painters_painter | | 1689 | harassment - allegations - harassed - assaulted - victimized | 45 | 1689_harassment_allegations_harassed_assaulted | | 1690 | doll - child - dolls - voiced - remake | 45 | 1690_doll_child_dolls_voiced | | 1691 | jeep - jeeps - vehicle - vehicles - chassis | 45 | 1691_jeep_jeeps_vehicle_vehicles | | 1692 | cinema - filmmaking - cinemas - films - filmmakers | 45 | 1692_cinema_filmmaking_cinemas_films | | 1693 | bomber - bombers - missiles - aircraft - missile | 45 | 1693_bomber_bombers_missiles_aircraft | | 1694 | monarchy - 1867 - confederation - 1918 - 1848 | 45 | 1694_monarchy_1867_confederation_1918 | | 1695 | sugar - sugars - sugarcane - glucose - molasses | 45 | 
1695_sugar_sugars_sugarcane_glucose | | 1696 | art - artistic - marina - exhibition - museum | 45 | 1696_art_artistic_marina_exhibition | | 1697 | racing - qualifying - raced - laps - prix | 45 | 1697_racing_qualifying_raced_laps | | 1698 | cellar - cellars - imprisonment - captives - raped | 45 | 1698_cellar_cellars_imprisonment_captives | | 1699 | ruby - rails - gems - interpreter - programming | 45 | 1699_ruby_rails_gems_interpreter | | 1700 | saints - row - stadia - games - arcade | 45 | 1700_saints_row_stadia_games | | 1701 | yakuza - anime - animations - chibi - voice | 45 | 1701_yakuza_anime_animations_chibi | | 1702 | sales - revenue - disc - discs - blu | 45 | 1702_sales_revenue_disc_discs | | 1703 | rabies - raccoons - infectious - vaccines - bitten | 45 | 1703_rabies_raccoons_infectious_vaccines | | 1704 | gypsy - disorder - seizure - investigators - syndrome | 45 | 1704_gypsy_disorder_seizure_investigators | | 1705 | clover - paramount - film - directorial - movie | 45 | 1705_clover_paramount_film_directorial | | 1706 | shades - sequels - twilight - trilogy - film | 45 | 1706_shades_sequels_twilight_trilogy | | 1707 | monastery - monks - monastic - monastics - nam | 45 | 1707_monastery_monks_monastic_monastics | | 1708 | scream - sequels - screenwriter - sequel - trilogy | 45 | 1708_scream_sequels_screenwriter_sequel | | 1709 | tablet - underworld - entrails - netherworld - throne | 45 | 1709_tablet_underworld_entrails_netherworld | | 1710 | peat - peatlands - peatland - wetlands - soils | 45 | 1710_peat_peatlands_peatland_wetlands | | 1711 | thirty - seconds - tour - album - headlining | 45 | 1711_thirty_seconds_tour_album | | 1712 | saxophones - saxophone - sax - saxophonists - saxophonist | 45 | 1712_saxophones_saxophone_sax_saxophonists | | 1713 | telecommunications - telecom - telecoms - telecommunication - telephony | 45 | 1713_telecommunications_telecom_telecoms_telecommunication | | 1714 | interceptions - touchdowns - quarterback - interception 
- quarterbacks | 45 | 1714_interceptions_touchdowns_quarterback_interception | | 1715 | tractors - tractor - deer - axles - machinery | 45 | 1715_tractors_tractor_deer_axles | | 1716 | quoting - quotations - quotes - apostrophes - quotation | 45 | 1716_quoting_quotations_quotes_apostrophes | | 1717 | panther - vibranium - panthers - spider - doom | 45 | 1717_panther_vibranium_panthers_spider | | 1718 | profiles - chats - swipes - profile - chatting | 45 | 1718_profiles_chats_swipes_profile | | 1719 | enterprises - ventures - affiliate - companies - commerce | 45 | 1719_enterprises_ventures_affiliate_companies | | 1720 | fibromyalgia - fibrous - neuropathy - chronic - neuropathic | 45 | 1720_fibromyalgia_fibrous_neuropathy_chronic | | 1721 | tithes - genesis - tithe - testament - pharaoh | 45 | 1721_tithes_genesis_tithe_testament | | 1722 | celestial - eternal - awakening - immortal - destruction | 44 | 1722_celestial_eternal_awakening_immortal | | 1723 | empathy - empathic - empathizing - empathize - sympathy | 44 | 1723_empathy_empathic_empathizing_empathize | | 1724 | surrogacy - surrogates - surrogate - parenthood - fertility | 44 | 1724_surrogacy_surrogates_surrogate_parenthood | | 1725 | tennis - tournaments - tournament - championships - finals | 44 | 1725_tennis_tournaments_tournament_championships | | 1726 | brands - brand - margarine - soap - oils | 44 | 1726_brands_brand_margarine_soap | | 1727 | leftist - leftists - nationalists - liberal - conservatives | 44 | 1727_leftist_leftists_nationalists_liberal | | 1728 | medal - medals - presidential - president - bestowed | 44 | 1728_medal_medals_presidential_president | | 1729 | quarterback - quarterbacks - interceptions - manning - touchdowns | 44 | 1729_quarterback_quarterbacks_interceptions_manning | | 1730 | nazi - 1941 - 1939 - 1944 - wartime | 44 | 1730_nazi_1941_1939_1944 | | 1731 | fractal - curves - boundary - holomorphic - bifurcation | 44 | 1731_fractal_curves_boundary_holomorphic | | 1732 | limp - 
rock - bands - band - rap | 44 | 1732_limp_rock_bands_band | | 1733 | devil - demon - satan - soul - souls | 44 | 1733_devil_demon_satan_soul | | 1734 | goalscorer - footballer - goals - goalscoring - scored | 44 | 1734_goalscorer_footballer_goals_goalscoring | | 1735 | libraries - library - librarians - bibliographic - scholarly | 44 | 1735_libraries_library_librarians_bibliographic | | 1736 | heir - eldest - nobility - peerage - baronetcy | 44 | 1736_heir_eldest_nobility_peerage | | 1737 | radium - chemist - polonium - radioactive - sciences | 44 | 1737_radium_chemist_polonium_radioactive | | 1738 | sitcom - episodes - sergeants - sheriff - comedian | 44 | 1738_sitcom_episodes_sergeants_sheriff | | 1739 | scum - feminist - satirist - manifesto - feminism | 44 | 1739_scum_feminist_satirist_manifesto | | 1740 | moose - singer - mansa - rapper - songs | 44 | 1740_moose_singer_mansa_rapper | | 1741 | population - municipalities - municipality - cities - inhabitants | 44 | 1741_population_municipalities_municipality_cities | | 1742 | sober - sobriety - rehab - addiction - addict | 44 | 1742_sober_sobriety_rehab_addiction | | 1743 | ant - ants - toured - tour - concert | 44 | 1743_ant_ants_toured_tour | | 1744 | financial - investors - investor - founder - bankruptcies | 44 | 1744_financial_investors_investor_founder | | 1745 | trail - anchorage - abandoned - rescued - canoe | 44 | 1745_trail_anchorage_abandoned_rescued | | 1746 | magnum - episodes - detective - episode - robin | 44 | 1746_magnum_episodes_detective_episode | | 1747 | moss - quarterback - cornerback - punts - touchdowns | 44 | 1747_moss_quarterback_cornerback_punts | | 1748 | papacy - 1523 - 1527 - papal - 1471 | 44 | 1748_papacy_1523_1527_papal | | 1749 | orcas - orca - whale - dolphin - whales | 43 | 1749_orcas_orca_whale_dolphin | | 1750 | cartoonist - comic - strips - cartoon - strip | 43 | 1750_cartoonist_comic_strips_cartoon | | 1751 | aids - antiretroviral - epidemiology - prevalence - population 
| 43 | 1751_aids_antiretroviral_epidemiology_prevalence | | 1752 | spam - restaurants - condiments - barbecue - canned | 43 | 1752_spam_restaurants_condiments_barbecue | | 1753 | motorcycles - motorcycle - motorbikes - motorbike - bikes | 43 | 1753_motorcycles_motorcycle_motorbikes_motorbike | | 1754 | toured - band - headlining - album - guitarist | 43 | 1754_toured_band_headlining_album | | 1755 | loaf - meat - duet - duets - sang | 43 | 1755_loaf_meat_duet_duets | | 1756 | horse - horses - cavalry - rode - 1877 | 43 | 1756_horse_horses_cavalry_rode | | 1757 | festival - lawsuit - lawsuits - defrauded - sued | 43 | 1757_festival_lawsuit_lawsuits_defrauded | | 1758 | noblewomen - empress - maids - governesses - nobility | 43 | 1758_noblewomen_empress_maids_governesses | | 1759 | retailer - retailers - marketplace - marketplaces - merchants | 43 | 1759_retailer_retailers_marketplace_marketplaces | | 1760 | expedition - expeditions - 1803 - voyage - explorers | 43 | 1760_expedition_expeditions_1803_voyage | | 1761 | grand - central - midtown - terminal - concourse | 43 | 1761_grand_central_midtown_terminal | | 1762 | hill - gibbons - vocalist - rock - beard | 43 | 1762_hill_gibbons_vocalist_rock | | 1763 | blueberries - blueberry - berries - cranberries - cranberry | 43 | 1763_blueberries_blueberry_berries_cranberries | | 1764 | microseconds - timestamps - clocks - epoch - timestamp | 43 | 1764_microseconds_timestamps_clocks_epoch | | 1765 | tinnitus - auditory - otitis - ears - ear | 43 | 1765_tinnitus_auditory_otitis_ears | | 1766 | currencies - renminbi - currency - yuan - monetary | 43 | 1766_currencies_renminbi_currency_yuan | | 1767 | amber - ambergris - jewelry - fragrance - resin | 43 | 1767_amber_ambergris_jewelry_fragrance | | 1768 | yakuza - gangs - crime - gang - thugs | 43 | 1768_yakuza_gangs_crime_gang | | 1769 | brave - browser - browsers - chrome - browse | 43 | 1769_brave_browser_browsers_chrome | | 1770 | bugs - insects - insecticides - pest - 
pests | 43 | 1770_bugs_insects_insecticides_pest | | 1771 | pit - rap - song - remix - songs | 43 | 1771_pit_rap_song_remix | | 1772 | calendar - calendars - dates - holidays - astronomical | 43 | 1772_calendar_calendars_dates_holidays | | 1773 | calculators - calculator - microelectronics - calculation - calculations | 43 | 1773_calculators_calculator_microelectronics_calculation | | 1774 | statutes - limitations - statute - limitation - prosecution | 43 | 1774_statutes_limitations_statute_limitation | | 1775 | priesthood - rituals - hymns - archaic - caste | 43 | 1775_priesthood_rituals_hymns_archaic | | 1776 | jock - reunion - cast - sitcom - finale | 43 | 1776_jock_reunion_cast_sitcom | | 1777 | boar - boars - pigs - pig - wildlife | 43 | 1777_boar_boars_pigs_pig | | 1778 | supermarket - supermarkets - shops - retailer - stores | 43 | 1778_supermarket_supermarkets_shops_retailer | | 1779 | fasting - fasts - fasted - fast - fatwas | 43 | 1779_fasting_fasts_fasted_fast | | 1780 | infantry - battalions - allied - landings - troops | 43 | 1780_infantry_battalions_allied_landings | | 1781 | protests - protesters - protest - demonstrators - square | 43 | 1781_protests_protesters_protest_demonstrators | | 1782 | witches - prophecy - throne - king - tyrant | 43 | 1782_witches_prophecy_throne_king | | 1783 | peanuts - comics - cartoonists - reprint - reprints | 43 | 1783_peanuts_comics_cartoonists_reprint | | 1784 | penicillin - antibiotics - antibiotic - antimicrobial - antibacterial | 43 | 1784_penicillin_antibiotics_antibiotic_antimicrobial | | 1785 | phosphors - phosphor - luminous - fluorescent - led | 43 | 1785_phosphors_phosphor_luminous_fluorescent | | 1786 | martial - kung - karate - taekwondo - lee | 43 | 1786_martial_kung_karate_taekwondo | | 1787 | werewolf - werewolves - wolf - lycanthropy - wolves | 42 | 1787_werewolf_werewolves_wolf_lycanthropy | | 1788 | marvel - marvels - superhero - superheroes - comics | 42 | 1788_marvel_marvels_superhero_superheroes 
| | 1789 | tai - chi - martial - wushu - yang | 42 | 1789_tai_chi_martial_wushu | | 1790 | cents - coins - monetary - shillings - coin | 42 | 1790_cents_coins_monetary_shillings | | 1791 | inter - assists - goalscorer - scored - goals | 42 | 1791_inter_assists_goalscorer_scored | | 1792 | massacre - soldiers - victims - civilians - regiment | 42 | 1792_massacre_soldiers_victims_civilians | | 1793 | soldier - poet - poem - poems - autobiography | 42 | 1793_soldier_poet_poem_poems | | 1794 | leases - addresses - client - subnet - subnets | 42 | 1794_leases_addresses_client_subnet | | 1795 | neolithic - archaeological - excavations - paleolithic - archeological | 42 | 1795_neolithic_archaeological_excavations_paleolithic | | 1796 | griffin - rebounds - basketball - wizards - triple | 42 | 1796_griffin_rebounds_basketball_wizards | | 1797 | surrealists - surrealist - surrealism - surrealistic - artists | 42 | 1797_surrealists_surrealist_surrealism_surrealistic | | 1798 | 1850 - settlers - 1846 - goldfields - 1848 | 42 | 1798_1850_settlers_1846_goldfields | | 1799 | serve - serving - volley - frontcourt - play | 42 | 1799_serve_serving_volley_frontcourt | | 1800 | engineering - engineers - engineer - electrical - electronics | 42 | 1800_engineering_engineers_engineer_electrical | | 1801 | festivals - festival - concerts - orchestras - venues | 42 | 1801_festivals_festival_concerts_orchestras | | 1802 | sentinel - islands - island - jungle - tribe | 42 | 1802_sentinel_islands_island_jungle | | 1803 | autobahns - autobahn - throttling - highways - motorways | 42 | 1803_autobahns_autobahn_throttling_highways | | 1804 | watches - wristwatches - wristwatch - watchmaker - timepieces | 42 | 1804_watches_wristwatches_wristwatch_watchmaker | | 1805 | actress - actresses - starred - portrayed - personae | 42 | 1805_actress_actresses_starred_portrayed | | 1806 | aikido - kendo - martial - judo - ryū | 42 | 1806_aikido_kendo_martial_judo | | 1807 | automotive - motors - automobile 
- jaguar - vehicle | 42 | 1807_automotive_motors_automobile_jaguar | | 1808 | sitcom - cast - bunch - spinoffs - tv | 42 | 1808_sitcom_cast_bunch_spinoffs | | 1809 | park - parks - parking - central - parkland | 42 | 1809_park_parks_parking_central | | 1810 | conquered - tribes - ancient - steppes - nomadic | 42 | 1810_conquered_tribes_ancient_steppes | | 1811 | smartphone - smartphones - android - flagship - mi | 42 | 1811_smartphone_smartphones_android_flagship | | 1812 | vocalists - singer - albums - ballads - songs | 42 | 1812_vocalists_singer_albums_ballads | | 1813 | honeys - honey - honeydew - bees - sugar | 42 | 1813_honeys_honey_honeydew_bees | | 1814 | albums - toured - album - concerts - band | 42 | 1814_albums_toured_album_concerts | | 1815 | photovoltaics - photovoltaic - solar - panels - modules | 42 | 1815_photovoltaics_photovoltaic_solar_panels | | 1816 | bebop - cowboy - anime - episodes - otaku | 42 | 1816_bebop_cowboy_anime_episodes | | 1817 | imaging - radiographic - radiology - scanning - scanned | 42 | 1817_imaging_radiographic_radiology_scanning | | 1818 | logistics - freight - warehousing - procurement - warehouses | 42 | 1818_logistics_freight_warehousing_procurement | | 1819 | javelin - athlete - badminton - olympic - athletics | 42 | 1819_javelin_athlete_badminton_olympic | | 1820 | theme - melody - soundtrack - tune - song | 42 | 1820_theme_melody_soundtrack_tune | | 1821 | commerce - retailers - shopping - retailing - retail | 42 | 1821_commerce_retailers_shopping_retailing | | 1822 | trail - trails - overland - railroad - wagons | 42 | 1822_trail_trails_overland_railroad | | 1823 | rover - rovers - vehicles - vehicle - chassis | 42 | 1823_rover_rovers_vehicles_vehicle | | 1824 | congressman - lawmaker - misconduct - congressional - appeals | 42 | 1824_congressman_lawmaker_misconduct_congressional | | 1825 | postcolonial - imperialism - moralist - revolt - french | 42 | 1825_postcolonial_imperialism_moralist_revolt | | 1826 | hound - 
warrior - hurling - sword - spear | 42 | 1826_hound_warrior_hurling_sword | | 1827 | ferns - fern - angiosperms - phylogenetic - phylogeny | 42 | 1827_ferns_fern_angiosperms_phylogenetic | | 1828 | credit - social - debtors - audits - blacklists | 42 | 1828_credit_social_debtors_audits | | 1829 | compulsions - compulsive - obsessive - obsession - obsessions | 42 | 1829_compulsions_compulsive_obsessive_obsession | | 1830 | bodybuilder - bodybuilding - bodybuilders - competed - weightlifting | 42 | 1830_bodybuilder_bodybuilding_bodybuilders_competed | | 1831 | actress - siblings - celebrity - actor - divorce | 42 | 1831_actress_siblings_celebrity_actor | | 1832 | assassinated - assassination - hanged - assassinate - assassin | 42 | 1832_assassinated_assassination_hanged_assassinate | | 1833 | eugenics - eugenic - geneticists - genetic - sterilisation | 42 | 1833_eugenics_eugenic_geneticists_genetic | | 1834 | civilians - contractors - prosecution - enforcement - security | 42 | 1834_civilians_contractors_prosecution_enforcement | | 1835 | botany - botanist - botanists - botanical - flora | 42 | 1835_botany_botanist_botanists_botanical | | 1836 | publics - public - pr - communicators - organizations | 41 | 1836_publics_public_pr_communicators | | 1837 | nonfiction - magazines - magazine - anthologies - writer | 41 | 1837_nonfiction_magazines_magazine_anthologies | | 1838 | kimchi - rice - cabbage - cuisine - recipes | 41 | 1838_kimchi_rice_cabbage_cuisine | | 1839 | anna - marriage - dolly - marrying - affair | 41 | 1839_anna_marriage_dolly_marrying | | 1840 | traumatic - trauma - traumas - posttraumatic - psychiatric | 41 | 1840_traumatic_trauma_traumas_posttraumatic | | 1841 | château - vineyard - baronet - winemaking - estates | 41 | 1841_château_vineyard_baronet_winemaking | | 1842 | tunnel - tunnelling - tunnels - railways - railway | 41 | 1842_tunnel_tunnelling_tunnels_railways | | 1843 | rivers - celebrity - housewives - comedian - contestant | 41 | 
1843_rivers_celebrity_housewives_comedian | | 1844 | antifa - activists - fascists - fascist - protesters | 41 | 1844_antifa_activists_fascists_fascist | | 1845 | straits - albums - guitarist - dire - guitar | 41 | 1845_straits_albums_guitarist_dire | | 1846 | edict - orthodoxy - roman - persecution - ecumenical | 41 | 1846_edict_orthodoxy_roman_persecution | | 1847 | guitars - guitar - fretboard - frets - necks | 41 | 1847_guitars_guitar_fretboard_frets | | 1848 | limerence - attraction - affection - intrusive - infatuation | 41 | 1848_limerence_attraction_affection_intrusive | | 1849 | philosopher - martyrs - paganism - martyr - pagan | 41 | 1849_philosopher_martyrs_paganism_martyr | | 1850 | shingles - herpesvirus - chickenpox - herpes - smallpox | 41 | 1850_shingles_herpesvirus_chickenpox_herpes | | 1851 | heritage - preservation - films - film - culturally | 41 | 1851_heritage_preservation_films_film | | 1852 | slim - richest - billionaire - pesos - shareholder | 41 | 1852_slim_richest_billionaire_pesos | | 1853 | ninja - manga - anime - shinobi - shōnen | 41 | 1853_ninja_manga_anime_shinobi | | 1854 | opioid - senator - senators - lobbyist - overdoses | 41 | 1854_opioid_senator_senators_lobbyist | | 1855 | trump - trumps - president - grandchildren - paternal | 41 | 1855_trump_trumps_president_grandchildren | | 1856 | scratch - scratching - programming - scratched - adobe | 41 | 1856_scratch_scratching_programming_scratched | | 1857 | smallpox - epidemics - measles - epidemic - diseases | 41 | 1857_smallpox_epidemics_measles_epidemic | | 1858 | ideology - philosopher - philosophy - psychoanalytical - psychoanalytic | 41 | 1858_ideology_philosopher_philosophy_psychoanalytical | | 1859 | sai - samadhi - devotees - qawwali - guru | 41 | 1859_sai_samadhi_devotees_qawwali | | 1860 | college - degree - accredited - bachelor - faculty | 41 | 1860_college_degree_accredited_bachelor | | 1861 | mustard - mustards - bombs - gases - chemicals | 41 | 
1861_mustard_mustards_bombs_gases | | 1862 | quixotic - literature - chivalric - novel - chivalry | 41 | 1862_quixotic_literature_chivalric_novel | | 1863 | rap - rock - album - band - boys | 41 | 1863_rap_rock_album_band | | 1864 | blur - oasis - albums - toured - tour | 41 | 1864_blur_oasis_albums_toured | | 1865 | colonies - settlers - colonists - 1624 - colony | 41 | 1865_colonies_settlers_colonists_1624 | | 1866 | satellites - satellite - soviets - spacecraft - soviet | 41 | 1866_satellites_satellite_soviets_spacecraft | | 1867 | brownies - brownie - folklore - stories - maids | 41 | 1867_brownies_brownie_folklore_stories | | 1868 | guardians - galaxy - marvel - 2022 - 2023 | 41 | 1868_guardians_galaxy_marvel_2022 | | 1869 | slender - skinny - creepypastas - creepypasta - portrayals | 41 | 1869_slender_skinny_creepypastas_creepypasta | | 1870 | viewership - viewers - ratings - streamed - viewing | 41 | 1870_viewership_viewers_ratings_streamed | | 1871 | burritos - tacos - salsa - tortillas - foods | 41 | 1871_burritos_tacos_salsa_tortillas | | 1872 | tsar - 1812 - armies - casualties - cavalrymen | 41 | 1872_tsar_1812_armies_casualties | | 1873 | divine - persona - scene - onstage - films | 41 | 1873_divine_persona_scene_onstage | | 1874 | mosque - synagogue - temple - synagogues - waqf | 41 | 1874_mosque_synagogue_temple_synagogues | | 1875 | extradition - arrest - fugitive - extradite - citizenship | 41 | 1875_extradition_arrest_fugitive_extradite | | 1876 | rage - albums - machine - band - album | 41 | 1876_rage_albums_machine_band | | 1877 | zombie - walkers - zombies - walking - episodes | 40 | 1877_zombie_walkers_zombies_walking | | 1878 | impeachment - impeach - bipartisan - republican - bipartisanship | 40 | 1878_impeachment_impeach_bipartisan_republican | | 1879 | lighting - lights - fluorescent - light - brighter | 40 | 1879_lighting_lights_fluorescent_light | | 1880 | sigma - deviations - variability - defects - statisticians | 40 | 
1880_sigma_deviations_variability_defects |
| 1881 | html - markup - browsers - hypertext - browser | 40 | 1881_html_markup_browsers_hypertext |
| 1882 | designing - prototyping - drafting - drawings - designs | 40 | 1882_designing_prototyping_drafting_drawings |
| 1883 | sultan - sultans - empresses - concubine - caliph | 40 | 1883_sultan_sultans_empresses_concubine |
| 1884 | lost - teen - teens - boys - threesome | 40 | 1884_lost_teen_teens_boys |
| 1885 | electromagnetism - electromagnet - electrical - electromagnetic - electrochemistry | 40 | 1885_electromagnetism_electromagnet_electrical_electromagnetic |
| 1886 | pianos - piano - pedals - pianists - pedal | 40 | 1886_pianos_piano_pedals_pianists |
| 1887 | bake - baking - bakery - bakeries - bakers | 40 | 1887_bake_baking_bakery_bakeries |
| 1888 | incest - incestuous - inbreeding - familial - taboo | 40 | 1888_incest_incestuous_inbreeding_familial |
| 1889 | policies - policy - legislation - bipartisan - violations | 40 | 1889_policies_policy_legislation_bipartisan |
| 1890 | masters - contestants - premiered - chef - chefs | 40 | 1890_masters_contestants_premiered_chef |
| 1891 | kitsune - folktale - folktales - folklore - tales | 40 | 1891_kitsune_folktale_folktales_folklore |
| 1892 | butterfly - bride - dagger - flowers - cries | 40 | 1892_butterfly_bride_dagger_flowers |
| 1893 | island - ancient - geography - continent - ocean | 40 | 1893_island_ancient_geography_continent |
| 1894 | pawn - pawned - jewelry - shop - silver | 40 | 1894_pawn_pawned_jewelry_shop |
| 1895 | antisemitism - nationalist - kibbutz - diaspora - nationalists | 40 | 1895_antisemitism_nationalist_kibbutz_diaspora |
| 1896 | competed - tennis - quarterfinal - tournament - semifinal | 40 | 1896_competed_tennis_quarterfinal_tournament |
| 1897 | refugees - repatriation - refugee - asylum - refuge | 40 | 1897_refugees_repatriation_refugee_asylum |
| 1898 | divers - diving - dive - diver - cave | 40 | 1898_divers_diving_dive_diver |
| 1899 | museums - museum - exhibit - exhibits - exhibition | 40 | 1899_museums_museum_exhibit_exhibits |
| 1900 | marvel - marvels - superhero - superheroes - comics | 40 | 1900_marvel_marvels_superhero_superheroes |
| 1901 | sat - exams - exam - admissions - scholastic | 40 | 1901_sat_exams_exam_admissions |
| 1902 | murders - murdered - murder - robbery - killer | 40 | 1902_murders_murdered_murder_robbery |
| 1903 | ancestors - subcontinent - ethnoreligious - dynasties - descended | 40 | 1903_ancestors_subcontinent_ethnoreligious_dynasties |
| 1904 | squid - squids - colossal - giant - tentacles | 40 | 1904_squid_squids_colossal_giant |
| 1905 | smurf - scorching - merchandising - midget - vocabulary | 40 | 1905_smurf_scorching_merchandising_midget |
| 1906 | badminton - competed - tai - quarterfinal - tournament | 40 | 1906_badminton_competed_tai_quarterfinal |
| 1907 | seasons - season - list - blazers - pistons | 40 | 1907_seasons_season_list_blazers |
| 1908 | crash - gameplay - playable - remastered - games | 40 | 1908_crash_gameplay_playable_remastered |
| 1909 | thrones - cast - stark - arya - actors | 40 | 1909_thrones_cast_stark_arya |
| 1910 | puck - goalie - hockey - capitals - scoring | 40 | 1910_puck_goalie_hockey_capitals |
| 1911 | libretto - operatic - opera - operas - arias | 40 | 1911_libretto_operatic_opera_operas |
| 1912 | mounds - mound - excavations - prehistoric - archaeological | 40 | 1912_mounds_mound_excavations_prehistoric |
| 1913 | marsupials - opossums - phylogenies - phylogenetic - marsupial | 40 | 1913_marsupials_opossums_phylogenies_phylogenetic |
| 1914 | politburo - soviet - 1953 - exterminated - troika | 40 | 1914_politburo_soviet_1953_exterminated |
| 1915 | fate - fates - doctor - doctorate - comics | 40 | 1915_fate_fates_doctor_doctorate |
| 1916 | encoded - encoding - encodings - encode - byte | 40 | 1916_encoded_encoding_encodings_encode |
| 1917 | evil - zombie - prequel - sequels - sequel | 40 | 1917_evil_zombie_prequel_sequels |
| 1918 | identifiers - identifier - variants - byte - id | 40 | 1918_identifiers_identifier_variants_byte |
| 1919 | racing - eliminated - laps - motorsports - won | 39 | 1919_racing_eliminated_laps_motorsports |
| 1920 | centrifuges - centrifuge - nuclear - uranium - centrifugal | 39 | 1920_centrifuges_centrifuge_nuclear_uranium |
| 1921 | messiah - prophet - prophets - crucified - crucifixion | 39 | 1921_messiah_prophet_prophets_crucified |
| 1922 | cabin - autopsy - sheriff - handcuffed - suspect | 39 | 1922_cabin_autopsy_sheriff_handcuffed |
| 1923 | youngest - activist - khan - journalist - laureate | 39 | 1923_youngest_activist_khan_journalist |
| 1924 | gonzo - journalist - journalism - journalistic - magazine | 39 | 1924_gonzo_journalist_journalism_journalistic |
| 1925 | treaty - cooperation - organization - multilateral - diplomacy | 39 | 1925_treaty_cooperation_organization_multilateral |
| 1926 | duchess - duke - royal - potters - charity | 39 | 1926_duchess_duke_royal_potters |
| 1927 | cyanobacteria - cyanobacterial - cyanobacterium - phytoplankton - algae | 39 | 1927_cyanobacteria_cyanobacterial_cyanobacterium_phytoplankton |
| 1928 | accredited - accreditation - universities - certifications - admissions | 39 | 1928_accredited_accreditation_universities_certifications |
| 1929 | ruins - excavation - valley - archaeological - archaeologists | 39 | 1929_ruins_excavation_valley_archaeological |
| 1930 | engine - diesel - engines - turbodiesel - turbo | 39 | 1930_engine_diesel_engines_turbodiesel |
| 1931 | procrastination - procrastinate - procrastinators - motivation - delaying | 39 | 1931_procrastination_procrastinate_procrastinators_motivation |
| 1932 | law - laws - theological - morality - moral | 39 | 1932_law_laws_theological_morality |
| 1933 | darknet - net - network - networks - dark | 39 | 1933_darknet_net_network_networks |
| 1934 | stitch - hostage - operative - operatives - hostages | 39 | 1934_stitch_hostage_operative_operatives |
| 1935 | rex - frontman - bassist - rock - ballads | 39 | 1935_rex_frontman_bassist_rock |
| 1936 | projectors - projector - optics - optical - projection | 39 | 1936_projectors_projector_optics_optical |
| 1937 | golf - golfer - golfers - handicaps - scoring | 39 | 1937_golf_golfer_golfers_handicaps |
| 1938 | saint - rosary - shrine - shrines - rituals | 39 | 1938_saint_rosary_shrine_shrines |
| 1939 | plague - epidemics - famines - epidemic - famine | 39 | 1939_plague_epidemics_famines_epidemic |
| 1940 | apartheid - segregated - segregation - blacks - discrimination | 39 | 1940_apartheid_segregated_segregation_blacks |
| 1941 | unbreakable - split - sequels - cameo - screenplay | 39 | 1941_unbreakable_split_sequels_cameo |
| 1942 | sentencing - unconstitutional - punishment - executions - upheld | 39 | 1942_sentencing_unconstitutional_punishment_executions |
| 1943 | viper - dodge - vehicle - car - fiat | 39 | 1943_viper_dodge_vehicle_car |
| 1944 | pylori - gastric - gastritis - gastrointestinal - pyloric | 39 | 1944_pylori_gastric_gastritis_gastrointestinal |
| 1945 | architect - architects - architecture - architectural - designs | 39 | 1945_architect_architects_architecture_architectural |
| 1946 | prophet - publisher - writings - painter - 1910 | 39 | 1946_prophet_publisher_writings_painter |
| 1947 | enterprise - enterprises - organizational - business - applications | 39 | 1947_enterprise_enterprises_organizational_business |
| 1948 | cartoons - cartoon - 1930s - 1932 - 1933 | 39 | 1948_cartoons_cartoon_1930s_1932 |
| 1949 | dragon - dragons - train - sequel - nightmare | 39 | 1949_dragon_dragons_train_sequel |
| 1950 | unmanned - drone - drones - aircraft - piloted | 39 | 1950_unmanned_drone_drones_aircraft |
| 1951 | duets - singer - songs - duet - songwriter | 39 | 1951_duets_singer_songs_duet |
| 1952 | muse - concert - vocals - albums - tour | 39 | 1952_muse_concert_vocals_albums |
| 1953 | oil - petroleum - oilfields - prices - 1971 | 39 | 1953_oil_petroleum_oilfields_prices |
| 1954 | barrow - barrows - murdered - murders - sheriff | 39 | 1954_barrow_barrows_murdered_murders |
| 1955 | dams - canyon - boulder - dam - aqueduct | 39 | 1955_dams_canyon_boulder_dam |
| 1956 | ashes - wickets - cricket - innings - batsman | 39 | 1956_ashes_wickets_cricket_innings |
| 1957 | defendants - autopsy - saw - gunshot - investigation | 39 | 1957_defendants_autopsy_saw_gunshot |
| 1958 | absurdism - absurdist - absurdity - absurd - meaninglessness | 39 | 1958_absurdism_absurdist_absurdity_absurd |
| 1959 | kung - panda - pandas - martial - sequels | 39 | 1959_kung_panda_pandas_martial |
| 1960 | leucotomy - lobotomy - neurosurgical - psychosurgery - lobotomized | 39 | 1960_leucotomy_lobotomy_neurosurgical_psychosurgery |
| 1961 | sovereignty - treatises - treatise - governance - philosophy | 39 | 1961_sovereignty_treatises_treatise_governance |
| 1962 | colors - coloring - colours - elections - electoral | 38 | 1962_colors_coloring_colours_elections |
| 1963 | garter - knighted - royal - knighthood - peerage | 38 | 1963_garter_knighted_royal_knighthood |
| 1964 | 1666 - fires - 1670 - firefighters - burning | 38 | 1964_1666_fires_1670_firefighters |
| 1965 | logic - mixtape - rapper - memoir - album | 38 | 1965_logic_mixtape_rapper_memoir |
| 1966 | antisemitism - antisemitic - semitism - antifa - persecutions | 38 | 1966_antisemitism_antisemitic_semitism_antifa |
| 1967 | kraken - octopuses - octopus - fishermen - cephalopods | 38 | 1967_kraken_octopuses_octopus_fishermen |
| 1968 | salmon - fishes - fish - sturgeon - fishery | 38 | 1968_salmon_fishes_fish_sturgeon |
| 1969 | constitution - constitutional - amended - amendments - amendment | 38 | 1969_constitution_constitutional_amended_amendments |
| 1970 | triangles - angles - triangle - hypotenuse - cosines | 38 | 1970_triangles_angles_triangle_hypotenuse |
| 1971 | executions - punishments - abolished - punishment - abolishing | 38 | 1971_executions_punishments_abolished_punishment |
| 1972 | dragons - imagine - unreleased - indie - released | 38 | 1972_dragons_imagine_unreleased_indie |
| 1973 | writer - novels - novel - literary - negro | 38 | 1973_writer_novels_novel_literary |
| 1974 | shuttlecock - badminton - tennis - bouncing - backhand | 38 | 1974_shuttlecock_badminton_tennis_bouncing |
| 1975 | acronyms - abbreviations - acronym - abbreviation - initials | 38 | 1975_acronyms_abbreviations_acronym_abbreviation |
| 1976 | executions - electrocution - unconstitutional - inmates - executed | 38 | 1976_executions_electrocution_unconstitutional_inmates |
| 1977 | bots - bot - automated - human - recognition | 38 | 1977_bots_bot_automated_human |
| 1978 | prenuptial - agreements - marital - marriage - agreement | 38 | 1978_prenuptial_agreements_marital_marriage |
| 1979 | population - 35 - 25 - 65 - age | 38 | 1979_population_35_25_65 |
| 1980 | dengue - fever - mosquito - mosquitoes - mosquitos | 38 | 1980_dengue_fever_mosquito_mosquitoes |
| 1981 | rainbow - studio - productions - cartoons - cartoon | 38 | 1981_rainbow_studio_productions_cartoons |
| 1982 | developmental - classrooms - developmentally - classroom - educational | 38 | 1982_developmental_classrooms_developmentally_classroom |
| 1983 | racing - raced - speedway - motorsports - racetrack | 38 | 1983_racing_raced_speedway_motorsports |
| 1984 | agricultural - commodity - corn - commodities - biofuels | 38 | 1984_agricultural_commodity_corn_commodities |
| 1985 | cosplay - cosplayers - costumes - contestants - masks | 38 | 1985_cosplay_cosplayers_costumes_contestants |
| 1986 | sour - songwriter - grungy - debut - songs | 38 | 1986_sour_songwriter_grungy_debut |
| 1987 | yoon - jin - jung - hye - kyung | 38 | 1987_yoon_jin_jung_hye |
| 1988 | keynote - festival - attendees - conferences - organizers | 38 | 1988_keynote_festival_attendees_conferences |
| 1989 | celebrity - spinoffs - sisters - rob - siblings | 38 | 1989_celebrity_spinoffs_sisters_rob |
| 1990 | provider - security - cyberattack - cybersecurity - servers | 38 | 1990_provider_security_cyberattack_cybersecurity |
| 1991 | pods - pod - containers - cluster - clusters | 38 | 1991_pods_pod_containers_cluster |
| 1992 | lifespan - oldest - longevity - age - lived | 38 | 1992_lifespan_oldest_longevity_age |
| 1993 | battleship - warship - battleships - naval - navy | 38 | 1993_battleship_warship_battleships_naval |
| 1994 | regiments - regiment - battalions - recruits - recruitment | 38 | 1994_regiments_regiment_battalions_recruits |
| 1995 | parliamentary - minister - parliament - ministers - constituency | 38 | 1995_parliamentary_minister_parliament_ministers |
| 1996 | minister - politician - campaigned - constituency - elected | 37 | 1996_minister_politician_campaigned_constituency |
| 1997 | subsidies - aid - postwar - economy - economists | 37 | 1997_subsidies_aid_postwar_economy |
| 1998 | gameplay - gaming - rpg - games - twilight | 37 | 1998_gameplay_gaming_rpg_games |
| 1999 | alexithymia - anxiety - disorders - psychiatric - disorder | 37 | 1999_alexithymia_anxiety_disorders_psychiatric |
| 2000 | tests - test - gender - feminist - women | 37 | 2000_tests_test_gender_feminist |
| 2001 | widows - widowhood - 1861 - bipolar - 1880s | 37 | 2001_widows_widowhood_1861_bipolar |
| 2002 | demons - demonic - demon - eve - demonology | 37 | 2002_demons_demonic_demon_eve |
| 2003 | gangster - gangsters - notorious - prohibition - jailing | 37 | 2003_gangster_gangsters_notorious_prohibition |
| 2004 | automata - cellular - cells - cell - automaton | 37 | 2004_automata_cellular_cells_cell |
| 2005 | languages - language - multilingual - soviet - lingua | 37 | 2005_languages_language_multilingual_soviet |
| 2006 | population - 2050 - populations - 2060 - demographic | 37 | 2006_population_2050_populations_2060 |
| 2007 | sarin - cousins - cousin - kidnap - affair | 37 | 2007_sarin_cousins_cousin_kidnap |
| 2008 | apes - ape - primates - chimpanzees - primate | 37 | 2008_apes_ape_primates_chimpanzees |
| 2009 | livestock - cattle - veterinary - beef - animal | 37 | 2009_livestock_cattle_veterinary_beef |
| 2010 | van - alleged - suspect - lurid - arrested | 37 | 2010_van_alleged_suspect_lurid |
| 2011 | emotion - emotions - emotional - affective - arousal | 37 | 2011_emotion_emotions_emotional_affective |
| 2012 | creoles - creole - francophone - french - parishes | 37 | 2012_creoles_creole_francophone_french |
| 2013 | laureates - laureate - prizes - prize - novelists | 37 | 2013_laureates_laureate_prizes_prize |
| 2014 | pachinko - parlors - arcades - parlor - yakuza | 37 | 2014_pachinko_parlors_arcades_parlor |
| 2015 | bohemian - queen - deacon - mercury - musical | 37 | 2015_bohemian_queen_deacon_mercury |
| 2016 | dictator - regime - rebels - unrest - fled | 37 | 2016_dictator_regime_rebels_unrest |
| 2017 | bombed - airship - airships - zeppelin - bombing | 37 | 2017_bombed_airship_airships_zeppelin |
| 2018 | euthanasia - suicide - suicides - patients - deaths | 37 | 2018_euthanasia_suicide_suicides_patients |
| 2019 | censorship - pornography - videos - moderation - abusing | 37 | 2019_censorship_pornography_videos_moderation |
| 2020 | apple - retina - screen - camera - processor | 37 | 2020_apple_retina_screen_camera |
| 2021 | marshals - marshal - department - deputy - deputies | 37 | 2021_marshals_marshal_department_deputy |
| 2022 | baron - flew - 1918 - von - pilots | 37 | 2022_baron_flew_1918_von |
| 2023 | orthodox - soviet - atheism - clergy - persecution | 37 | 2023_orthodox_soviet_atheism_clergy |
| 2024 | metal - bands - band - idol - genre | 37 | 2024_metal_bands_band_idol |
| 2025 | pharaoh - conquered - rulers - deposed - kings | 37 | 2025_pharaoh_conquered_rulers_deposed |
| 2026 | abducted - murders - brooks - abduction - abductions | 37 | 2026_abducted_murders_brooks_abduction |
| 2027 | currencies - currency - rates - inflation - parity | 37 | 2027_currencies_currency_rates_inflation |
| 2028 | 1917 - 1918 - soviet - soviets - republic | 37 | 2028_1917_1918_soviet_soviets |
| 2029 | animism - animists - animist - spiritualism - anthropological | 37 | 2029_animism_animists_animist_spiritualism |
| 2030 | hypothesis - hypotheses - theories - scientific - reproducibility | 37 | 2030_hypothesis_hypotheses_theories_scientific |
| 2031 | commerce - resell - sales - vendor - marketplace | 37 | 2031_commerce_resell_sales_vendor |
| 2032 | mathematician - mathematicians - mathematics - physicist - von | 37 | 2032_mathematician_mathematicians_mathematics_physicist |
| 2033 | channel - channels - tv - television - streaming | 37 | 2033_channel_channels_tv_television |
| 2034 | marvel - superhero - comics - valkyrie - thunder | 37 | 2034_marvel_superhero_comics_valkyrie |
| 2035 | 1080p - framerate - 1080 - 60fps - 720p | 37 | 2035_1080p_framerate_1080_60fps |
| 2036 | head - butt - episodes - rerun - paramount | 37 | 2036_head_butt_episodes_rerun |
| 2037 | woke - woken - wake - awake - twitter | 37 | 2037_woke_woken_wake_awake |
| 2038 | ron - character - villains - sidekick - characters | 37 | 2038_ron_character_villains_sidekick |
| 2039 | fed - inflation - monetary - financial - treasury | 37 | 2039_fed_inflation_monetary_financial |
| 2040 | fatwas - fatwā - fatwa - satanic - author | 37 | 2040_fatwas_fatwā_fatwa_satanic |
| 2041 | reliance - telecommunications - telecom - telecommunication - broadband | 37 | 2041_reliance_telecommunications_telecom_telecommunication |
| 2042 | conqueror - 1066 - 1069 - 1067 - 1086 | 37 | 2042_conqueror_1066_1069_1067 |
| 2043 | broadway - musicals - theatre - cat - musical | 37 | 2043_broadway_musicals_theatre_cat |
| 2044 | philosopher - writings - aphorism - philosophers - poet | 37 | 2044_philosopher_writings_aphorism_philosophers |
| 2045 | groceries - restaurants - restaurant - taxicab - taxi | 37 | 2045_groceries_restaurants_restaurant_taxicab |
| 2046 | musicals - lyricist - musical - concertos - operas | 37 | 2046_musicals_lyricist_musical_concertos |
| 2047 | mysticism - mystical - spiritual - spirituality - esotericism | 37 | 2047_mysticism_mystical_spiritual_spirituality |
| 2048 | biblical - patriarch - prophethood - genesis - prophetic | 37 | 2048_biblical_patriarch_prophethood_genesis |
| 2049 | chainsaw - massacre - slaughterhouse - sequels - chain | 36 | 2049_chainsaw_massacre_slaughterhouse_sequels |
| 2050 | fingerprints - robbers - fingerprint - investigation - ransom | 36 | 2050_fingerprints_robbers_fingerprint_investigation |
| 2051 | rocky - rapper - ap - mixtape - billboard | 36 | 2051_rocky_rapper_ap_mixtape |
| 2052 | influential - list - publicized - ranking - world | 36 | 2052_influential_list_publicized_ranking |
| 2053 | concert - concerts - tour - theater - headlining | 36 | 2053_concert_concerts_tour_theater |
| 2054 | protozoans - protozoa - protozoan - taxonomic - phylogeny | 36 | 2054_protozoans_protozoa_protozoan_taxonomic |
| 2055 | missile - missiles - supersonic - hypersonic - radar | 36 | 2055_missile_missiles_supersonic_hypersonic |
| 2056 | anthrax - anthracis - infection - infections - infected | 36 | 2056_anthrax_anthracis_infection_infections |
| 2057 | streaming - premiere - tv - stream - 1080p | 36 | 2057_streaming_premiere_tv_stream |
| 2058 | apps - app - android - purchases - ads | 36 | 2058_apps_app_android_purchases |
| 2059 | 1644 - theological - theologian - puritan - theology | 36 | 2059_1644_theological_theologian |
| 2060 | spaceflight - military - spaceflights - spacecraft - aerospace | 36 | 2060_spaceflight_military_spaceflights_spacecraft |
| 2061 | dick - biography - author - novelist - writer | 36 | 2061_dick_biography_author_novelist |
| 2062 | mangroves - mangrove - ecosystem - ecosystems - biodiversity | 36 | 2062_mangroves_mangrove_ecosystem_ecosystems |
| 2063 | harbour - naval - captured - 1842 - steamships | 36 | 2063_harbour_naval_captured_1842 |
| 2064 | chipmunks - chipmunk - song - 1959 - albums | 36 | 2064_chipmunks_chipmunk_song_1959 |
| 2065 | plasmas - plasma - electrostatic - electrically - electrons | 36 | 2065_plasmas_plasma_electrostatic_electrically |
| 2066 | channel - mosh - media - channels - entertainment | 36 | 2066_channel_mosh_media_channels |
| 2067 | kernel - latest - maintainers - os - maintainer | 36 | 2067_kernel_latest_maintainers_os |
| 2068 | population - municipalities - cityscape - villages - city | 36 | 2068_population_municipalities_cityscape_villages |
| 2069 | songs - song - singer - albums - album | 36 | 2069_songs_song_singer_albums |
| 2070 | cannabinoids - cannabinoid - tetrahydrocannabinol - cannabidiol - cannabis | 36 | 2070_cannabinoids_cannabinoid_tetrahydrocannabinol_cannabidiol |
| 2071 | jong - politburo - secretary - leader - chairman | 36 | 2071_jong_politburo_secretary_leader |
| 2072 | dancer - dances - danced - dancing - choreography | 36 | 2072_dancer_dances_danced_dancing |
| 2073 | reptilians - conspiracist - conspiracism - reptilian - extraterrestrial | 36 | 2073_reptilians_conspiracist_conspiracism_reptilian |
| 2074 | newscast - reporter - primetime - anchor - journalist | 36 | 2074_newscast_reporter_primetime_anchor |
| 2075 | 1605 - gunpowder - undercroft - conspirators - 1603 | 36 | 2075_1605_gunpowder_undercroft_conspirators |
| 2076 | motley - albums - frontman - band - concert | 36 | 2076_motley_albums_frontman_band |
| 2077 | wormholes - wormhole - spacetime - traversable - relativity | 36 | 2077_wormholes_wormhole_spacetime_traversable |
| 2078 | habeas - constitution - detention - constitutional - imprisonment | 36 | 2078_habeas_constitution_detention_constitutional |
| 2079 | renewed - primetime - episodes - airing - premiered | 36 | 2079_renewed_primetime_episodes_airing |
| 2080 | strikers - goalscoring - winger - striker - goalkeeping | 36 | 2080_strikers_goalscoring_winger_striker |
| 2081 | seal - undercover - smuggling - cartel - smuggler | 36 | 2081_seal_undercover_smuggling_cartel |
| 2082 | placebo - concert - concerts - gigs - albums | 36 | 2082_placebo_concert_concerts_gigs |
| 2083 | radiation - radiographs - rays - radiography - ray | 36 | 2083_radiation_radiographs_rays_radiography |
| 2084 | thrash - vocals - drumming - rhythmic - melodic | 36 | 2084_thrash_vocals_drumming_rhythmic |
| 2085 | scored - scoring - goalscoring - goalscorer - goals | 36 | 2085_scored_scoring_goalscoring_goalscorer |
| 2086 | teams - league - leagues - stadium - conferences | 36 | 2086_teams_league_leagues_stadium |
| 2087 | kava - hepatotoxicity - supplements - herbal - medicinal | 36 | 2087_kava_hepatotoxicity_supplements_herbal |
| 2088 | expedition - expeditions - voyage - explorers - archaeologist | 36 | 2088_expedition_expeditions_voyage_explorers |
| 2089 | polyamory - polyamorous - polygamous - polygamy - monogamous | 36 | 2089_polyamory_polyamorous_polygamous_polygamy |
| 2090 | rose - autobiography - quotes - writing - quotation | 36 | 2090_rose_autobiography_quotes_writing |
| 2091 | biblical - conquered - temple - kingdom - temples | 36 | 2091_biblical_conquered_temple_kingdom |
| 2092 | bacon - pork - vegetarian - meat - beef | 36 | 2092_bacon_pork_vegetarian_meat |
| 2093 | memes - evolution - evolutionary - meme - genetic | 36 | 2093_memes_evolution_evolutionary_meme |
| 2094 | actress - actresses - blonde - playboy - vogue | 36 | 2094_actress_actresses_blonde_playboy |
| 2095 | ancient - law - laws - treatise - legislation | 36 | 2095_ancient_law_laws_treatise |
| 2096 | actor - peck - gangster - actors - portrayal | 36 | 2096_actor_peck_gangster_actors |
| 2097 | protesting - protest - protesters - protests - picketing | 36 | 2097_protesting_protest_protesters_protests |
| 2098 | neo - matrix - trinity - smith - cypher | 36 | 2098_neo_matrix_trinity_smith |
| 2099 | mathematician - mathematicians - hardy - mathematics - mathematical | 36 | 2099_mathematician_mathematicians_hardy_mathematics |
| 2100 | semiotics - semiotic - concepts - linguistics - formalist | 36 | 2100_semiotics_semiotic_concepts_linguistics |
| 2101 | palace - palaces - monuments - museums - museum | 36 | 2101_palace_palaces_monuments_museums |
| 2102 | episodes - airing - episode - ugly - finale | 36 | 2102_episodes_airing_episode_ugly |
| 2103 | lunar - moon - landings - spacecraft - missions | 36 | 2103_lunar_moon_landings_spacecraft |
| 2104 | protocols - antisemitism - conspiratorial - antisemitic - conspiracist | 35 | 2104_protocols_antisemitism_conspiratorial_antisemitic |
| 2105 | sailed - seaworthy - boat - vessel - aground | 35 | 2105_sailed_seaworthy_boat_vessel |
| 2106 | duet - concert - albums - songs - concerts | 35 | 2106_duet_concert_albums_songs |
| 2107 | flame - ignited - flamethrower - flames - burning | 35 | 2107_flame_ignited_flamethrower_flames |
| 2108 | wu - han - tung - qi - sun | 35 | 2108_wu_han_tung_qi |
| 2109 | committees - parliamentary - parliament - chairperson - chairpersons | 35 | 2109_committees_parliamentary_parliament_chairperson |
| 2110 | alleged - bail - prosecution - arrested - arrest | 35 | 2110_alleged_bail_prosecution_arrested |
| 2111 | duet - sang - queen - concert - diamonds | 35 | 2111_duet_sang_queen_concert |
| 2112 | municipality - polish - orchestras - cathedral - conservatory | 35 | 2112_municipality_polish_orchestras_cathedral |
| 2113 | regiment - enlisted - regiments - regimental - navy | 35 | 2113_regiment_enlisted_regiments_regimental |
| 2114 | tower - towers - tallest - 1889 - construction | 35 | 2114_tower_towers_tallest_1889 |
| 2115 | taxation - tax - taxes - taxed - taxpayers | 35 | 2115_taxation_tax_taxes_taxed |
| 2116 | sclerosis - ms - demyelination - encephalomyelitis - neurological | 35 | 2116_sclerosis_ms_demyelination_encephalomyelitis |
| 2117 | seppuku - samurai - decapitation - decapitate - decapitates | 35 | 2117_seppuku_samurai_decapitation_decapitate |
| 2118 | architect - architects - architecture - houses - designs | 35 | 2118_architect_architects_architecture_houses |
| 2119 | albums - songs - lyrics - lyricist - sang | 35 | 2119_albums_songs_lyrics_lyricist |
| 2120 | decibels - decibel - amplitude - amplitudes - amplifier | 35 | 2120_decibels_decibel_amplitude_amplitudes |
| 2121 | palace - crystal - penalty - scorer - scored | 35 | 2121_palace_crystal_penalty_scorer |
| 2122 | emir - sheikh - emirate - emirates - president | 35 | 2122_emir_sheikh_emirate_emirates |
| 2123 | strips - circular - strip - folds - geometrically | 35 | 2123_strips_circular_strip_folds |
| 2124 | sang - songwriter - singers - albums - singer | 35 | 2124_sang_songwriter_singers_albums |
| 2125 | libel - celebrity - reportedly - scandals - resigned | 35 | 2125_libel_celebrity_reportedly_scandals |
| 2126 | divergent - trilogy - novel - screenplay - sequel | 35 | 2126_divergent_trilogy_novel_screenplay |
| 2127 | vaccine - vaccines - vaccination - vaccinated - vaccinations | 35 | 2127_vaccine_vaccines_vaccination_vaccinated |
| 2128 | paramount - premiered - channel - tv - episodes | 35 | 2128_paramount_premiered_channel_tv |
| 2129 | fish - cannibalism - tortured - murders - murder | 35 | 2129_fish_cannibalism_tortured_murders |
| 2130 | touchdowns - interceptions - touchdown - quarterback - fumble | 35 | 2130_touchdowns_interceptions_touchdown_quarterback |
| 2131 | viewership - viewers - televised - television - broadcasters | 35 | 2131_viewership_viewers_televised_television |
| 2132 | shops - mall - stores - shop - store | 35 | 2132_shops_mall_stores_shop |
| 2133 | niece - grandmother - 1918 - aunt - 1945 | 35 | 2133_niece_grandmother_1918_aunt |
| 2134 | scored - goalscorers - goals - scoring - goal | 35 | 2134_scored_goalscorers_goals_scoring |
| 2135 | seo - google - ranking - rankings - pages | 35 | 2135_seo_google_ranking_rankings |
| 2136 | reliance - shareholders - crore - crores - industries | 35 | 2136_reliance_shareholders_crore_crores |
| 2137 | postmodernism - modernism - postmodern - modernists - modernist | 35 | 2137_postmodernism_modernism_postmodern_modernists |
| 2138 | genesis - biblical - patriarch - prophets - polytheism | 35 | 2138_genesis_biblical_patriarch_prophets |
| 2139 | presidential - parliamentary - governs - government - presidents | 35 | 2139_presidential_parliamentary_governs_government |
| 2140 | political - politics - sciences - sociology - sociologists | 35 | 2140_political_politics_sciences_sociology |
| 2141 | feeds - syndication - feed - subscribing - browsers | 35 | 2141_feeds_syndication_feed_subscribing |
| 2142 | ac - drummer - guitarist - drums - albums | 35 | 2142_ac_drummer_guitarist_drums |
| 2143 | chassis - vehicle - dealerships - automotive - factory | 35 | 2143_chassis_vehicle_dealerships_automotive |
| 2144 | biographers - revolt - orientalism - desertion - pillars | 35 | 2144_biographers_revolt_orientalism_desertion |
| 2145 | wastes - waste - disposal - recycling - landfills | 35 | 2145_wastes_waste_disposal_recycling |
| 2146 | radio - fm - stations - broadcasting - broadcasters | 35 | 2146_radio_fm_stations_broadcasting |
| 2147 | resignation - riots - protest - paramilitary - protesters | 35 | 2147_resignation_riots_protest_paramilitary |
| 2148 | theme - themes - intro - anime - ending | 35 | 2148_theme_themes_intro_anime |
| 2149 | journalist - hitch - correspondent - readership - statesman | 34 | 2149_journalist_hitch_correspondent_readership |
| 2150 | paintings - painting - painters - painter - murals | 34 | 2150_paintings_painting_painters_painter |
| 2151 | fighter - fighters - featherweight - brawling - bantamweight | 34 | 2151_fighter_fighters_featherweight_brawling |
| 2152 | transit - commute - commuting - commuters - buses | 34 | 2152_transit_commute_commuting_commuters |
| 2153 | bliss - ambient - stages - albums - stage | 34 | 2153_bliss_ambient_stages_albums |
| 2154 | studios - lions - acquisitions - paramount - owns | 34 | 2154_studios_lions_acquisitions_paramount |
| 2155 | humidity - humid - moisture - dew - evaporation | 34 | 2155_humidity_humid_moisture_dew |
| 2156 | codeine - prescription - paracetamol - prescribed - dihydrocodeine | 34 | 2156_codeine_prescription_paracetamol_prescribed |
| 2157 | merger - shareholders - acquisition - acquire - acquired | 34 | 2157_merger_shareholders_acquisition_acquire |
| 2158 | geopolitical - tsarist - ideology - geopolitics - political | 34 | 2158_geopolitical_tsarist_ideology_geopolitics |
| 2159 | crops - agriculture - agricultural - maize - plantations | 34 | 2159_crops_agriculture_agricultural_maize |
| 2160 | poutine - gravy - truffles - burger - cuisine | 34 | 2160_poutine_gravy_truffles_burger |
| 2161 | autobiography - autobiographies - bird - poetry - poet | 34 | 2161_autobiography_autobiographies_bird_poetry |
| 2162 | propaganda - propagandists - propagandistic - propagandist - persuasion | 34 | 2162_propaganda_propagandists_propagandistic_propagandist |
| 2163 | sausage - sausages - bun - chili - condiments | 34 | 2163_sausage_sausages_bun_chili |
| 2164 | albums - vocals - album - remix - punk | 34 | 2164_albums_vocals_album_remix |
| 2165 | inmate - prison - prisoner - imprisonment - sentenced | 34 | 2165_inmate_prison_prisoner_imprisonment |
| 2166 | discord - subscription - guilds - nitro - twitch | 34 | 2166_discord_subscription_guilds_nitro |
| 2167 | gnostic - gnosis - theology - religions - theosophical | 34 | 2167_gnostic_gnosis_theology_religions |
| 2168 | bomber - squadron - 509th - 1945 - bombardment | 34 | 2168_bomber_squadron_509th_1945 |
| 2169 | boiler - turbine - engines - invention - inventor | 34 | 2169_boiler_turbine_engines_invention |
| 2170 | fighter - arcade - street - arcades - consoles | 34 | 2170_fighter_arcade_street_arcades |
| 2171 | parole - sentenced - hearings - convicted - judge | 34 | 2171_parole_sentenced_hearings_convicted |
| 2172 | home - sequels - sequel - screenplays - film | 34 | 2172_home_sequels_sequel_screenplays |
| 2173 | ferry - harbor - pier - wharf - waterfront | 34 | 2173_ferry_harbor_pier_wharf |
| 2174 | muddy - harmonica - blues - albums - guitarist | 34 | 2174_muddy_harmonica_blues_albums |
| 2175 | calamity - calamitous - novels - novel - soprano | 34 | 2175_calamity_calamitous_novels_novel |
| 2176 | godfather - mafia - capo - murdered - murder | 34 | 2176_godfather_mafia_capo_murdered |
| 2177 | undrafted - cornerback - patriots - receptions - touchdowns | 34 | 2177_undrafted_cornerback_patriots_receptions |
| 2178 | royal - monarch - palace - palaces - royalty | 34 | 2178_royal_monarch_palace_palaces |
| 2179 | joey - punk - drummer - sings - vocals | 34 | 2179_joey_punk_drummer_sings |
| 2180 | nuclear - sanctions - missiles - jong - missile | 34 | 2180_nuclear_sanctions_missiles_jong |
| 2181 | poet - poem - poetry - poems - scribes | 34 | 2181_poet_poem_poetry_poems |
| 2182 | rebounds - warriors - curry - assists - green | 34 | 2182_rebounds_warriors_curry_assists |
| 2183 | skyscraper - tallest - tower - towers - skyscrapers | 34 | 2183_skyscraper_tallest_tower_towers |
| 2184 | shareholder - shareholders - investor - shares - holdings | 34 | 2184_shareholder_shareholders_investor_shares |
| 2185 | astronomer - heliocentric - 1516 - papal - 1496 | 34 | 2185_astronomer_heliocentric_1516_papal |
| 2186 | actresses - actress - wilder - portrayal - broadway | 34 | 2186_actresses_actress_wilder_portrayal |
| 2187 | register - historic - cemeteries - landmarks - monuments | 34 | 2187_register_historic_cemeteries_landmarks |
| 2188 | albums - album - songs - concert - comeback | 34 | 2188_albums_album_songs_concert |
| 2189 | merger - aerospace - subsidiaries - acquisitions - firms | 34 | 2189_merger_aerospace_subsidiaries_acquisitions |
| 2190 | wiggle - band - entertainers - concert - performers | 34 | 2190_wiggle_band_entertainers_concert |
| 2191 | pilgrims - 1620 - pilgrim - voyage - sailed | 34 | 2191_pilgrims_1620_pilgrim_voyage |
| 2192 | sneakers - footwear - shoes - sportswear - sneaker | 34 | 2192_sneakers_footwear_shoes_sportswear |
| 2193 | blade - prequels - sequel - 2049 - runner | 34 | 2193_blade_prequels_sequel_2049 |
| 2194 | torch - olympic - relay - relays - flame | 34 | 2194_torch_olympic_relay_relays |
| 2195 | novelist - purple - novels - novel - literary | 34 | 2195_novelist_purple_novels_novel |
| 2196 | ranger - regiment - infantry - regimental - rangers | 34 | 2196_ranger_regiment_infantry_regimental |
| 2197 | dialects - languages - speak - dialect - language | 34 | 2197_dialects_languages_speak_dialect |
| 2198 | gymnast - gymnasts - gymnastics - gymnastic - athlete | 34 | 2198_gymnast_gymnasts_gymnastics_gymnastic |
| 2199 | haiku - haikai - poetry - poems - poet | 34 | 2199_haiku_haikai_poetry_poems |
| 2200 | spetsnaz - soviet - infantrymen - commanders - brigades | 34 | 2200_spetsnaz_soviet_infantrymen_commanders |
| 2201 | deaf - blindness - disabilities - braille - blind | 33 | 2201_deaf_blindness_disabilities_braille |
| 2202 | medieval - antiquity - renaissance - darkness - dark | 33 | 2202_medieval_antiquity_renaissance_darkness |
| 2203 | photographer - photography - photographs - photographic - photographers | 33 | 2203_photographer_photography_photographs_photographic |
| 2204 | genocide - genocides - holocaust - perpetrators - genocidal | 33 | 2204_genocide_genocides_holocaust_perpetrators |
| 2205 | bow - actress - 1932 - laurels - 1920s | 33 | 2205_bow_actress_1932_laurels |
| 2206 | crops - agriculture - irrigation - agricultural - farmers | 33 | 2206_crops_agriculture_irrigation_agricultural |
| 2207 | paella - paprika - cuisine - seafood - olive | 33 | 2207_paella_paprika_cuisine_seafood |
| 2208 | novelist - literature - literary - writer - poet | 33 | 2208_novelist_literature_literary_writer |
| 2209 | midnight - noon - midday - clock - evening | 33 | 2209_midnight_noon_midday_clock |
| 2210 | griffin - donation - donated - museum - donating | 33 | 2210_griffin_donation_donated_museum |
| 2211 | starling - cannibalized - cannibal - cannibalize - killer | 33 | 2211_starling_cannibalized_cannibal_cannibalize |
| 2212 | taxis - taxi - taxicabs - fares - cabs | 33 | 2212_taxis_taxi_taxicabs_fares |
| 2213 | hunts - noose - gun - hunting - hunters | 33 | 2213_hunts_noose_gun_hunting |
| 2214 | ethnicities - ethnicity - ethnic - racial - census | 33 | 2214_ethnicities_ethnicity_ethnic_racial |
| 2215 | buildings - skyscraper - apartments - building - architecture | 33 | 2215_buildings_skyscraper_apartments_building |
| 2216 | sedan - convertibles - convertible - coupe - corvette | 33 | 2216_sedan_convertibles_convertible_coupe |
| 2217 | harden - assists - rebounds - scoring - triple | 33 | 2217_harden_assists_rebounds_scoring |
| 2218 | emails - webmail - email - mail - google | 33 | 2218_emails_webmail_email_mail |
| 2219 | paintings - painting - painter - painters - paint | 33 | 2219_paintings_painting_painter_painters |
| 2220 | yards - quarterback - touchdowns - receptions - cornerback | 33 | 2220_yards_quarterback_touchdowns_receptions |
| 2221 | museums - museum - exhibitions - sheikh - sultanate | 33 | 2221_museums_museum_exhibitions_sheikh |
| 2222 | retailer - store - sales - shop - stores | 33 | 2222_retailer_store_sales_shop |
| 2223 | khan - politician - candidate - goldsmith - councillor | 33 | 2223_khan_politician_candidate_goldsmith |
| 2224 | eigenfunctions - quantum - eigenstates - eigenstate - observables | 33 | 2224_eigenfunctions_quantum_eigenstates_eigenstate |
| 2225 | recycling - recycled - recycle - recyclable - recycles | 33 | 2225_recycling_recycled_recycle_recyclable |
| 2226 | thrash - slayer - band - bands - hardcore | 33 | 2226_thrash_slayer_band_bands |
| 2227 | beetle - beetles - convertible - fenders - chassis | 33 | 2227_beetle_beetles_convertible_fenders |
| 2228 | assists - rebounds - steals - doubles - triple | 33 | 2228_assists_rebounds_steals_doubles |
| 2229 | fifths - fifth
- circle - tones - numerals | 33 | 2229_fifths_fifth_circle_tones | | 2230 | rush - bands - rock - zeppelin - drummer | 33 | 2230_rush_bands_rock_zeppelin | | 2231 | yuan - chairman - hui - election - elections | 33 | 2231_yuan_chairman_hui_election | | 2232 | clitoris - position - positions - intercourse - clitoral | 33 | 2232_clitoris_position_positions_intercourse | | 2233 | viewers - episodes - generation - syndication - storylines | 33 | 2233_viewers_episodes_generation_syndication | | 2234 | hegemony - superpower - superpowers - diplomacy - hegemonic | 33 | 2234_hegemony_superpower_superpowers_diplomacy | | 2235 | population - demographic - comune - average - depopulation | 33 | 2235_population_demographic_comune_average | | 2236 | laptops - laptop - notebooks - notebook - desktops | 33 | 2236_laptops_laptop_notebooks_notebook | | 2237 | unrest - bombing - rebels - dictator - guerrillas | 33 | 2237_unrest_bombing_rebels_dictator | | 2238 | survivors - zombies - umbrella - hive - discovers | 33 | 2238_survivors_zombies_umbrella_hive | | 2239 | activist - intellectuals - activism - anarchism - linguistics | 33 | 2239_activist_intellectuals_activism_anarchism | | 2240 | sesame - episodes - cartoon - licensing - television | 32 | 2240_sesame_episodes_cartoon_licensing | | 2241 | moderate - conservatives - ideological - nationalist - conservative | 32 | 2241_moderate_conservatives_ideological_nationalist | | 2242 | biblical - testament - mythological - epistle - satan | 32 | 2242_biblical_testament_mythological_epistle | | 2243 | buried - cemetery - died - funeral - interred | 32 | 2243_buried_cemetery_died_funeral | | 2244 | defender - footballer - arsenal - villa - stoke | 32 | 2244_defender_footballer_arsenal_villa | | 2245 | dictionaries - dictionary - reprinting - typography - abridgement | 32 | 2245_dictionaries_dictionary_reprinting_typography | | 2246 | osteopathic - osteopathy - osteopaths - homeopathy - physiotherapists | 32 | 
2246_osteopathic_osteopathy_osteopaths_homeopathy | | 2247 | indigenous - aboriginal - arctic - tribal - anthropologist | 32 | 2247_indigenous_aboriginal_arctic_tribal | | 2248 | religions - religion - religiosity - theology - religious | 32 | 2248_religions_religion_religiosity_theology | | 2249 | lily - robin - episode - doppelganger - doppelgänger | 32 | 2249_lily_robin_episode_doppelganger | | 2250 | pedagogy - pedagogical - pedagogue - educator - teaching | 32 | 2250_pedagogy_pedagogical_pedagogue_educator | | 2251 | touchdowns - interceptions - yards - quarterback - interception | 32 | 2251_touchdowns_interceptions_yards_quarterback | | 2252 | dubbed - satellite - amazon - premiere - streamed | 32 | 2252_dubbed_satellite_amazon_premiere | | 2253 | drummer - drumming - band - songwriters - gigs | 32 | 2253_drummer_drumming_band_songwriters | | 2254 | finasteride - antiandrogen - antiandrogenic - inhibitor - dosage | 32 | 2254_finasteride_antiandrogen_antiandrogenic_inhibitor | | 2255 | northwest - fort - settlers - forts - 1840s | 32 | 2255_northwest_fort_settlers_forts | | 2256 | ancestry - ancestor - ancestors - ancestral - archipelago | 32 | 2256_ancestry_ancestor_ancestors_ancestral | | 2257 | hypotenuse - triangles - squares - geometry - triangle | 32 | 2257_hypotenuse_triangles_squares_geometry | | 2258 | orbits - solutions - bodies - mathematical - gravitation | 32 | 2258_orbits_solutions_bodies_mathematical | | 2259 | easter - holiday - celebrated - feasts - feast | 32 | 2259_easter_holiday_celebrated_feasts | | 2260 | antihypertensive - propranolol - hypertension - blockers - adrenergic | 32 | 2260_antihypertensive_propranolol_hypertension_blockers | | 2261 | adder - servant - reign - descendants - queen | 32 | 2261_adder_servant_reign_descendants | | 2262 | genetics - genetic - heredity - traits - genes | 32 | 2262_genetics_genetic_heredity_traits | | 2263 | amazon - affiliate - retailers - retailer - sales | 32 | 
2263_amazon_affiliate_retailers_retailer | | 2264 | birthday - doodle - birthdays - 26th - celebrated | 32 | 2264_birthday_doodle_birthdays_26th | | 2265 | hominem - argumentation - arguments - philosophical - philosopher | 32 | 2265_hominem_argumentation_arguments_philosophical | | 2266 | carmaker - automobiles - cars - sedans - vehicles | 32 | 2266_carmaker_automobiles_cars_sedans | | 2267 | amnesty - refugees - racism - asylum - discrimination | 32 | 2267_amnesty_refugees_racism_asylum | | 2268 | pamphlet - 1776 - pamphlets - revolutionary - revolutionaries | 32 | 2268_pamphlet_1776_pamphlets_revolutionary | | 2269 | imperialism - colonial - labour - humanitarian - ivory | 32 | 2269_imperialism_colonial_labour_humanitarian | | 2270 | news - journalism - propaganda - misinformation - credible | 32 | 2270_news_journalism_propaganda_misinformation | | 2271 | gymnast - gymnasts - gymnastics - olympic - medals | 32 | 2271_gymnast_gymnasts_gymnastics_olympic | | 2272 | stadia - subscriptions - subscription - launched - launch | 32 | 2272_stadia_subscriptions_subscription_launched | | 2273 | spinal - paralysis - paralyzed - vertebrae - cervical | 32 | 2273_spinal_paralysis_paralyzed_vertebrae | | 2274 | housewives - housewife - cast - reunion - guests | 32 | 2274_housewives_housewife_cast_reunion | | 2275 | irrigation - sea - waters - salinity - basins | 32 | 2275_irrigation_sea_waters_salinity | | 2276 | transistors - microprocessors - processors - microprocessor - transistor | 32 | 2276_transistors_microprocessors_processors_microprocessor | | 2277 | phantom - ghost - opera - lair - cloak | 32 | 2277_phantom_ghost_opera_lair | | 2278 | granites - granite - mineralogy - magmas - basaltic | 32 | 2278_granites_granite_mineralogy_magmas | | 2279 | victor - fascism - monarchist - monarchy - fascist | 32 | 2279_victor_fascism_monarchist_monarchy | | 2280 | fasciitis - fascia - plantar - fascicles - tendon | 32 | 2280_fasciitis_fascia_plantar_fascicles | | 2281 | company - 
conglomerate - market - enterprises - industries | 32 | 2281_company_conglomerate_market_enterprises | | 2282 | rosemary - lobotomy - nuns - lobotomized - convent | 32 | 2282_rosemary_lobotomy_nuns_lobotomized | | 2283 | mosque - terrorist - mosques - coroner - victims | 32 | 2283_mosque_terrorist_mosques_coroner | | 2284 | tennis - tournaments - tournament - finalist - quarterfinals | 32 | 2284_tennis_tournaments_tournament_finalist | | 2285 | dramas - airing - drama - sonata - cultural | 32 | 2285_dramas_airing_drama_sonata | | 2286 | globalization - globalisation - globalized - transnational - global | 32 | 2286_globalization_globalisation_globalized_transnational | | 2287 | knight - donated - philanthropist - philanthropic - donation | 32 | 2287_knight_donated_philanthropist_philanthropic | | 2288 | ibn - al - theology - treatises - ijtihad | 32 | 2288_ibn_al_theology_treatises | | 2289 | creatine - creatinine - supplementation - supplement - supplements | 32 | 2289_creatine_creatinine_supplementation_supplement | | 2290 | duo - app - mobile - android - proficiency | 32 | 2290_duo_app_mobile_android | | 2291 | offspring - albums - album - band - bands | 32 | 2291_offspring_albums_album_band | | 2292 | guards - defensive - guard - basketball - players | 32 | 2292_guards_defensive_guard_basketball | | 2293 | prix - lightning - racing - radiator - racers | 32 | 2293_prix_lightning_racing_radiator | | 2294 | executives - stockholder - executive - shareholders - company | 32 | 2294_executives_stockholder_executive_shareholders | | 2295 | presenter - savage - airing - keynote - premiered | 32 | 2295_presenter_savage_airing_keynote | | 2296 | multiracial - geisha - ethnic - actors - ethnically | 31 | 2296_multiracial_geisha_ethnic_actors | | 2297 | schools - academies - school - education - colleges | 31 | 2297_schools_academies_school_education | | 2298 | oz - debate - debater - debating - midterms | 31 | 2298_oz_debate_debater_debating | | 2299 | dragon - anime - 
manga - superman - piccolo | 31 | 2299_dragon_anime_manga_superman | | 2300 | tennis - slams - doubles - tournaments - racquets | 31 | 2300_tennis_slams_doubles_tournaments | | 2301 | disks - disk - floppy - drives - storage | 31 | 2301_disks_disk_floppy_drives | | 2302 | albums - duet - album - vocals - singles | 31 | 2302_albums_duet_album_vocals | | 2303 | guitarist - tour - touring - zeppelin - backstage | 31 | 2303_guitarist_tour_touring_zeppelin | | 2304 | bidets - bidet - toilets - bathrooms - toilet | 31 | 2304_bidets_bidet_toilets_bathrooms | | 2305 | spina - bifida - amniocentesis - maternal - pregnancy | 31 | 2305_spina_bifida_amniocentesis_maternal | | 2306 | bell - cliffhanger - saved - cast - sitcom | 31 | 2306_bell_cliffhanger_saved_cast | | 2307 | arcade - simulator - gameplay - racing - skyline | 31 | 2307_arcade_simulator_gameplay_racing | | 2308 | functional - programming - functions - programmer - function | 31 | 2308_functional_programming_functions_programmer | | 2309 | sting - band - bandmates - concert - verve | 31 | 2309_sting_band_bandmates_concert | | 2310 | mukbang - consuming - pornography - habits - cravings | 31 | 2310_mukbang_consuming_pornography_habits | | 2311 | translations - translating - translator - translated - translation | 31 | 2311_translations_translating_translator_translated | | 2312 | painting - paintings - painter - paint - art | 31 | 2312_painting_paintings_painter_paint | | 2313 | gambling - betting - gamblers - bets - casino | 31 | 2313_gambling_betting_gamblers_bets | | 2314 | ancient - archaeological - archaeology - neolithic - dynasties | 31 | 2314_ancient_archaeological_archaeology_neolithic | | 2315 | animals - drummer - animal - bassist - drums | 31 | 2315_animals_drummer_animal_bassist | | 2316 | feng - decorating - buildings - practices - shui | 31 | 2316_feng_decorating_buildings_practices | | 2317 | songwriter - singer - sings - keyboardist - vocals | 31 | 2317_songwriter_singer_sings_keyboardist | | 2318 
| memories - memory - recall - psychology - falsehood | 31 | 2318_memories_memory_recall_psychology | | 2319 | draft - drafted - draftee - picks - blazers | 31 | 2319_draft_drafted_draftee_picks | | 2320 | registrars - registrar - domains - domain - registrants | 31 | 2320_registrars_registrar_domains_domain | | 2321 | officers - police - gunshots - shooter - shooting | 31 | 2321_officers_police_gunshots_shooter | | 2322 | moon - drummer - drums - drummers - drumming | 31 | 2322_moon_drummer_drums_drummers | | 2323 | lymphomas - lymphoma - lymphadenopathy - lymphoid - lymphocytic | 31 | 2323_lymphomas_lymphoma_lymphadenopathy_lymphoid | | 2324 | reggae - albums - band - bassist - toured | 31 | 2324_reggae_albums_band_bassist | | 2325 | risqué - bath - erotica - insider - twitter | 31 | 2325_risqué_bath_erotica_insider | | 2326 | spawn - disowns - destroys - shapeshift - souls | 31 | 2326_spawn_disowns_destroys_shapeshift | | 2327 | broadcasting - syndication - broadcast - fox - channel | 31 | 2327_broadcasting_syndication_broadcast_fox | | 2328 | domino - pizzas - pizza - pizzerias - restaurants | 31 | 2328_domino_pizzas_pizza_pizzerias | | 2329 | soldering - boards - drilling - soldered - board | 31 | 2329_soldering_boards_drilling_soldered | | 2330 | customers - marketing - customer - consumers - consumer | 31 | 2330_customers_marketing_customer_consumers | | 2331 | incels - incel - misogynistic - misogynist - feminism | 31 | 2331_incels_incel_misogynistic_misogynist | | 2332 | polo - khan - yuan - traveller - merchant | 31 | 2332_polo_khan_yuan_traveller | | 2333 | bob - hope - honorary - biography - comedian | 31 | 2333_bob_hope_honorary_biography | | 2334 | ethnic - minorities - ethnicity - minority - population | 31 | 2334_ethnic_minorities_ethnicity_minority | | 2335 | tennis - doubles - backhand - sprinter - forehand | 31 | 2335_tennis_doubles_backhand_sprinter | | 2336 | nations - china - sovereign - sovereignty - republic | 31 | 
2336_nations_china_sovereign_sovereignty | | 2337 | hostage - hostages - gunmen - terrorists - kidnappers | 30 | 2337_hostage_hostages_gunmen_terrorists | | 2338 | novelist - writings - poetry - poets - writer | 30 | 2338_novelist_writings_poetry_poets | | 2339 | topological - topology - topologically - topologies - manifolds | 30 | 2339_topological_topology_topologically_topologies | | 2340 | tower - towers - elevators - elevator - storeys | 30 | 2340_tower_towers_elevators_elevator | | 2341 | malls - destinations - cities - mall - roads | 30 | 2341_malls_destinations_cities_mall | | 2342 | theremin - instruments - instrument - orchestral - concerto | 30 | 2342_theremin_instruments_instrument_orchestral | | 2343 | cryptocurrency - cryptocurrencies - crypto - bitcoin - doge | 30 | 2343_cryptocurrency_cryptocurrencies_crypto_bitcoin | | 2344 | wee - pee - cameo - cameos - comedian | 30 | 2344_wee_pee_cameo_cameos | | 2345 | castes - caste - jati - jatis - tribal | 30 | 2345_castes_caste_jati_jatis | | 2346 | marriages - cohabitation - marriage - heterosexuals - couples | 30 | 2346_marriages_cohabitation_marriage_heterosexuals | | 2347 | financier - fund - funds - investors - investor | 30 | 2347_financier_fund_funds_investors | | 2348 | mammoth - mammoths - prehistoric - fossils - palaeontology | 30 | 2348_mammoth_mammoths_prehistoric_fossils | | 2349 | eunuchs - eunuch - servants - slaves - enslaved | 30 | 2349_eunuchs_eunuch_servants_slaves | | 2350 | condemnation - testimony - guilt - heresy - accusation | 30 | 2350_condemnation_testimony_guilt_heresy | | 2351 | chaebols - chaebol - debts - economies - economy | 30 | 2351_chaebols_chaebol_debts_economies | | 2352 | songwriter - songwriters - performer - guitarist - concert | 30 | 2352_songwriter_songwriters_performer_guitarist | | 2353 | awards - academy - nominees - theaters - theatre | 30 | 2353_awards_academy_nominees_theaters | | 2354 | catalytic - catalysts - catalyst - converters - catalyzing | 30 | 
2354_catalytic_catalysts_catalyst_converters | | 2355 | militia - amendment - constitution - constitutions - militias | 30 | 2355_militia_amendment_constitution_constitutions | | 2356 | atheism - atheist - agnosticism - atheists - atheistic | 30 | 2356_atheism_atheist_agnosticism_atheists | | 2357 | studium - catholic - pope - pontifical - latin | 30 | 2357_studium_catholic_pope_pontifical | | 2358 | composers - orchestra - composer - orchestras - choral | 30 | 2358_composers_orchestra_composer_orchestras | | 2359 | albums - singer - guitar - duet - album | 30 | 2359_albums_singer_guitar_duet | | 2360 | toured - concert - concerts - grease - tour | 30 | 2360_toured_concert_concerts_grease | | 2361 | famine - potatoes - potato - hunger - starving | 30 | 2361_famine_potatoes_potato_hunger | | 2362 | ancient - testament - epic - bible - cuneiform | 30 | 2362_ancient_testament_epic_bible | | 2363 | nightclub - nightclubs - 54 - cabaret - club | 30 | 2363_nightclub_nightclubs_54_cabaret | | 2364 | headquartered - universal - headquarters - company - music | 30 | 2364_headquartered_universal_headquarters_company | | 2365 | exports - imports - economy - agriculture - archipelagos | 30 | 2365_exports_imports_economy_agriculture | | 2366 | ecumenical - orthodox - catholic - ecclesiastical - papal | 30 | 2366_ecumenical_orthodox_catholic_ecclesiastical | | 2367 | critical - thinking - reasoned - reflective - thinker | 30 | 2367_critical_thinking_reasoned_reflective | | 2368 | maglev - trains - levitation - railway - levitating | 30 | 2368_maglev_trains_levitation_railway | | 2369 | van - ev - vans - automakers - vehicles | 30 | 2369_van_ev_vans_automakers | | 2370 | rococo - sculptor - decorative - ornamental - designs | 30 | 2370_rococo_sculptor_decorative_ornamental | | 2371 | paintings - painting - paint - art - artist | 30 | 2371_paintings_painting_paint_art | | 2372 | tulips - tulip - economists - economic - bulbs | 30 | 2372_tulips_tulip_economists_economic | | 2373 | 
squads - squad - roster - players - teams | 30 | 2373_squads_squad_roster_players | | 2374 | entrances - subterranean - tunnel - stairs - pyramid | 30 | 2374_entrances_subterranean_tunnel_stairs | | 2375 | transhumanism - transhumanists - transhumanist - humanists - humanist | 30 | 2375_transhumanism_transhumanists_transhumanist_humanists | </details> ## Training hyperparameters * calculate_probabilities: False * language: None * low_memory: False * min_topic_size: 10 * n_gram_range: (1, 1) * nr_topics: None * seed_topic_list: None * top_n_words: 10 * verbose: True ## Framework versions * Numpy: 1.22.4 * HDBSCAN: 0.8.29 * UMAP: 0.5.3 * Pandas: 1.5.3 * Scikit-Learn: 1.2.2 * Sentence-transformers: 2.2.2 * Transformers: 4.29.2 * Numba: 0.56.4 * Plotly: 5.13.1 * Python: 3.10.11
261,987
[ [ -0.057952880859375, -0.0321044921875, 0.0110626220703125, -0.0137481689453125, -0.017974853515625, 0.01361846923828125, 0.0013103485107421875, -0.0027294158935546875, 0.0576171875, 0.008697509765625, -0.0555419921875, -0.038970947265625, -0.050872802734375, ...
flaviagiammarino/pubmed-clip-vit-base-patch32
2023-07-12T13:08:14.000Z
[ "transformers", "pytorch", "tf", "jax", "clip", "zero-shot-image-classification", "medical", "vision", "en", "arxiv:2112.13906", "license:mit", "endpoints_compatible", "has_space", "region:us" ]
zero-shot-image-classification
flaviagiammarino
null
null
flaviagiammarino/pubmed-clip-vit-base-patch32
1
1,188
transformers
2023-06-13T16:18:14
--- license: mit language: - en tags: - medical - vision widget: - src: "https://huggingface.co/flaviagiammarino/pubmed-clip-vit-base-patch32/resolve/main/scripts/input.jpeg" candidate_labels: "Chest X-Ray, Brain MRI, Abdomen CT Scan" example_title: "Abdomen CT Scan" --- # Model Card for PubMedCLIP PubMedCLIP is a fine-tuned version of [CLIP](https://huggingface.co/docs/transformers/model_doc/clip) for the medical domain. ## Model Description PubMedCLIP was trained on the [Radiology Objects in COntext (ROCO)](https://github.com/razorx89/roco-dataset) dataset, a large-scale multimodal medical imaging dataset. The ROCO dataset includes diverse imaging modalities (such as X-Ray, MRI, ultrasound, fluoroscopy, etc.) from various human body regions (such as head, spine, chest, abdomen, etc.) captured from open-access [PubMed](https://pubmed.ncbi.nlm.nih.gov/) articles.<br> PubMedCLIP was trained for 50 epochs with a batch size of 64 using the Adam optimizer with a learning rate of 1e-5. The authors have released three different pre-trained models at this [link](https://1drv.ms/u/s!ApXgPqe9kykTgwD4Np3-f7ODAot8?e=zLVlJ2) which use ResNet-50, ResNet-50x4 and ViT32 as image encoders. 
This repository includes only the ViT32 variant of the PubMedCLIP model.<br> - **Repository:** [PubMedCLIP Official GitHub Repository](https://github.com/sarahESL/PubMedCLIP) - **Paper:** [Does CLIP Benefit Visual Question Answering in the Medical Domain as Much as it Does in the General Domain?](https://arxiv.org/abs/2112.13906) ## Usage ```python import requests from PIL import Image import matplotlib.pyplot as plt from transformers import CLIPProcessor, CLIPModel model = CLIPModel.from_pretrained("flaviagiammarino/pubmed-clip-vit-base-patch32") processor = CLIPProcessor.from_pretrained("flaviagiammarino/pubmed-clip-vit-base-patch32") url = "https://huggingface.co/flaviagiammarino/pubmed-clip-vit-base-patch32/resolve/main/scripts/input.jpeg" image = Image.open(requests.get(url, stream=True).raw) text = ["Chest X-Ray", "Brain MRI", "Abdominal CT Scan"] inputs = processor(text=text, images=image, return_tensors="pt", padding=True) probs = model(**inputs).logits_per_image.softmax(dim=1).squeeze() plt.subplots() plt.imshow(image) plt.title("".join([x[0] + ": " + x[1] + "\n" for x in zip(text, [format(prob, ".4%") for prob in probs])])) plt.axis("off") plt.tight_layout() plt.show() ``` ## Additional Information ### Licensing Information The authors have released the model code and pre-trained checkpoints under the [MIT License](https://github.com/sarahESL/PubMedCLIP/blob/main/LICENSE). ### Citation Information ``` @article{eslami2021does, title={Does clip benefit visual question answering in the medical domain as much as it does in the general domain?}, author={Eslami, Sedigheh and de Melo, Gerard and Meinel, Christoph}, journal={arXiv preprint arXiv:2112.13906}, year={2021} } ```
2,931
[ [ -0.028717041015625, -0.05731201171875, 0.047119140625, -0.001567840576171875, -0.040863037109375, -0.00792694091796875, 0.0081787109375, -0.027679443359375, 0.031097412109375, 0.03338623046875, -0.045074462890625, -0.043792724609375, -0.03900146484375, 0.007...
stablediffusionapi/hentaiflow-eevee
2023-07-24T13:23:23.000Z
[ "diffusers", "stablediffusionapi.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
stablediffusionapi
null
null
stablediffusionapi/hentaiflow-eevee
1
1,188
diffusers
2023-07-24T11:43:26
--- license: creativeml-openrail-m tags: - stablediffusionapi.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true --- # Hentaiflow Eevee API Inference ![generated from stablediffusionapi.com](https://cdn.stablediffusionapi.com/generations/1265335181690204508.png) ## Get API Key Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed. Replace Key in below code, change **model_id** to "hentaiflow-eevee" Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs) Try model for free: [Generate Images](https://stablediffusionapi.com/models/hentaiflow-eevee) Model link: [View model](https://stablediffusionapi.com/models/hentaiflow-eevee) Credits: [View credits](https://civitai.com/?query=Hentaiflow%20Eevee) View all models: [View Models](https://stablediffusionapi.com/models) ```python import requests import json url = "https://stablediffusionapi.com/api/v4/dreambooth" payload = json.dumps({ "key": "your_api_key", "model_id": "hentaiflow-eevee", "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "embeddings_model_id", "lora": "lora_model_id", "webhook": 
None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(response.text) ``` > Use this coupon code to get 25% off **DMGG0RBN**
2,472
[ [ -0.033660888671875, -0.059661865234375, 0.0301513671875, 0.024169921875, -0.03424072265625, -0.005153656005859375, 0.025787353515625, -0.036041259765625, 0.04730224609375, 0.047027587890625, -0.060272216796875, -0.052337646484375, -0.0301513671875, 0.0054321...
microsoft/git-base-textvqa
2023-01-24T17:21:06.000Z
[ "transformers", "pytorch", "git", "text-generation", "vision", "visual-question-answering", "en", "arxiv:2205.14100", "license:mit", "has_space", "region:us" ]
visual-question-answering
microsoft
null
null
microsoft/git-base-textvqa
6
1,187
transformers
2022-12-06T11:17:56
--- language: en license: mit tags: - vision model_name: microsoft/git-base-textvqa inference: false pipeline_tag: visual-question-answering --- # GIT (GenerativeImage2Text), base-sized, fine-tuned on TextVQA GIT (short for GenerativeImage2Text) model, base-sized version, fine-tuned on TextVQA. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in [this repository](https://github.com/microsoft/GenerativeImage2Text). Disclaimer: The team releasing GIT did not write a model card for this model, so this model card has been written by the Hugging Face team. ## Model description GIT is a Transformer decoder conditioned on both CLIP image tokens and text tokens. The model is trained using "teacher forcing" on many (image, text) pairs. The goal for the model is simply to predict the next text token, given the image tokens and previous text tokens. The model has full access to (i.e. a bidirectional attention mask is used for) the image patch tokens, but only has access to the previous text tokens (i.e. a causal attention mask is used for the text tokens) when predicting the next text token. ![GIT architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/git_architecture.jpg) This allows the model to be used for tasks like: - image and video captioning - visual question answering (VQA) on images and videos - even image classification (by simply conditioning the model on the image and asking it to generate a class for it in text). ## Intended uses & limitations You can use the raw model for visual question answering (VQA). See the [model hub](https://huggingface.co/models?search=microsoft/git) to look for fine-tuned versions on a task that interests you. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/git.html). 
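The masking scheme described above can be made concrete with a small sketch. This is a hypothetical illustration only (not code from GIT or Transformers): it builds a boolean mask in which image patch tokens attend to one another bidirectionally, while each text token attends to all image tokens and only to text positions up to and including its own.

```python
# Illustrative sketch only (not the official GIT implementation): build the
# combined attention mask described above. True means "may attend to".
import numpy as np

def git_style_attention_mask(num_image_tokens: int, num_text_tokens: int) -> np.ndarray:
    n = num_image_tokens + num_text_tokens
    mask = np.zeros((n, n), dtype=bool)
    # Image patch tokens attend bidirectionally among themselves.
    mask[:num_image_tokens, :num_image_tokens] = True
    # Text tokens attend to every image token...
    mask[num_image_tokens:, :num_image_tokens] = True
    # ...and causally to text tokens up to and including their own position.
    mask[num_image_tokens:, num_image_tokens:] = np.tril(
        np.ones((num_text_tokens, num_text_tokens), dtype=bool)
    )
    return mask

mask = git_style_attention_mask(num_image_tokens=3, num_text_tokens=2)
```

In the real model this pattern is applied inside the Transformer's self-attention; the sketch only makes the shape of the mask explicit.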
## Training data From the paper: > We collect 0.8B image-text pairs for pre-training, which include COCO (Lin et al., 2014), Conceptual Captions (CC3M) (Sharma et al., 2018), SBU (Ordonez et al., 2011), Visual Genome (VG) (Krishna et al., 2016), Conceptual Captions (CC12M) (Changpinyo et al., 2021), ALT200M (Hu et al., 2021a), and an extra 0.6B data following a similar collection procedure in Hu et al. (2021a). => however this is for the model referred to as "GIT" in the paper, which is not open-sourced. This checkpoint is "GIT-base", which is a smaller variant of GIT trained on 10 million image-text pairs. Next, the model was fine-tuned on TextVQA. See table 11 in the [paper](https://arxiv.org/abs/2205.14100) for more details. ### Preprocessing We refer to the original repo regarding details for preprocessing during training. During validation, one resizes the shorter edge of each image, after which center cropping is performed to a fixed-size resolution. Next, frames are normalized across the RGB channels with the ImageNet mean and standard deviation. ## Evaluation results For evaluation results, we refer readers to the [paper](https://arxiv.org/abs/2205.14100).
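The validation-time preprocessing described above (resize the shorter edge, center-crop to a fixed resolution, normalize with the ImageNet mean and standard deviation) can be sketched roughly as follows. The 224×224 resolution, the bilinear resampling, and the exact mean/std values are assumptions for illustration; the card does not state them.

```python
# Hypothetical sketch of the described validation preprocessing; the target
# resolution and resampling filter are assumed, not taken from the GIT repo.
import numpy as np
from PIL import Image

IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def preprocess(image: Image.Image, shorter_edge: int = 224, crop: int = 224) -> np.ndarray:
    # Resize so the shorter edge equals `shorter_edge`, preserving aspect ratio.
    w, h = image.size
    scale = shorter_edge / min(w, h)
    image = image.resize((round(w * scale), round(h * scale)), Image.BILINEAR)
    # Center-crop to a fixed crop x crop resolution.
    w, h = image.size
    left, top = (w - crop) // 2, (h - crop) // 2
    image = image.crop((left, top, left + crop, top + crop))
    # Scale to [0, 1] and normalize each RGB channel with the ImageNet stats.
    arr = np.asarray(image, dtype=np.float32) / 255.0  # HWC layout
    return (arr - IMAGENET_MEAN) / IMAGENET_STD

# A solid image close to the ImageNet mean color, as a quick sanity check.
x = preprocess(Image.new("RGB", (320, 240), color=(124, 116, 104)))
```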
3,188
[ [ -0.04681396484375, -0.05938720703125, 0.01483154296875, -0.01528167724609375, -0.0328369140625, -0.0028934478759765625, -0.0046234130859375, -0.032073974609375, 0.018707275390625, 0.034637451171875, -0.0491943359375, -0.0288238525390625, -0.060394287109375, ...
AI-Growth-Lab/PatentSBERTa
2023-02-16T18:25:30.000Z
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "arxiv:2103.11933", "endpoints_compatible", "has_space", "region:us" ]
sentence-similarity
AI-Growth-Lab
null
null
AI-Growth-Lab/PatentSBERTa
19
1,186
sentence-transformers
2022-03-02T23:29:04
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # PatentSBERTa ## PatentSBERTa: A Deep NLP based Hybrid Model for Patent Distance and Classification using Augmented SBERT ### Aalborg University Business School, AI: Growth-Lab https://arxiv.org/abs/2103.11933 https://github.com/AI-Growth-Lab/PatentSBERTa This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model is easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('AI-Growth-Lab/PatentSBERTa') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings. 
```python from transformers import AutoTokenizer, AutoModel import torch def cls_pooling(model_output, attention_mask): return model_output[0][:,0] # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('AI-Growth-Lab/PatentSBERTa') model = AutoModel.from_pretrained('AI-Growth-Lab/PatentSBERTa') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=AI-Growth-Lab/PatentSBERTa) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 5 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 
'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors ```LaTeX @article{bekamiri2021patentsberta, title={PatentSBERTa: A Deep NLP based Hybrid Model for Patent Distance and Classification using Augmented SBERT}, author={Bekamiri, Hamid and Hain, Daniel S and Jurowetzki, Roman}, journal={arXiv preprint arXiv:2103.11933}, year={2021} } ```
3,878
[ [ -0.008270263671875, -0.05316162109375, 0.0217437744140625, 0.0160064697265625, -0.0118408203125, -0.00727081298828125, -0.01180267333984375, -0.0166778564453125, 0.00998687744140625, 0.025390625, -0.034271240234375, -0.037933349609375, -0.054168701171875, -0...
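The PatentSBERTa card above pools sentence embeddings from the `[CLS]` token (its `cls_pooling` helper). As an illustrative, dependency-free sketch of that design choice — plain Python lists stand in for torch tensors, and the toy vectors are made up — here is CLS pooling next to the mean pooling many other sentence-transformers models use:

```python
# CLS pooling keeps only the first token's vector; mean pooling averages
# all token vectors, skipping padding positions via the attention mask.
# Plain lists stand in for torch tensors here.

def cls_pooling(token_embeddings):
    # token_embeddings: [tokens][dim] for one sentence
    return token_embeddings[0]

def mean_pooling(token_embeddings, attention_mask):
    dim = len(token_embeddings[0])
    sums = [0.0] * dim
    count = 0
    for vec, mask in zip(token_embeddings, attention_mask):
        if mask:  # only real (non-padding) tokens contribute
            for i, v in enumerate(vec):
                sums[i] += v
            count += 1
    return [s / count for s in sums]

embs = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
mask = [1, 1, 0]  # last token is padding
print(cls_pooling(embs))         # [1.0, 2.0]
print(mean_pooling(embs, mask))  # [2.0, 3.0]
```

With a real model, `token_embeddings` would be `model_output[0][b]` for batch item `b`, which is exactly what the card's `model_output[0][:,0]` slices per batch.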
Maltehb/danish-bert-botxo
2023-02-07T07:07:47.000Z
[ "transformers", "pytorch", "tf", "jax", "bert", "token-classification", "danish", "masked-lm", "Certainly", "fill-mask", "da", "dataset:common_crawl", "dataset:wikipedia", "dataset:dindebat.dk", "dataset:hestenettet.dk", "dataset:danishOpenSubtitles", "license:cc-by-4.0", "autotrai...
fill-mask
Maltehb
null
null
Maltehb/danish-bert-botxo
7
1,186
transformers
2022-03-02T23:29:04
--- language: da tags: - danish - bert - masked-lm - Certainly license: cc-by-4.0 datasets: - common_crawl - wikipedia - dindebat.dk - hestenettet.dk - danishOpenSubtitles pipeline_tag: fill-mask widget: - text: "København er [MASK] i Danmark." --- # Danish BERT (version 2, uncased) by [Certainly](https://certainly.io/) (previously known as BotXO). All credit goes to [Certainly](https://certainly.io/) (previously known as BotXO), who developed Danish BERT. For data and training details see their [GitHub repository](https://github.com/certainlyio/nordic_bert) or [this article](https://www.certainly.io/blog/danish-bert-model/). You can also visit their [organization page](https://huggingface.co/Certainly) on Hugging Face. It is available in both TensorFlow and PyTorch formats. The original TensorFlow version can be downloaded using [this link](https://www.dropbox.com/s/19cjaoqvv2jicq9/danish_bert_uncased_v2.zip?dl=1). Here is an example of how to load Danish BERT in PyTorch using the [🤗Transformers](https://github.com/huggingface/transformers) library: ```python from transformers import AutoTokenizer, AutoModelForPreTraining tokenizer = AutoTokenizer.from_pretrained("Maltehb/danish-bert-botxo") model = AutoModelForPreTraining.from_pretrained("Maltehb/danish-bert-botxo") ```
1,304
[ [ -0.0309906005859375, -0.035919189453125, 0.00809478759765625, 0.04736328125, -0.0231475830078125, -0.013885498046875, -0.0200042724609375, -0.0460205078125, 0.02191162109375, 0.01483917236328125, -0.049713134765625, -0.0198974609375, -0.05419921875, 0.002323...
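The Danish BERT card's widget fills `[MASK]` in "København er [MASK] i Danmark.". As a hedged sketch of the scoring step behind such a fill-mask pipeline — a softmax over the masked position's logits followed by a top-k vocabulary pick; the tiny vocabulary and logit values below are made up for illustration, not taken from the model:

```python
import math

def softmax(logits):
    # numerically stable softmax over a list of floats
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_k(vocab, logits, k=2):
    # rank vocabulary entries by their softmax probability
    probs = softmax(logits)
    ranked = sorted(zip(vocab, probs), key=lambda p: p[1], reverse=True)
    return ranked[:k]

# Hypothetical candidates for the masked token, with made-up logits
vocab = ["hovedstaden", "en", "byen", "ikke"]
logits = [4.0, 1.5, 3.0, 0.5]
for token, prob in top_k(vocab, logits):
    print(f"{token}: {prob:.3f}")
```

A real pipeline does the same ranking, but over the model's full vocabulary at the masked position.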
digiplay/DarkSushi2.5D_v1
2023-06-23T04:55:37.000Z
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
digiplay
null
null
digiplay/DarkSushi2.5D_v1
3
1,186
diffusers
2023-06-23T02:48:10
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info : https://civitai.com/models/48671?modelVersionId=53252 Original Author's DEMO image : ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/fb58f9fa-d9f9-46fc-b424-0600ceabcd00/width=1536/13650-1953889366-[%E4%BF%AE%E6%89%8B1_0],_[((Delicate%20arms%20and%20hands),%20%F0%9F%96%90)_%20_20],_[%E7%94%BB%E9%A3%8Etag_0]_(ultra-detailed),%20(best%20shadow),%20classic,%20(cinematic%20lighting),%20dynami.jpeg)
540
[ [ -0.0168609619140625, -0.0259552001953125, 0.02728271484375, 0.0120849609375, -0.032867431640625, -0.005702972412109375, 0.0195465087890625, -0.0016603469848632812, 0.032989501953125, 0.042633056640625, -0.0528564453125, -0.05401611328125, -0.0206146240234375, ...
vinai/bartpho-syllable-base
2022-10-22T09:00:27.000Z
[ "transformers", "pytorch", "mbart", "feature-extraction", "arxiv:2109.09701", "endpoints_compatible", "has_space", "region:us" ]
feature-extraction
vinai
null
null
vinai/bartpho-syllable-base
0
1,185
transformers
2022-08-19T14:21:32
# <a name="introduction"></a> BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese The pre-trained model `vinai/bartpho-syllable-base` is the "base" variant of `BARTpho-syllable`, which uses the "base" architecture and pre-training scheme of the sequence-to-sequence denoising model [BART](https://github.com/pytorch/fairseq/tree/main/examples/bart). The general architecture and experimental results of BARTpho can be found in our [paper](https://arxiv.org/abs/2109.09701): @article{bartpho, title = {{BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese}}, author = {Nguyen Luong Tran and Duong Minh Le and Dat Quoc Nguyen}, journal = {arXiv preprint}, volume = {arXiv:2109.09701}, year = {2021} } **Please CITE** our paper when BARTpho is used to help produce published results or incorporated into other software. For further information or requests, please go to [BARTpho's homepage](https://github.com/VinAIResearch/BARTpho)!
983
[ [ -0.01654052734375, -0.05010986328125, 0.0204925537109375, 0.00980377197265625, -0.047943115234375, -0.01116943359375, -0.00811004638671875, -0.01558685302734375, 0.009429931640625, 0.06390380859375, -0.0360107421875, -0.060699462890625, -0.038177490234375, 0...
Marfuen98/photorealistic-1
2023-06-19T19:01:19.000Z
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
Marfuen98
null
null
Marfuen98/photorealistic-1
0
1,185
diffusers
2023-07-01T20:21:14
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: https://civitai.com/models/43331?modelVersionId=94640
193
[ [ -0.0229339599609375, 0.029266357421875, 0.039306640625, 0.0309906005859375, -0.032379150390625, -0.017822265625, 0.038909912109375, -0.00970458984375, 0.0170745849609375, 0.0297393798828125, -0.050750732421875, 0.0004220008850097656, 0.01311492919921875, -0....
cambridgeltl/SapBERT-UMLS-2020AB-all-lang-from-XLMR
2023-06-14T19:00:30.000Z
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "feature-extraction", "arxiv:2010.11784", "endpoints_compatible", "region:us" ]
feature-extraction
cambridgeltl
null
null
cambridgeltl/SapBERT-UMLS-2020AB-all-lang-from-XLMR
1
1,184
transformers
2022-03-02T23:29:05
--- language: multilingual tags: - biomedical - lexical-semantics - cross-lingual datasets: - UMLS --- **[news]** A cross-lingual extension of SapBERT will appear in the main conference of **ACL 2021**! <br> **[news]** SapBERT will appear in the conference proceedings of **NAACL 2021**! ### SapBERT-XLMR SapBERT [(Liu et al. 2020)](https://arxiv.org/pdf/2010.11784.pdf) trained with [UMLS](https://www.nlm.nih.gov/research/umls/licensedcontent/umlsknowledgesources.html) 2020AB, using [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) as the base model. Please use [CLS] as the representation of the input. #### Extracting embeddings from SapBERT The following script converts a list of strings (entity names) into embeddings. ```python import numpy as np import torch from tqdm.auto import tqdm from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("cambridgeltl/SapBERT-UMLS-2020AB-all-lang-from-XLMR") model = AutoModel.from_pretrained("cambridgeltl/SapBERT-UMLS-2020AB-all-lang-from-XLMR").cuda() # replace with your own list of entity names all_names = ["covid-19", "Coronavirus infection", "high fever", "Tumor of posterior wall of oropharynx"] bs = 128 # batch size during inference all_embs = [] for i in tqdm(np.arange(0, len(all_names), bs)): toks = tokenizer.batch_encode_plus(all_names[i:i+bs], padding="max_length", max_length=25, truncation=True, return_tensors="pt") toks_cuda = {} for k,v in toks.items(): toks_cuda[k] = v.cuda() cls_rep = model(**toks_cuda)[0][:,0,:] # use CLS representation as the embedding all_embs.append(cls_rep.cpu().detach().numpy()) all_embs = np.concatenate(all_embs, axis=0) ``` For more details about training and eval, see the SapBERT [github repo](https://github.com/cambridgeltl/sapbert). 
### Citation ```bibtex @inproceedings{liu2021learning, title={Learning Domain-Specialised Representations for Cross-Lingual Biomedical Entity Linking}, author={Liu, Fangyu and Vuli{\'c}, Ivan and Korhonen, Anna and Collier, Nigel}, booktitle={Proceedings of ACL-IJCNLP 2021}, month = aug, year={2021} } ```
2,282
[ [ -0.003299713134765625, -0.0360107421875, 0.03582763671875, 0.0010528564453125, -0.01548004150390625, 0.00965118408203125, -0.015350341796875, -0.014068603515625, 0.022216796875, 0.04254150390625, -0.015869140625, -0.0684814453125, -0.043914794921875, 0.00655...
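SapBERT embeddings like those extracted in the card above are typically used for biomedical entity linking by nearest-neighbour search over a dictionary of entity names. A hedged, dependency-free sketch of that lookup — the 3-d vectors below are made up and stand in for real `[CLS]` embeddings:

```python
def cosine(u, v):
    # cosine similarity between two equal-length vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def link(query_emb, dictionary):
    # return the dictionary entry whose embedding is closest to the query
    return max(dictionary, key=lambda name: cosine(query_emb, dictionary[name]))

dictionary = {
    "covid-19": [0.9, 0.1, 0.0],
    "high fever": [0.1, 0.9, 0.2],
}
query = [0.8, 0.2, 0.1]  # e.g. a made-up embedding of "Coronavirus infection"
print(link(query, dictionary))  # covid-19
```

At scale, the same search is usually done with a vectorised library (e.g. FAISS or plain NumPy matrix products) rather than a Python loop.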
TheBloke/WizardMath-70B-V1.0-GPTQ
2023-09-27T12:45:42.000Z
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:2304.12244", "arxiv:2306.08568", "arxiv:2308.09583", "license:llama2", "text-generation-inference", "region:us" ]
text-generation
TheBloke
null
null
TheBloke/WizardMath-70B-V1.0-GPTQ
5
1,184
transformers
2023-08-11T18:10:40
--- license: llama2 model_name: WizardMath 70B V1.0 base_model: WizardLM/WizardMath-70B-V1.0 inference: false model_creator: WizardLM model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: Let''s think step by step. ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # WizardMath 70B V1.0 - GPTQ - Model creator: [WizardLM](https://huggingface.co/WizardLM) - Original model: [WizardMath 70B V1.0](https://huggingface.co/WizardLM/WizardMath-70B-V1.0) <!-- description start --> ## Description This repo contains GPTQ model files for [WizardLM's WizardMath 70B V1.0](https://huggingface.co/WizardLM/WizardMath-70B-V1.0). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. 
<!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/WizardMath-70B-V1.0-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/WizardMath-70B-V1.0-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/WizardMath-70B-V1.0-GGUF) * [WizardLM's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/WizardLM/WizardMath-70B-V1.0) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca-CoT ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: Let's think step by step. ``` <!-- prompt-template end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 
0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/WizardMath-70B-V1.0-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [CamelAI Math](https://huggingface.co/datasets/andersonbcdefg/math) | 4096 | 35.33 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/WizardMath-70B-V1.0-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [CamelAI Math](https://huggingface.co/datasets/andersonbcdefg/math) | 4096 | 40.66 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/WizardMath-70B-V1.0-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [CamelAI Math](https://huggingface.co/datasets/andersonbcdefg/math) | 4096 | 37.99 GB | Yes | 4-bit, with Act Order and group size 64g. 
Uses less VRAM than 32g, but with slightly lower accuracy. | | [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/WizardMath-70B-V1.0-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [CamelAI Math](https://huggingface.co/datasets/andersonbcdefg/math) | 4096 | 36.65 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/WizardMath-70B-V1.0-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.1 | [CamelAI Math](https://huggingface.co/datasets/andersonbcdefg/math) | 4096 | 26.78 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. | | [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/WizardMath-70B-V1.0-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [CamelAI Math](https://huggingface.co/datasets/andersonbcdefg/math) | 4096 | 28.03 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download from branches - In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/WizardMath-70B-V1.0-GPTQ:main` - With Git, you can clone a branch with: ``` git clone --single-branch --branch main https://huggingface.co/TheBloke/WizardMath-70B-V1.0-GPTQ ``` - In Python Transformers code, the branch is the `revision` parameter; see below. <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui). Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). 
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/WizardMath-70B-V1.0-GPTQ`. - To download from a specific branch, enter for example `TheBloke/WizardMath-70B-V1.0-GPTQ:main` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `WizardMath-70B-V1.0-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-python start --> ## How to use this GPTQ model from Python code ### Install the necessary packages Requires: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install transformers>=4.32.0 optimum>=1.12.0 pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7 ``` If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ pip3 install . ``` ### For CodeLlama models only: you must use Transformers 4.33.0 or later. 
If 4.33.0 is not yet released when you read this, you will need to install Transformers from source: ```shell pip3 uninstall -y transformers pip3 install git+https://github.com/huggingface/transformers.git ``` ### You can then use the following code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/WizardMath-70B-V1.0-GPTQ" # To use a different branch, change revision # For example: revision="main" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: Let's think step by step. ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI). [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility. 
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models. <!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. 
<!-- footer end --> # Original model card: WizardLM's WizardMath 70B V1.0 ## WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct (RLEIF) <p align="center"> 🤗 <a href="https://huggingface.co/WizardLM" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/nlpxucan/WizardLM" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br> </p> <p align="center"> 👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a> </p> | Model | Checkpoint | Paper | HumanEval | MBPP | Demo | License | | ----- |------| ---- |------|-------| ----- | ----- | | WizardCoder-Python-34B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 73.2 | 61.2 | [Demo](http://47.103.63.15:50085/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> | | WizardCoder-15B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-15B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 59.8 |50.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> | | WizardCoder-Python-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 64.0 | 55.6 | -- | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> | | WizardCoder-Python-7B-V1.0 | 🤗 <a 
href="https://huggingface.co/WizardLM/WizardCoder-Python-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 55.5 | 51.6 | [Demo](http://47.103.63.15:50088/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> | | WizardCoder-3B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-3B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 34.8 |37.4 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> | | WizardCoder-1B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-1B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 23.8 |28.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> | | Model | Checkpoint | Paper | GSM8k | MATH |Online Demo| License| | ----- |------| ---- |------|-------| ----- | ----- | | WizardMath-70B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-70B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **81.6** | **22.7** |[Demo](http://47.103.63.15:50083/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> | | WizardMath-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **63.9** | **14.0** |[Demo](http://47.103.63.15:50082/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> | | WizardMath-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.0" target="_blank">HF Link</a> | 📃 <a 
href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **54.9** | **10.7** | [Demo](http://47.103.63.15:50080/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a>| <font size=4> | <sup>Model</sup> | <sup>Checkpoint</sup> | <sup>Paper</sup> |<sup>MT-Bench</sup> | <sup>AlpacaEval</sup> | <sup>GSM8k</sup> | <sup>HumanEval</sup> | <sup>License</sup>| | ----- |------| ---- |------|-------| ----- | ----- | ----- | | <sup>**WizardLM-70B-V1.0**</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-70B-V1.0" target="_blank">HF Link</a> </sup>|<sup>📃**Coming Soon**</sup>| <sup>**7.78**</sup> | <sup>**92.91%**</sup> |<sup>**77.6%**</sup> | <sup> **50.6 pass@1**</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> | | <sup>WizardLM-13B-V1.2</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.2" target="_blank">HF Link</a> </sup>| | <sup>7.06</sup> | <sup>89.17%</sup> |<sup>55.3%</sup> | <sup>36.6 pass@1</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> | | <sup>WizardLM-13B-V1.1</sup> |<sup> 🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.1" target="_blank">HF Link</a> </sup> | | <sup>6.76</sup> |<sup>86.32%</sup> | | <sup>25.0 pass@1</sup>| <sup>Non-commercial</sup>| | <sup>WizardLM-30B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-30B-V1.0" target="_blank">HF Link</a></sup> | | <sup>7.01</sup> | | | <sup>37.8 pass@1</sup>| <sup>Non-commercial</sup> | | <sup>WizardLM-13B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.0" target="_blank">HF Link</a> </sup> | | <sup>6.35</sup> | <sup>75.31%</sup> | | <sup> 24.0 pass@1 </sup> | <sup>Non-commercial</sup>| | <sup>WizardLM-7B-V1.0 </sup>| <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-7B-V1.0" 
target="_blank">HF Link</a> </sup> |<sup> 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> </sup>| | | |<sup>19.1 pass@1 </sup>|<sup> Non-commercial</sup>| </font> **Github Repo**: https://github.com/nlpxucan/WizardLM/tree/main/WizardMath **Twitter**: https://twitter.com/WizardLM_AI/status/1689998428200112128 **Discord**: https://discord.gg/VZjjHtWrKs ## Comparing WizardMath-V1.0 with Other LLMs. 🔥 The following figure shows that our **WizardMath-70B-V1.0 attains the fifth position in this benchmark**, surpassing ChatGPT (81.6 vs. 80.8), Claude Instant (81.6 vs. 80.9), PaLM 2 540B (81.6 vs. 80.7). <p align="center" width="100%"> <a ><img src="https://raw.githubusercontent.com/nlpxucan/WizardLM/main/WizardMath/images/wizardmath_gsm8k.png" alt="WizardMath" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> ❗<b>Note for model system prompt usage:</b> Please use **strictly the same system prompts** as we do, and we do not guarantee the accuracy of the **quantized versions**. **Default version:** ``` "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:" ``` **CoT Version:** (❗For **simple** math questions, we do NOT recommend using the CoT prompt.) ``` "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response: Let's think step by step." ``` ## Inference WizardMath Demo Script We provide the WizardMath inference demo code [here](https://github.com/nlpxucan/WizardLM/tree/main/demo). ❗<b>To address common concerns about the dataset:</b> Recently, there have been clear changes in the open-source policy and regulations of our overall organization's code, data, and models. 
Despite this, we have still worked hard to open the model weights first; the data, however, involves stricter auditing and is under review by our legal team. Our researchers have no authority to publicly release it without authorization. Thank you for your understanding. ## Citation Please cite the repo if you use the data, method or code in this repo. ``` @article{luo2023wizardmath, title={WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct}, author={Luo, Haipeng and Sun, Qingfeng and Xu, Can and Zhao, Pu and Lou, Jianguang and Tao, Chongyang and Geng, Xiubo and Lin, Qingwei and Chen, Shifeng and Zhang, Dongmei}, journal={arXiv preprint arXiv:2308.09583}, year={2023} } ```
23,576
[ [ -0.041534423828125, -0.059661865234375, -0.00405120849609375, 0.01453399658203125, -0.01041412353515625, -0.01496124267578125, 0.00986480712890625, -0.0251007080078125, 0.0125885009765625, 0.034698486328125, -0.048583984375, -0.037841796875, -0.0224609375, -...
EleutherAI/pythia-12b-v0
2023-03-29T18:46:38.000Z
[ "transformers", "pytorch", "gpt_neox", "text-generation", "causal-lm", "pythia", "pythia_v0", "en", "dataset:the_pile", "arxiv:2101.00027", "arxiv:2201.07311", "license:apache-2.0", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
EleutherAI
null
null
EleutherAI/pythia-12b-v0
21
1,183
transformers
2022-10-16T19:03:14
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- the_pile
---

The *Pythia Scaling Suite* is a collection of models developed to facilitate interpretability research. It contains two sets of eight models of sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two models: one trained on the Pile, and one trained on the Pile after the dataset has been globally deduplicated. All 8 model sizes are trained on the exact same data, in the exact same order. All Pythia models are available [on Hugging Face](https://huggingface.co/models?other=pythia).

The Pythia model suite was deliberately designed to promote scientific research on large language models, especially interpretability research. Despite not centering downstream performance as a design goal, we find the models <a href="#evaluations">match or exceed</a> the performance of similar and same-sized models, such as those in the OPT and GPT-Neo suites.

Please note that all models in the *Pythia* suite were renamed in January 2023. For clarity, a <a href="#naming-convention-and-parameter-count">table comparing the old and new names</a> is provided in this model card, together with exact parameter counts.

## Pythia-12B

### Model Details

- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia) for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`. Please read the existing *Pythia* documentation before asking about it in the EleutherAI Discord. For general correspondence: [contact@eleuther.ai](mailto:contact@eleuther.ai).
<figure>

| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |

<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and non-deduped models of a given size have the same hyperparameters. “Equivalent” models have <b>exactly</b> the same architecture, and the same number of non-embedding parameters.</figcaption>
</figure>

### Uses and Limitations

#### Intended Use

The primary intended use of Pythia is research on the behavior, functionality, and limitations of large language models. This suite is intended to provide a controlled setting for performing scientific experiments. To enable the study of how language models change over the course of training, we provide 143 evenly spaced intermediate checkpoints per model. These checkpoints are hosted on Hugging Face as branches. Note that branch `step143000` corresponds exactly to the model checkpoint on the `main` branch of each model.

You may also further fine-tune and adapt Pythia-12B for deployment, as long as your use is in accordance with the Apache 2.0 license.
Pythia models work with the Hugging Face [Transformers Library](https://huggingface.co/docs/transformers/index). If you decide to use pre-trained Pythia-12B as a basis for your fine-tuned model, please conduct your own risk and bias assessment.

#### Out-of-scope use

The Pythia Suite is **not** intended for deployment. It is not in itself a product and cannot be used for human-facing interactions.

Pythia models are English-language only, and are not suitable for translation or generating text in other languages.

Pythia-12B has not been fine-tuned for downstream contexts in which language models are commonly deployed, such as writing genre prose or commercial chatbots. This means Pythia-12B will **not** respond to a given prompt the way a product like ChatGPT does. This is because, unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human Feedback (RLHF) to better “understand” human instructions.

#### Limitations and biases

The core functionality of a large language model is to take a string of text and predict the next token. The token deemed statistically most likely by the model need not produce the most “accurate” text. Never rely on Pythia-12B to produce factually accurate output.

This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset known to contain profanity and texts that are lewd or otherwise offensive. See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a discussion of documented biases with regards to gender, religion, and race. Pythia-12B may produce socially unacceptable or undesirable text, *even if* the prompt itself does not include anything explicitly offensive.

If you plan on using text generated through, for example, the Hosted Inference API, we recommend having a human curate the outputs of this language model before presenting it to other people. Please inform your audience that the text was generated by Pythia-12B.
### Quickstart

Pythia models can be loaded and used via the following code, demonstrated here for the third `pythia-70m-deduped` checkpoint:

```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

model = GPTNeoXForCausalLM.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

tokenizer = AutoTokenizer.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```

Revision/branch `step143000` corresponds exactly to the model checkpoint on the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on GitHub](https://github.com/EleutherAI/pythia).

### Training

#### Training data

[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in English. It was created by EleutherAI specifically for training large language models. It contains texts from 22 diverse sources, roughly broken down into five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub, Enron Emails). See [the Pile paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources, methodology, and a discussion of ethical implications. Consult [the datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation about the Pile and its component datasets. The Pile can be downloaded from the [official website](https://pile.eleuther.ai/), or from a [community mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-12B.

#### Training procedure

All models were trained on the exact same data, in the exact same order.
Each model saw 299,892,736,000 tokens during training, and 143 checkpoints for each model are saved every 2,097,152,000 tokens, spaced evenly throughout training. This corresponds to training for just under 1 epoch on the Pile for non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.

All *Pythia* models trained for the equivalent of 143000 steps at a batch size of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models listed with a batch size of 4M tokens were originally trained for 71500 steps instead, with checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for consistency with all 2M batch models, so `step1000` is the first checkpoint for `pythia-1.4b` that was saved (corresponding to step 500 in training), and `step1000` is likewise the first `pythia-6.9b` checkpoint that was saved (corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training procedure, including [how to reproduce it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).

### Evaluations

All 16 *Pythia* models were evaluated using the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access the results by model and step at `results/json/*` in the [GitHub repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all Pythia and Pythia-deduped models compared with OPT and BLOOM.
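The checkpoint schedule described under Training procedure (143 evenly spaced saves, renamed so that `step1000` is the first and `step143000` the last) can be enumerated with a small helper. This is a sketch assuming every revision name follows exactly that `step{N}` pattern:

```python
def pythia_checkpoint_revisions():
    """Revision/branch names for the 143 evenly spaced checkpoints,
    from step1000 (first save) through step143000 (same as `main`)."""
    return [f"step{step}" for step in range(1000, 144000, 1000)]

revisions = pythia_checkpoint_revisions()
```

Any of these names can then be passed as the `revision` argument in the Quickstart snippet to load a mid-training checkpoint.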
<details>
  <summary>LAMBADA – OpenAI</summary>
  <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>

<details>
  <summary>Physical Interaction: Question Answering (PIQA)</summary>
  <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>

<details>
  <summary>WinoGrande</summary>
  <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>

<details>
  <summary>AI2 Reasoning Challenge—Challenge Set</summary>
  <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>

<details>
  <summary>SciQ</summary>
  <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>

### Naming convention and parameter count

*Pythia* models were renamed in January 2023. It is possible that the old naming convention still persists in some documentation by accident. The current naming convention (70M, 160M, etc.) is based on total parameter count.

<figure style="width:32em">

| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
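The non-embedding parameter counts in the naming table follow directly from the layer dimensions. As an illustrative sketch, assuming the GPT-NeoX-style block used by Pythia (fused QKV and output projections with biases, a 4x MLP with biases, two biased LayerNorms per layer, plus one final LayerNorm), they can be reproduced arithmetically:

```python
def non_embedding_params(layers: int, d_model: int) -> int:
    """Non-embedding parameter count of a GPT-NeoX-style decoder.

    Per layer: attention (4*d^2 weights + 4*d biases), MLP (8*d^2 + 5*d),
    and two LayerNorms (4*d); plus one final LayerNorm (2*d).
    """
    per_layer = 12 * d_model**2 + 13 * d_model
    return layers * per_layer + 2 * d_model

# Reproduces the table rows for the 70M, 160M, and 410M models:
assert non_embedding_params(6, 512) == 18_915_328
assert non_embedding_params(12, 768) == 85_056_000
assert non_embedding_params(24, 1024) == 302_311_424
```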
11,774
[ [ -0.0265655517578125, -0.062164306640625, 0.021270751953125, 0.005931854248046875, -0.016082763671875, -0.0114593505859375, -0.0174102783203125, -0.035247802734375, 0.0164642333984375, 0.013580322265625, -0.0253143310546875, -0.0241851806640625, -0.03616333007812...
uitnlp/visobert
2023-10-28T15:35:36.000Z
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "arxiv:2310.11166", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
uitnlp
null
null
uitnlp/visobert
14
1,183
transformers
2023-10-17T06:13:28
---
pipeline_tag: fill-mask
widget:
- text: "hào quang rực <mask>"
---

# <a name="introduction"></a> ViSoBERT: A Pre-Trained Language Model for Vietnamese Social Media Text Processing (EMNLP 2023 - Main)

**Disclaimer**: The paper contains actual comments on social networks that might be construed as abusive, offensive, or obscene.

ViSoBERT is the state-of-the-art language model for Vietnamese social media tasks:

- ViSoBERT is the first monolingual MLM ([XLM-R](https://github.com/facebookresearch/XLM#xlm-r-new-model) architecture) built specifically for Vietnamese social media texts.
- ViSoBERT outperforms previous monolingual, multilingual, and multilingual social media approaches, obtaining new state-of-the-art performances on four downstream Vietnamese social media tasks.

The general architecture and experimental results of ViSoBERT can be found in our [paper](https://arxiv.org/abs/2310.11166):

    @misc{nguyen2023visobert,
        title={ViSoBERT: A Pre-Trained Language Model for Vietnamese Social Media Text Processing},
        author={Quoc-Nam Nguyen and Thang Chau Phan and Duc-Vu Nguyen and Kiet Van Nguyen},
        year={2023},
        eprint={2310.11166},
        archivePrefix={arXiv},
        primaryClass={cs.CL}
    }

**Please CITE** our paper when ViSoBERT is used to help produce published results or is incorporated into other software.

**Installation**

Install the `transformers` and `SentencePiece` packages:

    pip install transformers
    pip install SentencePiece

**Example usage**

```python
from transformers import AutoModel, AutoTokenizer
import torch

model = AutoModel.from_pretrained('uitnlp/visobert')
tokenizer = AutoTokenizer.from_pretrained('uitnlp/visobert')

encoding = tokenizer('hào quang rực rỡ', return_tensors='pt')

with torch.no_grad():
    output = model(**encoding)
```
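The `output` above exposes per-token hidden states; a common next step for sentence-level tasks is masked mean pooling over them. Below is a minimal plain-Python sketch of that pooling step (real code would operate on the torch tensors, e.g. `output.last_hidden_state` and the tokenizer's attention mask):

```python
def mean_pool(hidden_states, attention_mask):
    """Average token vectors, ignoring positions where the mask is 0.

    hidden_states: list of token vectors (lists of floats)
    attention_mask: list of 0/1 ints of the same length
    """
    n = sum(attention_mask)
    dim = len(hidden_states[0])
    pooled = [0.0] * dim
    for vec, keep in zip(hidden_states, attention_mask):
        if keep:
            for i, v in enumerate(vec):
                pooled[i] += v / n
    return pooled

# Toy example: the masked-out third token is excluded from the average.
sentence_vec = mean_pool([[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]], [1, 1, 0])
```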
1,857
[ [ -0.00609588623046875, -0.07208251953125, 0.0188140869140625, 0.032318115234375, -0.035430908203125, -0.0048065185546875, -0.02752685546875, -0.037994384765625, 0.017364501953125, 0.043243408203125, -0.0222015380859375, -0.05328369140625, -0.053619384765625, ...
climatebert/distilroberta-base-climate-tcfd
2023-06-20T18:51:43.000Z
[ "transformers", "pytorch", "safetensors", "roberta", "text-classification", "climate", "en", "dataset:climatebert/tcfd_recommendations", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
climatebert
null
null
climatebert/distilroberta-base-climate-tcfd
0
1,178
transformers
2022-03-02T23:29:05
---
license: apache-2.0
datasets:
- climatebert/tcfd_recommendations
language:
- en
metrics:
- accuracy
tags:
- climate
---

# Model Card for distilroberta-base-climate-tcfd

## Model Description

This is the fine-tuned ClimateBERT language model with a classification head for classifying climate-related paragraphs into the four TCFD recommendation categories ([fsb-tcfd.org](https://www.fsb-tcfd.org)).

Using the [climatebert/distilroberta-base-climate-f](https://huggingface.co/climatebert/distilroberta-base-climate-f) language model as starting point, the distilroberta-base-climate-tcfd model is fine-tuned on our [climatebert/tcfd_recommendations](https://huggingface.co/climatebert/tcfd_recommendations) dataset using only the four recommendation categories (i.e., we remove the non-climate-related class from the dataset).

*Note: This model is trained on paragraphs. It may not perform well on sentences.*

## Citation Information

```bibtex
@techreport{bingler2023cheaptalk,
    title={How Cheap Talk in Climate Disclosures Relates to Climate Initiatives, Corporate Emissions, and Reputation Risk},
    author={Bingler, Julia and Kraus, Mathias and Leippold, Markus and Webersinke, Nicolas},
    type={Working paper},
    institution={Available at SSRN 3998435},
    year={2023}
}
```

## How to Get Started With the Model

You can use the model with a pipeline for text classification:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
from transformers.pipelines.pt_utils import KeyDataset
import datasets
from tqdm.auto import tqdm

dataset_name = "climatebert/tcfd_recommendations"
model_name = "climatebert/distilroberta-base-climate-tcfd"

# If you want to use your own data, simply load them as 🤗 Datasets dataset,
# see https://huggingface.co/docs/datasets/loading
dataset = datasets.load_dataset(dataset_name, split="test")

model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name, max_len=512)

pipe = pipeline("text-classification", model=model, tokenizer=tokenizer, device=0)

# See https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.pipeline
for out in tqdm(pipe(KeyDataset(dataset, "text"), padding=True, truncation=True)):
    print(out)
```
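Each `out` printed above is a dict with `label` and `score` keys. To summarize a whole report, one might tally how its paragraphs spread across the four TCFD categories. The sketch below assumes that output format; the exact label strings depend on the model's `id2label` mapping, so the names used here are placeholders:

```python
from collections import Counter

def category_distribution(predictions):
    """Fraction of paragraphs assigned to each predicted label."""
    counts = Counter(p["label"] for p in predictions)
    total = sum(counts.values())
    return {label: count / total for label, count in counts.items()}

# Toy predictions, mimicking the pipeline's output format:
preds = [
    {"label": "strategy", "score": 0.9},
    {"label": "strategy", "score": 0.8},
    {"label": "governance", "score": 0.7},
    {"label": "metrics", "score": 0.6},
]
dist = category_distribution(preds)
```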
2,306
[ [ -0.0303955078125, -0.046905517578125, 0.016937255859375, 0.01377105712890625, -0.03399658203125, 0.00691986083984375, -0.01239776611328125, -0.0165863037109375, -0.0027618408203125, 0.033294677734375, -0.03582763671875, -0.0555419921875, -0.054046630859375, ...
stabilityai/japanese-stablelm-3b-4e1t-instruct
2023-10-25T01:53:07.000Z
[ "transformers", "safetensors", "stablelm_epoch", "text-generation", "japanese-stablelm", "causal-lm", "custom_code", "ja", "arxiv:2307.09288", "arxiv:2104.09864", "arxiv:2204.06745", "arxiv:1607.06450", "arxiv:1910.07467", "license:apache-2.0", "region:us" ]
text-generation
stabilityai
null
null
stabilityai/japanese-stablelm-3b-4e1t-instruct
15
1,178
transformers
2023-10-16T07:50:31
---
language:
- ja
tags:
- japanese-stablelm
- causal-lm
pipeline_tag: text-generation
license: apache-2.0
extra_gated_fields:
  Name: text
  Email: text
  Country: text
  Organization or Affiliation: text
  I allow Stability AI to contact me about information related to its models and research: checkbox
---

# Japanese StableLM-3B-4E1T Instruct

## Model Description

This is a 3B-parameter decoder-only Japanese language model fine-tuned on instruction-following datasets, built on top of the base model [Japanese StableLM-3B-4E1T Base](https://huggingface.co/stabilityai/japanese-stablelm-3b-4e1t-base).

*If you are in search of a larger model, please check [Japanese Stable LM Instruct Gamma 7B](https://huggingface.co/stabilityai/japanese-stablelm-instruct-gamma-7b)*.

## Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("stabilityai/japanese-stablelm-3b-4e1t-instruct")
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/japanese-stablelm-3b-4e1t-instruct",
    trust_remote_code=True,
    torch_dtype="auto",
)
model.eval()

if torch.cuda.is_available():
    model = model.to("cuda")

def build_prompt(user_query, inputs="", sep="\n\n### "):
    sys_msg = "以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。要求を適切に満たす応答を書きなさい。"
    p = sys_msg
    roles = ["指示", "応答"]
    msgs = [": \n" + user_query, ": \n"]
    if inputs:
        roles.insert(1, "入力")
        msgs.insert(1, ": \n" + inputs)
    for role, msg in zip(roles, msgs):
        p += sep + role + msg
    return p

# Infer with prompt without any additional input
user_inputs = {
    "user_query": "与えられたことわざの意味を小学生でも分かるように教えてください。",
    "inputs": "情けは人のためならず"
}
prompt = build_prompt(**user_inputs)

input_ids = tokenizer.encode(
    prompt,
    add_special_tokens=False,
    return_tensors="pt"
)

tokens = model.generate(
    input_ids.to(device=model.device),
    max_new_tokens=256,
    temperature=1,
    top_p=0.95,
    do_sample=True,
)

out = tokenizer.decode(tokens[0][input_ids.shape[1]:], skip_special_tokens=True).strip()
print(out)
```

## Model Details
* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: `Japanese StableLM-3B-4E1T Instruct` model is an auto-regressive language model based on the transformer decoder architecture.
* **Language(s)**: Japanese
* **License**: This model is licensed under [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
* **Contact**: For questions and comments about the model, please join [Stable Community Japan](https://discord.gg/StableJP). For future announcements / information about Stability AI models, research, and events, please follow https://twitter.com/StabilityAI_JP.

### Model Architecture

The model is a decoder-only transformer similar to the LLaMA ([Touvron et al., 2023](https://arxiv.org/abs/2307.09288)) architecture with the following modifications:

| Parameters | Hidden Size | Layers | Heads | Sequence Length |
|----------------|-------------|--------|-------|-----------------|
| 2,795,443,200 | 2560 | 32 | 32 | 4096 |

* **Position Embeddings**: Rotary Position Embeddings ([Su et al., 2021](https://arxiv.org/abs/2104.09864)) applied to the first 25% of head embedding dimensions for improved throughput following [Black et al. (2022)](https://arxiv.org/pdf/2204.06745.pdf).
* **Normalization**: LayerNorm ([Ba et al., 2016](https://arxiv.org/abs/1607.06450)) with learned bias terms as opposed to RMSNorm ([Zhang & Sennrich, 2019](https://arxiv.org/abs/1910.07467)).
* **Tokenizer**: GPT-NeoX ([Black et al., 2022](https://arxiv.org/abs/2204.06745)).
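As a quick arithmetic sketch of the partial-rotary setup described above, assuming the usual GPT-NeoX-style `rotary_pct` convention (the variable names here are illustrative, not taken from the model's config):

```python
hidden_size = 2560
num_heads = 32
rotary_pct = 0.25  # RoPE applied to the first 25% of head dimensions

head_dim = hidden_size // num_heads          # per-head embedding dimension
rotary_dim = int(head_dim * rotary_pct)      # dimensions per head that get RoPE
```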
### Training Datasets

- [Japanese translation of the Databricks Dolly-15k dataset](https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja)
- [Japanese translation of the subset of the Anthropic HH dataset](https://huggingface.co/datasets/fujiki/japanese_hh-rlhf-49k)
- [Wikinews](https://ja.wikinews.org/wi) [subset](https://huggingface.co/datasets/fujiki/llm-japanese-dataset_wikinews) of the [izumi-lab/llm-japanese-dataset](https://huggingface.co/datasets/izumi-lab/llm-japanese-dataset)

## Use and Limitations

### Intended Use

The model is intended to be used by all individuals as a foundational model for application-specific fine-tuning without strict limitations on commercial use.

### Limitations and bias

The pre-training dataset may have contained offensive or inappropriate content even after applying data cleansing filters, which can be reflected in the model-generated text. We recommend users exercise reasonable caution when using these models in production systems. Do not use the model for any applications that may cause harm or distress to individuals or groups.

## Credits

The fine-tuning was carried out by [Fujiki Nakamura](https://huggingface.co/fujiki). Other aspects, including data preparation and evaluation, were handled by the Language Team of Stability AI Japan, notably [Meng Lee](https://huggingface.co/leemeng), [Makoto Shing](https://huggingface.co/mkshing), [Paul McCann](https://huggingface.co/polm-stability), [Naoki Orii](https://huggingface.co/mrorii), and [Takuya Akiba](https://huggingface.co/iwiwi).

## Acknowledgements

We are grateful for the contributions of the EleutherAI Polyglot-JA team in helping us to collect a large amount of pre-training data in Japanese. Polyglot-JA members include Hyunwoong Ko (Project Lead), Fujiki Nakamura (who originally started this project when he committed to the Polyglot team), Yunho Mo, Minji Jung, KeunSeok Im, and Su-Kyeong Jang.
We are also appreciative of [AI Novelist/Sta (Bit192, Inc.)](https://ai-novel.com/index.php) and the numerous contributors from [Stable Community Japan](https://discord.gg/VPrcE475HB) for assisting us in gathering a large amount of high-quality Japanese textual data for model training.
5,892
[ [ -0.0291595458984375, -0.064453125, 0.0054779052734375, 0.01322174072265625, -0.0230255126953125, -0.0174713134765625, -0.0239410400390625, -0.030670166015625, 0.0191802978515625, 0.02899169921875, -0.039886474609375, -0.0517578125, -0.04534912109375, 0.01338...
digiplay/Colorful_v3.1
2023-07-18T18:09:01.000Z
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
digiplay
null
null
digiplay/Colorful_v3.1
1
1,177
diffusers
2023-07-18T00:14:40
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---

Model info: https://civitai.com/models/7279?modelVersionId=90599

Original Author's DEMO image:

![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/7bd73518-9fc0-4388-8818-8353f55922bd/width=720/9.jpeg)
329
[ [ -0.0231170654296875, -0.01371002197265625, 0.0357666015625, 0.0105743408203125, -0.028289794921875, -0.0162811279296875, 0.0207061767578125, -0.0019254684448242188, 0.045745849609375, 0.0526123046875, -0.050384521484375, -0.018798828125, -0.0011739730834960938, ...
Mitsua/mitsua-diffusion-one
2023-03-09T11:31:33.000Z
[ "diffusers", "stable-diffusion", "text-to-image", "stable-diffusion-diffusers", "license:other", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
Mitsua
null
null
Mitsua/mitsua-diffusion-one
66
1,176
diffusers
2023-03-03T09:55:55
---
license: other
tags:
- stable-diffusion
- text-to-image
- stable-diffusion-diffusers
- diffusers
inference: true
---

# Mitsua Diffusion One Model Card

Mitsua Diffusion One is a latent text-to-image diffusion model, which is a successor of [Mitsua Diffusion CC0](https://huggingface.co/Mitsua/mitsua-diffusion-cc0). This model is **trained from scratch using only public domain/CC0 images or copyrighted images with permission for use**, with a fixed pretrained text encoder ([OpenCLIP ViT-H/14](https://github.com/mlfoundations/open_clip), MIT License).

This will be used as a base model for [**AI VTuber Elan Mitsua🖌️**](https://elanmitsua.com/en/)’s activity.

❗❗ **Currently, the model is still of low quality and lacks diversity** ❗❗

## Further training will be done on a fully opt-in basis.

If you are interested, [please click here to submit an opt-in application](https://forms.gle/Nk3M7UyqSgYAqdpA6).
We are active on [a Discord server for opt-in contributors only](https://discord.com/invite/7VTGRweTUg). Communication is currently in Japanese.

❗❗ **To train this model, images from opt-in contributors have not yet been used** ❗❗

![Header](https://huggingface.co/Mitsua/mitsua-diffusion-one/resolve/main/mitsua-diffusion-one.jpg)

You can check [here for all the prompts used to generate these images](https://huggingface.co/Mitsua/mitsua-diffusion-one/blob/main/mitsua-diffusion-one-prompts.csv).

## License

- Mitsua Open RAIL-M License (a more restrictive variant of CreativeML Open RAIL-M)

This model is open access and available to all, with a Mitsua Open RAIL-M license further specifying rights and usage. The Mitsua Open RAIL-M License specifies:

1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. 
You can't use the model to infringe any rights of others by feeding image sources or model weights to the model (e.g. using another person's copyrighted image for fine-tuning without permission, or using another person's copyrighted image as a source for image2image without permission).
4. You can't misrepresent a generated image as not AI-generated.
5. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the Mitsua Open RAIL-M with all your users (please read the license entirely and carefully).

[Please read the full license here](https://huggingface.co/Mitsua/mitsua-diffusion-one/blob/main/MODEL-LICENSE)

## Training Data Sources

All data was obtained ethically and in compliance with each site's terms and conditions. No copyrighted images are used in the training of this model without permission. No AI-generated images are in the dataset.

- The Metropolitan Museum of Art Open Access (CC0 / Public domain)
- Smithsonian Museum Open Access (CC0 / Public domain)
- Cleveland Museum of Art Open Access (CC0 / Public domain)
- National Gallery of Art Open Access (CC0 / Public domain)
- The Art Institute of Chicago Open Access (CC0 / Public domain)
- The Walters Art Museum Open Access (CC0 / Public domain)
- J. Paul Getty Museum Open Access (CC0 / Public domain)
- ArtBench-10 (public domain subset)
- Flickr (CC0 subset)
- Wikimedia Commons (CC0 subset)
- NFT arts *1 (goblintown.nft, mfer, tubby-cats, Timeless) (CC0)
- Full version of [VRoid Image Dataset](https://huggingface.co/datasets/Mitsua/vroid-image-dataset-lite) (CC0 or licensed)
- Open Clipart (Public domain)
- Open Duelyst (CC0)
- 3dicons (CC0)
- ambientCG (CC0)
- Wuffle comics made by Piti Yindee (CC0)
- 大崎一番太郎 made by 大崎駅西口商店会 (CC0)
- Traditional Generative Art (Non-AI) and Visual Artworks made by Rhizomatiks (licensed)

Approx. 11M images in total with data augmentation.

*1: Their work is released under a CC0 license, but if you are considering using this model to create a work inspired by their NFT and sell it as NFT, please consider paying them a royalty to help the CC0 NFT community grow.

## Training Notes

- Trained resolution: 256x256 --> 512x512 --> (512x512, 640x448, 448x640) --> (512x512, 768x512, 512x768)
- The diffusers version and `mitsua-diffusion-one.ckpt` are fine-tuned with the [Diffusion With Offset Noise](https://www.crosslabs.org/blog/diffusion-with-offset-noise) technique, which is applied to the last 12k steps with `p=0.02`.
- `mitsua-diffusion-one-base.ckpt` is the non-fine-tuned version. For fine-tuning purposes, this version would be the better choice.
## Cosine similarity (as a proof of full-scratch training)

- VAE
  - 0.16694325 (vs Stable Diffusion v2.1 base)
  - 0.20887965 (vs Stable Diffusion v1.4)
  - All fine-tuned variants would have over 0.90
- U-Net
  - 0.07097270 (vs Stable Diffusion v2.1 base)
  - 0.08351029 (vs Stable Diffusion v1.4)
  - All fine-tuned variants would have over 0.99

## Developed by

- Latent Diffusion Models (for algorithm and training scripts, MIT License): Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser and Björn Ommer
- OpenCLIP: Ilharco Gabriel, Wortsman Mitchell, Wightman Ross, Gordon Cade, Carlini Nicholas, Taori Rohan, Dave Achal, Shankar Vaishaal, Namkoong Hongseok, Miller John, Hajishirzi Hannaneh, Farhadi Ali, Schmidt Ludwig
- Mitsua Diffusion One: Abstract Engine
- Special Thanks to Mitsua Contributors
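The cosine-similarity figures above compare flattened weight vectors between checkpoints; low values indicate the weights were not derived from Stable Diffusion. A minimal sketch of the computation itself (the flattening of actual model weights into vectors is assumed, not shown):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Identical weight vectors score 1.0; orthogonal ones score 0.
same = cosine_similarity([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
orthogonal = cosine_similarity([1.0, 0.0], [0.0, 1.0])
```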
5,483
[ [ -0.04718017578125, -0.057464599609375, 0.03070068359375, 0.015899658203125, -0.036163330078125, -0.0216827392578125, 0.01456451416015625, -0.03216552734375, 0.024658203125, 0.036712646484375, -0.06109619140625, -0.032073974609375, -0.037811279296875, -0.0178...
timm/resnet50.fb_ssl_yfcc100m_ft_in1k
2023-04-05T18:12:53.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "arxiv:1905.00546", "arxiv:1512.03385", "license:cc-by-nc-4.0", "region:us" ]
image-classification
timm
null
null
timm/resnet50.fb_ssl_yfcc100m_ft_in1k
0
1,176
timm
2023-04-05T18:12:27
---
tags:
- image-classification
- timm
library_tag: timm
license: cc-by-nc-4.0
---
# Model card for resnet50.fb_ssl_yfcc100m_ft_in1k

A ResNet-B image classification model. This model features:
 * ReLU activations
 * single layer 7x7 convolution with pooling
 * 1x1 convolution shortcut downsample

Pretrained on a subset of YFCC100M using semi-supervised learning and fine-tuned on ImageNet-1k by the paper authors.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 25.6
  - GMACs: 4.1
  - Activations (M): 11.1
  - Image size: 224 x 224
- **Papers:**
  - Billion-scale semi-supervised learning for image classification: https://arxiv.org/abs/1905.00546
  - Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385
- **Original:** https://github.com/facebookresearch/semi-supervised-ImageNet1K-models

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('resnet50.fb_ssl_yfcc100m_ft_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'resnet50.fb_ssl_yfcc100m_ft_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 64, 112, 112])
    #  torch.Size([1, 256, 56, 56])
    #  torch.Size([1, 512, 28, 28])
    #  torch.Size([1, 1024, 14, 14])
    #  torch.Size([1, 2048, 7, 7])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'resnet50.fb_ssl_yfcc100m_ft_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
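The comparison table below can also be explored programmatically. As an illustration (not part of the original card), here is a minimal pandas sketch over a few rows transcribed from the table, ranking models by top-1 accuracy per GMAC as a rough efficiency proxy; the column names and the chosen metric are this example's own, not timm's.

```python
import pandas as pd

# A small subset of rows transcribed from the comparison table below.
rows = [
    ("resnet50.fb_ssl_yfcc100m_ft_in1k", 224, 79.22, 94.84, 25.6, 4.1),
    ("resnet50.fb_swsl_ig1b_ft_in1k", 224, 81.18, 95.98, 25.6, 4.1),
    ("resnet50.a1_in1k", 224, 80.38, 94.6, 25.6, 4.1),
    ("resnet152.a1h_in1k", 224, 82.8, 96.13, 60.2, 11.6),
]
df = pd.DataFrame(rows, columns=["model", "img_size", "top1", "top5", "param_count", "gmacs"])

# Rank by top-1 accuracy per GMAC (a crude accuracy-vs-compute proxy).
df["top1_per_gmac"] = df["top1"] / df["gmacs"]
print(df.sort_values("top1_per_gmac", ascending=False)[["model", "top1", "top1_per_gmac"]])
```

The same approach scales to the full CSVs published in the timm results directory linked above.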
|model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec|
|------------------------------------------|--------|-----|-----|-----------|-----|-----|-------|
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 |
|[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 |
|[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 |
|[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 |
|[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 |
|[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 |
|[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 |
|[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 |
|[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 |
|[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 |
|[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 |
|[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 |
|[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 |
|[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 |
|[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 |
|[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 |
|[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 |10.6 |2349 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 |
|[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 |
|[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 |
|[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 |
|[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 |
|[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 |
|[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 |
|[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 |
|[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 |
|[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 |
|[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 |
|[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 |
|[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 |
|[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 |
|[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 |
|[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 |
|[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 |
|[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 |
|[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 |
|[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 |
|[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 |
|[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 |
|[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 |
|[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 |
|[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 |
|[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 |
|[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 |75.16|92.18|21.8 |3.7 |3.7 |5994 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 |
|[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 |
|[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 |
|[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 |
|[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 |
|[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 |
|[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 |

## Citation
```bibtex
@misc{yalniz2019billionscale,
    title={Billion-scale semi-supervised learning for image classification},
    author={I. Zeki Yalniz and Hervé Jégou and Kan Chen and Manohar Paluri and Dhruv Mahajan},
    year={2019},
    eprint={1905.00546},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```
```bibtex
@article{He2015,
  author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun},
  title = {Deep Residual Learning for Image Recognition},
  journal = {arXiv preprint arXiv:1512.03385},
  year = {2015}
}
```
```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
38,445
[ [ -0.06488037109375, -0.01885986328125, 0.0022830963134765625, 0.03009033203125, -0.031707763671875, -0.00859832763671875, -0.010009765625, -0.03118896484375, 0.08526611328125, 0.0196685791015625, -0.04803466796875, -0.04022216796875, -0.0445556640625, -0.0005...
timm/coatnext_nano_rw_224.sw_in1k
2023-05-10T23:51:19.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2201.03545", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/coatnext_nano_rw_224.sw_in1k
0
1,175
timm
2023-01-20T21:29:41
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for coatnext_nano_rw_224.sw_in1k A timm specific CoAtNeXt image classification model. Trained in `timm` on ImageNet-1k by Ross Wightman. ImageNet-1k training done on TPUs thanks to support of the [TRC](https://sites.research.google/trc/about/) program. ### Model Variants in [maxxvit.py](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/maxxvit.py) MaxxViT covers a number of related model architectures that share a common structure, including: - CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages. - MaxViT - Uniform blocks across all stages, each containing an MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid). - CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm). - MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm). - MaxxViT-V2 - A MaxxViT variation that removes the window block attention, leaving only ConvNeXt blocks and grid attention w/ more width to compensate. Aside from the major variants listed above, there are more subtle changes from model to model. Any model name with the string `rw` is a `timm` specific config w/ modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models, so there are variations. Any model name with the string `tf` exactly matches a TensorFlow-based model from the original paper authors, with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released. 
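The naming conventions above can be sketched as a small helper that splits a MaxxViT-family model name into its parts (`parse_maxxvit_name` is a hypothetical parser for illustration only, not a timm API; with timm installed, `timm.list_models('*coatnet*')` enumerates the real registry):

```python
def parse_maxxvit_name(model_name: str) -> dict:
    """Split a timm MaxxViT-family model name, e.g. 'coatnext_nano_rw_224.sw_in1k',
    into its parts, following the naming conventions described above."""
    arch, _, tag = model_name.partition(".")  # pretrained tag follows the dot
    parts = arch.split("_")
    return {
        "family": parts[0],              # coatnet / coatnext / maxvit / maxxvit ...
        "timm_specific": "rw" in parts,  # 'rw' marks timm-specific configs
        "tf_ported": "tf" in parts,      # 'tf' marks weights ported from TensorFlow
        "input_size": int(parts[-1]) if parts[-1].isdigit() else None,
        "pretrained_tag": tag or None,
    }

info = parse_maxxvit_name("coatnext_nano_rw_224.sw_in1k")
```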
## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 14.7 - GMACs: 2.5 - Activations (M): 12.8 - Image size: 224 x 224 - **Papers:** - CoAtNet: Marrying Convolution and Attention for All Data Sizes: https://arxiv.org/abs/2106.04803 - A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545 - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import torch import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('coatnext_nano_rw_224.sw_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'coatnext_nano_rw_224.sw_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 64, 112, 112]) # torch.Size([1, 64, 56, 56]) # torch.Size([1, 128, 28, 28]) # torch.Size([1, 256, 14, 14]) # torch.Size([1, 512, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from 
urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'coatnext_nano_rw_224.sw_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 512, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison ### By Top-1 |model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)| |------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:| |[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22| |[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76| |[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99| |[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15| |[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84| |[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) 
|87.92|98.54| 104.71| 119.65| 73.80| 332.90| |[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95| |[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74| |[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43| |[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64| |[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77| |[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99| |[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22| |[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15| |[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78| |[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90| |[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84| |[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77| |[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59| |[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 
69.02| 35.87| 183.65| |[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42| |[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35| |[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13| |[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01| |[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38| |[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78| |[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30| |[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17| |[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92| |[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60| |[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11| |[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78| |[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47| |[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05| |[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 
1341.24| 16.78| 4.37| 26.05| |[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92| |[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28| |[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04| |[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73| |[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34| |[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80| |[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41| |[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86| ### By Throughput (samples / sec) |model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)| |------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:| |[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80| |[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41| |[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34| |[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73| |[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) 
|82.39|96.19| 1600.14| 27.44| 4.67| 22.04| |[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86| |[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05| |[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92| |[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05| |[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28| |[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11| |[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47| |[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13| |[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78| |[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60| |[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92| |[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30| |[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17| |[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22| 
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78| |[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78| |[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38| |[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77| |[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64| |[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01| |[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42| |[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35| |[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65| |[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43| |[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74| |[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59| |[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95| |[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90| 
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90| |[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77| |[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84| |[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84| |[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99| |[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99| |[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76| |[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15| |[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15| |[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22| ## Citation ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @article{tu2022maxvit, title={MaxViT: Multi-Axis Vision Transformer}, author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao}, journal={ECCV}, year={2022}, } ``` ```bibtex @article{dai2021coatnet, title={CoAtNet: Marrying Convolution and Attention for All Data Sizes}, author={Dai, Zihang 
and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing}, journal={arXiv preprint arXiv:2106.04803}, year={2021} } ```
22,209
[ [ -0.054290771484375, -0.033782958984375, 0.00392913818359375, 0.0290679931640625, -0.023712158203125, -0.0133056640625, -0.00992584228515625, -0.02679443359375, 0.054168701171875, 0.015960693359375, -0.041900634765625, -0.045379638671875, -0.0477294921875, -0...
TheBloke/Llama-2-70B-AWQ
2023-09-27T12:49:45.000Z
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-2", "en", "arxiv:2307.09288", "license:llama2", "text-generation-inference", "region:us" ]
text-generation
TheBloke
null
null
TheBloke/Llama-2-70B-AWQ
5
1,175
transformers
2023-09-19T00:05:44
--- language: - en license: llama2 tags: - facebook - meta - pytorch - llama - llama-2 model_name: Llama 2 70B base_model: meta-llama/Llama-2-70b-hf inference: false model_creator: Meta Llama 2 model_type: llama pipeline_tag: text-generation prompt_template: '{prompt} ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Llama 2 70B - AWQ - Model creator: [Meta Llama 2](https://huggingface.co/meta-llama) - Original model: [Llama 2 70B](https://huggingface.co/meta-llama/Llama-2-70b-hf) <!-- description start --> ## Description This repo contains AWQ model files for [Meta Llama 2's Llama 2 70B](https://huggingface.co/meta-llama/Llama-2-70b-hf). ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference. 
It is also now supported by continuous batching server [vLLM](https://github.com/vllm-project/vllm), allowing use of AWQ models for high-throughput concurrent inference in multi-user server scenarios. Note that, at the time of writing, overall throughput is still lower than running vLLM with unquantised models, however using AWQ enables using much smaller GPUs which can lead to easier deployment and overall cost savings. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama-2-70B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-70B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-70B-GGUF) * [Meta Llama 2's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/meta-llama/Llama-2-70b-hf) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: None ``` {prompt} ``` <!-- prompt-template end --> <!-- README_AWQ.md-provided-files start --> ## Provided files and AWQ parameters For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM. Models are released as sharded safetensors files. 
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size | | ------ | ---- | -- | ----------- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Llama-2-70B-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 36.61 GB | <!-- README_AWQ.md-provided-files end --> <!-- README_AWQ.md-use-from-vllm start --> ## Serving this model from vLLM Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/). - When using vLLM as a server, pass the `--quantization awq` parameter, for example: ```shell python3 -m vllm.entrypoints.api_server --model TheBloke/Llama-2-70B-AWQ --quantization awq ``` When using vLLM from Python code, pass the `quantization=awq` parameter, for example: ```python from vllm import LLM, SamplingParams prompts = [ "Hello, my name is", "The president of the United States is", "The capital of France is", "The future of AI is", ] sampling_params = SamplingParams(temperature=0.8, top_p=0.95) llm = LLM(model="TheBloke/Llama-2-70B-AWQ", quantization="awq") outputs = llm.generate(prompts, sampling_params) # Print the outputs. for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}") ``` <!-- README_AWQ.md-use-from-vllm end --> <!-- README_AWQ.md-use-from-python start --> ## How to use this AWQ model from Python code ### Install the necessary packages Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.0.2 or later ```shell pip3 install autoawq ``` If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y autoawq git clone https://github.com/casper-hansen/AutoAWQ cd AutoAWQ pip3 install . 
``` ### You can then try the following example code ```python from awq import AutoAWQForCausalLM from transformers import AutoTokenizer model_name_or_path = "TheBloke/Llama-2-70B-AWQ" # Load model model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True, trust_remote_code=False, safetensors=True) tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False) prompt = "Tell me about AI" prompt_template=f'''{prompt} ''' print("\n\n*** Generate:") tokens = tokenizer( prompt_template, return_tensors='pt' ).input_ids.cuda() # Generate output generation_output = model.generate( tokens, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, max_new_tokens=512 ) print("Output: ", tokenizer.decode(generation_output[0])) # Inference can also be done using transformers' pipeline from transformers import pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_AWQ.md-use-from-python end --> <!-- README_AWQ.md-compatibility start --> ## Compatibility The files provided are tested to work with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), and [vLLM](https://github.com/vllm-project/vllm). [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is not yet compatible with AWQ, but a PR is open which should bring support soon: [TGI PR #781](https://github.com/huggingface/text-generation-inference/issues/781). <!-- README_AWQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! 
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. 
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Meta Llama 2's Llama 2 70B # **Llama 2** Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 70B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom. ## Model Details *Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.* Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. 
**Model Developers** Meta **Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations. **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. ||Training Data|Params|Content Length|GQA|Tokens|LR| |---|---|---|---|---|---|---| |Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>| *Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. The bigger model (70B) uses Grouped-Query Attention (GQA) for improved inference scalability. **Model Dates** Llama 2 was trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) **Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288) ## Intended Use **Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. 
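For the chat-tuned variants (this base model itself uses no prompt template), the expected chat formatting can be illustrated with a minimal single-turn sketch; `build_llama2_chat_prompt` is a hypothetical helper shown for illustration, and the authoritative reference is Meta's `chat_completion` code in the llama repository:

```python
def build_llama2_chat_prompt(system_msg: str, user_msg: str) -> str:
    # Minimal single-turn sketch of the Llama-2 chat template.
    # BOS/EOS tokens are normally added by the tokenizer, not here.
    return f"[INST] <<SYS>>\n{system_msg}\n<</SYS>>\n\n{user_msg} [/INST]"

prompt = build_llama2_chat_prompt(
    "You are a helpful assistant.",
    "Tell me about AI",
)
```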
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and line breaks in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212). **Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program. ||Time (GPU hours)|Power Consumption (W)|Carbon Emitted (tCO<sub>2</sub>eq)| |---|---|---|---| |Llama 2 7B|184320|400|31.22| |Llama 2 13B|368640|400|62.44| |Llama 2 70B|1720320|400|291.42| |Total|3311616||539.00| **CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. 
The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023. ## Evaluation Results In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library. |Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval| |---|---|---|---|---|---|---|---|---|---| |Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9| |Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9| |Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7| |Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6| |Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3| |Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1| |Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**| **Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1. 
|||TruthfulQA|Toxigen| |---|---|---|---| |Llama 1|7B|27.42|23.00| |Llama 1|13B|41.74|23.08| |Llama 1|33B|44.19|22.57| |Llama 1|65B|48.71|21.77| |Llama 2|7B|33.29|**21.25**| |Llama 2|13B|41.86|26.10| |Llama 2|70B|**50.18**|24.60| **Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better). |||TruthfulQA|Toxigen| |---|---|---|---| |Llama-2-Chat|7B|57.04|**0.00**| |Llama-2-Chat|13B|62.18|**0.00**| |Llama-2-Chat|70B|**64.14**|0.01| **Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above. ## Ethical Considerations and Limitations Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. 
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide) ## Reporting Issues Please report any software “bug,” or other problems with the models through one of the following means: - Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) - Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Llama Model Index |Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf| |---|---|---|---|---| |7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)| |13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)| |70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
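The chat-prompt structure that the Intended Use section asks for can be sketched as plain string assembly. This is a simplified single-turn sketch that mirrors the documented `[INST]`/`<<SYS>>` layout; the authoritative template is the `chat_completion` reference code linked above, and the helper name `build_prompt` here is illustrative only:

```python
# Sketch of the Llama 2 chat prompt layout described in the model card.
# BOS/EOS tokens are added by the tokenizer, not in the raw string.
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def build_prompt(system_msg: str, user_msg: str) -> str:
    # Single-turn prompt: system message wrapped in <<SYS>> tags,
    # then the (stripped) user message, all inside one [INST] block.
    return f"{B_INST} {B_SYS}{system_msg}{E_SYS}{user_msg.strip()} {E_INST}"

prompt = build_prompt("You are a helpful assistant.", "Explain GQA briefly.")
print(prompt)
```

For multi-turn chats the reference code repeats the `[INST] ... [/INST]` block per user/assistant exchange, so checking against `chat_completion` before production use is advisable.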
19,502
[ [ -0.0379638671875, -0.054229736328125, 0.0282745361328125, 0.0037975311279296875, -0.0220184326171875, -0.00806427001953125, 0.00930023193359375, -0.04150390625, -0.0005626678466796875, 0.0238800048828125, -0.049407958984375, -0.0369873046875, -0.023284912109375,...
nielsr/layoutxlm-finetuned-xfund-fr
2023-09-18T08:50:31.000Z
[ "transformers", "pytorch", "tensorboard", "layoutlmv2", "token-classification", "generated_from_trainer", "dataset:nielsr/XFUN", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "region:us" ]
token-classification
nielsr
null
null
nielsr/layoutxlm-finetuned-xfund-fr
3
1,174
transformers
2022-09-19T10:48:36
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer datasets: - nielsr/XFUN inference: false base_model: microsoft/layoutxlm-base model-index: - name: layoutxlm-finetuned-xfund-fr results: [] --- # layoutxlm-finetuned-xfund-fr This model is a fine-tuned version of [microsoft/layoutxlm-base](https://huggingface.co/microsoft/layoutxlm-base) on the [XFUND](https://github.com/doc-analysis/XFUND) dataset (French split). ## Model usage Note that this model requires Tesseract with the French language pack in order to perform inference. You can install it using `!sudo apt-get install tesseract-ocr-fra`. Here's how to use this model: ```python from transformers import AutoProcessor, AutoModelForTokenClassification import torch from PIL import Image processor = AutoProcessor.from_pretrained("nielsr/layoutxlm-finetuned-xfund-fr") model = AutoModelForTokenClassification.from_pretrained("nielsr/layoutxlm-finetuned-xfund-fr") # assuming you have a French document, turned into an image image = Image.open("...").convert("RGB") # prepare for the model encoding = processor(image, padding="max_length", max_length=512, truncation=True, return_tensors="pt") with torch.no_grad(): outputs = model(**encoding) logits = outputs.logits predictions = logits.argmax(-1) ``` ## Intended uses & limitations This model can be used for NER on French scanned documents. It can recognize 4 categories: "question", "answer", "header" and "other". ## Training and evaluation data This checkpoint used the French portion of the multilingual [XFUND](https://github.com/doc-analysis/XFUND) dataset. 
## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 1000 ### Training results ### Framework versions - Transformers 4.22.1 - Pytorch 1.10.0+cu111 - Datasets 2.4.0 - Tokenizers 0.12.1
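To read the `predictions` tensor from the usage snippet as tags, the class ids go through the model's `id2label` mapping. A minimal standalone sketch follows; the `id2label` dict below is hypothetical and only mirrors the four categories listed under Intended uses, so the real ordering should be read from `model.config.id2label` after loading the checkpoint:

```python
# Hypothetical id2label for illustration; the real mapping comes from
# model.config.id2label of the loaded checkpoint.
id2label = {0: "OTHER", 1: "QUESTION", 2: "ANSWER", 3: "HEADER"}

def decode(pred_ids):
    # pred_ids: flat list of predicted class ids, one per token
    return [id2label[i] for i in pred_ids]

print(decode([1, 2, 2, 0, 3]))
```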
2,052
[ [ -0.0217742919921875, -0.033843994140625, 0.0230865478515625, 0.00640106201171875, -0.0069732666015625, -0.002132415771484375, -0.00286865234375, -0.018341064453125, 0.0017719268798828125, 0.05133056640625, -0.040557861328125, -0.044586181640625, -0.0372314453125...
timm/tinynet_b.in1k
2023-04-27T21:50:23.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2010.14819", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/tinynet_b.in1k
0
1,173
timm
2022-12-13T00:22:06
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for tinynet_b.in1k A TinyNet image classification model. Trained on ImageNet-1k by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 3.7 - GMACs: 0.2 - Activations (M): 4.4 - Image size: 188 x 188 - **Papers:** - Model rubik's cube: Twisting resolution, depth and width for tinynets: https://arxiv.org/abs/2010.14819v2 - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('tinynet_b.in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'tinynet_b.in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 16, 94, 94]) # torch.Size([1, 24, 47, 47]) # torch.Size([1, 
32, 24, 24]) # torch.Size([1, 88, 12, 12]) # torch.Size([1, 240, 6, 6]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'tinynet_b.in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 1280, 6, 6) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @article{han2020model, title={Model rubik’s cube: Twisting resolution, depth and width for tinynets}, author={Han, Kai and Wang, Yunhe and Zhang, Qiulin and Zhang, Wei and Xu, Chunjing and Zhang, Tong}, journal={Advances in Neural Information Processing Systems}, volume={33}, pages={19353--19364}, year={2020} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
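The `torch.topk(output.softmax(dim=1) * 100, k=5)` line in the classification snippet only converts class logits to percentage scores and keeps the five largest. The same arithmetic in plain Python, for a single set of logits:

```python
import math

def top_k_percent(logits, k=5):
    # numerically stable softmax over the class logits, scaled to percentages
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [100.0 * e / total for e in exps]
    # indices of the k largest probabilities, highest first
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    return [(i, probs[i]) for i in order]

scores = top_k_percent([2.0, 0.5, -1.0, 3.0, 0.0, 1.0], k=3)
print(scores)  # class 3 first, then class 0
```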
3,929
[ [ -0.03289794921875, -0.037384033203125, 0.003360748291015625, 0.00133514404296875, -0.0207366943359375, -0.0280609130859375, -0.019775390625, -0.0273590087890625, 0.019012451171875, 0.0267486572265625, -0.03057861328125, -0.04339599609375, -0.046661376953125, ...
sampathkethineedi/industry-classification
2020-07-16T15:27:38.000Z
[ "transformers", "pytorch", "tf", "distilbert", "text-classification", "tensorflow", "industry", "buisiness", "description", "multi-class", "classification", "en", "has_space", "region:us" ]
text-classification
sampathkethineedi
null
null
sampathkethineedi/industry-classification
16
1,172
transformers
2022-03-02T23:29:05
--- language: "en" thumbnail: "https://huggingface.co/sampathkethineedi" tags: - distilbert - pytorch - tensorflow - text-classification - industry - buisiness - description - multi-class - classification license: "mit" inference: false --- # industry-classification ## Model description DistilBERT model to classify a business description into one of **62 industry tags**. Trained on 7000 samples of business descriptions and associated labels of companies in India. ## How to use PyTorch and TF models available ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline tokenizer = AutoTokenizer.from_pretrained("sampathkethineedi/industry-classification") model = AutoModelForSequenceClassification.from_pretrained("sampathkethineedi/industry-classification") industry_tags = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer) industry_tags("Stellar Capital Services Limited is an India-based non-banking financial company ... loan against property, management consultancy, personal loans and unsecured loans.") '''Output''' [{'label': 'Consumer Finance', 'score': 0.9841355681419373}] ``` ## Limitations and bias The training data covers only Indian companies, so predictions may not generalize beyond that domain.
1,225
[ [ -0.0048980712890625, -0.035858154296875, -0.013427734375, 0.036895751953125, -0.02447509765625, 0.01299285888671875, 0.00716400146484375, -0.0100555419921875, 0.001956939697265625, 0.0254669189453125, -0.0391845703125, -0.053985595703125, -0.040435791015625, ...
postbot/gpt2-medium-emailgen
2023-03-29T21:21:58.000Z
[ "transformers", "pytorch", "onnx", "safetensors", "gpt2", "text-generation", "text generation", "emailgen", "email generation", "email", "dataset:aeslc", "dataset:postbot/multi-emails-100k", "license:apache-2.0", "endpoints_compatible", "has_space", "text-generation-inference", "regi...
text-generation
postbot
null
null
postbot/gpt2-medium-emailgen
2
1,172
transformers
2022-09-29T10:01:21
--- license: - apache-2.0 tags: - text generation - emailgen - email generation - email datasets: - aeslc - postbot/multi-emails-100k widget: - text: "Good Morning Professor Beans, Hope you are doing well. I just wanted to reach out and ask if differential calculus will be on the exam" example_title: "email to prof" - text: "Hey <NAME>,\n\nThank you for signing up for my weekly newsletter. Before we get started, you'll have to confirm your email address." example_title: "newsletter" - text: "Hi <NAME>,\n\nI hope this email finds you well. I wanted to reach out and ask about office hours" example_title: "office hours" - text: "Greetings <NAME>,\n\nI hope you had a splendid evening at the Company sausage eating festival. I am reaching out because" example_title: "festival" - text: "Good Morning Harold,\n\nI was wondering when the next" example_title: "event" - text: "URGENT - I need the TPS reports" example_title: "URGENT" - text: "Hi Archibald,\n\nI hope this email finds you extremely well." example_title: "emails that find you" - text: "Hello there.\n\nI just wanted to reach out and check in to" example_title: "checking in" - text: "Hello <NAME>,\n\nI hope this email finds you well. I wanted to reach out and see if you've enjoyed your time with us" example_title: "work well" - text: "Hi <NAME>,\n\nI hope this email finds you well. 
I wanted to reach out and see if we could catch up" example_title: "catch up" - text: "I'm <NAME> and I just moved into the area and wanted to reach out and get some details on where I could get groceries and" example_title: "grocery" parameters: min_length: 32 max_length: 128 no_repeat_ngram_size: 2 do_sample: True temperature: 0.3 top_k: 20 top_p: 0.95 repetition_penalty: 3.5 length_penalty: 0.9 --- # gpt2-medium-emailgen [![colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/pszemraj/70058788c6d4b430398c12ee8ba10602/minimal-demo-for-postbot-gpt2-medium-emailgen.ipynb ) Why write the entire email when you can generate (most of) it? ```python from transformers import pipeline model_tag = "postbot/gpt2-medium-emailgen" generator = pipeline( 'text-generation', model=model_tag, ) prompt = """ Hello, Following up on the bubblegum shipment.""" result = generator( prompt, max_length=64, do_sample=False, early_stopping=True, ) # generate print(result[0]['generated_text']) ``` ## about This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on the postbot/multi-emails-100k dataset. It achieves the following results on the evaluation set: - Loss: 1.5840 ## Model description More information needed ## Intended uses & limitations - this is intended as a tool to save time writing predictable emails and not to write emails without a human-in-the-loop. validate that your email is factually correct before sending it to others. 
## Training and evaluation data - the dataset is essentially a hand-curated/augmented expansion to the classic `aeslc` dataset ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.02 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.8701 | 1.0 | 789 | 1.8378 | | 1.5065 | 2.0 | 1578 | 1.6176 | | 1.1873 | 3.0 | 2367 | 1.5840 | ### Framework versions - Transformers 4.22.2 - Pytorch 1.10.0+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
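The `total_train_batch_size` in the hyperparameters follows directly from the per-device batch size and gradient accumulation. A quick check (assuming a single optimizer process here, since 16 * 8 already yields the reported 128):

```python
train_batch_size = 16
gradient_accumulation_steps = 8
num_processes = 1  # assumption: multi-GPU is listed, but 16 * 8 matches the total

total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_processes
print(total_train_batch_size)  # 128, matching the reported value
```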
3,944
[ [ -0.0285186767578125, -0.04766845703125, 0.01346588134765625, 0.0132293701171875, -0.021392822265625, -0.01812744140625, -0.01461029052734375, -0.0187835693359375, -0.005706787109375, 0.01157379150390625, -0.05242919921875, -0.03619384765625, -0.0618896484375, ...
timm/tf_mobilenetv3_large_100.in1k
2023-04-27T22:49:44.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:1905.02244", "license:apache-2.0", "has_space", "region:us" ]
image-classification
timm
null
null
timm/tf_mobilenetv3_large_100.in1k
0
1,172
timm
2022-12-16T05:38:58
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for tf_mobilenetv3_large_100.in1k A MobileNet-v3 image classification model. Trained on ImageNet-1k in TensorFlow by paper authors, ported to PyTorch by Ross Wightman. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 5.5 - GMACs: 0.2 - Activations (M): 4.4 - Image size: 224 x 224 - **Papers:** - Searching for MobileNetV3: https://arxiv.org/abs/1905.02244 - **Dataset:** ImageNet-1k - **Original:** https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('tf_mobilenetv3_large_100.in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'tf_mobilenetv3_large_100.in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 
for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 16, 112, 112]) # torch.Size([1, 24, 56, 56]) # torch.Size([1, 40, 28, 28]) # torch.Size([1, 112, 14, 14]) # torch.Size([1, 960, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'tf_mobilenetv3_large_100.in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 960, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). 
## Citation ```bibtex @inproceedings{howard2019searching, title={Searching for mobilenetv3}, author={Howard, Andrew and Sandler, Mark and Chu, Grace and Chen, Liang-Chieh and Chen, Bo and Tan, Mingxing and Wang, Weijun and Zhu, Yukun and Pang, Ruoming and Vasudevan, Vijay and others}, booktitle={Proceedings of the IEEE/CVF international conference on computer vision}, pages={1314--1324}, year={2019} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
4,159
[ [ -0.0306243896484375, -0.026519775390625, -0.0008883476257324219, 0.00982666015625, -0.027099609375, -0.0286102294921875, -0.00890350341796875, -0.0272369384765625, 0.020355224609375, 0.0295562744140625, -0.0218963623046875, -0.05908203125, -0.04705810546875, ...
hf-internal-testing/tiny-sdxl-custom-components
2023-10-20T14:23:45.000Z
[ "diffusers", "onnx", "text-to-image", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
hf-internal-testing
null
null
hf-internal-testing/tiny-sdxl-custom-components
0
1,172
diffusers
2023-10-20T10:04:15
--- library_name: diffusers tags: - text-to-image --- ```python from diffusers import DiffusionPipeline import torch pipeline = DiffusionPipeline.from_pretrained("hf-internal-testing/tiny-sdxl-custom-components", trust_remote_code=True) assert pipeline.config.unet == ('diffusers_modules.local.my_unet_model', 'MyUNetModel') assert pipeline.config.scheduler == ('diffusers_modules.local.my_scheduler', 'MyScheduler') assert pipeline.__class__.__name__ == "StableDiffusionXLPipeline" torch_device = "cuda" if torch.cuda.is_available() else "cpu" pipeline = pipeline.to(torch_device) images = pipeline("test", num_inference_steps=2, output_type="np")[0] assert images.shape == (1, 64, 64, 3) ```
622
[ [ -0.0241546630859375, -0.029327392578125, 0.0225982666015625, 0.0194091796875, -0.01503753662109375, -0.0093994140625, 0.0247955322265625, 0.01189422607421875, -0.00888824462890625, 0.02508544921875, -0.041046142578125, -0.0165252685546875, -0.04083251953125, ...
Den4ikAI/FRED-T5-LARGE_text_qa
2023-06-21T16:54:07.000Z
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "ru", "dataset:Den4ikAI/ru_sberquad_long_answers", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text2text-generation
Den4ikAI
null
null
Den4ikAI/FRED-T5-LARGE_text_qa
3
1,171
transformers
2023-05-29T10:04:44
--- license: mit datasets: - Den4ikAI/ru_sberquad_long_answers language: - ru widget: - text: '<SC6>Текст: Основными конкурентами РН Протон-М по цене и по выводимой полезной нагрузке являются американская РН Falcon 9, европейская ракета тяжёлого класса Ариан-5 компании Арианэспас и международный проект Морской старт с РН средне-тяжёлого класса Зенит. Кроме того, конкурентами по массе полезной нагрузки, выводимой на орбиту, могут считаться американские носители Атлас-5 и Дельта-4, а также японский носитель H-IIB. Тем не менее стоимость последних трёх упомянутых РН значительно превышает стоимость РН Протон-М, и поэтому они фактически не конкурируют с Протоном на рынке коммерческих запусков[145].\nВопрос: Как называется Японский носитель?\nОтвет: <extra_id_0>' pipeline_tag: text2text-generation --- # Den4ikAI/FRED-T5-LARGE_text_qa The model is trained to answer questions based on a supporting text passage. Wandb: [link](https://wandb.ai/den4ikai/huggingface/runs/qkzvhuxb) # Usage ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM import torch from transformers import GenerationConfig use_cuda = torch.cuda.is_available() device = torch.device("cuda" if use_cuda else "cpu") generation_config = GenerationConfig.from_pretrained("Den4ikAI/FRED-T5-LARGE_text_qa") tokenizer = AutoTokenizer.from_pretrained("Den4ikAI/FRED-T5-LARGE_text_qa") model = AutoModelForSeq2SeqLM.from_pretrained("Den4ikAI/FRED-T5-LARGE_text_qa").to(device) model.eval() def generate(prompt): data = tokenizer(f"{prompt}", return_tensors="pt").to(model.device) output_ids = model.generate( **data, generation_config=generation_config )[0] print(tokenizer.decode(data["input_ids"][0].tolist())) out = tokenizer.decode(output_ids.tolist()) return out while 1: prompt = '''<SC6>Текст: {}\nВопрос: {}\nОтвет: <extra_id_0> '''.format(input('Текст: '), input('Вопрос: ')) print(generate(prompt)) ```
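The `<SC6>Текст/Вопрос/Ответ` prompt layout used in the loop above can be assembled separately from the model call; a minimal sketch (the helper name `build_prompt` is illustrative, not part of the repo):

```python
def build_prompt(text: str, question: str) -> str:
    # Mirrors the <SC6> template used in the usage snippet above.
    return f"<SC6>Текст: {text}\nВопрос: {question}\nОтвет: <extra_id_0>"

p = build_prompt("Пример текста.", "Что это?")
print(p)
```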
1,917
[ [ -0.0247802734375, -0.046905517578125, 0.01593017578125, 0.02593994140625, -0.037445068359375, 0.00513458251953125, -0.014190673828125, -0.006805419921875, 0.0014753341674804688, 0.0014505386352539062, -0.043304443359375, -0.037017822265625, -0.0418701171875, ...
vinai/bartpho-word
2022-10-22T09:05:26.000Z
[ "transformers", "pytorch", "tf", "mbart", "feature-extraction", "arxiv:2109.09701", "endpoints_compatible", "region:us" ]
feature-extraction
vinai
null
null
vinai/bartpho-word
0
1,170
transformers
2022-03-02T23:29:05
# <a name="introduction"></a> BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese Two BARTpho versions `BARTpho-syllable` and `BARTpho-word` are the first public large-scale monolingual sequence-to-sequence models pre-trained for Vietnamese. BARTpho uses the "large" architecture and pre-training scheme of the sequence-to-sequence denoising model [BART](https://github.com/pytorch/fairseq/tree/main/examples/bart), thus especially suitable for generative NLP tasks. Experiments on a downstream task of Vietnamese text summarization show that in both automatic and human evaluations, BARTpho outperforms the strong baseline [mBART](https://github.com/pytorch/fairseq/tree/main/examples/mbart) and improves the state-of-the-art. The general architecture and experimental results of BARTpho can be found in our [paper](https://arxiv.org/abs/2109.09701): @article{bartpho, title = {{BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese}}, author = {Nguyen Luong Tran and Duong Minh Le and Dat Quoc Nguyen}, journal = {arXiv preprint}, volume = {arXiv:2109.09701}, year = {2021} } **Please CITE** our paper when BARTpho is used to help produce published results or incorporated into other software. For further information or requests, please go to [BARTpho's homepage](https://github.com/VinAIResearch/BARTpho)!
1,361
[ [ -0.0259857177734375, -0.046142578125, 0.025146484375, 0.03369140625, -0.045013427734375, -0.00876617431640625, -0.0178070068359375, -0.02496337890625, 0.01318359375, 0.062255859375, -0.0277862548828125, -0.0679931640625, -0.04473876953125, 0.0421142578125, ...
Zetatech/pvt-tiny-224
2023-09-12T04:51:39.000Z
[ "transformers", "pytorch", "safetensors", "pvt", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:2102.12122", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
Zetatech
null
null
Zetatech/pvt-tiny-224
0
1,170
transformers
2023-05-24T00:53:31
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # Pyramid Vision Transformer (tiny-sized model) Pyramid Vision Transformer (PVT) model pre-trained on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions](https://arxiv.org/abs/2102.12122) by Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao and first released in [this repository](https://github.com/whai362/PVT). Disclaimer: The team releasing PVT did not write a model card for this model so this model card has been written by [Rinat S. [@Xrenya]](https://huggingface.co/Xrenya). ## Model description The Pyramid Vision Transformer (PVT) is a transformer encoder model (BERT-like) pretrained on ImageNet-1k (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, at resolution 224x224. Images are presented to the model as a sequence of variable-size patches, which are linearly embedded. Unlike ViT models, PVT uses a progressive shrinking pyramid to reduce the computation over large feature maps at each stage. A [CLS] token is added to the beginning of the sequence for classification tasks, and absolute position embeddings are added before feeding the sequence to the layers of the Transformer encoder. 
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/Xrenya) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import PvtImageProcessor, PvtForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) processor = PvtImageProcessor.from_pretrained('Zetatech/pvt-tiny-224') model = PvtForImageClassification.from_pretrained('Zetatech/pvt-tiny-224') inputs = processor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/pvt.html#). ## Training data The PVT model was pretrained on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/whai362/PVT/blob/v2/classification/datasets.py). 
Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225). ### BibTeX entry and citation info ```bibtex @inproceedings{wang2021pyramid, title={Pyramid vision transformer: A versatile backbone for dense prediction without convolutions}, author={Wang, Wenhai and Xie, Enze and Li, Xiang and Fan, Deng-Ping and Song, Kaitao and Liang, Ding and Lu, Tong and Luo, Ping and Shao, Ling}, booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision}, pages={568--578}, year={2021} } ```
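The resize/rescale/normalize step described under Preprocessing can be sketched in plain PyTorch (a simplified stand-in for the pipeline in the linked `datasets.py`; in practice `PvtImageProcessor` applies these statistics for you):

```python
import torch

# ImageNet mean/std from the model card, reshaped for (C, H, W) broadcasting.
mean = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

def normalize(image: torch.Tensor) -> torch.Tensor:
    """Normalize a (3, 224, 224) image whose pixels are already rescaled to [0, 1]."""
    return (image - mean) / std

image = torch.rand(3, 224, 224)  # stand-in for a resized, rescaled image
x = normalize(image)
# Un-normalizing recovers the original pixel values.
assert torch.allclose(x * std + mean, image, atol=1e-6)
```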
4,443
[ [ -0.045745849609375, -0.01116180419921875, -0.0075836181640625, -0.005008697509765625, -0.03173828125, -0.01244354248046875, 0.00820159912109375, -0.028228759765625, 0.01142120361328125, 0.043182373046875, -0.0289764404296875, -0.01216888427734375, -0.04843139648...
facebook/mms-tts-spa
2023-09-01T14:15:35.000Z
[ "transformers", "pytorch", "safetensors", "vits", "text-to-audio", "mms", "text-to-speech", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "has_space", "region:us" ]
text-to-speech
facebook
null
null
facebook/mms-tts-spa
3
1,170
transformers
2023-09-01T14:15:17
--- license: cc-by-nc-4.0 tags: - mms - vits pipeline_tag: text-to-speech --- # Massively Multilingual Speech (MMS): Spanish Text-to-Speech This repository contains the **Spanish (spa)** language text-to-speech (TTS) model checkpoint. This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to provide speech technology across a diverse range of languages. You can find more details about the supported languages and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html), and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts). MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. ## Model Details VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior. A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers, much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to synthesise speech with different rhythms from the same input text. The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training. To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. 
During inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor, the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.

For the MMS project, a separate VITS checkpoint is trained on each language.

## Usage

MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint, first install the latest version of the library:

```
pip install --upgrade transformers accelerate
```

Then, run inference with the following code snippet:

```python
from transformers import VitsModel, AutoTokenizer
import torch

model = VitsModel.from_pretrained("facebook/mms-tts-spa")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-spa")

text = "some example text in the Spanish language"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    output = model(**inputs).waveform
```

The resulting waveform can be saved as a `.wav` file:

```python
import scipy

scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output)
```

Or displayed in a Jupyter Notebook / Google Colab:

```python
from IPython.display import Audio

Audio(output, rate=model.config.sampling_rate)
```

## BibTex citation

This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:

```
@article{pratap2023mms,
    title={Scaling Speech Technology to 1,000+ Languages},
    author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
    journal={arXiv},
    year={2023}
}
```

## License

The model is licensed as **CC-BY-NC 4.0**.
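As noted in the model details, the duration predictor samples random noise, so reproducibility hinges on PyTorch's global RNG state. The pattern below illustrates the seeding idea, with `torch.randn` standing in for the model's internal sampling; in practice you would call `torch.manual_seed` immediately before the `model(**inputs)` call shown above:

```python
import torch

def stochastic_forward() -> torch.Tensor:
    # Stand-in for a VITS forward pass, whose duration predictor draws noise.
    return torch.randn(8)

torch.manual_seed(555)
first = stochastic_forward()

torch.manual_seed(555)  # reset the RNG before the second call
second = stochastic_forward()

assert torch.equal(first, second)  # identical outputs under the same seed
```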
3,972
[ [ -0.0260009765625, -0.059112548828125, 0.01296234130859375, 0.040924072265625, -0.0020751953125, -0.0016422271728515625, -0.021240234375, -0.0227813720703125, 0.029296875, 0.0176849365234375, -0.061492919921875, -0.037200927734375, -0.041839599609375, 0.00324...
CAMeL-Lab/bert-base-arabic-camelbert-msa
2021-09-14T14:33:41.000Z
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
CAMeL-Lab
null
null
CAMeL-Lab/bert-base-arabic-camelbert-msa
5
1,169
transformers
2022-03-02T23:29:04
---
language:
- ar
license: apache-2.0
widget:
- text: "الهدف من الحياة هو [MASK] ."
---

# CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks

## Model description

**CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants. We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three. We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth). The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*

This model card describes **CAMeLBERT-MSA** (`bert-base-arabic-camelbert-msa`), a model pre-trained on the entire MSA dataset.

||Model|Variant|Size|#Word|
|-|-|:-:|-:|-:|
||`bert-base-arabic-camelbert-mix`|CA,DA,MSA|167GB|17.3B|
||`bert-base-arabic-camelbert-ca`|CA|6GB|847M|
||`bert-base-arabic-camelbert-da`|DA|54GB|5.8B|
|✔|`bert-base-arabic-camelbert-msa`|MSA|107GB|12.6B|
||`bert-base-arabic-camelbert-msa-half`|MSA|53GB|6.3B|
||`bert-base-arabic-camelbert-msa-quarter`|MSA|27GB|3.1B|
||`bert-base-arabic-camelbert-msa-eighth`|MSA|14GB|1.6B|
||`bert-base-arabic-camelbert-msa-sixteenth`|MSA|6GB|746M|

## Intended uses

You can use the released model for either masked language modeling or next sentence prediction. However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification. We release our fine-tuning code [here](https://github.com/CAMeL-Lab/CAMeLBERT).
#### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-arabic-camelbert-msa') >>> unmasker("الهدف من الحياة هو [MASK] .") [{'sequence': '[CLS] الهدف من الحياة هو العمل. [SEP]', 'score': 0.08507660031318665, 'token': 2854, 'token_str': 'العمل'}, {'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]', 'score': 0.058905381709337234, 'token': 3696, 'token_str': 'الحياة'}, {'sequence': '[CLS] الهدف من الحياة هو النجاح. [SEP]', 'score': 0.04660581797361374, 'token': 6232, 'token_str': 'النجاح'}, {'sequence': '[CLS] الهدف من الحياة هو الربح. [SEP]', 'score': 0.04156001657247543, 'token': 12413, 'token_str': 'الربح'}, {'sequence': '[CLS] الهدف من الحياة هو الحب. [SEP]', 'score': 0.03534102067351341, 'token': 3088, 'token_str': 'الحب'}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa') model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa') text = "مرحبا يا عالم." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import AutoTokenizer, TFAutoModel tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa') model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa') text = "مرحبا يا عالم." 
encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Training data - MSA (Modern Standard Arabic) - [The Arabic Gigaword Fifth Edition](https://catalog.ldc.upenn.edu/LDC2011T11) - [Abu El-Khair Corpus](http://www.abuelkhair.net/index.php/en/arabic/abu-el-khair-corpus) - [OSIAN corpus](https://vlo.clarin.eu/search;jsessionid=31066390B2C9E8C6304845BA79869AC1?1&q=osian) - [Arabic Wikipedia](https://archive.org/details/arwiki-20190201) - The unshuffled version of the Arabic [OSCAR corpus](https://oscar-corpus.com/) ## Training procedure We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training. We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified. ### Preprocessing - After extracting the raw text from each corpus, we apply the following pre-processing. - We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297). - We also remove lines without any Arabic characters. - We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools). - Finally, we split each line into sentences with a heuristics-based sentence segmenter. - We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers). - We do not lowercase letters nor strip accents. ### Pre-training - The model was trained on a single cloud TPU (`v3-8`) for one million steps in total. - The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256. - The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. 
- We use whole word masking and a duplicate factor of 10.
- We set max predictions per sequence to 20 for the dataset with a max sequence length of 128 tokens and 80 for the dataset with a max sequence length of 512 tokens.
- We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.
- The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.

## Evaluation results

- We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
- We fine-tune and evaluate the models using 12 datasets.
- We used Hugging Face's transformers to fine-tune our CAMeLBERT models.
- We used transformers `v3.1.0` along with PyTorch `v1.5.1`.
- The fine-tuning was done by adding a fully connected linear layer to the last hidden state.
- We use \\(F_{1}\\) score as a metric for all tasks.
- Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT).
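The diacritic and kashida removal mentioned under Preprocessing can be sketched with a simple regular expression (a simplified stand-in for CAMeL Tools' dediacritization utilities, which handle more cases):

```python
import re

# Arabic diacritics (tashkeel, U+064B–U+0652) plus the kashida/tatweel (U+0640).
# Simplified stand-in for CAMeL Tools' dediacritization utilities.
_DIACRITICS = re.compile(r"[\u064B-\u0652\u0640]")

def strip_diacritics(text: str) -> str:
    return _DIACRITICS.sub("", text)

print(strip_diacritics("مُحَمَّدٌ"))  # محمد
```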
### Results | Task | Dataset | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 | | -------------------- | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- | | NER | ANERcorp | MSA | 80.8% | 67.9% | 74.1% | 82.4% | 82.0% | 82.1% | 82.6% | 80.8% | | POS | PATB (MSA) | MSA | 98.1% | 97.8% | 97.7% | 98.3% | 98.2% | 98.3% | 98.2% | 98.2% | | | ARZTB (EGY) | DA | 93.6% | 92.3% | 92.7% | 93.6% | 93.6% | 93.7% | 93.6% | 93.6% | | | Gumar (GLF) | DA | 97.3% | 97.7% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% | | SA | ASTD | MSA | 76.3% | 69.4% | 74.6% | 76.9% | 76.0% | 76.8% | 76.7% | 75.3% | | | ArSAS | MSA | 92.7% | 89.4% | 91.8% | 93.0% | 92.6% | 92.5% | 92.5% | 92.3% | | | SemEval | MSA | 69.0% | 58.5% | 68.4% | 72.1% | 70.7% | 72.8% | 71.6% | 71.2% | | DID | MADAR-26 | DA | 62.9% | 61.9% | 61.8% | 62.6% | 62.0% | 62.8% | 62.0% | 62.2% | | | MADAR-6 | DA | 92.5% | 91.5% | 92.2% | 91.9% | 91.8% | 92.2% | 92.1% | 92.0% | | | MADAR-Twitter-5 | MSA | 75.7% | 71.4% | 74.2% | 77.6% | 78.5% | 77.3% | 77.7% | 76.2% | | | NADI | DA | 24.7% | 17.3% | 20.1% | 24.9% | 24.6% | 24.6% | 24.9% | 23.8% | | Poetry | APCD | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% | ### Results (Average) | | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 | | -------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- | | Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA | 82.1% | 75.7% | 80.1% | 83.4% | 83.0% | 83.3% | 83.2% | 82.3% | | | DA | 74.4% | 72.1% | 72.9% | 74.2% | 74.0% | 74.3% | 74.1% | 73.9% | | | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% | | Macro-Average | ALL | 78.7% | 74.7% | 77.1% | 79.2% | 79.0% | 79.2% | 79.1% | 78.6% | <a name="footnote-1">[1]</a>: Variant-wise-average refers to average over a group of tasks in the same language variant. 
## Acknowledgements This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC). ## Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
10,757
[ [ -0.029876708984375, -0.042877197265625, -0.005222320556640625, 0.0225067138671875, -0.0302734375, 0.0110626220703125, -0.00870513916015625, -0.0262298583984375, 0.03436279296875, 0.0249786376953125, -0.03912353515625, -0.058380126953125, -0.06951904296875, 0...
plguillou/t5-base-fr-sum-cnndm
2022-05-07T15:03:50.000Z
[ "transformers", "pytorch", "t5", "text2text-generation", "seq2seq", "summarization", "fr", "dataset:cnn_dailymail", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
summarization
plguillou
null
null
plguillou/t5-base-fr-sum-cnndm
14
1,169
transformers
2022-03-02T23:29:05
---
language: fr
tags:
- pytorch
- t5
- seq2seq
- summarization
datasets: cnn_dailymail
widget:
- text: "Apollo 11 est une mission du programme spatial américain Apollo au cours de laquelle, pour la première fois, des hommes se sont posés sur la Lune, le lundi 21 juillet 1969. L'agence spatiale américaine, la NASA, remplit ainsi l'objectif fixé par le président John F. Kennedy en 1961 de poser un équipage sur la Lune avant la fin de la décennie 1960. Il s'agissait de démontrer la supériorité des États-Unis sur l'Union soviétique qui avait été mise à mal par les succès soviétiques au début de l'ère spatiale dans le contexte de la guerre froide qui oppose alors ces deux pays. Ce défi est lancé alors que la NASA n'a pas encore placé en orbite un seul astronaute. Grâce à une mobilisation de moyens humains et financiers considérables, l'agence spatiale rattrape puis dépasse le programme spatial soviétique."
  example_title: "Apollo 11"
---

# French T5 Abstractive Text Summarization

~~Version 1.0 (I will keep improving the model's performances.)~~

Version 2.0 is here! (with improved performances, of course)

I trained the model on 13x more data than v1.

ROUGE-1: 44.5252

ROUGE-2: 22.652

ROUGE-L: 29.8866

## Model description

This model is a T5 Transformers model (JDBN/t5-base-fr-qg-fquad) that was fine-tuned in French for abstractive text summarization.

## How to use

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("plguillou/t5-base-fr-sum-cnndm")
model = T5ForConditionalGeneration.from_pretrained("plguillou/t5-base-fr-sum-cnndm")
```

To summarize an ARTICLE, simply prefix the input string like this: "summarize: ARTICLE".

## Training data

The base model I used is JDBN/t5-base-fr-qg-fquad (it can perform question generation, question answering and answer extraction).

I used the "t5-base" model from the transformers library to translate the CNN / Daily Mail summarization dataset into French.
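A small helper makes the prefix convention explicit (the helper name is hypothetical; only the `summarize: ` prefix comes from this card):

```python
def build_summarization_prompt(article: str) -> str:
    # The model expects the task prefix "summarize: " before the French text.
    return "summarize: " + article.strip()

prompt = build_summarization_prompt("Apollo 11 est une mission du programme spatial américain Apollo...")

# With the tokenizer and model loaded as shown above (sketch):
# inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
# summary_ids = model.generate(inputs.input_ids, max_length=80, num_beams=4)
# print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
print(prompt[:13])  # summarize: Ap
```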
1,987
[ [ -0.0153961181640625, -0.035919189453125, 0.01959228515625, 0.0352783203125, -0.0357666015625, 0.00445556640625, -0.0162811279296875, -0.01235198974609375, -0.003082275390625, 0.044647216796875, -0.0262603759765625, -0.056610107421875, -0.0633544921875, 0.029...
timm/mobilevitv2_100.cvnets_in1k
2023-04-24T22:24:11.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2206.02680", "license:other", "region:us" ]
image-classification
timm
null
null
timm/mobilevitv2_100.cvnets_in1k
0
1,169
timm
2023-04-24T22:23:59
---
tags:
- image-classification
- timm
library_name: timm
license: other
datasets:
- imagenet-1k
---

# Model card for mobilevitv2_100.cvnets_in1k

A MobileViT-v2 image classification model. Trained on ImageNet-1k by paper authors.

See license details at https://github.com/apple/ml-cvnets/blob/main/LICENSE

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 4.9
  - GMACs: 1.8
  - Activations (M): 16.1
  - Image size: 256 x 256
- **Papers:**
  - Separable Self-attention for Mobile Vision Transformers: https://arxiv.org/abs/2206.02680
- **Original:** https://github.com/apple/ml-cvnets
- **Dataset:** ImageNet-1k

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('mobilevitv2_100.cvnets_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'mobilevitv2_100.cvnets_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 64, 128, 128])
    #  torch.Size([1, 128, 64, 64])
    #  torch.Size([1, 256, 32, 32])
    #  torch.Size([1, 384, 16, 16])
    #  torch.Size([1, 512, 8, 8])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'mobilevitv2_100.cvnets_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 512, 8, 8) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).

## Citation
```bibtex
@article{Mehta2022SeparableSF,
  title={Separable Self-attention for Mobile Vision Transformers},
  author={Sachin Mehta and Mohammad Rastegari},
  journal={ArXiv},
  year={2022},
  volume={abs/2206.02680}
}
```
3,699
[ [ -0.033538818359375, -0.0223541259765625, -0.00395965576171875, 0.01690673828125, -0.0275421142578125, -0.0276336669921875, -0.007198333740234375, -0.0199127197265625, 0.0196685791015625, 0.03411865234375, -0.0360107421875, -0.04913330078125, -0.047698974609375, ...
jin05102518/Astral-7B-Instruct-v0.01
2023-11-03T02:22:08.000Z
[ "transformers", "pytorch", "mistral", "text-generation", "ko", "dataset:beomi/KoAlpaca-v1.1a", "license:cc-by-nc-4.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
jin05102518
null
null
jin05102518/Astral-7B-Instruct-v0.01
2
1,168
transformers
2023-10-13T14:52:17
---
language:
- ko
datasets:
- beomi/KoAlpaca-v1.1a
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-4.0
---

This is a Mistral model trained on English instruct datasets; for Korean, it was full-fine-tuned on the KoAlpaca-v1.1a dataset only.

# **Prompt Example**

```
prompt = "<|system|>\n</s>\n<|user|>\n쿠버네티스에 대해 알려줘</s>"
```

---

# **Generation Example**

```
<s> <|system|>
</s>
<|user|>
쿠버네티스에 대해 설명해 줘</s>
<|assistant|>
쿠버네티스는 컨테이너를 관리하는 오픈소스 시스템입니다. 쿠버네티스는 컨테이너를 실행하고, 컨테이너가 실행되는 환경을 제공합니다. 쿠버네티스는 여러 개의 호스트 컴퓨터를 관리하며, 이러한 호스트 컴퓨터를 통해 컨테이너를 실행할 수 있습니다. 쿠버네티스는 컨테이너를 배치하고 관리하는 기능을 제공합니다. 또한, 쿠버네티스는 서비스 메쉬, 스토리지, 배치 컨트롤 플러그인 등 다양한 기능을 제공합니다. 이러한 기능들은 모두 컨테이너를 관리하고 배치하는데 유용하며, 쿠버네티스는 이러한 기능들을 통해 컨테이너를 효과적으로 관리하고 배치할 수 있는 시스템으로 자리 잡고 있습니다.</s>
<|user|>
위의 설명을 영어로 번역해 줘</s>
<|assistant|>
Kubernetes is an open-source system for container management. Kubernetes manages the containers, and provides the environment for running the containers. Kubernetes manages multiple hosts, and runs the containers on these hosts. Kubernetes is a system for deploying and managing containers. Kubernetes also provides features such as service mesh, storage, and deployment control plugins. All of these features are useful for managing and deploying containers. Kubernetes is a system that effectively manages containers and deploys them efficiently.</s>
```

---

# **Notice**

This model is experimental and was created temporarily for technical evaluation.

---
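The tag layout shown in the prompt example can be produced with a small helper (the helper is hypothetical; the tags themselves come from the example above):

```python
def build_prompt(user_message: str, system_message: str = "") -> str:
    # Mirrors the format above: a <|system|> block, then a <|user|> block,
    # each terminated with </s>.
    return f"<|system|>\n{system_message}</s>\n<|user|>\n{user_message}</s>"

prompt = build_prompt("쿠버네티스에 대해 알려줘")
print(prompt)
```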
1,400
[ [ -0.05548095703125, -0.047149658203125, 0.0340576171875, 0.047576904296875, -0.0302581787109375, -0.0030384063720703125, 0.024871826171875, -0.019927978515625, 0.0276947021484375, 0.03753662109375, -0.0259857177734375, -0.0283355712890625, -0.048370361328125, ...
textattack/albert-base-v2-imdb
2020-07-06T16:34:24.000Z
[ "transformers", "pytorch", "albert", "text-classification", "endpoints_compatible", "has_space", "region:us" ]
text-classification
textattack
null
null
textattack/albert-base-v2-imdb
0
1,167
transformers
2022-03-02T23:29:05
## TextAttack Model Card This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack and the imdb dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 32, a learning rate of 2e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.89236, as measured by the eval set accuracy, found after 3 epochs. For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
609
[ [ -0.022491455078125, -0.020782470703125, 0.0174407958984375, 0.002727508544921875, -0.029510498046875, 0.0138397216796875, 0.004100799560546875, -0.038604736328125, -0.009735107421875, 0.0343017578125, -0.03887939453125, -0.04583740234375, -0.04388427734375, ...
facebook/mask2former-swin-base-ade-semantic
2023-09-11T13:46:21.000Z
[ "transformers", "pytorch", "safetensors", "mask2former", "vision", "image-segmentation", "dataset:coco", "arxiv:2112.01527", "arxiv:2107.06278", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
facebook
null
null
facebook/mask2former-swin-base-ade-semantic
0
1,167
transformers
2023-01-05T12:23:05
---
license: other
tags:
- vision
- image-segmentation
datasets:
- coco
widget:
- src: http://images.cocodataset.org/val2017/000000039769.jpg
  example_title: Cats
- src: http://images.cocodataset.org/val2017/000000039770.jpg
  example_title: Castle
---

# Mask2Former

Mask2Former model trained on ADE20k semantic segmentation (base-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).

Disclaimer: The team releasing Mask2Former did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA, [MaskFormer](https://arxiv.org/abs/2107.06278), both in terms of performance and efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance without introducing additional computation and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/mask2former_architecture.png)

## Intended uses & limitations

You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other fine-tuned versions on a task that interests you.
### How to use Here is how to use this model: ```python import requests import torch from PIL import Image from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation # load Mask2Former fine-tuned on ADE20k semantic segmentation processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-base-ade-semantic") model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-base-ade-semantic") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = processor(images=image, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) # model predicts class_queries_logits of shape `(batch_size, num_queries)` # and masks_queries_logits of shape `(batch_size, num_queries, height, width)` class_queries_logits = outputs.class_queries_logits masks_queries_logits = outputs.masks_queries_logits # you can pass them to processor for postprocessing predicted_semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0] # we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs) ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former).
3,160
[ [ -0.041839599609375, -0.051055908203125, 0.023193359375, 0.01690673828125, -0.0189666748046875, -0.02264404296875, 0.01029205322265625, -0.060302734375, 0.01264190673828125, 0.04638671875, -0.058197021484375, -0.03253173828125, -0.06488037109375, -0.027008056...
Deci/DeciLM-6b-instruct
2023-10-01T08:10:56.000Z
[ "transformers", "safetensors", "text-generation", "Deci AI", "DeciLM", "Instruction", "custom_code", "en", "dataset:cerebras/SlimPajama-627B", "dataset:Open-Orca/OpenOrca", "license:llama2", "license:other", "model-index", "has_space", "region:us" ]
text-generation
Deci
null
null
Deci/DeciLM-6b-instruct
125
1,167
transformers
2023-09-13T07:21:13
--- license: [llama2, other] datasets: - cerebras/SlimPajama-627B - Open-Orca/OpenOrca language: - en tags: - Deci AI - DeciLM - Instruction model-index: - name: DeciLM 6B results: - task: type: text-generation dataset: type: ai2/arc name: ai2_arc metrics: - name: ARC Challenge type: ARC Challenge value: 43.43 verified: false - task: type: text-generation dataset: type: ai2/arc name: ai2_arc metrics: - name: ARC Easy type: ARC Easy value: 70.58 verified: false - task: type: text-generation dataset: type: boolq name: boolq metrics: - name: BoolQ type: BoolQ value: 77.34 verified: false - task: type: text-generation dataset: type: hellaswag name: hellaswag metrics: - name: HellaSwag type: HellaSwag value: 74.57 verified: false - task: type: text-generation dataset: type: LAMBDA name: OpenAI LAMBDA metrics: - name: LAMBDA type: LAMBDA value: 70.1 verified: false - task: type: text-generation dataset: type: OpenBookQA name: openbookqa metrics: - name: OpenBookQA type: OpenBookQA value: 33 verified: false - task: type: text-generation dataset: type: PIQA name: piqa metrics: - name: PIQA type: PIQA value: 77.52 verified: false - task: type: text-generation dataset: type: truthful_qa name: truthful_qa metrics: - name: TruthfulQA type: TruthfulQA value: 43.89 verified: false - task: type: text-generation dataset: type: winogrande name: winogrande metrics: - name: Winogrande type: Winogrande value: 67.64 verified: false --- # DeciLM 6B-Instruct DeciLM 6B-Instruct is a model for short-form instruction following. It is built by LoRA fine-tuning [DeciLM 6B](https://huggingface.co/Deci/DeciLM-6b) on a subset of the [OpenOrca dataset](https://huggingface.co/datasets/Open-Orca/OpenOrca). - **Developed by:** Deci - **Model type:** DeciLM is an auto-regressive language model using an optimized transformer decoder architecture that includes variable Grouped-Query Attention. 
- **Language(s) (NLP):** English
- **License:** [Llama 2 Community License Agreement](https://huggingface.co/Deci/DeciLM-6b-instruct/blob/main/LICENSE.md) with an extension by Deci regarding hosting service providers.

### Model Sources

- **Paper:** [DeciLM 6B Technical Blog](https://deci.ai/blog/decilm-15-times-faster-than-llama2-nas-generated-llm-with-variable-gqa/)
- **Demo:** [DeciLM 6B-Instruct Demo](https://huggingface.co/spaces/Deci/DeciLM-6b-instruct)
- **Notebook:** [DeciLM 6B-Instruct Notebook](https://bit.ly/decilm-instruct-nb)

## Uses

The model is intended for commercial and research use in English and can be fine-tuned for use in other languages.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
# pip install -q transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "Deci/DeciLM-6b-instruct"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.bfloat16, trust_remote_code=True).to(device)

inputs = tokenizer.encode("How do I make french toast? Think through it step by step", return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=100, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0]))
```

## Training Details

DeciLM 6B was trained on the SlimPajama dataset, leveraging advanced proprietary methodologies that allow for fast training. DeciLM 6B was further fine-tuned on a subset of the OpenOrca dataset, giving rise to DeciLM 6B-Instruct.

## Evaluation

Below are DeciLM 6B-Instruct's evaluation results.
| Average | ARC Challenge* | ARC Easy* | BoolQ | HellaSwag* | LAMBDA OpenAI | OpenBookQA | PIQA | TruthfulQA | Winogrande |
|:--------|:---------------|:----------|:------|:-----------|:--------------|:-----------|:------|:-----------|:-----------|
| 62.01   | 44.43          | 70.58     | 77.34 | 74.57      | 70.1          | 33         | 77.52 | 43.89      | 67.64      |

Accuracy-norm score*

## Runtime Benchmarks

| Inference Tool/Hardware | A10 (tokens/sec) |
|:------------------------|:-----------------|
| PyTorch                 | 652.49           |
| Infery LLM              | 2,029.6          |

- Throughput (tokens/sec)
- Measured with optimal batch sizes: PyTorch BS 64, Infery LLM BS 128
- In order to replicate the results of the PyTorch benchmark, use this [code example](https://huggingface.co/Deci/DeciLM-6b-instruct/blob/main/hf_benchmark_example.py)

## Disclaimer

DeciLM 6B-Instruct has not been aligned for safety or trained using RLHF.

## How to Cite

Please cite this model using this format.

```bibtex
@misc{DeciFoundationModels,
  title = {DeciLM 6B Instruct},
  author = {DeciAI Research Team},
  year = {2023},
  url = {https://huggingface.co/Deci/DeciLM-6b-instruct},
}
```
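As a footnote to the architecture notes above: the grouped-query attention that DeciLM's decoder varies per layer can be sketched in a few lines. This is a framework-agnostic NumPy illustration with assumed shapes and group counts, not Deci's implementation — each group of query heads shares a single key/value head.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def grouped_query_attention(q, k, v, n_kv_heads):
    """q: (n_heads, seq, d); k, v: (n_kv_heads, seq, d). Shapes are illustrative."""
    n_heads, seq, d = q.shape
    group = n_heads // n_kv_heads
    # Broadcast each K/V head to the group of query heads that shares it
    k = np.repeat(k, group, axis=0)
    v = np.repeat(v, group, axis=0)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)
    return softmax(scores) @ v
```

With `n_kv_heads == n_heads` this reduces to standard multi-head attention; smaller `n_kv_heads` shrinks the KV cache, which is the throughput lever behind GQA.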
5,222
[ [ -0.027496337890625, -0.048583984375, 0.0161590576171875, 0.003566741943359375, -0.01373291015625, -0.0094146728515625, -0.0077667236328125, -0.038787841796875, 0.004680633544921875, 0.034271240234375, -0.044158935546875, -0.054656982421875, -0.04241943359375, ...
stablediffusionapi/animeyouv3-sdxl
2023-10-01T11:42:03.000Z
[ "diffusers", "stablediffusionapi.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
stablediffusionapi
null
null
stablediffusionapi/animeyouv3-sdxl
1
1,166
diffusers
2023-09-30T20:44:02
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---

# AnimeYouV3-SDXL API Inference

![generated from stablediffusionapi.com](https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/generations/19243293901696110734.png)

## Get API Key

Get an API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment needed. Replace the key in the code below and change **model_id** to "animeyouv3-sdxl".

Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs)

Try the model for free: [Generate Images](https://stablediffusionapi.com/models/animeyouv3-sdxl)

Model link: [View model](https://stablediffusionapi.com/models/animeyouv3-sdxl)

Credits: [View credits](https://civitai.com/?query=AnimeYouV3-SDXL)

View all models: [View Models](https://stablediffusionapi.com/models)

```python
import requests
import json

url = "https://stablediffusionapi.com/api/v4/dreambooth"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "animeyouv3-sdxl",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```

> Use this coupon code to get 25% off **DMGG0RBN**
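For repeated calls, the request body from the example can be assembled by a small helper. This is a sketch for convenience only: `build_payload`, its defaults, and which fields it keeps are assumptions layered on the example, not part of any official client.

```python
API_URL = "https://stablediffusionapi.com/api/v4/dreambooth"

def build_payload(api_key, prompt, model_id="animeyouv3-sdxl", **overrides):
    """Assemble a request body like the example above; keyword overrides win."""
    payload = {
        "key": api_key,
        "model_id": model_id,
        "prompt": prompt,
        "width": "512",
        "height": "512",
        "samples": "1",
        "num_inference_steps": "30",
        "guidance_scale": 7.5,
        "safety_checker": "no",
        "enhance_prompt": "yes",
    }
    payload.update(overrides)
    return payload

# Send it exactly as in the example:
# requests.post(API_URL, headers={"Content-Type": "application/json"},
#               data=json.dumps(build_payload("your_api_key", "a prompt")))
```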
2,482
[ [ -0.03399658203125, -0.049835205078125, 0.043670654296875, 0.019439697265625, -0.037384033203125, 0.0004892349243164062, 0.031585693359375, -0.032989501953125, 0.036285400390625, 0.048065185546875, -0.06427001953125, -0.061614990234375, -0.0261688232421875, -...
cointegrated/rut5-base-absum
2023-03-17T10:21:13.000Z
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "russian", "summarization", "ru", "dataset:IlyaGusev/gazeta", "dataset:csebuetnlp/xlsum", "dataset:mlsum", "dataset:wiki_lingua", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-infe...
summarization
cointegrated
null
null
cointegrated/rut5-base-absum
11
1,164
transformers
2022-03-02T23:29:05
---
language: ["ru"]
tags:
- russian
- summarization
datasets:
- IlyaGusev/gazeta
- csebuetnlp/xlsum
- mlsum
- wiki_lingua
license: mit
widget:
- text: "Высота башни составляет 324 метра (1063 фута), примерно такая же высота, как у 81-этажного здания, и самое высокое сооружение в Париже. Его основание квадратно, размером 125 метров (410 футов) с любой стороны. Во время строительства Эйфелева башня превзошла монумент Вашингтона, став самым высоким искусственным сооружением в мире, и этот титул она удерживала в течение 41 года до завершения строительство здания Крайслер в Нью-Йорке в 1930 году. Это первое сооружение которое достигло высоты 300 метров. Из-за добавления вещательной антенны на вершине башни в 1957 году она сейчас выше здания Крайслер на 5,2 метра (17 футов). За исключением передатчиков, Эйфелева башня является второй самой высокой отдельно стоящей структурой во Франции после виадука Мийо."
---

This is a model for abstractive Russian summarization, based on [cointegrated/rut5-base-multitask](https://huggingface.co/cointegrated/rut5-base-multitask) and fine-tuned on 4 datasets.

It can be used as follows:

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

MODEL_NAME = 'cointegrated/rut5-base-absum'
model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)
tokenizer = T5Tokenizer.from_pretrained(MODEL_NAME)
model.cuda();
model.eval();


def summarize(
    text, n_words=None, compression=None,
    max_length=1000, num_beams=3, do_sample=False, repetition_penalty=10.0,
    **kwargs
):
    """
    Summarize the text
    The following parameters are mutually exclusive:
    - n_words (int) is an approximate number of words to generate.
    - compression (float) is an approximate length ratio of summary and original text.
    """
    if n_words:
        text = '[{}] '.format(n_words) + text
    elif compression:
        text = '[{0:.1g}] '.format(compression) + text
    x = tokenizer(text, return_tensors='pt', padding=True).to(model.device)
    with torch.inference_mode():
        out = model.generate(
            **x, max_length=max_length, num_beams=num_beams,
            do_sample=do_sample, repetition_penalty=repetition_penalty,
            **kwargs
        )
    return tokenizer.decode(out[0], skip_special_tokens=True)


text = """Высота башни составляет 324 метра (1063 фута), примерно такая же высота, как у 81-этажного здания, и самое высокое сооружение в Париже. Его основание квадратно, размером 125 метров (410 футов) с любой стороны. Во время строительства Эйфелева башня превзошла монумент Вашингтона, став самым высоким искусственным сооружением в мире, и этот титул она удерживала в течение 41 года до завершения строительство здания Крайслер в Нью-Йорке в 1930 году. Это первое сооружение которое достигло высоты 300 метров. Из-за добавления вещательной антенны на вершине башни в 1957 году она сейчас выше здания Крайслер на 5,2 метра (17 футов). За исключением передатчиков, Эйфелева башня является второй самой высокой отдельно стоящей структурой во Франции после виадука Мийо."""

print(summarize(text))
# Эйфелева башня достигла высоты 300 метров.

print(summarize(text, n_words=10))
# Французская Эйфелева башня достигла высоты 300 метров.
```
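The bracketed control prefix that `summarize` prepends to steer output length can be isolated in a small helper for testing. This sketch just mirrors the prefix logic from the code above; the model only ever sees the resulting `[...] ` string prepended to the input text.

```python
def control_prefix(n_words=None, compression=None):
    """Build the length-control prefix used by summarize(); '' means no control."""
    if n_words:
        return '[{}] '.format(n_words)      # e.g. '[10] ' asks for ~10 words
    if compression:
        return '[{0:.1g}] '.format(compression)  # e.g. '[0.5] ' asks for ~50% length
    return ''
```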
3,264
[ [ -0.02783203125, -0.03485107421875, 0.0180206298828125, 0.0226287841796875, -0.0357666015625, -0.0016794204711914062, -0.018585205078125, -0.01070404052734375, 0.029815673828125, 0.006259918212890625, -0.03118896484375, -0.05841064453125, -0.03863525390625, 0...
AJCEMCA/my-pet-dog-axr
2023-08-11T16:46:27.000Z
[ "diffusers", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
AJCEMCA
null
null
AJCEMCA/my-pet-dog-axr
0
1,164
diffusers
2023-08-11T16:42:35
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---

### My-Pet-Dog-AXR Dreambooth model trained by AJCEMCA following the "Build your own Gen AI model" session by NxtWave.

Project Submission Code: AJCE130

Sample pictures of this concept:

![0](https://huggingface.co/AJCEMCA/my-pet-dog-axr/resolve/main/sample_images/p3.jpg)
387
[ [ -0.06646728515625, -0.0171051025390625, 0.02655029296875, 0.00550079345703125, -0.00934600830078125, 0.031829833984375, 0.035064697265625, -0.033966064453125, 0.0341796875, 0.026947021484375, -0.03533935546875, -0.022247314453125, -0.0162506103515625, 0.0086...