Dataset columns (dtype; min/max length or value):

| column | dtype | min | max |
|----------------|-------------------|-----|--------|
| modelId | string (lengths) | 4 | 111 |
| lastModified | string (lengths) | 24 | 24 |
| tags | list | – | – |
| pipeline_tag | string (lengths) | 5 | 30 |
| author | string (lengths) | 2 | 34 |
| config | null | – | – |
| securityStatus | null | – | – |
| id | string (lengths) | 4 | 111 |
| likes | int64 | 0 | 9.53k |
| downloads | int64 | 2 | 73.6M |
| library_name | string (lengths) | 2 | 84 |
| created | timestamp[us] | – | – |
| card | string (lengths) | 101 | 901k |
| card_len | int64 | 101 | 901k |
| embeddings | list | – | – |
stablediffusionapi/sdxlnijise
2023-09-25T13:16:34.000Z
[ "diffusers", "stablediffusionapi.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
stablediffusionapi
null
null
stablediffusionapi/sdxlnijise
2
431
diffusers
2023-09-25T13:13:44
--- license: creativeml-openrail-m tags: - stablediffusionapi.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true --- # SDXL_Niji_SE API Inference ![generated from stablediffusionapi.com](https://cdn2.stablediffusionapi.com/generations/4237057731695305521.png) ## Get API Key Get an API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment needed. Replace the key in the code below and change **model_id** to "sdxlnijise". Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs) Try the model for free: [Generate Images](https://stablediffusionapi.com/models/sdxlnijise) Model link: [View model](https://stablediffusionapi.com/models/sdxlnijise) Credits: [View credits](https://civitai.com/?query=SDXL_Niji_SE) View all models: [View Models](https://stablediffusionapi.com/models) import requests import json url = "https://stablediffusionapi.com/api/v4/dreambooth" payload = json.dumps({ "key": "your_api_key", "model_id": "sdxlnijise", "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "embeddings_model_id", "lora": "lora_model_id", "webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(response.text) > Use this coupon code to get 25% off **DMGG0RBN**
2,439
[ [ -0.032928466796875, -0.050811767578125, 0.040374755859375, 0.01910400390625, -0.040252685546875, 0.005523681640625, 0.024383544921875, -0.033233642578125, 0.04400634765625, 0.045440673828125, -0.0628662109375, -0.061737060546875, -0.0274200439453125, -0.0046...
NanaEilish/t5_conll_ontonotes_en12
2023-10-26T09:03:34.000Z
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "en", "dataset:conll2012_ontonotesv5", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
NanaEilish
null
null
NanaEilish/t5_conll_ontonotes_en12
0
431
transformers
2023-10-25T12:10:11
--- datasets: - conll2012_ontonotesv5 language: - en pipeline_tag: text2text-generation --- Given a text, the model's output format is: `"{ENT_TYPE}:{span}; {ENT_TYPE}:{span}..."`\ For training speed, we only use the first 10,000 sentences (not documents) from the train set and 1,000 sentences from the validation set;\ we save the model when its validation loss (NLL) reaches the minimum.\ The model can be used as a pretrained backbone for fine-tuning on downstream NER tasks.
452
[ [ -0.03790283203125, -0.0455322265625, 0.008453369140625, 0.0341796875, -0.0140838623046875, -0.0345458984375, -0.00722503662109375, 0.003864288330078125, 0.0221099853515625, 0.050537109375, -0.0509033203125, -0.0178375244140625, -0.0457763671875, 0.0319519042...
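The t5_conll_ontonotes_en12 card above specifies the output format but no invocation. A minimal usage sketch via the standard `transformers` seq2seq API; the example sentence and generation settings are assumptions, not from the card:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "NanaEilish/t5_conll_ontonotes_en12"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical input; per the card, the decoded string should look like
# "{ENT_TYPE}:{span}; {ENT_TYPE}:{span}..."
text = "Barack Obama visited Paris in 2009."
input_ids = tokenizer(text, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```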
Helsinki-NLP/opus-mt-en-vi
2023-08-16T11:31:40.000Z
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "vi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
Helsinki-NLP
null
null
Helsinki-NLP/opus-mt-en-vi
7
430
transformers
2022-03-02T23:29:04
--- language: - en - vi tags: - translation license: apache-2.0 --- ### eng-vie * source group: English * target group: Vietnamese * OPUS readme: [eng-vie](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-vie/README.md) * model: transformer-align * source language(s): eng * target language(s): vie vie_Hani * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-vie/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-vie/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-vie/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng.vie | 37.2 | 0.542 | ### System Info: - hf_name: eng-vie - source_languages: eng - target_languages: vie - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-vie/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'vi'] - src_constituents: {'eng'} - tgt_constituents: {'vie', 'vie_Hani'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-vie/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-vie/opus-2020-06-17.test.txt - src_alpha3: eng - tgt_alpha3: vie - short_pair: en-vi - chrF2_score: 0.542 - bleu: 37.2 - brevity_penalty: 0.973 - ref_len: 24427.0 - src_name: English - tgt_name: Vietnamese - train_date: 2020-06-17 - src_alpha2: en - tgt_alpha2: vi - prefer_old: False - long_pair: eng-vie - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
2,188
[ [ -0.0247344970703125, -0.04803466796875, 0.02154541015625, 0.0293731689453125, -0.0275421142578125, -0.0172271728515625, -0.0218963623046875, -0.026123046875, 0.020355224609375, 0.0252227783203125, -0.041351318359375, -0.058685302734375, -0.038543701171875, 0...
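The opus-mt-en-vi card documents the required sentence-initial `>>id<<` target-language token but ships no usage snippet. A minimal sketch with the Marian classes in `transformers`; the input sentence is an assumption:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-vi"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# ">>vie<<" selects standard Vietnamese; "vie_Hani" is the other target
# constituent listed in the card.
src = [">>vie<< The weather is nice today."]
batch = tokenizer(src, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```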
firqaaa/indo-sentence-bert-base
2023-07-15T15:29:01.000Z
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "id", "arxiv:1908.10084", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
sentence-similarity
firqaaa
null
null
firqaaa/indo-sentence-bert-base
6
430
sentence-transformers
2022-09-19T18:01:57
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers license: apache-2.0 language: - id library_name: sentence-transformers --- # indo-sentence-bert-base This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["Ibukota Perancis adalah Paris", "Menara Eiffel terletak di Paris, Perancis", "Pizza adalah makanan khas Italia", "Saya kuliah di Carnegie Mellon University"] model = SentenceTransformer('firqaaa/indo-sentence-bert-base') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch # Mean pooling - take the attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] # First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ["Ibukota Perancis adalah Paris", "Menara Eiffel terletak di Paris, Perancis", "Pizza adalah makanan khas Italia", "Saya kuliah di Carnegie Mellon University"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('firqaaa/indo-sentence-bert-base') model = AutoModel.from_pretrained('firqaaa/indo-sentence-bert-base') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 19644 with parameters: ``` {'batch_size': 16} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 5, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 9930, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information --> `@inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", }` `@misc{arasyi2022, author = {Arasyi, Firqa}, title = {Indo-Sentence-BERT: Indonesian Sentence BERT for Semantic Similarity}, year = {2022}, url = {https://huggingface.co/firqaaa/indo-sentence-bert-base} }`
4,781
[ [ -0.022125244140625, -0.069580078125, 0.0189056396484375, 0.0299835205078125, -0.0283660888671875, -0.03204345703125, -0.028533935546875, -0.0026035308837890625, 0.0244903564453125, 0.023651123046875, -0.045623779296875, -0.036834716796875, -0.04766845703125, ...
matgu23/abtrl
2023-07-15T03:09:52.000Z
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
matgu23
null
null
matgu23/abtrl
0
430
diffusers
2023-07-15T03:02:33
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### abtrl Dreambooth model trained by matgu23 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
494
[ [ -0.0291290283203125, -0.068115234375, 0.033538818359375, 0.03289794921875, -0.0260772705078125, 0.03179931640625, 0.01385498046875, -0.021209716796875, 0.03326416015625, 0.0048980712890625, -0.0182952880859375, -0.0225982666015625, -0.0352783203125, -0.01184...
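The abtrl card points to Colab notebooks but includes no inline code. A minimal `diffusers` sketch, assuming a CUDA device and fp16 weights; the prompt is a placeholder, since the card does not state the trained instance prompt:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth checkpoint from the Hub (fp16 on GPU is an assumption).
pipe = StableDiffusionPipeline.from_pretrained("matgu23/abtrl", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Placeholder prompt: the card does not document the concept token.
image = pipe("a photo of abtrl").images[0]
image.save("abtrl.png")
```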
BDAD/segformer-b3-full
2023-08-18T14:14:00.000Z
[ "transformers", "tf", "segformer", "generated_from_keras_callback", "endpoints_compatible", "region:us" ]
null
BDAD
null
null
BDAD/segformer-b3-full
0
430
transformers
2023-08-18T14:13:20
--- tags: - generated_from_keras_callback model-index: - name: segformer-b3-full results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-b3-full This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
870
[ [ -0.04150390625, -0.044921875, 0.036285400390625, 0.005035400390625, -0.0430908203125, -0.0352783203125, 0.003948211669921875, -0.0223236083984375, 0.004039764404296875, 0.04412841796875, -0.04608154296875, -0.0380859375, -0.07373046875, -0.0147857666015625, ...
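The segformer-b3-full card gives no usage code or task details. A hedged sketch, assuming the checkpoint carries a semantic-segmentation head and default SegFormer preprocessing (neither is confirmed by the card; the image URL is an arbitrary example):

```python
import requests
from PIL import Image
from transformers import SegformerImageProcessor, TFSegformerForSemanticSegmentation

processor = SegformerImageProcessor()  # assumption: default SegFormer preprocessing
model = TFSegformerForSemanticSegmentation.from_pretrained("BDAD/segformer-b3-full")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="tf")
outputs = model(**inputs)
logits = outputs.logits  # shape: (batch, num_labels, height/4, width/4)
```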
google/roberta2roberta_L-24_bbc
2023-01-24T16:43:12.000Z
[ "transformers", "pytorch", "encoder-decoder", "text2text-generation", "summarization", "en", "dataset:xsum", "arxiv:1907.12461", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
google
null
null
google/roberta2roberta_L-24_bbc
3
429
transformers
2022-03-02T23:29:05
--- language: en license: apache-2.0 datasets: - xsum tags: - summarization --- # Roberta2Roberta_L-24_bbc EncoderDecoder model The model was introduced in [this paper](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn and first released in [this repository](https://tfhub.dev/google/bertseq2seq/roberta24_bbc/1). The model is an encoder-decoder model that was initialized on the `roberta-large` checkpoints for both the encoder and decoder and fine-tuned on extreme summarization on the BBC XSum dataset, which is linked above. Disclaimer: The model card has been written by the Hugging Face team. ## How to use You can use this model for extreme summarization, *e.g.* ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_bbc") model = AutoModelForSeq2SeqLM.from_pretrained("google/roberta2roberta_L-24_bbc") article = """The problem is affecting people using the older versions of the PlayStation 3, called the "Fat" model.The problem isn't affecting the newer PS3 Slim systems that have been on sale since September last year.Sony have also said they are aiming to have the problem fixed shortly but is advising some users to avoid using their console for the time being."We hope to resolve this problem within the next 24 hours," a statement reads. "In the meantime, if you have a model other than the new slim PS3, we advise that you do not use your PS3 system, as doing so may result in errors in some functionality, such as recording obtained trophies, and not being able to restore certain data."We believe we have identified that this problem is being caused by a bug in the clock functionality incorporated in the system."The PlayStation Network is used by millions of people around the world.It allows users to play their friends at games like Fifa over the internet and also do things like download software or visit online stores.""" input_ids = tokenizer(article, return_tensors="pt").input_ids output_ids = model.generate(input_ids)[0] print(tokenizer.decode(output_ids, skip_special_tokens=True)) # should output # Some Sony PlayStation gamers are being advised to stay away from the network because of a problem with the PlayStation 3 network. ```
2,302
[ [ -0.007755279541015625, -0.03936767578125, 0.016876220703125, 0.032440185546875, -0.039703369140625, -0.0134735107421875, 0.0117950439453125, -0.04443359375, 0.021270751953125, 0.0297088623046875, -0.04412841796875, -0.006793975830078125, -0.059478759765625, ...
stanfordnlp/stanza-zh-hans
2023-10-02T23:49:09.000Z
[ "stanza", "token-classification", "zh", "license:apache-2.0", "region:us" ]
token-classification
stanfordnlp
null
null
stanfordnlp/stanza-zh-hans
4
429
stanza
2022-03-02T23:29:05
--- tags: - stanza - token-classification library_name: stanza language: zh license: apache-2.0 --- # Stanza model for Simplified_Chinese (zh-hans) Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing. Find more about it on [our website](https://stanfordnlp.github.io/stanza) and in our [GitHub repository](https://github.com/stanfordnlp/stanza). This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo. Last updated 2023-10-02 23:48:37.506
696
[ [ -0.01922607421875, -0.048614501953125, 0.0210113525390625, 0.031982421875, -0.04193115234375, -0.0294342041015625, -0.0200347900390625, -0.032012939453125, 0.0154571533203125, 0.03302001953125, -0.05780029296875, -0.0428466796875, -0.0144500732421875, 0.0001...
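The stanza-zh-hans card describes the toolkit but shows no code. A minimal sketch using the `stanza` package itself, which downloads and loads these weights; the processor list and example sentence are assumptions:

```python
import stanza

# One-time download of the Simplified Chinese models, then build a pipeline
# with tokenization and named entity recognition.
stanza.download("zh-hans")
nlp = stanza.Pipeline("zh-hans", processors="tokenize,ner")

doc = nlp("斯坦福大学位于加利福尼亚州。")  # "Stanford University is located in California."
for ent in doc.ents:
    print(ent.text, ent.type)
```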
jurabi/bert-ner-japanese
2022-09-26T12:13:44.000Z
[ "transformers", "pytorch", "tf", "bert", "token-classification", "ja", "license:cc-by-sa-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
jurabi
null
null
jurabi/bert-ner-japanese
6
429
transformers
2022-09-26T07:46:38
--- language: - ja widget: - text: 株式会社Jurabiは、東京都台東区に本社を置くIT企業である。 license: cc-by-sa-3.0 --- # BERT model for Japanese named entity recognition This model extracts named entities from Japanese sentences using [BertForTokenClassification](https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertForTokenClassification). The following eight entity types are extracted: - Person names - Corporation names (corporations and corporation-like organizations) - Political organization names (political organizations, political parties, government organizations, administrative organizations, military forces, international organizations) - Other organization names (sports organizations, performance organizations, others) - Place names - Facility names - Product names (goods, TV programs, films, books, songs, brands, etc.) - Event names ## Usage Install the required libraries (transformers, unidic_lite, fugashi) with pip or similar, then simply run the code below. ```python from transformers import BertJapaneseTokenizer, BertForTokenClassification from transformers import pipeline model = BertForTokenClassification.from_pretrained("jurabi/bert-ner-japanese") tokenizer = BertJapaneseTokenizer.from_pretrained("jurabi/bert-ner-japanese") ner_pipeline = pipeline('ner', model=model, tokenizer=tokenizer) ner_pipeline("株式会社Jurabiは、東京都台東区に本社を置くIT企業である。") ``` ## Pretrained model The Japanese BERT model released by Tohoku University's Inui Lab ([cl-tohoku/bert-base-japanese-v2](https://huggingface.co/cl-tohoku/bert-base-japanese-v2)) ## Training data A Japanese named entity recognition dataset built from Wikipedia and released by Stockmark Inc. ([stockmarkteam/ner-wikipedia-dataset](https://github.com/stockmarkteam/ner-wikipedia-dataset)) ## Source code The program used for fine-tuning is available at [jurabiinc/bert-ner-japanese](https://github.com/jurabiinc/bert-ner-japanese). ## License [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/)
1,429
[ [ -0.041412353515625, -0.035888671875, 0.00966644287109375, 0.0183258056640625, -0.035064697265625, -0.00015306472778320312, -0.0222015380859375, -0.0310516357421875, 0.032196044921875, 0.023406982421875, -0.05841064453125, -0.035675048828125, -0.06146240234375, ...
hr16/any-ely-wd-ira-olympus-3500
2022-12-10T06:41:05.000Z
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
hr16
null
null
hr16/any-ely-wd-ira-olympus-3500
0
429
diffusers
2022-12-10T06:37:33
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### Model The Dreambooth concept any-ely-wd-ira-olympus-3500 was trained by hr16 with the [Shinja Zero SoTA DreamBooth_Stable_Diffusion](https://colab.research.google.com/drive/1G7qx6M_S1PDDlsWIMdbZXwdZik6sUlEh) notebook <br> Test the concept with the [Shinja Zero no Notebook](https://colab.research.google.com/drive/1Hp1ZIjPbsZKlCtomJVmt2oX7733W44b0) <br> Or test with `diffusers` using the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample images of this concept: WIP
622
[ [ -0.0345458984375, -0.0758056640625, 0.045684814453125, 0.037933349609375, -0.021728515625, 0.0184173583984375, 0.03271484375, -0.019012451171875, 0.048065185546875, 0.01174163818359375, -0.0292816162109375, -0.02716064453125, -0.0215606689453125, -0.02783203...
digiplay/majicMIX_realistic_v5
2023-06-19T19:03:19.000Z
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
digiplay
null
null
digiplay/majicMIX_realistic_v5
0
429
diffusers
2023-06-13T22:17:19
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: https://civitai.com/models/43331?modelVersionId=82446
190
[ [ -0.02276611328125, 0.029632568359375, 0.037628173828125, 0.0267791748046875, -0.032958984375, -0.0158538818359375, 0.042205810546875, -0.01055145263671875, 0.0164337158203125, 0.03265380859375, -0.048553466796875, 0.0010385513305664062, 0.01177978515625, -0....
Debayan990/my-pet-cat-jxl
2023-07-16T21:13:51.000Z
[ "diffusers", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
Debayan990
null
null
Debayan990/my-pet-cat-jxl
0
429
diffusers
2023-07-16T21:01:07
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-Cat-jxl Dreambooth model trained by Debayan990 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: BBIT47 Sample pictures of this concept: ![0](https://huggingface.co/Debayan990/my-pet-cat-jxl/resolve/main/sample_images/00000-2838740840.png) ![1](https://huggingface.co/Debayan990/my-pet-cat-jxl/resolve/main/sample_images/00003-3628577076.png) ![2](https://huggingface.co/Debayan990/my-pet-cat-jxl/resolve/main/sample_images/00001-1217343363.png)
630
[ [ -0.054595947265625, -0.02899169921875, 0.027587890625, 0.0194091796875, -0.019561767578125, 0.04681396484375, 0.032958984375, -0.02886962890625, 0.04559326171875, 0.038543701171875, -0.0396728515625, -0.0279083251953125, -0.01470947265625, 0.0059051513671875...
Taekyoon/llama2-ko-7b-test
2023-09-27T06:50:59.000Z
[ "ko", "en", "license:cc-by-nc-sa-4.0", "has_space", "region:us" ]
null
Taekyoon
null
null
Taekyoon/llama2-ko-7b-test
0
429
null
2023-08-13T08:11:31
--- license: cc-by-nc-sa-4.0 language: - ko - en metrics: - accuracy - f1 --- *The final model will be released at the end of this year in the @Beomi repository.*
157
[ [ -0.016082763671875, -0.0258941650390625, 0.01364898681640625, -0.0009126663208007812, -0.05010986328125, -0.025115966796875, -0.01439666748046875, -0.058380126953125, 0.0167083740234375, 0.06280517578125, -0.061492919921875, 0.0172119140625, -0.032958984375, ...
rufimelo/Legal-BERTimbau-base
2022-10-23T22:07:02.000Z
[ "transformers", "pytorch", "bert", "fill-mask", "pt", "dataset:rufimelo/PortugueseLegalSentences-v0", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
rufimelo
null
null
rufimelo/Legal-BERTimbau-base
2
428
transformers
2022-07-29T16:11:40
--- language: - pt thumbnail: "Portuguese BERT for the Legal Domain" tags: - bert - pytorch datasets: - rufimelo/PortugueseLegalSentences-v0 license: "mit" widget: - text: "O advogado apresentou [MASK] ao juíz." --- # Legal_BERTimbau ## Introduction Legal_BERTimbau Base is a fine-tuned BERT model based on [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased) Base. "BERTimbau Base is a pretrained BERT model for Brazilian Portuguese that achieves state-of-the-art performances on three downstream NLP tasks: Named Entity Recognition, Sentence Textual Similarity and Recognizing Textual Entailment. It is available in two sizes: Base and Large. For further information or requests, please go to [BERTimbau repository](https://github.com/neuralmind-ai/portuguese-bert/)." The performance of language models can change drastically when there is a domain shift between training and test data. In order to create a Portuguese language model adapted to the legal domain, the original BERTimbau model was submitted to a fine-tuning stage in which one "pretraining" epoch was performed over 30,000 Portuguese legal documents available online. ## Available models | Model | Arch. | #Layers | #Params | | ---------------------------------------- | ---------- | ------- | ------- | | `rufimelo/Legal-BERTimbau-base` | BERT-Base | 12 | 110M | | `rufimelo/Legal-BERTimbau-large` | BERT-Large | 24 | 335M | ## Usage ```python from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("rufimelo/Legal-BERTimbau-base") model = AutoModelForMaskedLM.from_pretrained("rufimelo/Legal-BERTimbau-base") ``` ### Masked language modeling prediction example ```python from transformers import pipeline from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("rufimelo/Legal-BERTimbau-base") model = AutoModelForMaskedLM.from_pretrained("rufimelo/Legal-BERTimbau-base") pipe = pipeline('fill-mask', model=model, tokenizer=tokenizer) pipe('O advogado apresentou [MASK] para o juíz') # [{'score': 0.5034703612327576, #'token': 8190, #'token_str': 'recurso', #'sequence': 'O advogado apresentou recurso para o juíz'}, #{'score': 0.07347951829433441, #'token': 21973, #'token_str': 'petição', #'sequence': 'O advogado apresentou petição para o juíz'}, #{'score': 0.05165359005331993, #'token': 4299, #'token_str': 'resposta', #'sequence': 'O advogado apresentou resposta para o juíz'}, #{'score': 0.04611917585134506, #'token': 5265, #'token_str': 'exposição', #'sequence': 'O advogado apresentou exposição para o juíz'}, #{'score': 0.04068068787455559, #'token': 19737, 'token_str': #'alegações', #'sequence': 'O advogado apresentou alegações para o juíz'}] ``` ### For BERT embeddings ```python import torch from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('rufimelo/Legal-BERTimbau-base') model = AutoModel.from_pretrained('rufimelo/Legal-BERTimbau-base') input_ids = tokenizer.encode('O advogado apresentou recurso para o juíz', return_tensors='pt') with torch.no_grad(): outs = model(input_ids) encoded = outs[0][0, 1:-1] #tensor([[ 0.0328, -0.4292, -0.6230, ..., -0.3048, -0.5674, 0.0157], #[-0.3569, 0.3326, 0.7013, ..., -0.7778, 0.2646, 1.1310], #[ 0.3169, 0.4333, 0.2026, ..., 1.0517, -0.1951, 0.7050], #..., #[-0.3648, -0.8137, -0.4764, ..., -0.2725, -0.4879, 0.6264], #[-0.2264, -0.1821, -0.3011, ..., -0.5428, 0.1429, 0.0509], #[-1.4617, 0.6281, -0.0625, ..., -1.2774, -0.4491, 0.3131]]) ``` ## Citation If you use this work, please cite BERTimbau's work: ```bibtex @inproceedings{souza2020bertimbau, author = {F{\'a}bio Souza and Rodrigo Nogueira and Roberto Lotufo}, title = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese}, booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)}, year = {2020} } ```
4,012
[ [ -0.0308837890625, -0.04083251953125, 0.0108184814453125, 0.0236053466796875, -0.0194549560546875, -0.009368896484375, -0.02618408203125, -0.0191802978515625, 0.0272979736328125, 0.02398681640625, -0.042083740234375, -0.050445556640625, -0.052276611328125, -0...
sd-dreambooth-library/this-youtuber-does-not-exist
2023-07-13T03:12:53.000Z
[ "diffusers", "tensorboard", "text-to-image", "en", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
sd-dreambooth-library
null
null
sd-dreambooth-library/this-youtuber-does-not-exist
2
428
diffusers
2023-02-03T21:50:06
--- license: creativeml-openrail-m tags: - text-to-image widget: - text: tyznedsk1 language: - en library_name: diffusers pipeline_tag: text-to-image --- ### This Youtuber Does Not Exist Dreambooth model trained with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v1-5 base model. You can run your new concept via `diffusers` using the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! WELCOME TO THE INTERNET: # THIS YOUTUBER DOES NOT EXIST # NOR DO YOU # RED , PINK OR BLUE OR GREEN OR YELLOW M&M PLS tyznedsk1 (use that on your prompt)
1,411
[ [ -0.045684814453125, -0.051177978515625, 0.0202178955078125, 0.04010009765625, -0.034271240234375, 0.0279998779296875, 0.0205535888671875, -0.00940704345703125, 0.059478759765625, 0.0164947509765625, -0.06500244140625, -0.007740020751953125, -0.050689697265625, ...
timm/eva02_tiny_patch14_336.mim_in22k_ft_in1k
2023-03-31T05:47:25.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-22k", "arxiv:2303.11331", "arxiv:2303.15389", "license:mit", "region:us" ]
image-classification
timm
null
null
timm/eva02_tiny_patch14_336.mim_in22k_ft_in1k
1
428
timm
2023-03-31T04:56:19
--- tags: - image-classification - timm library_tag: timm license: mit datasets: - imagenet-1k - imagenet-22k --- # Model card for eva02_tiny_patch14_336.mim_in22k_ft_in1k An EVA02 image classification model. Pretrained on ImageNet-22k with masked image modeling (using EVA-CLIP as a MIM teacher) and fine-tuned on ImageNet-1k by paper authors. EVA-02 models are vision transformers with mean pooling, SwiGLU, Rotary Position Embeddings (ROPE), and extra LN in MLP (for Base & Large). NOTE: `timm` checkpoints are float32 for consistency with other models. Original checkpoints are float16 or bfloat16 in some cases, see originals if that's preferred. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 5.8 - GMACs: 4.7 - Activations (M): 27.2 - Image size: 336 x 336 - **Papers:** - EVA-02: A Visual Representation for Neon Genesis: https://arxiv.org/abs/2303.11331 - EVA-CLIP: Improved Training Techniques for CLIP at Scale: https://arxiv.org/abs/2303.15389 - **Original:** - https://github.com/baaivision/EVA - https://huggingface.co/Yuxin-CV/EVA-02 - **Pretrain Dataset:** ImageNet-22k - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('eva02_tiny_patch14_336.mim_in22k_ft_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'eva02_tiny_patch14_336.mim_in22k_ft_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 577, 192) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). |model |top1 |top5 |param_count|img_size| |-----------------------------------------------|------|------|-----------|--------| |eva02_large_patch14_448.mim_m38m_ft_in22k_in1k |90.054|99.042|305.08 |448 | |eva02_large_patch14_448.mim_in22k_ft_in22k_in1k|89.946|99.01 |305.08 |448 | |eva_giant_patch14_560.m30m_ft_in22k_in1k |89.792|98.992|1014.45 |560 | |eva02_large_patch14_448.mim_in22k_ft_in1k |89.626|98.954|305.08 |448 | |eva02_large_patch14_448.mim_m38m_ft_in1k |89.57 |98.918|305.08 |448 | |eva_giant_patch14_336.m30m_ft_in22k_in1k |89.56 |98.956|1013.01 |336 | |eva_giant_patch14_336.clip_ft_in1k |89.466|98.82 |1013.01 |336 | |eva_large_patch14_336.in22k_ft_in22k_in1k |89.214|98.854|304.53 |336 | |eva_giant_patch14_224.clip_ft_in1k |88.882|98.678|1012.56 |224 | |eva02_base_patch14_448.mim_in22k_ft_in22k_in1k |88.692|98.722|87.12 |448 | |eva_large_patch14_336.in22k_ft_in1k |88.652|98.722|304.53 |336 | |eva_large_patch14_196.in22k_ft_in22k_in1k |88.592|98.656|304.14 |196 | |eva02_base_patch14_448.mim_in22k_ft_in1k |88.23 |98.564|87.12 |448 | |eva_large_patch14_196.in22k_ft_in1k |87.934|98.504|304.14 |196 | |eva02_small_patch14_336.mim_in22k_ft_in1k |85.74 |97.614|22.13 |336 | |eva02_tiny_patch14_336.mim_in22k_ft_in1k |80.658|95.524|5.76 |336 | ## Citation ```bibtex @article{EVA02, title={EVA-02: A Visual Representation for Neon Genesis}, author={Fang, Yuxin and Sun, Quan and Wang, Xinggang and Huang, Tiejun and Wang, Xinlong and Cao, Yue}, journal={arXiv preprint arXiv:2303.11331}, year={2023} } ``` ```bibtex @article{EVA-CLIP, title={EVA-CLIP: Improved Training Techniques for CLIP at Scale}, author={Sun, Quan and Fang, Yuxin and Wu, Ledell and Wang, Xinlong and Cao, Yue}, journal={arXiv preprint arXiv:2303.15389}, year={2023} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
5,394
[ [ -0.044677734375, -0.0297088623046875, 0.013153076171875, 0.00659942626953125, -0.016632080078125, 0.00012576580047607422, -0.00909423828125, -0.033111572265625, 0.039581298828125, 0.0259246826171875, -0.034759521484375, -0.050506591796875, -0.04217529296875, ...
BAAI/Aquila2-34B
2023-10-26T08:33:03.000Z
[ "transformers", "pytorch", "aquila", "text-generation", "custom_code", "license:other", "region:us" ]
text-generation
BAAI
null
null
BAAI/Aquila2-34B
14
428
transformers
2023-10-12T05:17:25
--- license: other --- ![Aquila_logo](./log.jpeg) <h4 align="center"> <p> <b>English</b> | <a href="https://huggingface.co/BAAI/Aquila2-34B/blob/main/README_zh.md">简体中文</a> </p> </h4> <p align="center"> <a href="https://github.com/FlagAI-Open/Aquila2" target="_blank">Github</a> • <a href="https://github.com/FlagAI-Open/Aquila2/blob/main/assets/wechat-qrcode.jpg" target="_blank">WeChat</a> <br> </p> We open-source our **Aquila2** series, now including the base language models **Aquila2-7B** and **Aquila2-34B**, the chat models **AquilaChat2-7B** and **AquilaChat2-34B**, and the long-text chat models **AquilaChat2-7B-16k** and **AquilaChat2-34B-16k**. 2023.10.25 🔥 **Aquila2-34B v1.2** is based on the previous **Aquila2-34B**. Aquila2-34B has achieved a 6.9% improvement in comprehensive evaluations, with MMLU (+12%), TruthfulQA (+14%), CSL (+11%), TNEWS (+12%), OCNLI (+28%), and BUSTM (+18%). Additional details of the Aquila model will be presented in the official technical report. Please stay tuned for updates on official channels. ## Chat Model Performance <br> <p align="center"> <img src="base_metrics.jpeg" width="1024"/> <p> <br> ## Quick Start Aquila2-34B (Chat model) ### 1. Inference ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM from transformers import BitsAndBytesConfig device = torch.device("cuda") model_info = "BAAI/Aquila2-34B" tokenizer = AutoTokenizer.from_pretrained(model_info, trust_remote_code=True) quantization_config=BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16, ) model = AutoModelForCausalLM.from_pretrained(model_info, trust_remote_code=True, # quantization_config=quantization_config, # Uncomment this line for 4bit quantization ) model.eval() model.to(device) text = "请给出10个要到北京旅游的理由。" tokens = tokenizer.encode_plus(text)['input_ids'] tokens = torch.tensor(tokens)[None,].to(device) stop_tokens = ["###", "[UNK]", "</s>"] with torch.no_grad(): out = model.generate(tokens, do_sample=True, max_length=512, eos_token_id=100007, bad_words_ids=[[tokenizer.encode(token)[0] for token in stop_tokens]])[0] out = tokenizer.decode(out.cpu().numpy().tolist()) print(out) ``` ## License The Aquila2 series open-source models are licensed under the [BAAI Aquila Model Licence Agreement](https://huggingface.co/BAAI/Aquila2-34B/blob/main/BAAI-Aquila-Model-License%20-Agreement.pdf)
2,785
[ [ -0.01056671142578125, -0.056488037109375, 0.002777099609375, 0.0301055908203125, -0.03179931640625, 0.0009613037109375, -0.0171661376953125, -0.04132080078125, -0.002872467041015625, 0.0197296142578125, -0.0401611328125, -0.034210205078125, -0.037261962890625, ...
alexandrainst/da-ner-base
2023-09-20T11:56:44.000Z
[ "transformers", "pytorch", "tf", "bert", "token-classification", "da", "dataset:dane", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
alexandrainst
null
null
alexandrainst/da-ner-base
0
427
transformers
2022-03-02T23:29:04
--- language: - da license: apache-2.0 datasets: - dane widget: - text: Jens Peter Hansen kommer fra Danmark --- # BERT fine-tuned for Named Entity Recognition in Danish The model tags tokens (in Danish sentences) with named entity tags (BIO format) [PER, ORG, LOC, MISC]. The pretrained language model used for fine-tuning is the [Danish BERT](https://github.com/certainlyio/nordic_bert) by BotXO. See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/ner.html#bert) for more details. Here is how to use the model: ```python from transformers import BertTokenizer, BertForTokenClassification model = BertForTokenClassification.from_pretrained("alexandrainst/da-ner-base") tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-ner-base") ``` ## Training Data The model has been trained on the [DaNE](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#dane) dataset.
926
[ [ -0.046051025390625, -0.039886474609375, 0.037384033203125, 0.007293701171875, -0.03863525390625, -0.00424957275390625, -0.0301513671875, -0.034912109375, 0.013824462890625, 0.03912353515625, -0.03399658203125, -0.042755126953125, -0.0367431640625, 0.03994750...
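The da-ner-base card loads the model but stops before inference. A hedged follow-on sketch using the `transformers` NER pipeline with the card's widget sentence; the aggregation strategy is a choice, not from the card:

```python
from transformers import BertTokenizer, BertForTokenClassification, pipeline

model = BertForTokenClassification.from_pretrained("alexandrainst/da-ner-base")
tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-ner-base")

# "simple" merges B-/I- word pieces into whole entities for readability.
ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("Jens Peter Hansen kommer fra Danmark"))
```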
Alvenir/bert-punct-restoration-da
2022-03-23T09:05:15.000Z
[ "transformers", "pytorch", "bert", "token-classification", "punctuation restoration", "da", "dataset:custom", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
Alvenir
null
null
Alvenir/bert-punct-restoration-da
3
427
transformers
2022-03-22T17:33:25
--- language: da tags: - bert - punctuation restoration license: apache-2.0 datasets: - custom --- # Bert Punctuation Restoration Danish This model performs the punctuation restoration task in Danish. The method used is sequence classification similar to how NER models are trained. ## Model description TODO ### How to use The model requires some additional inference code, hence we created an awesome little pip package for inference. The inference code is based on the `TokenClassificationPipeline` pipeline from huggingface. First, install the little package by running ``` pip install punctfix ``` Then restoration is as simple as the following snippet: ```python >>> from punctfix import PunctFixer >>> fixer = PunctFixer(language="da") >>> example_text = "mit navn det er rasmus og jeg kommer fra firmaet alvenir det er mig som har trænet denne lækre model" >>> print(fixer.punctuate(example_text)) 'Mit navn det er Rasmus og jeg kommer fra firmaet Alvenir. Det er mig som har trænet denne lækre model.' >>> example_text = "en dag bliver vi sku glade for at vi nu kan sætte punktummer og kommaer i en sætning det fungerer da meget godt ikke" >>> print(fixer.punctuate(example_text)) 'En dag bliver vi sku glade for, at vi nu kan sætte punktummer og kommaer i en sætning. Det fungerer da meget godt, ikke?' ``` ## Training data To Do ## Training procedure To Do ### Preprocessing TODO ## Evaluation results TODO
1,437
[ [ -0.009521484375, -0.06109619140625, 0.037689208984375, 0.0248565673828125, -0.0238037109375, -0.00582122802734375, -0.0123748779296875, -0.0182037353515625, 0.00960540771484375, 0.044525146484375, -0.037628173828125, -0.03228759765625, -0.026824951171875, 0....
HomayounSadri/bert-base-uncased-finetuned-squad-v2
2022-05-05T19:18:27.000Z
[ "transformers", "tf", "tensorboard", "bert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
question-answering
HomayounSadri
null
null
HomayounSadri/bert-base-uncased-finetuned-squad-v2
0
427
transformers
2022-05-05T15:37:29
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: HomayounSadri/bert-base-uncased-finetuned-squad-v2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # HomayounSadri/bert-base-uncased-finetuned-squad-v2 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.8470 - Validation Loss: 1.0267 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11064, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.3879 | 1.0715 | 0 | | 0.8470 | 1.0267 | 1 | ### Framework versions - Transformers 4.18.0 - TensorFlow 2.8.0 - Datasets 2.1.0 - Tokenizers 0.12.1
1,559
[ [ -0.04217529296875, -0.04052734375, 0.01491546630859375, 0.01169586181640625, -0.034454345703125, -0.017181396484375, -0.0183563232421875, -0.0170440673828125, 0.0035991668701171875, 0.0132293701171875, -0.06707763671875, -0.038177490234375, -0.0489501953125, ...
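The bert-base-uncased-finetuned-squad-v2 card lists training details but no usage. A minimal question-answering sketch, assuming the TensorFlow weights advertised by the repo's "tf" tag; the question/context pair is an arbitrary example:

```python
from transformers import pipeline

# framework="tf" because the repo ships TensorFlow weights (see the "tf" tag).
qa = pipeline(
    "question-answering",
    model="HomayounSadri/bert-base-uncased-finetuned-squad-v2",
    framework="tf",
)
result = qa(
    question="Which base model was fine-tuned?",
    context="This model is a fine-tuned version of bert-base-uncased on an unknown dataset.",
)
print(result["answer"], result["score"])
```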
NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps
2023-01-22T23:01:45.000Z
[ "diffusers", "text-to-image", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
NickKolok
null
null
NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps
0
427
diffusers
2023-01-22T22:10:24
--- license: creativeml-openrail-m tags: - text-to-image --- ### Meryl_Stryfe_20230123_2300_6k_2400_steps on Stable Diffusion via Dreambooth #### model by NickKolok This is the Stable Diffusion model fine-tuned on the Meryl_Stryfe_20230123_2300_6k_2400_steps concept, taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **merylstryfetrigun** You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts) Here are the images used for training this concept: ![image 0](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/face23_what____no_way_____by_ajd_262_d3dk752.png) ![image 1](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/waist10_29.png) ![image 2](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/face21_wondering_v2_by_ajd_262_d37r0af.png) ![image 3](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/face11.png) ![image 4](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/face32_not_as_easy_as_thought_by_ajd_262_d4hjpjc.png) ![image 5](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/knees1.png) ![image 6](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/waist8.png) ![image 7](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/face5.png) ![image 8](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/face16.png) ![image 9](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/waist3.png) ![image 10](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/face26_un_huh_by_ajd_262_d4m6jlk.png) ![image 11](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/face7.png) ![image 12](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/waist2.png) ![image 13](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/face6.png) ![image 14](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/face2.png) ![image 15](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/face15.png) ![image 16](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/face10.png) ![image 17](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/face4.png) ![image 18](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/shoulders2.png) ![image 19](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/waist5.png) ![image 20](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/waist9.png) ![image 21](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/face14.png) ![image 22](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/waist7.png) ![image 23](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/face13.png) ![image 24](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/knees5_meryl_and_milly_for_gojiro7_by_ajd_262_d399p4i.png) ![image 25](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/face25_welcoming_bed_by_ajd_262_d6k0igt.png) ![image 26](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/waist1.png) ![image 27](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/face1.png) ![image 28](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/face22_when_we_drink__its_kuroneko__by_ajd_262_d3bdcic.png) ![image 29](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/face3.png) ![image 30](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/face31_playing_dress_up_by_ajd_262_d7o83mn.png) ![image 31](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/face29_tanning_by_ajd_262_d5pj4la.png) ![image 32](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/shoulders1.png) ![image 33](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/waist4.png) ![image 34](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/face24_we_ll_find_him_by_ajd_262_d33a43c.png) ![image 35](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/face27_um__maybe_by_ajd_262_d87z6f3.png) ![image 36](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/face18_meryl_lingerie_by_ajd_262_d4j6vf4.png) ![image 37](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/face8.png) ![image 38](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/face28_things_are_looking__down_by_ajd_262_d5iyga3.png) ![image 39](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/face30_showing_by_ajd_262_d9tec76.png) ![image 40](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/knees3.png) ![image 41](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/waist11_natural_by_ajd_262_d59bs3w_b3.png) ![image 42](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/face12.png) ![image 43](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/knees2.png) ![image 44](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/face19_meryl_x_knives_by_ajd_262_d9lp35g.png) ![image 45](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/face17_meryl_and_milly_for_gojiro7_by_ajd_262_d399p4i.png) ![image 46](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/knees4.png) ![image 47](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/face9.png) ![image 48](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/face20_merylxvash_by_ajd_262_d3bofm7.png) ![image 49](https://huggingface.co/NickKolok/meryl-stryfe-20230123-2300-6k-2400-steps_1/resolve/main/concept_images/waist6.png)
8,125
[ [ -0.0750732421875, -0.0312042236328125, 0.01409912109375, 0.011077880859375, -0.031524658203125, -0.0133514404296875, -0.0071258544921875, -0.0638427734375, 0.08636474609375, 0.0200653076171875, -0.052642822265625, -0.041656494140625, -0.046142578125, 0.01655...
timm/efficientformer_l1.snap_dist_in1k
2023-02-03T21:06:13.000Z
[ "timm", "pytorch", "image-classification", "dataset:imagenet-1k", "arxiv:2206.01191", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/efficientformer_l1.snap_dist_in1k
0
427
timm
2023-02-03T21:06:06
--- tags: - image-classification - timm library_tag: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for efficientformer_l1.snap_dist_in1k An EfficientFormer image classification model. Pretrained with distillation on ImageNet-1k. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 12.3 - GMACs: 1.3 - Activations (M): 5.5 - Image size: 224 x 224 - **Original:** https://github.com/snap-research/EfficientFormer - **Papers:** - EfficientFormer: Vision Transformers at MobileNet Speed: https://arxiv.org/abs/2206.01191 - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model('efficientformer_l1.snap_dist_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model( 'efficientformer_l1.snap_dist_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled (i.e. a (batch_size, num_features, H, W) tensor) output = model.forward_head(output, pre_logits=True) # output is a (batch_size, num_features) tensor ``` ## Model Comparison |model |top1 |top5 |param_count|img_size| |-----------------------------------|------|------|-----------|--------| |efficientformerv2_l.snap_dist_in1k |83.628|96.54 |26.32 |224 | |efficientformer_l7.snap_dist_in1k |83.368|96.534|82.23 |224 | |efficientformer_l3.snap_dist_in1k |82.572|96.24 |31.41 |224 | |efficientformerv2_s2.snap_dist_in1k|82.128|95.902|12.71 |224 | |efficientformer_l1.snap_dist_in1k |80.496|94.984|12.29 |224 | |efficientformerv2_s1.snap_dist_in1k|79.698|94.698|6.19 |224 | |efficientformerv2_s0.snap_dist_in1k|76.026|92.77 |3.6 |224 | ## Citation ```bibtex @article{li2022efficientformer, title={EfficientFormer: Vision Transformers at MobileNet Speed}, author={Li, Yanyu and Yuan, Geng and Wen, Yang and Hu, Ju and Evangelidis, Georgios and Tulyakov, Sergey and Wang, Yanzhi and Ren, Jian}, journal={arXiv preprint arXiv:2206.01191}, year={2022} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/rwightman/pytorch-image-models}} } ```
3,624
[ [ -0.03350830078125, -0.032073974609375, 0.00858306884765625, 0.01148223876953125, -0.0251007080078125, -0.02484130859375, -0.0105438232421875, -0.020294189453125, 0.0216064453125, 0.01416015625, -0.036529541015625, -0.033477783203125, -0.055450439453125, -0.0...
WALIDALI/imentunisly
2023-07-14T14:36:26.000Z
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
WALIDALI
null
null
WALIDALI/imentunisly
0
427
diffusers
2023-07-14T14:30:28
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### imentunisly Dreambooth model trained by WALIDALI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
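For local inference outside the Colab notebooks, a minimal `diffusers` sketch; the `imentunisly` instance token is an assumption, since the card does not state the trained prompt token:

```python
import torch
from diffusers import StableDiffusionPipeline

# load the Dreambooth checkpoint from this repository
pipe = StableDiffusionPipeline.from_pretrained(
    "WALIDALI/imentunisly", torch_dtype=torch.float16
).to("cuda")

# "imentunisly" is assumed to be the instance token baked in at training time
image = pipe("a photo of imentunisly person, portrait, natural light").images[0]
image.save("imentunisly.png")
```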
501
[ [ -0.0278167724609375, -0.04791259765625, 0.040618896484375, 0.03973388671875, -0.02239990234375, 0.034027099609375, 0.0215301513671875, -0.03167724609375, 0.05609130859375, -0.0026760101318359375, -0.0293731689453125, -0.016754150390625, -0.033355712890625, -...
timm/repghostnet_050.in1k
2023-08-19T23:12:08.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2211.06088", "license:mit", "region:us" ]
image-classification
timm
null
null
timm/repghostnet_050.in1k
0
427
timm
2023-08-19T23:12:06
--- tags: - image-classification - timm library_name: timm license: mit datasets: - imagenet-1k --- # Model card for repghostnet_050.in1k A RepGhostNet image classification model. Trained on ImageNet-1k by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 2.3 - GMACs: 0.0 - Activations (M): 2.0 - Image size: 224 x 224 - **Papers:** - RepGhost: A Hardware-Efficient Ghost Module via Re-parameterization: https://arxiv.org/abs/2211.06088 - **Original:** https://github.com/ChengpengChen/RepGhost - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('repghostnet_050.in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'repghostnet_050.in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 8, 112, 112]) # torch.Size([1, 12, 56, 56]) # torch.Size([1, 20, 28, 28]) # torch.Size([1, 40, 14, 14]) # torch.Size([1, 80, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'repghostnet_050.in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 480, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Citation ```bibtex @article{chen2022repghost, title={RepGhost: A Hardware-Efficient Ghost Module via Re-parameterization}, author={Chen, Chengpeng and Guo, Zichao and Zeng, Haien and Xiong, Pengfei and Dong, Jian}, journal={arXiv preprint arXiv:2211.06088}, year={2022} } ```
3,489
[ [ -0.03363037109375, -0.0338134765625, 0.008087158203125, 0.0188140869140625, -0.034881591796875, -0.0292510986328125, -0.01322174072265625, -0.023895263671875, 0.036163330078125, 0.039337158203125, -0.04010009765625, -0.05340576171875, -0.050872802734375, -0....
tzvc/709a053d-5177-4be1-9fa4-6372e35600f8
2022-12-14T19:33:23.000Z
[ "diffusers", "tensorboard", "text-to-image", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
tzvc
null
null
tzvc/709a053d-5177-4be1-9fa4-6372e35600f8
0
426
diffusers
2022-12-14T19:16:11
--- license: creativeml-openrail-m tags: - text-to-image widget: - text: a portrait of [V] --- ### training params ```json { "pretrained_model_name_or_path": "runwayml/stable-diffusion-v1-5", "instance_data_dir": "./709a053d-5177-4be1-9fa4-6372e35600f8/instance_data", "class_data_dir": "./class_data/a-portrait-of-a-person", "output_dir": "./709a053d-5177-4be1-9fa4-6372e35600f8/", "with_prior_preservation": false, "prior_loss_weight": 1.0, "instance_prompt": "a portrait of [V]", "class_prompt": "a portrait of a person", "resolution": 512, "train_batch_size": 1, "gradient_accumulation_steps": 1, "gradient_checkpointing": true, "use_8bit_adam": true, "learning_rate": 5e-06, "lr_scheduler": "constant", "lr_warmup_steps": 0, "num_class_images": 200, "max_train_steps": 1050, "mixed_precision": "fp16" } ```
888
[ [ -0.0289764404296875, -0.035400390625, 0.0347900390625, 0.01007843017578125, -0.0301971435546875, 0.01483154296875, 0.012298583984375, 0.029815673828125, -0.0004832744598388672, 0.0295867919921875, -0.0723876953125, -0.057861328125, -0.058807373046875, -0.019...
tzvc/2cabda5b-4e53-40e9-8fcf-cdba5ea5bd6c
2022-12-14T20:55:14.000Z
[ "diffusers", "tensorboard", "text-to-image", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
tzvc
null
null
tzvc/2cabda5b-4e53-40e9-8fcf-cdba5ea5bd6c
0
426
diffusers
2022-12-14T20:37:09
--- license: creativeml-openrail-m tags: - text-to-image widget: - text: a portrait of [V] --- ### training params ```json { "pretrained_model_name_or_path": "runwayml/stable-diffusion-v1-5", "instance_data_dir": "./2cabda5b-4e53-40e9-8fcf-cdba5ea5bd6c/instance_data", "class_data_dir": "./class_data/a-portrait-of-a-person", "output_dir": "./2cabda5b-4e53-40e9-8fcf-cdba5ea5bd6c/", "train_text_encoder": true, "with_prior_preservation": false, "prior_loss_weight": 1.0, "instance_prompt": "a portrait of [V]", "class_prompt": "a portrait of a person", "resolution": 512, "train_batch_size": 1, "gradient_accumulation_steps": 1, "gradient_checkpointing": true, "use_8bit_adam": true, "learning_rate": 5e-06, "lr_scheduler": "constant", "lr_warmup_steps": 0, "num_class_images": 200, "max_train_steps": 1050, "mixed_precision": "fp16" } ```
920
[ [ -0.0276947021484375, -0.037261962890625, 0.036041259765625, 0.01494598388671875, -0.0310516357421875, 0.01529693603515625, 0.0079803466796875, 0.0258941650390625, -0.0006532669067382812, 0.027679443359375, -0.072021484375, -0.059112548828125, -0.06256103515625, ...
tzvc/6de17b72-f0d4-42d5-a98b-c563a5bdbe93
2022-12-15T12:28:31.000Z
[ "diffusers", "tensorboard", "text-to-image", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
tzvc
null
null
tzvc/6de17b72-f0d4-42d5-a98b-c563a5bdbe93
0
426
diffusers
2022-12-15T11:46:55
--- license: creativeml-openrail-m tags: - text-to-image widget: - text: a portrait of [V] --- ### training params ```json { "pretrained_model_name_or_path": "runwayml/stable-diffusion-v1-5", "instance_data_dir": "./6de17b72-f0d4-42d5-a98b-c563a5bdbe93/instance_data", "class_data_dir": "./class_data/a-portrait-of-a-person", "output_dir": "./6de17b72-f0d4-42d5-a98b-c563a5bdbe93/", "train_text_encoder": true, "with_prior_preservation": true, "prior_loss_weight": 1.0, "instance_prompt": "a portrait of [V]", "class_prompt": "a portrait of a person", "resolution": 512, "train_batch_size": 1, "gradient_accumulation_steps": 2, "gradient_checkpointing": true, "use_8bit_adam": true, "learning_rate": 1e-06, "lr_scheduler": "constant", "lr_warmup_steps": 0, "num_class_images": 200, "max_train_steps": 1050, "mixed_precision": "fp16" } ```
919
[ [ -0.0277862548828125, -0.037109375, 0.035858154296875, 0.0138702392578125, -0.0318603515625, 0.013397216796875, 0.006805419921875, 0.0265655517578125, -0.0014801025390625, 0.02789306640625, -0.07177734375, -0.057891845703125, -0.061767578125, -0.0163879394531...
trysem/DreamShaper-3.3
2023-01-19T09:12:12.000Z
[ "diffusers", "text-to-image", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
trysem
null
null
trysem/DreamShaper-3.3
2
426
diffusers
2023-01-18T16:01:37
--- duplicated_from: jzli/DreamShaper-3.3 pipeline_tag: text-to-image --- Read more about this model here: https://civitai.com/models/4384/dreamshaper
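The card itself ships no usage snippet; a minimal `diffusers` loading sketch (prompt and scheduler choice are illustrative, not from the card):

```python
import torch
from diffusers import DPMSolverMultistepScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "trysem/DreamShaper-3.3", torch_dtype=torch.float16
).to("cuda")
# swap in a faster multistep scheduler; purely an illustrative choice
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "portrait of an elven ranger, intricate, highly detailed, digital painting",
    num_inference_steps=25,
    guidance_scale=7.5,
).images[0]
image.save("dreamshaper.png")
```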
150
[ [ -0.0325927734375, 0.008575439453125, 0.047882080078125, 0.01134490966796875, -0.0165557861328125, 0.013519287109375, 0.03118896484375, -0.0430908203125, 0.04046630859375, 0.04736328125, -0.0361328125, -0.0162200927734375, 0.0017881393432617188, -0.0416564941...
Falah/babylon
2023-02-17T09:13:14.000Z
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
Falah
null
null
Falah/babylon
0
426
diffusers
2023-02-17T08:22:11
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### Babylon clothes are called in Arabic (الازياء البابلية) DreamBooth model trained by Falah.G.Salieh ## You can visit my blog: https://iraqprogrammer.wordpress.com/ ## FB: https://web.facebook.com/falahgs ## Email: falahgs07@gmail.com With Stable Diffusion we can now generate AI art from a set of training images. With this model we can create images of women wearing Babylonian-style clothing, the traditional dress of women of the ancient Babylonian civilization in Iraq, or anything else you can think of. Test the concept via A1111 Colab fast-Colab-A1111 Sample images of this concept, with simple and easy prompts: use any prompt and add the trigger phrase babylon style, for example: Arabic beautiful woman in a costume with a long braid and a fur collar and a chain around her neck and a green , flowers, garden in the background, Bálint Kiss, promotional image, a colorized photo, antipodeans, babylon style, full shot ![0](https://huggingface.co/Falah/babylon/resolve/main/sample_images/00006-3140811858.png) ![1](https://huggingface.co/Falah/babylon/resolve/main/00005-3140811859.png) ![2](https://huggingface.co/Falah/babylon/resolve/main/00008-3140811860.png) ![3](https://huggingface.co/Falah/babylon/resolve/main/00021-1653315922.png)
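A minimal `diffusers` sketch using the trigger phrase above (generation settings are illustrative; the seeded generator simply makes the sample reproducible):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Falah/babylon", torch_dtype=torch.float16
).to("cuda")

# append the card's trigger phrase "babylon style" to any prompt
prompt = "Arabic beautiful woman in an ancient costume, flowers, garden in the background, babylon style"
generator = torch.Generator("cuda").manual_seed(42)
image = pipe(prompt, generator=generator).images[0]
image.save("babylon.png")
```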
1,432
[ [ -0.040802001953125, -0.06072998046875, 0.0029449462890625, 0.011260986328125, -0.041168212890625, 0.006927490234375, 0.0084075927734375, -0.0293121337890625, 0.0310211181640625, 0.047027587890625, -0.0477294921875, -0.057830810546875, -0.0301971435546875, 0....
laserchalk/kangaroo-training-part-7
2023-07-16T04:15:03.000Z
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
laserchalk
null
null
laserchalk/kangaroo-training-part-7
0
426
diffusers
2023-07-16T04:04:01
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### Kangaroo-training-part-7 Dreambooth model trained by laserchalk with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
516
[ [ -0.03131103515625, -0.05810546875, 0.03173828125, 0.027587890625, -0.03466796875, 0.03363037109375, 0.0222625732421875, -0.012908935546875, 0.042938232421875, 0.033203125, -0.038055419921875, -0.0197601318359375, -0.0416259765625, 0.005344390869140625, -...
InstaDeepAI/nucleotide-transformer-v2-100m-multi-species
2023-10-11T12:29:08.000Z
[ "transformers", "pytorch", "fill-mask", "DNA", "biology", "genomics", "custom_code", "dataset:InstaDeepAI/multi_species_genome", "dataset:InstaDeepAI/nucleotide_transformer_downstream_tasks", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us...
fill-mask
InstaDeepAI
null
null
InstaDeepAI/nucleotide-transformer-v2-100m-multi-species
0
426
transformers
2023-07-27T08:41:51
--- license: cc-by-nc-sa-4.0 widget: - text: ACCTGA<mask>TTCTGAGTC tags: - DNA - biology - genomics datasets: - InstaDeepAI/multi_species_genome - InstaDeepAI/nucleotide_transformer_downstream_tasks --- # nucleotide-transformer-v2-100m-multi-species The Nucleotide Transformers are a collection of foundational language models that were pre-trained on DNA sequences from whole genomes. Compared to other approaches, our models do not only integrate information from single reference genomes, but leverage DNA sequences from over 3,200 diverse human genomes, as well as 850 genomes from a wide range of species, including model and non-model organisms. Through robust and extensive evaluation, we show that these large models provide extremely accurate molecular phenotype prediction compared to existing methods. Part of this collection is the **nucleotide-transformer-v2-100m-multi-species**, a 100M-parameter transformer pre-trained on a collection of 850 genomes from a wide range of species, including model and non-model organisms. **Developed by:** InstaDeep, NVIDIA and TUM ### Model Sources - **Repository:** [Nucleotide Transformer](https://github.com/instadeepai/nucleotide-transformer) - **Paper:** [The Nucleotide Transformer: Building and Evaluating Robust Foundation Models for Human Genomics](https://www.biorxiv.org/content/10.1101/2023.01.11.523679v1) ### How to use Until its next release, the `transformers` library needs to be installed from source with the following command in order to use the models: ```bash pip install --upgrade git+https://github.com/huggingface/transformers.git ``` A small snippet of code is given here in order to retrieve both logits and embeddings from a dummy DNA sequence. ```python from transformers import AutoTokenizer, AutoModelForMaskedLM import torch # Import the tokenizer and the model tokenizer = AutoTokenizer.from_pretrained("InstaDeepAI/nucleotide-transformer-v2-100m-multi-species", trust_remote_code=True) model = AutoModelForMaskedLM.from_pretrained("InstaDeepAI/nucleotide-transformer-v2-100m-multi-species", trust_remote_code=True) # Choose the length to which the input sequences are padded. By default, the # model max length is chosen, but feel free to decrease it as the time taken to # obtain the embeddings increases significantly with it. 
max_length = tokenizer.model_max_length # Create a dummy dna sequence and tokenize it sequences = ["ATTCCGATTCCGATTCCG", "ATTTCTCTCTCTCTCTGAGATCGATCGATCGAT"] tokens_ids = tokenizer.batch_encode_plus(sequences, return_tensors="pt", padding="max_length", max_length = max_length)["input_ids"] # Compute the embeddings attention_mask = tokens_ids != tokenizer.pad_token_id torch_outs = model( tokens_ids, attention_mask=attention_mask, encoder_attention_mask=attention_mask, output_hidden_states=True ) # Compute sequences embeddings, kept as torch tensors for the masked mean below embeddings = torch_outs['hidden_states'][-1].detach() print(f"Embeddings shape: {embeddings.shape}") print(f"Embeddings per token: {embeddings}") # Add embed dimension axis attention_mask = torch.unsqueeze(attention_mask, dim=-1) # Compute mean embeddings per sequence mean_sequence_embeddings = torch.sum(attention_mask*embeddings, axis=-2)/torch.sum(attention_mask, axis=1) print(f"Mean sequence embeddings: {mean_sequence_embeddings}") ``` ## Training data The **nucleotide-transformer-v2-100m-multi-species** model was pretrained on a total of 850 genomes downloaded from [NCBI](https://www.ncbi.nlm.nih.gov/). Plants and viruses are not included in these genomes, as their regulatory elements differ from those of interest in the paper's tasks. Some heavily studied model organisms were picked to be included in the collection of genomes, which represents a total of 174B nucleotides, i.e. roughly 29B tokens. The data has been released as a HuggingFace dataset [here](https://huggingface.co/datasets/InstaDeepAI/multi_species_genomes). ## Training procedure ### Preprocessing The DNA sequences are tokenized using the Nucleotide Transformer Tokenizer, which tokenizes sequences as 6-mers when possible, otherwise tokenizing each nucleotide separately, as described in the [Tokenization](https://github.com/instadeepai/nucleotide-transformer#tokenization-abc) section of the associated repository. This tokenizer has a vocabulary size of 4105. The inputs of the model are then of the form: ``` <CLS> <ACGTGT> <ACGTGC> <ACGGAC> <GACTAG> <TCAGCA> ``` The tokenized sequences have a maximum length of 1,000. The masking procedure used is the standard one for BERT-style training: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace). - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The model was trained with 8 A100 80GB on 300B tokens, with an effective batch size of 1M tokens. The sequence length used was 1000 tokens. The Adam optimizer was used with a learning rate schedule, and standard values for exponential decay rates and epsilon constants, β1 = 0.9, β2 = 0.999 and ε=1e-8. During a first warmup period, the learning rate was increased linearly between 5e-5 and 1e-4 over 16k steps before decreasing following a square root decay until the end of training. ### Architecture The model belongs to the second generation of nucleotide transformers, with the changes in architecture consisting of the use of rotary positional embeddings instead of learned ones, as well as the introduction of Gated Linear Units. 
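The 80/10/10 rule above is easy to mis-implement; as a standalone illustration (not the authors' training code), a minimal sketch of that masking procedure on a batch of token ids:

```python
import torch

def bert_style_mask(input_ids, mask_token_id, vocab_size, mlm_prob=0.15):
    input_ids = input_ids.clone()
    labels = input_ids.clone()
    # select 15% of positions as prediction targets
    target = torch.bernoulli(torch.full(input_ids.shape, mlm_prob)).bool()
    labels[~target] = -100  # loss is only computed on target positions

    # 80% of targets are replaced by [MASK]
    masked = torch.bernoulli(torch.full(input_ids.shape, 0.8)).bool() & target
    input_ids[masked] = mask_token_id

    # half of the remaining targets (10% overall) get a random token
    rand = torch.bernoulli(torch.full(input_ids.shape, 0.5)).bool() & target & ~masked
    input_ids[rand] = torch.randint(vocab_size, input_ids.shape)[rand]

    # the final 10% of targets are left unchanged
    return input_ids, labels
```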
### BibTeX entry and citation info ```bibtex @article{dalla2023nucleotide, title={The Nucleotide Transformer: Building and Evaluating Robust Foundation Models for Human Genomics}, author={Dalla-Torre, Hugo and Gonzalez, Liam and Mendoza Revilla, Javier and Lopez Carranza, Nicolas and Henryk Grywaczewski, Adam and Oteri, Francesco and Dallago, Christian and Trop, Evan and Sirelkhatim, Hassan and Richard, Guillaume and others}, journal={bioRxiv}, pages={2023--01}, year={2023}, publisher={Cold Spring Harbor Laboratory} } ```
6,342
[ [ -0.044921875, -0.04266357421875, 0.006710052490234375, -0.0033550262451171875, -0.029998779296875, 0.003932952880859375, -0.007038116455078125, -0.01201629638671875, 0.034149169921875, 0.016571044921875, -0.033935546875, -0.0257415771484375, -0.058441162109375, ...
Norod78/sdxl-BrainSlug-dreambooth
2023-08-09T18:08:34.000Z
[ "diffusers", "text-to-image", "lora", "autotrain", "en", "dataset:Norod78/BrainSlug-blip-captions-1024", "has_space", "region:us" ]
text-to-image
Norod78
null
null
Norod78/sdxl-BrainSlug-dreambooth
1
426
diffusers
2023-08-09T17:10:00
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: photo of a brain slug tags: - text-to-image - diffusers - lora - autotrain widget: - text: photo of a brain slug enjoying a nice sunny day on the beach - text: photo of a brain slug attached to Snoop Doggs head - text: >- photo of a shocked old granny with a gooey (brain slug attached to her head), Very detailed, clean, high quality, sharp image - text: >- photo of a brain slug attacking the head of an anime girl, cartoon style, high quality datasets: - Norod78/BrainSlug-blip-captions-1024 inference: true language: - en --- # DreamBooth trained by AutoTrain The text encoder was not trained. # Trigger words Use "photo of a brain slug", "brain slug", etc. # Examples photo of a brain slug enjoying a nice sunny day on the beach ![photo_of_a_brainslug_enjoying_a_nice_sunny_day_on_the_beach](https://huggingface.co/Norod78/sdxl-BrainSlug-dreambooth/resolve/main/Examples/i-42-photo_of_a_brainslug_enjoying_a_nice_sunny_day_on_the_beach-generated_image.jpg) photo of a shocked old granny with a gooey (brain slug attached to her head), Very detailed, clean, high quality, sharp image ![A_photo_of_a_shocked_old_granny](https://huggingface.co/Norod78/sdxl-BrainSlug-dreambooth/resolve/main/Examples/i-7777-A_photo_of_a_shocked_old_granny_with_a_gooey_(brain_slug_attached_to_her_head),_Very_detailed,_clean,_high_quality,_sharp_image,_Dave_Dorman-generated_image.jpg)
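A minimal `diffusers` sketch for applying these LoRA weights on top of the SDXL base model named in the card (inference settings are assumptions, not from the card):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# load the AutoTrain Dreambooth LoRA weights from this repository
pipe.load_lora_weights("Norod78/sdxl-BrainSlug-dreambooth")

image = pipe("photo of a brain slug enjoying a nice sunny day on the beach").images[0]
image.save("brainslug.png")
```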
1,472
[ [ -0.0229339599609375, -0.044891357421875, 0.0301971435546875, 0.00528717041015625, -0.04888916015625, -0.006011962890625, -0.00640106201171875, -0.040740966796875, 0.044158935546875, 0.0005016326904296875, -0.04156494140625, -0.0269012451171875, -0.04714965820312...
pipesanma/chasquilla-question-generator
2023-10-16T13:45:24.000Z
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "dataset:squad", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
pipesanma
null
null
pipesanma/chasquilla-question-generator
0
426
transformers
2023-10-16T12:37:59
--- license: apache-2.0 datasets: - squad language: - en --- # Question Generator This model should be used to generate questions based on a given string. ### Out-of-Scope Use English language support only. ## How to Get Started with the Model Use the code below to get started with the model. ```python import torch from transformers import T5ForConditionalGeneration, T5Tokenizer def question_parser(question: str) -> str: return " ".join(question.split(":")[1].split()) def generate_questions_v2(context: str, answer: str, n_questions: int = 1): model = T5ForConditionalGeneration.from_pretrained( "pipesanma/chasquilla-question-generator" ) tokenizer = T5Tokenizer.from_pretrained("pipesanma/chasquilla-question-generator") device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = model.to(device) text = "context: " + context + " " + "answer: " + answer + " </s>" encoding = tokenizer.encode_plus( text, max_length=512, padding=True, return_tensors="pt" ) input_ids, attention_mask = encoding["input_ids"].to(device), encoding[ "attention_mask" ].to(device) model.eval() beam_outputs = model.generate( input_ids=input_ids, attention_mask=attention_mask, max_length=72, early_stopping=True, num_beams=5, num_return_sequences=n_questions, ) questions = [] for beam_output in beam_outputs: sent = tokenizer.decode( beam_output, skip_special_tokens=True, clean_up_tokenization_spaces=True ) print(sent) questions.append(question_parser(sent)) return questions context = "President Donald Trump said and predicted that some states would reopen this month." answer = "Donald Trump" questions = generate_questions_v2(context, answer, 1) print(questions) ``` ## Training Details ### Dataset generation The dataset is "squad" from datasets library. Check the [utils/dataset_gen.py](utils/dataset_gen.py) file for the dataset generation. ### Training model Check the [utils/t5_train_model.py](utils/t5_train_model.py) file for the training process ### Model and Tokenizer versions (v1.0) Model and Tokenizer V1: trained with 1000 rows (v1.1) Model and Tokenizer V2: trained with 3000 rows (v1.2) Model and Tokenizer V3: trained with all rows from datasets (78664 rows-train, 9652 rows-validation)
2,427
[ [ -0.0226593017578125, -0.0594482421875, 0.02197265625, 0.01509857177734375, -0.0157470703125, -0.0050811767578125, -0.0013761520385742188, -0.004718780517578125, -0.0081634521484375, 0.024200439453125, -0.05999755859375, -0.0288543701171875, -0.0296173095703125, ...
Cohere/Cohere-embed-multilingual-light-v3.0
2023-11-01T21:08:59.000Z
[ "transformers", "mteb", "model-index", "endpoints_compatible", "region:us" ]
null
Cohere
null
null
Cohere/Cohere-embed-multilingual-light-v3.0
0
426
transformers
2023-11-01T20:54:54
--- tags: - mteb model-index: - name: embed-multilingual-light-v3.0 results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 70.02985074626865 - type: ap value: 33.228065779544146 - type: f1 value: 64.27173953207297 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 90.701225 - type: ap value: 87.07178174251762 - type: f1 value: 90.69168484877625 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 46.550000000000004 - type: f1 value: 44.7233215588199 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: ndcg_at_10 value: 53.369 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 44.206988765030744 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 33.913737041277 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 58.544257541214925 - type: mrr value: 72.07151651057468 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 84.79582115243736 - type: cos_sim_spearman value: 84.01396250789998 - type: euclidean_pearson value: 83.90766476102458 - type: euclidean_spearman value: 84.01396250789998 - type: manhattan_pearson value: 84.75071274784274 - type: manhattan_spearman value: 85.02482891467078 - task: type: Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 78.12337662337663 - type: f1 value: 77.48610340227478 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 38.68268504601174 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 32.20870648143671 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: ndcg_at_10 value: 46.259 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: ndcg_at_10 value: 44.555 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: 
ndcg_at_10 value: 56.564 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: ndcg_at_10 value: 36.162 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: ndcg_at_10 value: 26.185000000000002 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: None metrics: - type: ndcg_at_10 value: 41.547 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: ndcg_at_10 value: 39.042 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: ndcg_at_10 value: 38.086999999999996 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: ndcg_at_10 value: 32.088 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: ndcg_at_10 value: 27.006999999999998 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: ndcg_at_10 value: 37.336999999999996 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: ndcg_at_10 value: 38.011 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: ndcg_at_10 value: 32.287 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: ndcg_at_10 value: 24.804000000000002 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: ndcg_at_10 value: 38.055 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 46.665 - type: f1 value: 40.77568559660878 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: ndcg_at_10 value: 85.52499999999999 - task: type: Retrieval dataset: type: fiqa name: MTEB FiQA2018 config: default split: test revision: None metrics: - type: ndcg_at_10 value: 36.161 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: ndcg_at_10 value: 66.878 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 85.6372 - type: ap value: 80.54846874011302 - type: f1 value: 85.61438421821343 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: test revision: None metrics: - type: ndcg_at_10 value: 40.487 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 
91.8559051527588 - type: f1 value: 91.6271749996447 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 62.17738258093936 - type: f1 value: 45.80307070449218 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.42434431741762 - type: f1 value: 65.39580264698957 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.60928043039677 - type: f1 value: 72.30912915707411 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 35.17967476592229 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 30.993641089208683 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.362481813275295 - type: mrr value: 32.43717742343303 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: ndcg_at_10 value: 32.123000000000005 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: ndcg_at_10 value: 55.51199999999999 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: ndcg_at_10 value: 87.847 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 49.4973643968247 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 60.2135284243427 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: ndcg_at_10 value: 17.1 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 83.7330191296952 - type: cos_sim_spearman value: 77.03523134004043 - type: euclidean_pearson value: 80.86067787185137 - type: euclidean_spearman value: 77.03522959536473 - type: manhattan_pearson value: 80.76089708603587 - type: manhattan_spearman value: 76.86245377437302 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 80.46387812633851 - type: cos_sim_spearman value: 73.21878234127571 - type: euclidean_pearson value: 76.82160699895033 - type: euclidean_spearman value: 73.21878234127571 - type: manhattan_pearson value: 76.75657006349886 - type: manhattan_spearman value: 
73.19160258034827 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 79.06411399119807 - type: cos_sim_spearman value: 79.49916779764082 - type: euclidean_pearson value: 79.3356521660954 - type: euclidean_spearman value: 79.49916779764082 - type: manhattan_pearson value: 79.04971532119936 - type: manhattan_spearman value: 79.16859911220654 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 80.6940934994372 - type: cos_sim_spearman value: 76.9552055757283 - type: euclidean_pearson value: 79.52818133592284 - type: euclidean_spearman value: 76.9552055757283 - type: manhattan_pearson value: 79.35220459438406 - type: manhattan_spearman value: 76.85314462036561 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 85.58608774451231 - type: cos_sim_spearman value: 86.42805701554927 - type: euclidean_pearson value: 86.01117122595934 - type: euclidean_spearman value: 86.42805701554927 - type: manhattan_pearson value: 86.01345208923057 - type: manhattan_spearman value: 86.43179450307953 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 83.18733039014667 - type: cos_sim_spearman value: 84.3339529564109 - type: euclidean_pearson value: 83.54530885349595 - type: euclidean_spearman value: 84.3339529564109 - type: manhattan_pearson value: 83.47015931913937 - type: manhattan_spearman value: 84.22564786654777 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.88402211340522 - type: cos_sim_spearman value: 88.6693290310468 - type: euclidean_pearson value: 88.24947476618257 - type: euclidean_spearman value: 88.6693290310468 - type: manhattan_pearson value: 88.24496656367964 - type: manhattan_spearman value: 88.52029848819545 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 64.96467575926597 - type: cos_sim_spearman value: 65.30666900046252 - type: euclidean_pearson value: 66.58031971340725 - type: euclidean_spearman value: 65.30666900046252 - type: manhattan_pearson value: 66.56530433327998 - type: manhattan_spearman value: 65.42121899024113 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 85.31047656296519 - type: cos_sim_spearman value: 85.46101092708824 - type: euclidean_pearson value: 85.75896623084044 - type: euclidean_spearman value: 85.46101092708824 - type: manhattan_pearson value: 85.57323880630182 - type: manhattan_spearman value: 85.23375523080594 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 79.89731978284804 - type: mrr value: 94.28980424078465 - task: type: Retrieval dataset: type: scifact 
name: MTEB SciFact config: default split: test revision: None metrics: - type: ndcg_at_10 value: 67.95 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.85643564356435 - type: cos_sim_ap value: 96.59618618212247 - type: cos_sim_f1 value: 92.6221335992024 - type: cos_sim_precision value: 92.34592445328032 - type: cos_sim_recall value: 92.9 - type: dot_accuracy value: 99.85643564356435 - type: dot_ap value: 96.5961861821225 - type: dot_f1 value: 92.6221335992024 - type: dot_precision value: 92.34592445328032 - type: dot_recall value: 92.9 - type: euclidean_accuracy value: 99.85643564356435 - type: euclidean_ap value: 96.5961861821225 - type: euclidean_f1 value: 92.6221335992024 - type: euclidean_precision value: 92.34592445328032 - type: euclidean_recall value: 92.9 - type: manhattan_accuracy value: 99.85841584158416 - type: manhattan_ap value: 96.5578240948512 - type: manhattan_f1 value: 92.71523178807946 - type: manhattan_precision value: 94.4963655244029 - type: manhattan_recall value: 91.0 - type: max_accuracy value: 99.85841584158416 - type: max_ap value: 96.5961861821225 - type: max_f1 value: 92.71523178807946 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 60.84750068050385 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 33.96844721192451 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 50.454280909595205 - type: mrr value: 51.24249320940497 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 29.998438678552517 - type: cos_sim_spearman value: 30.409482543506876 - type: dot_pearson value: 29.998443850173224 - type: dot_spearman value: 30.409482543506876 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: ndcg_at_10 value: 78.93 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: ndcg_at_10 value: 29.482999999999997 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 70.65859999999999 - type: ap value: 15.03693738050973 - type: f1 value: 54.94379403846167 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 64.4567062818336 - type: f1 value: 64.48980729427107 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 
6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 42.08554991843959 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 84.75293556654945 - type: cos_sim_ap value: 69.40551043272129 - type: cos_sim_f1 value: 65.56335231034026 - type: cos_sim_precision value: 65.79856497475419 - type: cos_sim_recall value: 65.32981530343008 - type: dot_accuracy value: 84.75293556654945 - type: dot_ap value: 69.40550704470631 - type: dot_f1 value: 65.56335231034026 - type: dot_precision value: 65.79856497475419 - type: dot_recall value: 65.32981530343008 - type: euclidean_accuracy value: 84.75293556654945 - type: euclidean_ap value: 69.4055136381454 - type: euclidean_f1 value: 65.56335231034026 - type: euclidean_precision value: 65.79856497475419 - type: euclidean_recall value: 65.32981530343008 - type: manhattan_accuracy value: 84.6337247422066 - type: manhattan_ap value: 69.13628354134198 - type: manhattan_f1 value: 65.46998180715585 - type: manhattan_precision value: 60.58361391694726 - type: manhattan_recall value: 71.21372031662268 - type: max_accuracy value: 84.75293556654945 - type: max_ap value: 69.4055136381454 - type: max_f1 value: 65.56335231034026 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.04800714091667 - type: cos_sim_ap value: 85.84596325009252 - type: cos_sim_f1 value: 78.39228527221042 - type: cos_sim_precision value: 73.58643518205768 - type: cos_sim_recall value: 83.86972590083154 - type: dot_accuracy value: 89.04800714091667 - type: dot_ap value: 85.8459646697087 - type: dot_f1 value: 78.39228527221042 - type: dot_precision value: 73.58643518205768 - type: dot_recall value: 83.86972590083154 - type: euclidean_accuracy value: 89.04800714091667 - type: euclidean_ap value: 85.84596376376919 - type: euclidean_f1 value: 78.39228527221042 - type: euclidean_precision value: 73.58643518205768 - type: euclidean_recall value: 83.86972590083154 - type: manhattan_accuracy value: 89.0266620095471 - type: manhattan_ap value: 85.80124417850608 - type: manhattan_f1 value: 78.37817859254879 - type: manhattan_precision value: 75.36963321012226 - type: manhattan_recall value: 81.63689559593472 - type: max_accuracy value: 89.04800714091667 - type: max_ap value: 85.8459646697087 - type: max_f1 value: 78.39228527221042 --- # Cohere embed-multilingual-light-v3.0 This repository contains the tokenizer for the Cohere `embed-multilingual-light-v3.0` model. You can use the embedding model either via the Cohere API, AWS SageMaker or in your private deployments. ## Usage Cohere API The following code snippet shows the usage of the Cohere API. Install the cohere SDK via: ``` pip install -U cohere ``` Get your free API key on: www.cohere.com ```python # This snippet shows and example how to use the Cohere Embed V3 models for semantic search. 
# Make sure to have the Cohere SDK installed, at least v4.30: pip install -U cohere # Get your API key from: www.cohere.com import cohere import numpy as np cohere_key = "{YOUR_COHERE_API_KEY}" #Get your API key from www.cohere.com co = cohere.Client(cohere_key) docs = ["The capital of France is Paris", "PyTorch is a machine learning framework based on the Torch library.", "The average cat lifespan is between 13-17 years"] #Encode your documents with input type 'search_document' doc_emb = co.embed(docs, input_type="search_document", model="embed-multilingual-light-v3.0").embeddings doc_emb = np.asarray(doc_emb) #Encode your query with input type 'search_query' query = "What is Pytorch" query_emb = co.embed([query], input_type="search_query", model="embed-multilingual-light-v3.0").embeddings query_emb = np.asarray(query_emb) query_emb.shape #Compute the dot product between query embedding and document embedding scores = np.dot(query_emb, doc_emb.T)[0] #Find the highest scores max_idx = np.argsort(-scores) print(f"Query: {query}") for idx in max_idx: print(f"Score: {scores[idx]:.2f}") print(docs[idx]) print("--------") ``` ## Usage AWS SageMaker The embedding model can be privately deployed in your AWS Cloud using our [AWS SageMaker marketplace offering](https://aws.amazon.com/marketplace/pp/prodview-z6huxszcqc25i). It runs privately in your VPC, with latencies as low as 5ms for query encoding. ## Usage AWS Bedrock Soon the model will also be available via AWS Bedrock. Stay tuned. ## Private Deployment Want to run the model on your own hardware? [Contact Sales](https://cohere.com/contact-sales) to learn more. ## Supported Languages This model was trained on nearly 1B English training pairs and nearly 0.5B non-English training pairs from 100+ languages. Evaluation results can be found in the [Embed V3.0 Benchmark Results spreadsheet](https://docs.google.com/spreadsheets/d/1w7gnHWMDBdEUrmHgSfDnGHJgVQE5aOiXCCwO3uNH_mI/edit?usp=sharing).
28,124
[ [ -0.0223388671875, -0.0546875, 0.02557373046875, 0.0100250244140625, -0.01006317138671875, -0.017547607421875, -0.00681304931640625, -0.0242919921875, 0.0239410400390625, 0.0142822265625, -0.035888671875, -0.06353759765625, -0.036895751953125, -0.005065917968...
google/tapas-base-finetuned-wikisql-supervised
2021-11-29T13:05:40.000Z
[ "transformers", "pytorch", "tf", "tapas", "table-question-answering", "en", "dataset:wikisql", "arxiv:2004.02349", "arxiv:2010.00571", "arxiv:1709.00103", "license:apache-2.0", "endpoints_compatible", "region:us", "has_space" ]
table-question-answering
google
null
null
google/tapas-base-finetuned-wikisql-supervised
5
425
transformers
2022-03-02T23:29:05
--- language: en tags: - tapas license: apache-2.0 datasets: - wikisql --- # TAPAS base model fine-tuned on WikiSQL (in a supervised fashion) This model has 2 versions which can be used. The default version corresponds to the `tapas_wikisql_sqa_inter_masklm_base_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas). This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned in a chain on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253) and [WikiSQL](https://github.com/salesforce/WikiSQL). It uses relative position embeddings (i.e. resetting the position index at every cell of the table). The other (non-default) version which can be used is: - `no_reset`, which corresponds to `tapas_wikisql_sqa_inter_masklm_base` (intermediate pre-training, absolute position embeddings). Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by the Hugging Face team and contributors. ## Model description TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion. This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of a table and associated text. - Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements. This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed or refuted by the contents of a table. Fine-tuning is done by adding a cell selection head and aggregation head on top of the pre-trained model, and then jointly training these randomly initialized classification heads with the base model on SQA and WikiSQL. ## Intended uses & limitations You can use this model for answering questions related to a table. For code examples, we refer to the documentation of TAPAS on the HuggingFace website. ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Question [SEP] Flattened table [SEP] ``` The authors first converted the WikiSQL dataset into the format of SQA using automatic conversion scripts. 
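Given that input format, a minimal table question-answering sketch with the `transformers` TAPAS classes (the table content is invented for illustration; note that TAPAS models additionally require the torch-scatter package):

```python
import pandas as pd
from transformers import TapasForQuestionAnswering, TapasTokenizer

model_name = "google/tapas-base-finetuned-wikisql-supervised"
tokenizer = TapasTokenizer.from_pretrained(model_name)
model = TapasForQuestionAnswering.from_pretrained(model_name)

# TAPAS expects the table as a pandas DataFrame of strings
table = pd.DataFrame({"City": ["Paris", "London"], "Population": ["2141000", "8982000"]})
queries = ["What is the population of Paris?"]

inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="pt")
outputs = model(**inputs)

# decode cell-selection and aggregation logits into answer coordinates / operator ids
coordinates, aggregation_indices = tokenizer.convert_logits_to_predictions(
    inputs, outputs.logits.detach(), outputs.logits_aggregation.detach()
)
print(coordinates, aggregation_indices)
```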
### Fine-tuning The model was fine-tuned on 32 Cloud TPU v3 cores for 50,000 steps with maximum sequence length 512 and batch size of 512. In this setup, fine-tuning takes around 10 hours. The optimizer used is Adam with a learning rate of 6.17164e-5, and a warmup ratio of 0.1424. See the [paper](https://arxiv.org/abs/2004.02349) for more details (tables 11 and 12). ### BibTeX entry and citation info ```bibtex @misc{herzig2020tapas, title={TAPAS: Weakly Supervised Table Parsing via Pre-training}, author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos}, year={2020}, eprint={2004.02349}, archivePrefix={arXiv}, primaryClass={cs.IR} } ``` ```bibtex @misc{eisenschlos2020understanding, title={Understanding tables with intermediate pre-training}, author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller}, year={2020}, eprint={2010.00571}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ```bibtex @article{DBLP:journals/corr/abs-1709-00103, author = {Victor Zhong and Caiming Xiong and Richard Socher}, title = {Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning}, journal = {CoRR}, volume = {abs/1709.00103}, year = {2017}, url = {http://arxiv.org/abs/1709.00103}, archivePrefix = {arXiv}, eprint = {1709.00103}, timestamp = {Mon, 13 Aug 2018 16:48:41 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1709-00103.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
5,269
[ [ -0.035247802734375, -0.06500244140625, 0.0175628662109375, 0.01288604736328125, -0.032379150390625, -0.02044677734375, -0.010345458984375, -0.034759521484375, 0.0304718017578125, 0.04266357421875, -0.0450439453125, -0.0307159423828125, -0.036529541015625, 0....
stablediffusionapi/spybg
2023-04-22T09:18:25.000Z
[ "diffusers", "stablediffusionapi.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
stablediffusionapi
null
null
stablediffusionapi/spybg
0
425
diffusers
2023-02-01T10:28:26
--- license: creativeml-openrail-m tags: - stablediffusionapi.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true --- # SPYBG's Toolkit for Digital Artists API Inference ![generated from stablediffusionapi.com](https://pub-8b49af329fae499aa563997f5d4068a4.r2.dev/generations/175893.png) ## Get API Key Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed. Replace Key in below code, change **model_id** to "spybg" Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs) Model link: [View model](https://stablediffusionapi.com/models/spybg) Credits: [View credits](https://civitai.com/?query=SPYBG's Toolkit for Digital Artists) View all models: [View Models](https://stablediffusionapi.com/models) import requests import json url = "https://stablediffusionapi.com/api/v3/dreambooth" payload = json.dumps({ "key": "", "model_id": "spybg", "prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "embeddings_model_id", "lora": "lora_model_id", "webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(response.text) > Use this coupon code to get 25% off **DMGG0RBN**
2,423
[ [ -0.03533935546875, -0.052734375, 0.038604736328125, 0.0232696533203125, -0.037261962890625, 0.0205841064453125, 0.016143798828125, -0.035736083984375, 0.034454345703125, 0.037750244140625, -0.07220458984375, -0.07501220703125, -0.0305023193359375, -0.0104522...
timm/levit_192.fb_dist_in1k
2023-02-03T21:13:35.000Z
[ "timm", "pytorch", "image-classification", "dataset:imagenet-1k", "arxiv:2104.01136", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/levit_192.fb_dist_in1k
0
425
timm
2023-02-03T21:13:29
--- tags: - image-classification - timm library_tag: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for levit_192.fb_dist_in1k A LeViT image classification model using convolutional mode (using nn.Conv2d and nn.BatchNorm2d). Pretrained on ImageNet-1k using distillation by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 10.9 - GMACs: 0.7 - Activations (M): 3.2 - Image size: 224 x 224 - **Papers:** - LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference: https://arxiv.org/abs/2104.01136 - **Original:** https://github.com/facebookresearch/LeViT - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import torch import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model('levit_192.fb_dist_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model( 'levit_192.fb_dist_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled (i.e. a (batch_size, num_features, H, W) tensor) output = model.forward_head(output, pre_logits=True) # output is (batch_size, num_features) tensor ``` ## Model Comparison |model |top1 |top5 |param_count|img_size| |-----------------------------------|------|------|-----------|--------| |levit_384.fb_dist_in1k |82.596|96.012|39.13 |224 | |levit_conv_384.fb_dist_in1k |82.596|96.012|39.13 |224 | |levit_256.fb_dist_in1k |81.512|95.48 |18.89 |224 | |levit_conv_256.fb_dist_in1k |81.512|95.48 |18.89 |224 | |levit_conv_192.fb_dist_in1k |79.86 |94.792|10.95 |224 | |levit_192.fb_dist_in1k |79.858|94.792|10.95 |224 | |levit_128.fb_dist_in1k |78.474|94.014|9.21 |224 | |levit_conv_128.fb_dist_in1k |78.474|94.02 |9.21 |224 | |levit_128s.fb_dist_in1k |76.534|92.864|7.78 |224 | |levit_conv_128s.fb_dist_in1k |76.532|92.864|7.78 |224 | ## Citation ```bibtex @InProceedings{Graham_2021_ICCV, author = {Graham, Benjamin and El-Nouby, Alaaeldin and Touvron, Hugo and Stock, Pierre and Joulin, Armand and Jegou, Herve and Douze, Matthijs}, title = {LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference}, booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)}, month = {October}, year = {2021}, pages = {12259-12269} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019},
publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/rwightman/pytorch-image-models}} } ```
4,033
[ [ -0.039764404296875, -0.02880859375, 0.010467529296875, 0.002979278564453125, -0.0304718017578125, -0.023162841796875, -0.01262664794921875, -0.0218353271484375, 0.016021728515625, 0.021820068359375, -0.04254150390625, -0.045623779296875, -0.044403076171875, ...
linkanjarad/mobilenet_v2_1.0_224-plant-disease-identification
2023-04-26T04:25:49.000Z
[ "transformers", "pytorch", "tensorboard", "mobilenet_v2", "image-classification", "generated_from_trainer", "dataset:image_folder", "license:other", "model-index", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
image-classification
linkanjarad
null
null
linkanjarad/mobilenet_v2_1.0_224-plant-disease-identification
5
425
transformers
2023-04-04T04:11:58
--- license: other tags: - generated_from_trainer datasets: - image_folder metrics: - accuracy model-index: - name: mobilenet_v2_1.0_224-plant-disease-identification results: - task: name: Image Classification type: image-classification dataset: name: New Plant Diseases Dataset type: image_folder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9541 --- # mobilenet_v2_1.0_224-plant-disease-identification This model is a fine-tuned version of [google/mobilenet_v2_1.0_224](https://huggingface.co/google/mobilenet_v2_1.0_224) on the [Kaggle version](https://www.kaggle.com/datasets/vipoooool/new-plant-diseases-dataset) of the [Plant Village dataset](https://github.com/spMohanty/PlantVillage-Dataset). It achieves the following results on the evaluation set: - Cross Entropy Loss: 0.15 - Accuracy: 0.9541 ## Intended uses & limitations For identifying common diseases in crops and assessing plant health. Not to be used as a replacement for an actual diagnosis from experts. ## Training and evaluation data The Plant Village dataset consists of 38 classes of diseases in common crops (including healthy/normal crops). ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-5 - train_batch_size: 256 - eval_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 6 ### Framework versions - Transformers 4.27.3 - Pytorch 1.13.0 - Datasets 2.1.0 - Tokenizers 0.13.2
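A hedged usage sketch (not part of the original card): classifying a crop-leaf photo with this checkpoint through the `transformers` image-classification pipeline. The image path is a placeholder.

```python
# Hedged inference sketch for the fine-tuned checkpoint.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="linkanjarad/mobilenet_v2_1.0_224-plant-disease-identification",
)

# "leaf.jpg" is a placeholder; pass any crop-leaf photo.
for prediction in classifier("leaf.jpg", top_k=3):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```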
1,823
[ [ 0.0031585693359375, -0.039459228515625, 0.005100250244140625, 0.004070281982421875, -0.0254058837890625, -0.0293121337890625, 0.00803375244140625, -0.048614501953125, 0.033599853515625, 0.00759124755859375, -0.0189056396484375, -0.051177978515625, -0.05316162109...
timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k
2023-04-05T19:06:39.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "arxiv:1905.00546", "arxiv:1611.05431", "arxiv:1512.03385", "license:cc-by-nc-4.0", "region:us" ]
image-classification
timm
null
null
timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k
0
425
timm
2023-04-05T19:05:02
--- tags: - image-classification - timm library_tag: timm license: cc-by-nc-4.0 --- # Model card for resnext101_32x8d.fb_swsl_ig1b_ft_in1k A ResNeXt-B image classification model. This model features: * ReLU activations * single layer 7x7 convolution with pooling * 1x1 convolution shortcut downsample * grouped 3x3 bottleneck convolutions Pretrained on Instagram-1B hashtags dataset using semi-weakly supervised learning and fine-tuned on ImageNet-1k by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 88.8 - GMACs: 16.5 - Activations (M): 31.2 - Image size: 224 x 224 - **Papers:** - Billion-scale semi-supervised learning for image classification: https://arxiv.org/abs/1905.00546 - Aggregated Residual Transformations for Deep Neural Networks: https://arxiv.org/abs/1611.05431 - Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385 - **Original:** https://github.com/facebookresearch/semi-supervised-ImageNet1K-models ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import torch import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('resnext101_32x8d.fb_swsl_ig1b_ft_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'resnext101_32x8d.fb_swsl_ig1b_ft_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 64, 112, 112]) # torch.Size([1, 256, 56, 56]) # torch.Size([1, 512, 28, 28]) # torch.Size([1, 1024, 14, 14]) # torch.Size([1, 2048, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'resnext101_32x8d.fb_swsl_ig1b_ft_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 2048, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features)
shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). |model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec| |------------------------------------------|--------|-----|-----|-----------|-----|-----|-------| |[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 | |[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 | |[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 | |[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 | |[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 | |[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 | |[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 | |[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 | |[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 | |[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 | |[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 | |[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 | |[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 | |[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 | |[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 | |[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 | |[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 | |[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 | |[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 | |[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 | |[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 | |[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 
|15.4 |30.6 |943 | |[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 | |[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 | |[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 | |[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 | |[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 | |[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 | |[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 | |[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 | |[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 | |[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 | |[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 | |[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 | |[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 | |[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 | |[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 | |[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 | |[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 | |[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 | |[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 | |[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 | |[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 | |[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 | |[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 | |[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 | |[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 | |[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 | |[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 | |[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 | |[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 | 
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 | |[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 | |[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 | |[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 | |[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 | |[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 | |[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 | |[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 | |[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 | |[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 | |[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 | |[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 | |[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 | |[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 | |[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 | |[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 | |[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 | |[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 | |[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 | |[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 | |[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 | |[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 | |[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 | |[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 | |[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 | |[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 | |[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 | |[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 | |[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 | 
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 | |[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 | |[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 | |[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 | |[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 | |[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 | |[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 | |[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 | |[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 | |[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 | |[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 | |[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 | |[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 | |[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 | |[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 | |[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 | |[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 | |[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 | |[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 | |[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 | |[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 | |[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 | |[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 | |[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 | |[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 | |[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 | |[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 | |[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 | |[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 | |[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 | 
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 |10.6 |2349 | |[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 | |[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 | |[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 | |[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 | |[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 | |[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 | |[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 | |[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 | |[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 | |[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 | |[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 | |[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 | |[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 | |[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 | |[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 | |[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 | |[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 | |[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 | |[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 | |[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 | |[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 | |[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 | |[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 | |[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 | |[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 | |[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 | |[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 | |[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 | |[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 | 
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 | |[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 | |[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 | |[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 | |[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 | |[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 | |[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 | |[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 | |[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 | |[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 | |[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 | |[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 | |[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 | |[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 | |[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 | |[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 | |[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 | |[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 | |[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 | |[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 | |[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 | |[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 | |[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 | |[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 | |[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 | |[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 | |[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 | |[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 | |[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 | |[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 | |[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 | 
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 | |[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 | |[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 | |[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 | |[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 | |[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 | |[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 | |[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 | |[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 | |[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 | |[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 | |[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 | |[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 | |[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 | |[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 | |[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 | |[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 | |[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 | |[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 | |[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 | |[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 | |[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 | |[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 | |[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 | |[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 | |[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 | |[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 | |[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 | |[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 | |[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 | 
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 | |[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 | |[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 | |[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 | |[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 | |[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 | |[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 | |[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 | |[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 | |[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 | |[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 | |[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 | |[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 | |[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 | |[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 | |[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 | |[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 | |[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 | |[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 | |[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 | |[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 | |[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 | |[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 | |[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 | |[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 | |[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 | |[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 | |[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 | |[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 | |[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 | |[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 | |[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 
|75.3 |92.58|16.0 |2.4 |7.4 |5583 | |[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 |75.16|92.18|21.8 |3.7 |3.7 |5994 | |[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 | |[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 | |[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 | |[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 | |[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 | |[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 | |[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 | |[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 | |[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 | |[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 | |[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 | |[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 | |[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 | |[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 | |[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 | |[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 | |[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 | |[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 | |[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 | |[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 | |[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 | ## Citation ```bibtex @misc{yalniz2019billionscale, title={Billion-scale semi-supervised learning for image classification}, author={I. Zeki Yalniz and Hervé Jégou and Kan Chen and Manohar Paluri and Dhruv Mahajan}, year={2019}, eprint={1905.00546}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @article{Xie2016, title={Aggregated Residual Transformations for Deep Neural Networks}, author={Saining Xie and Ross Girshick and Piotr Dollár and Zhuowen Tu and Kaiming He}, journal={arXiv preprint arXiv:1611.05431}, year={2016} } ``` ```bibtex @article{He2015, author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun}, title = {Deep Residual Learning for Image Recognition}, journal = {arXiv preprint arXiv:1512.03385}, year = {2015} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
38,875
[ [ -0.063720703125, -0.019866943359375, 0.00457000732421875, 0.029510498046875, -0.03216552734375, -0.00838470458984375, -0.0109710693359375, -0.032135009765625, 0.08148193359375, 0.0186309814453125, -0.04864501953125, -0.041259765625, -0.045989990234375, -0.00...
sail-rvc/Shrek
2023-07-14T07:31:48.000Z
[ "transformers", "rvc", "sail-rvc", "audio-to-audio", "endpoints_compatible", "region:us" ]
audio-to-audio
sail-rvc
null
null
sail-rvc/Shrek
1
425
transformers
2023-07-14T07:31:27
--- pipeline_tag: audio-to-audio tags: - rvc - sail-rvc --- # Shrek ## RVC Model ![banner](https://i.imgur.com/xocCjhH.jpg) This model repo was automatically generated. Date: 2023-07-14 07:31:48 Bot Name: juuxnscrap Model Type: RVC Source: https://huggingface.co/juuxn/RVCModels/ Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
373
[ [ -0.032684326171875, -0.037017822265625, 0.0196380615234375, 0.0014257431030273438, -0.04742431640625, 0.00592041015625, 0.007007598876953125, -0.006847381591796875, 0.03143310546875, 0.0673828125, -0.049591064453125, -0.0282440185546875, -0.042388916015625, ...
digiplay/NextPhoto_v3
2023-08-20T17:30:32.000Z
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
digiplay
null
null
digiplay/NextPhoto_v3
0
425
diffusers
2023-08-20T16:03:05
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: https://civitai.com/models/84335?modelVersionId=131530
191
[ [ -0.0131378173828125, 0.0254058837890625, 0.043975830078125, 0.0207977294921875, -0.0247650146484375, -0.0140228271484375, 0.045989990234375, -0.0033397674560546875, 0.0278472900390625, 0.0350341796875, -0.038177490234375, 0.005733489990234375, 0.0067672729492187...
deepseek-ai/deepseek-coder-1.3b-instruct
2023-11-05T16:23:02.000Z
[ "transformers", "pytorch", "llama", "text-generation", "license:other", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
deepseek-ai
null
null
deepseek-ai/deepseek-coder-1.3b-instruct
17
425
transformers
2023-10-29T12:43:40
--- license: other license_name: deepseek license_link: LICENSE --- <p align="center"> <img width="1000px" alt="DeepSeek Coder" src="https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/pictures/logo.png?raw=true"> </p> <p align="center"><a href="https://www.deepseek.com/">[🏠Homepage]</a> | <a href="https://coder.deepseek.com/">[🤖 Chat with DeepSeek Coder]</a> | <a href="https://discord.gg/Tc7c45Zzu5">[Discord]</a> | <a href="https://github.com/guoday/assert/blob/main/QR.png?raw=true">[Wechat(微信)]</a> </p> <hr> ### 1. Introduction of Deepseek Coder Deepseek Coder is composed of a series of code language models, each trained from scratch on 2T tokens, with a composition of 87% code and 13% natural language in both English and Chinese. We provide various sizes of the code model, ranging from 1B to 33B versions. Each model is pre-trained on a project-level code corpus by employing a window size of 16K and an extra fill-in-the-blank task, to support project-level code completion and infilling. For coding capabilities, Deepseek Coder achieves state-of-the-art performance among open-source code models on multiple programming languages and various benchmarks. - **Massive Training Data**: Trained from scratch on 2T tokens, including 87% code and 13% linguistic data in both English and Chinese languages. - **Highly Flexible & Scalable**: Offered in model sizes of 1.3B, 5.7B, 6.7B, and 33B, enabling users to choose the setup most suitable for their requirements. - **Superior Model Performance**: State-of-the-art performance among publicly available code models on HumanEval, MultiPL-E, MBPP, DS-1000, and APPS benchmarks. - **Advanced Code Completion Capabilities**: A window size of 16K and a fill-in-the-blank task, supporting project-level code completion and infilling tasks. ### 2. Model Summary deepseek-coder-1.3b-instruct is a 1.3B parameter model initialized from deepseek-coder-1.3b-base and fine-tuned on 2B tokens of instruction data. - **Home Page:** [DeepSeek](https://deepseek.com/) - **Repository:** [deepseek-ai/deepseek-coder](https://github.com/deepseek-ai/deepseek-coder) - **Chat With DeepSeek Coder:** [DeepSeek-Coder](https://coder.deepseek.com/) ### 3. How to Use Here are some examples of how to use our model. #### Chat Model Inference ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-1.3b-instruct", trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-1.3b-instruct", trust_remote_code=True).cuda() messages=[ { 'role': 'user', 'content': "write a quick sort algorithm in python."} ] inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device) # 32021 is the id of <|EOT|> token outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, top_k=50, top_p=0.95, num_return_sequences=1, eos_token_id=32021) print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)) ``` ### 4. License This code repository is licensed under the MIT License. The use of DeepSeek Coder models is subject to the Model License. DeepSeek Coder supports commercial use. See the [LICENSE-MODEL](https://github.com/deepseek-ai/deepseek-coder/blob/main/LICENSE-MODEL) for more details. ### 5. Contact If you have any questions, please raise an issue or contact us at [agi_code@deepseek.com](mailto:agi_code@deepseek.com).
3,472
[ [ -0.022430419921875, -0.04730224609375, 0.01338958740234375, 0.0257568359375, -0.021728515625, 0.0099945068359375, -0.01580810546875, -0.044677734375, -0.0027484893798828125, 0.01068878173828125, -0.035980224609375, -0.042327880859375, -0.049530029296875, -0....
deepset/gbert-base-germandpr-reranking
2023-05-05T06:59:09.000Z
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "de", "dataset:deepset/germandpr", "license:mit", "endpoints_compatible", "region:us" ]
text-classification
deepset
null
null
deepset/gbert-base-germandpr-reranking
4
424
transformers
2022-03-02T23:29:05
--- language: de datasets: - deepset/germandpr license: mit --- ## Overview **Language model:** gbert-base-germandpr-reranking **Language:** German **Training data:** GermanDPR train set (~ 56MB) **Eval data:** GermanDPR test set (~ 6MB) **Infrastructure**: 1x V100 GPU **Published**: June 3rd, 2021 ## Details - We trained a text pair classification model in FARM, which can be used for reranking in document retrieval tasks. To this end, the classifier calculates the similarity of the query and each retrieved top k document (e.g., k=10). The top k documents are then sorted by their similarity scores. The document most similar to the query is ranked highest. ## Hyperparameters ``` batch_size = 16 n_epochs = 2 max_seq_len = 512 tokens for question and passage concatenated learning_rate = 2e-5 lr_schedule = LinearWarmup embeds_dropout_prob = 0.1 ``` ## Performance We use the GermanDPR test dataset as ground truth labels and run two experiments to compare how a BM25 retriever performs with or without reranking with our model. The first experiment runs retrieval on the full German Wikipedia (more than 2 million passages) and the second experiment runs retrieval on the GermanDPR dataset only (not more than 5000 passages). Both experiments use 1025 queries. A short sketch of how these metrics can be computed appears below. Note that the second experiment evaluates on a much simpler task because of the smaller dataset size, which explains the strong BM25 retrieval performance. ### Full German Wikipedia (more than 2 million passages): BM25 Retriever without Reranking - recall@3: 0.4088 (419 / 1025) - mean_reciprocal_rank@3: 0.3322 BM25 Retriever with Reranking Top 10 Documents - recall@3: 0.5200 (533 / 1025) - mean_reciprocal_rank@3: 0.4800 ### GermanDPR Test Dataset only (not more than 5000 passages): BM25 Retriever without Reranking - recall@3: 0.9102 (933 / 1025) - mean_reciprocal_rank@3: 0.8528 BM25 Retriever with Reranking Top 10 Documents - recall@3: 0.9298 (953 / 1025) - mean_reciprocal_rank@3: 0.8813 ## Usage ### In haystack You can load the model in [haystack](https://github.com/deepset-ai/haystack/) for reranking the documents returned by a Retriever: ```python ... retriever = ElasticsearchRetriever(document_store=document_store) ranker = FARMRanker(model_name_or_path="deepset/gbert-base-germandpr-reranking") ... p = Pipeline() p.add_node(component=retriever, name="ESRetriever", inputs=["Query"]) p.add_node(component=ranker, name="Ranker", inputs=["ESRetriever"]) ``` ## About us ![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo) We bring NLP to the industry via open source! Our focus: Industry specific language models & large scale QA systems. Some of our work: - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) - [FARM](https://github.com/deepset-ai/FARM) - [Haystack](https://github.com/deepset-ai/haystack/) Get in touch: [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
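To make the metrics above concrete, here is a small self-contained sketch (not from the original card) of how recall@3 and mean_reciprocal_rank@3 can be computed from ranked retrieval results; the toy data is invented.

```python
# Hedged sketch: recall@k and MRR@k over ranked results.
# `ranked` maps query id -> passage ids in rank order; `gold` maps query id -> relevant passage id.

def recall_at_k(ranked, gold, k=3):
    hits = sum(1 for q, docs in ranked.items() if gold[q] in docs[:k])
    return hits / len(ranked)

def mrr_at_k(ranked, gold, k=3):
    total = 0.0
    for q, docs in ranked.items():
        for rank, doc in enumerate(docs[:k], start=1):
            if doc == gold[q]:
                total += 1.0 / rank
                break
    return total / len(ranked)

ranked = {"q1": ["p3", "p7", "p1"], "q2": ["p2", "p9", "p4"]}  # toy example
gold = {"q1": "p7", "q2": "p4"}
print(recall_at_k(ranked, gold))  # 1.0 (both gold passages appear in the top 3)
print(mrr_at_k(ranked, gold))     # (1/2 + 1/3) / 2 = 0.41666...
```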
3,221
[ [ -0.0241546630859375, -0.04815673828125, 0.025848388671875, 0.0063629150390625, -0.001598358154296875, -0.00833892822265625, -0.028900146484375, -0.026824951171875, 0.0086669921875, 0.0243682861328125, -0.0293731689453125, -0.0606689453125, -0.043304443359375, ...
microsoft/git-large-vatex
2023-01-24T17:22:17.000Z
[ "transformers", "pytorch", "git", "text-generation", "vision", "en", "arxiv:2205.14100", "license:mit", "has_space", "region:us" ]
text-generation
microsoft
null
null
microsoft/git-large-vatex
0
424
transformers
2023-01-02T11:48:08
--- language: en license: mit tags: - vision inference: false model_name: microsoft/git-large-vatex --- # GIT (GenerativeImage2Text), large-sized, fine-tuned on VATEX GIT (short for GenerativeImage2Text) model, large-sized version, fine-tuned on VATEX. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in [this repository](https://github.com/microsoft/GenerativeImage2Text). Disclaimer: The team releasing GIT did not write a model card for this model, so this model card has been written by the Hugging Face team. ## Model description GIT is a Transformer decoder conditioned on both CLIP image tokens and text tokens. The model is trained using "teacher forcing" on many (image, text) pairs. The goal for the model is simply to predict the next text token, given the image tokens and previous text tokens. The model has full access to (i.e. a bidirectional attention mask is used for) the image patch tokens, but only has access to the previous text tokens (i.e. a causal attention mask is used for the text tokens) when predicting the next text token. ![GIT architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/git_architecture.jpg) This allows the model to be used for tasks like: - image and video captioning - visual question answering (VQA) on images and videos - even image classification (by simply conditioning the model on the image and asking it to generate a class for it in text). ## Intended uses & limitations You can use the raw model for video captioning. See the [model hub](https://huggingface.co/models?search=microsoft/git) to look for fine-tuned versions on a task that interests you. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/git.html); a hedged sketch also follows below. ## Training data From the paper: > We collect 0.8B image-text pairs for pre-training, which include COCO (Lin et al., 2014), Conceptual Captions (CC3M) (Sharma et al., 2018), SBU (Ordonez et al., 2011), Visual Genome (VG) (Krishna et al., 2016), Conceptual Captions (CC12M) (Changpinyo et al., 2021), ALT200M (Hu et al., 2021a), and an extra 0.6B data following a similar collection procedure in Hu et al. (2021a). However, this is for the model referred to as "GIT" in the paper, which is not open-sourced. This checkpoint is "GIT-large", a smaller variant of GIT trained on 20 million image-text pairs. Next, the model was fine-tuned on VATEX. See table 11 in the [paper](https://arxiv.org/abs/2205.14100) for more details. ### Preprocessing We refer to the original repo regarding details for preprocessing during training. During validation, one resizes the shorter edge of each image, after which center cropping is performed to a fixed-size resolution. Next, frames are normalized across the RGB channels with the ImageNet mean and standard deviation. ## Evaluation results For evaluation results, we refer readers to the [paper](https://arxiv.org/abs/2205.14100).
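Since the card defers code to the docs, here is a hedged video-captioning sketch. Random frames stand in for a real sampled clip, and the frame count and sampling strategy are assumptions; see the transformers GIT documentation for the full recipe.

```python
# Hedged sketch: caption a clip with GIT. Dummy frames replace real video frames.
import numpy as np
import torch
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("microsoft/git-large-vatex")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-large-vatex")

# In practice, sample ~6 frames uniformly from the video; random frames are stand-ins.
frames = [np.random.randint(0, 256, (360, 640, 3), dtype=np.uint8) for _ in range(6)]
pixel_values = processor(images=frames, return_tensors="pt").pixel_values  # (6, 3, H, W)

# GIT expects video input shaped (batch, num_frames, channels, height, width).
with torch.no_grad():
    generated_ids = model.generate(pixel_values=pixel_values.unsqueeze(0), max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```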
3,129
[ [ -0.045867919921875, -0.054443359375, 0.0172271728515625, -0.01153564453125, -0.033599853515625, 0.0011415481567382812, -0.0120086669921875, -0.031707763671875, 0.022430419921875, 0.0335693359375, -0.04449462890625, -0.0297393798828125, -0.0665283203125, -0.0...
ziyxxxx/abahrozin
2023-03-05T06:51:41.000Z
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
ziyxxxx
null
null
ziyxxxx/abahrozin
0
424
diffusers
2023-03-05T06:48:52
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### abahrozin Dreambooth model trained by ziyxxxx with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
498
[ [ -0.0273284912109375, -0.0577392578125, 0.038909912109375, 0.026275634765625, -0.0242767333984375, 0.026336669921875, 0.0270233154296875, -0.0245819091796875, 0.044769287109375, 0.0029163360595703125, -0.0204010009765625, -0.0210113525390625, -0.0263214111328125,...
kadirnar/dress-v0
2023-04-18T17:52:27.000Z
[ "diffusers", "tensorboard", "text-to-image", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
kadirnar
null
null
kadirnar/dress-v0
0
424
diffusers
2023-04-18T17:50:12
--- license: creativeml-openrail-m tags: - text-to-image widget: - text: dress --- ### dress_v0 Dreambooth model trained by kadirnar with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model. You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), or locally as sketched below. Don't forget to use the concept prompts! Sample pictures of: dress (use that in your prompt) ![dress 0](https://huggingface.co/kadirnar/dress-v0/resolve/main/concept_images/dress_v0_%281%29.jpg)![dress 1](https://huggingface.co/kadirnar/dress-v0/resolve/main/concept_images/dress_v0_%282%29.jpg)![dress 2](https://huggingface.co/kadirnar/dress-v0/resolve/main/concept_images/dress_v0_%283%29.jpg)![dress 3](https://huggingface.co/kadirnar/dress-v0/resolve/main/concept_images/dress_v0_%284%29.jpg)
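A minimal local inference sketch with `diffusers` (the prompt wording, dtype, and output filename are illustrative choices, not from the original card):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth-tuned pipeline from the Hub
pipe = StableDiffusionPipeline.from_pretrained("kadirnar/dress-v0", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Use the concept token "dress" in the prompt, as noted above
image = pipe("a photo of a dress on a mannequin, studio lighting").images[0]
image.save("dress.png")
```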
967
[ [ -0.039947509765625, -0.043548583984375, 0.033416748046875, 0.033416748046875, -0.03985595703125, 0.01611328125, 0.025482177734375, -0.0230865478515625, 0.042266845703125, 0.035400390625, -0.058807373046875, -0.025115966796875, -0.037567138671875, -0.01561737...
Falah/fighter-style
2023-05-06T09:06:10.000Z
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
Falah
null
null
Falah/fighter-style
0
424
diffusers
2023-05-06T08:53:38
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### fighter_style Dreambooth model trained by Falah with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
500
[ [ -0.024444580078125, -0.06512451171875, 0.0252532958984375, 0.0258941650390625, -0.0214691162109375, 0.036651611328125, 0.032318115234375, -0.004833221435546875, 0.06005859375, 0.00634002685546875, -0.024566650390625, -0.0291290283203125, -0.03875732421875, -...
digiplay/DiamondCoalMix_v2_pruned_diffusers
2023-07-22T13:14:00.000Z
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
digiplay
null
null
digiplay/DiamondCoalMix_v2_pruned_diffusers
2
424
diffusers
2023-06-08T18:26:14
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: https://civitai.com/models/41415/diamondcoalmix Not very stable, but it performs well at times. Recommended guidance_scale: 3-5; avoid setting it too high. ![下载 - 2023-06-10T110153.422.png](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/eZIzhQB_nIsOAOEFeFGGO.png)
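A minimal sketch of applying that guidance-scale recommendation with `diffusers` (prompt and dtype are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/DiamondCoalMix_v2_pruned_diffusers", torch_dtype=torch.float16
).to("cuda")

# Keep guidance_scale in the recommended 3-5 range
image = pipe("a portrait photo, soft light", guidance_scale=4.0).images[0]
image.save("out.png")
```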
419
[ [ -0.0310516357421875, -0.0142669677734375, 0.01617431640625, 0.0445556640625, -0.02996826171875, -0.013153076171875, 0.0098876953125, -0.0012617111206054688, 0.0435791015625, 0.022308349609375, -0.03594970703125, -0.023712158203125, -0.040069580078125, -0.003...
kaiyuy/leandojo-lean3-retriever-byt5-small
2023-09-16T18:33:38.000Z
[ "transformers", "pytorch", "t5", "license:mit", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
kaiyuy
null
null
kaiyuy/leandojo-lean3-retriever-byt5-small
2
424
transformers
2023-06-17T04:55:26
--- license: mit --- [LeanDojo: Theorem Proving with Retrieval-Augmented Language Models](https://arxiv.org/abs/2306.15626) Under review, NeurIPS (Datasets and Benchmarks Track), 2023 [Kaiyu Yang](https://yangky11.github.io/), [Aidan Swope](https://aidanswope.com/about), [Alex Gu](https://minimario.github.io/), [Rahul Chalamala](https://www.linkedin.com/in/rchalamala), [Peiyang Song](https://www.linkedin.com/in/peiyang-song-3279b3251/), [Shixing Yu](https://billysx.github.io/), [Saad Godil](https://www.linkedin.com/in/saad-godil-9728353/), [Ryan Prenger](https://www.linkedin.com/in/ryan-prenger-18797ba1/), [Anima Anandkumar](http://tensorlab.cms.caltech.edu/users/anima/) ```bibtex @article{yang2023leandojo, title={{LeanDojo}: Theorem Proving with Retrieval-Augmented Language Models}, author={Yang, Kaiyu and Swope, Aidan and Gu, Alex and Chalamala, Rahul and Song, Peiyang and Yu, Shixing and Godil, Saad and Prenger, Ryan and Anandkumar, Anima}, journal={arXiv preprint arXiv:2306.15626}, year={2023} } ``` Please visit [LeanDojo Website](https://leandojo.org/) for details.
1,107
[ [ -0.0185394287109375, -0.017974853515625, 0.041717529296875, 0.018341064453125, 0.00603485107421875, -0.0098419189453125, -0.0235443115234375, -0.037994384765625, 0.0139923095703125, 0.0268402099609375, -0.00714874267578125, -0.047027587890625, -0.048065185546875...
ddPn08/SwimInLatent
2023-07-29T15:12:59.000Z
[ "diffusers", "stable-diffusion", "text-to-image", "safetensors", "en", "license:openrail++", "endpoints_compatible", "has_space", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
ddPn08
null
null
ddPn08/SwimInLatent
5
424
diffusers
2023-07-29T12:50:23
--- license: openrail++ thumbnail: >- https://huggingface.co/ddPn08/SwimInLatent/resolve/main/images/thumbnail.png tags: - stable-diffusion - text-to-image - safetensors - diffusers inference: true widget: - text: >- masterpiece, best quality, 1girl, solo, bikini, upper body, short hair, arms behind back, looking at viewer, ocean, wave, water example_title: example language: - en library_name: diffusers --- <div align="center"> ![](./images/thumbnail.png) </div> <h1 align="center"><b>Swim In Latent</b></h1> <p align="center">StableDiffusionXL model fine-tuned for anime.</p> <br > # Samples ![](./images/sample-01.jpeg) ``` (new, newest, best quality, masterpiece:1.2), 1girl, solo, (upper body:1.2), bikini, short hair, black hair, animal ears, simple background, white background negative: nsfw, (worst quality, low quality, normal quality:1.2), (bad anatomy, bad hands, missing fingers, extra digit, fewer digits:1.1), ``` ![](./images/sample-02.jpeg) ``` (new, newest, best quality, masterpiece:1.2), 1girl, solo, close-up, bikini, short hair, ocean, wave, water negative: nsfw, (worst quality, low quality, normal quality:1.2), (bad anatomy, bad hands, missing fingers, extra digit, fewer digits:1.1), ``` ![](./images/sample-03.jpeg) ``` (new, newest, best quality, masterpiece:1.2), 1girl, (solo:1.2), short hair, messy hair, black hair, white hoodie, outdoor, cityscape negative: nsfw, (worst quality, low quality, normal quality:1.2), embedding:negativeXL_A.safetensors, (bad anatomy, bad hands, missing fingers, extra digit, fewer digits:1.1), ``` ![](./images/sample-04.jpeg) ``` (new, newest, best quality, masterpiece:1.2), 1girl, sitting on bed, sexy pose, looking at viewer, long hair, beige hair, black eyes, blush negative: nsfw, (worst quality, low quality, normal quality:1.2), embedding:negativeXL_A.safetensors, (bad anatomy, bad hands, missing fingers, extra digit, fewer digits:1.1), ``` <br > --- A simple workflow file for use with comfyui is here. [workflow-swim-in-latent.json](./workflow-swim-in-latent.json) <br > # metadata.json ```json { "modelspec.sai_model_spec": "1.0.0.alpha", "modelspec.architecture": "stable-diffusion-xl-v1-base", "modelspec.implementation": "sgm", "modelspec.title": "SwimInLatent", "modelspec.author": "ddPn08", "modelspec.description": "StableDiffusionXL model fine-tuned for anime.", "modelspec.date": "2023-07-29", "modelspec.license": "CreativeML Open RAIL++-M" } ```
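For those not using ComfyUI, a minimal `diffusers` sketch (the prompt is taken from the samples above; the dtype and step count are illustrative assumptions):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained("ddPn08/SwimInLatent", torch_dtype=torch.float16)
pipe.to("cuda")

image = pipe(
    "(new, newest, best quality, masterpiece:1.2), 1girl, solo, close-up, bikini, short hair, ocean, wave, water",
    negative_prompt="nsfw, (worst quality, low quality, normal quality:1.2)",
    num_inference_steps=28,
).images[0]
image.save("sample.png")
```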
2,507
[ [ -0.0267181396484375, -0.051727294921875, 0.0233917236328125, 0.0177764892578125, -0.04974365234375, 0.0111541748046875, 0.022796630859375, -0.018218994140625, 0.044219970703125, 0.03131103515625, -0.04693603515625, -0.038543701171875, -0.05267333984375, -0.0...
sanskar/DepressionAnalysis
2022-07-23T19:50:11.000Z
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
sanskar
null
null
sanskar/DepressionAnalysis
3
423
transformers
2022-07-22T22:06:20
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: DepressionAnalysis results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DepressionAnalysis This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4023 - Accuracy: 0.8367 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6091 | 1.0 | 151 | 0.5593 | 0.7082 | | 0.4041 | 2.0 | 302 | 0.4295 | 0.8055 | | 0.3057 | 3.0 | 453 | 0.4023 | 0.8367 | | 0.1921 | 4.0 | 604 | 0.4049 | 0.8454 | | 0.1057 | 5.0 | 755 | 0.4753 | 0.8479 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
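Since the usage sections above are left as "More information needed", here is a minimal inference sketch with the 🤗 `pipeline` API (the example sentence is made up, and the label names the checkpoint returns are not documented on the card):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="sanskar/DepressionAnalysis")
result = classifier("I haven't slept properly in weeks and nothing feels worth doing.")
print(result)  # e.g. [{'label': 'LABEL_1', 'score': 0.93}] — label mapping not documented
```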
1,660
[ [ -0.035369873046875, -0.04803466796875, 0.023651123046875, 0.02044677734375, -0.0261993408203125, -0.0241546630859375, -0.006473541259765625, -0.0008401870727539062, 0.0189971923828125, 0.0185699462890625, -0.054229736328125, -0.065185546875, -0.06170654296875, ...
timm/eva02_base_patch14_448.mim_in22k_ft_in22k
2023-03-31T05:45:11.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-22k", "arxiv:2303.11331", "arxiv:2303.15389", "license:mit", "region:us" ]
image-classification
timm
null
null
timm/eva02_base_patch14_448.mim_in22k_ft_in22k
1
423
timm
2023-03-31T04:16:00
--- tags: - image-classification - timm library_tag: timm license: mit datasets: - imagenet-22k - imagenet-22k --- # Model card for eva02_base_patch14_448.mim_in22k_ft_in22k An EVA02 image classification model. Pretrained on ImageNet-22k with masked image modeling (using EVA-CLIP as a MIM teacher) and fine-tuned on ImageNet-22k by paper authors. EVA-02 models are vision transformers with mean pooling, SwiGLU, Rotary Position Embeddings (ROPE), and extra LN in MLP (for Base & Large). NOTE: `timm` checkpoints are float32 for consistency with other models. Original checkpoints are float16 or bfloat16 in some cases, see originals if that's preferred. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 103.1 - GMACs: 107.1 - Activations (M): 259.2 - Image size: 448 x 448 - **Papers:** - EVA-02: A Visual Representation for Neon Genesis: https://arxiv.org/abs/2303.11331 - EVA-CLIP: Improved Training Techniques for CLIP at Scale: https://arxiv.org/abs/2303.15389 - **Original:** - https://github.com/baaivision/EVA - https://huggingface.co/Yuxin-CV/EVA-02 - **Pretrain Dataset:** ImageNet-22k - **Dataset:** ImageNet-22k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('eva02_base_patch14_448.mim_in22k_ft_in22k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'eva02_base_patch14_448.mim_in22k_ft_in22k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 1025, 768) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). 
|model |top1 |top5 |param_count|img_size| |-----------------------------------------------|------|------|-----------|--------| |eva02_large_patch14_448.mim_m38m_ft_in22k_in1k |90.054|99.042|305.08 |448 | |eva02_large_patch14_448.mim_in22k_ft_in22k_in1k|89.946|99.01 |305.08 |448 | |eva_giant_patch14_560.m30m_ft_in22k_in1k |89.792|98.992|1014.45 |560 | |eva02_large_patch14_448.mim_in22k_ft_in1k |89.626|98.954|305.08 |448 | |eva02_large_patch14_448.mim_m38m_ft_in1k |89.57 |98.918|305.08 |448 | |eva_giant_patch14_336.m30m_ft_in22k_in1k |89.56 |98.956|1013.01 |336 | |eva_giant_patch14_336.clip_ft_in1k |89.466|98.82 |1013.01 |336 | |eva_large_patch14_336.in22k_ft_in22k_in1k |89.214|98.854|304.53 |336 | |eva_giant_patch14_224.clip_ft_in1k |88.882|98.678|1012.56 |224 | |eva02_base_patch14_448.mim_in22k_ft_in22k_in1k |88.692|98.722|87.12 |448 | |eva_large_patch14_336.in22k_ft_in1k |88.652|98.722|304.53 |336 | |eva_large_patch14_196.in22k_ft_in22k_in1k |88.592|98.656|304.14 |196 | |eva02_base_patch14_448.mim_in22k_ft_in1k |88.23 |98.564|87.12 |448 | |eva_large_patch14_196.in22k_ft_in1k |87.934|98.504|304.14 |196 | |eva02_small_patch14_336.mim_in22k_ft_in1k |85.74 |97.614|22.13 |336 | |eva02_tiny_patch14_336.mim_in22k_ft_in1k |80.658|95.524|5.76 |336 | ## Citation ```bibtex @article{EVA02, title={EVA-02: A Visual Representation for Neon Genesis}, author={Fang, Yuxin and Sun, Quan and Wang, Xinggang and Huang, Tiejun and Wang, Xinlong and Cao, Yue}, journal={arXiv preprint arXiv:2303.11331}, year={2023} } ``` ```bibtex @article{EVA-CLIP, title={EVA-CLIP: Improved Training Techniques for CLIP at Scale}, author={Sun, Quan and Fang, Yuxin and Wu, Ledell and Wang, Xinlong and Cao, Yue}, journal={arXiv preprint arXiv:2303.15389}, year={2023} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
5,406
[ [ -0.044097900390625, -0.0297088623046875, 0.01255035400390625, 0.00794219970703125, -0.016876220703125, 0.00206756591796875, -0.0086517333984375, -0.0335693359375, 0.03875732421875, 0.0273590087890625, -0.033843994140625, -0.051422119140625, -0.04327392578125, ...
govindkrishnan123/my-pet-dog
2023-08-06T03:24:13.000Z
[ "diffusers", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
govindkrishnan123
null
null
govindkrishnan123/my-pet-dog
0
423
diffusers
2023-08-06T03:17:30
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-Dog Dreambooth model trained by govindkrishnan123 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: AJCE97 Sample pictures of this concept: ![0](https://huggingface.co/govindkrishnan123/my-pet-dog/resolve/main/sample_images/output.png)
402
[ [ -0.062286376953125, -0.020263671875, 0.0268707275390625, 0.0097808837890625, -0.0128631591796875, 0.0249481201171875, 0.02703857421875, -0.0330810546875, 0.046356201171875, 0.0286407470703125, -0.0400390625, -0.0202484130859375, -0.0179290771484375, 0.005901...
Photolens/llama-2-7b-langchain-chat
2023-09-12T18:34:31.000Z
[ "transformers", "pytorch", "llama", "text-generation", "en", "es", "ru", "de", "pl", "th", "vi", "sv", "bn", "da", "he", "it", "fa", "sk", "id", "nb", "el", "nl", "hu", "eu", "zh", "eo", "ja", "ca", "cs", "bg", "fi", "pt", "tr", "ro", "ar", "uk",...
text-generation
Photolens
null
null
Photolens/llama-2-7b-langchain-chat
17
423
transformers
2023-08-09T16:07:52
--- language: - en - es - ru - de - pl - th - vi - sv - bn - da - he - it - fa - sk - id - nb - el - nl - hu - eu - zh - eo - ja - ca - cs - bg - fi - pt - tr - ro - ar - uk - gl - fr - ko task_categories: - conversational license: llama2 datasets: - Photolens/oasst1-langchain-llama-2-formatted --- ## Model Overview Model license: Llama-2<br> This model is based on the [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) model, QLoRA-finetuned on the [Photolens/oasst1-langchain-llama-2-formatted](https://huggingface.co/datasets/Photolens/oasst1-langchain-llama-2-formatted) dataset.<br> ## Prompt Template: Llama-2 ``` <s>[INST] Prompter Message [/INST] Assistant Message </s> ``` ## Intended Use The dataset used to finetune the base model is optimized for langchain applications.<br> So this model is intended for use as a langchain LLM. ## Training Details This model took `1:14:16` to train with QLoRA on a single `A100 40GB` GPU.<br> - *epochs*: `1` - *train batch size*: `8` - *eval batch size*: `8` - *gradient accumulation steps*: `1` - *maximum gradient norm*: `0.3` - *learning rate*: `2e-4` - *weight decay*: `0.001` - *optimizer*: `paged_adamw_32bit` - *learning rate schedule*: `cosine` - *warmup ratio (linear)*: `0.03` ## Models in this series | Model | Train time | Size (in params) | Base Model | ---|---|---|--- | [llama-2-7b-langchain-chat](https://huggingface.co/Photolens/llama-2-7b-langchain-chat/) | 1:14:16 | 7 billion | [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) | | [llama-2-13b-langchain-chat](https://huggingface.co/Photolens/llama-2-13b-langchain-chat/) | 2:50:27 | 13 billion | [TheBloke/Llama-2-13B-Chat-fp16](https://huggingface.co/TheBloke/Llama-2-13B-Chat-fp16) | | [Photolens/OpenOrcaxOpenChat-2-13b-langchain-chat](https://huggingface.co/Photolens/OpenOrcaxOpenChat-2-13b-langchain-chat/) | 2:56:54 | 13 billion | [Open-Orca/OpenOrcaxOpenChat-Preview2-13B](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B) |
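A minimal sketch of applying the Llama-2 prompt template above with `transformers` (the user question and generation settings are illustrative):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Photolens/llama-2-7b-langchain-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Wrap the user message in the Llama-2 template shown above;
# the tokenizer adds the leading <s> BOS token itself
prompt = "[INST] What tools does langchain provide for building agents? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```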
2,074
[ [ -0.0323486328125, -0.039215087890625, 0.005489349365234375, 0.0291900634765625, -0.051727294921875, 0.0037174224853515625, -0.005306243896484375, -0.050872802734375, 0.04266357421875, 0.0312042236328125, -0.0609130859375, -0.045257568359375, -0.037872314453125, ...
HiTZ/GoLLIE-7B
2023-10-10T07:51:44.000Z
[ "transformers", "pytorch", "llama", "text-generation", "code", "text-generation-inference", "Information Extraction", "IE", "Named Entity Recogniton", "Event Extraction", "Relation Extraction", "LLaMA", "custom_code", "en", "dataset:ACE05", "dataset:bc5cdr", "dataset:conll2003", "d...
text-generation
HiTZ
null
null
HiTZ/GoLLIE-7B
11
423
transformers
2023-09-25T10:24:52
--- license: llama2 datasets: - ACE05 - bc5cdr - conll2003 - ncbi_disease - conll2012_ontonotesv5 - rams - tacred - wnut_17 language: - en metrics: - f1 pipeline_tag: text-generation tags: - code - text-generation-inference - Information Extraction - IE - Named Entity Recognition - Event Extraction - Relation Extraction - LLaMA --- <p align="center"> <br> <img src="https://github.com/hitz-zentroa/GoLLIE/raw/main/assets/GoLLIE.png" style="height: 250px;"> <h2 align="center"><b>G</b>uideline f<b>o</b>llowing <b>L</b>arge <b>L</b>anguage Model for <b>I</b>nformation <b>E</b>xtraction</h2> <br> # Model Card for GoLLIE 7B <p align="justify"> We present GoLLIE, a Large Language Model trained to follow annotation guidelines. GoLLIE outperforms previous approaches on zero-shot Information Extraction and allows the user to perform inferences with annotation schemas defined on the fly. Different from previous approaches, GoLLIE is able to follow detailed definitions and does not only rely on the knowledge already encoded in the LLM. - 💻 Code: [https://github.com/osainz59/CoLLIE/](https://github.com/hitz-zentroa/GoLLIE) - 📒 Blog Post: [GoLLIE: Guideline-following Large Language Model for Information Extraction](https://hitz-zentroa.github.io/GoLLIE/) - 📖 Paper: [GoLLIE: Annotation Guidelines improve Zero-Shot Information-Extraction](https://arxiv.org/abs/2310.03668) - 🐕 GoLLIE Collection in the 🤗HuggingFace Hub: [HiTZ/gollie](https://huggingface.co/collections/HiTZ/gollie-651bf19ee315e8a224aacc4f) - 🚀 Example Jupyter Notebooks: [GoLLIE Notebooks](https://github.com/hitz-zentroa/GoLLIE/tree/main/notebooks) </p> <p align="center"> <img src="https://github.com/hitz-zentroa/GoLLIE/raw/main/assets/zero_shot_results.png"> </p> ### Model Description - **Developed by:** [Oscar Sainz](https://osainz59.github.io/), [Iker García-Ferrero](https://ikergarcia1996.github.io/Iker-Garcia-Ferrero/), [Rodrigo Agerri](https://ragerri.github.io/), [Oier Lopez de Lacalle](https://oierldl.github.io/), [German Rigau](https://adimen.si.ehu.es/~rigau/) and [Eneko Agirre](https://eagirre.github.io/) - **Institution:** [HiTZ Basque Center for Language Technology](http://www.hitz.eus/) - [Ixa](https://www.ixa.eus/node/2?language=en), [University of the Basque Country UPV/EHU](https://www.ehu.eus/en/en-home) - **Model type:** Text Generation - **Language(s) (NLP):** English - **License:** LLaMA2 License for the base and merged model. Apache 2.0 for pre-trained LoRA Adapters - **Finetuned from model:** CODE-LLaMA2 ## Schema definition and inference example The labels are represented as Python classes, and the guidelines or instructions are introduced as docstrings. The model starts generating after the `result = [` line. ```Python # Entity definitions @dataclass class Launcher(Template): """Refers to a vehicle designed primarily to transport payloads from the Earth's surface to space. Launchers can carry various payloads, including satellites, crewed spacecraft, and cargo, into various orbits or even beyond Earth's orbit. They are usually multi-stage vehicles that use rocket engines for propulsion.""" mention: str """ The name of the launcher vehicle. Such as: "Saturn V", "Atlas V", "Soyuz", "Ariane 5" """ space_company: str # The company that operates the launcher. Such as: "Blue origin", "ESA", "Boeing", "ISRO", "Northrop Grumman", "Arianespace" crew: List[str] # Names of the crew members boarding the Launcher. 
Such as: "Neil Armstrong", "Michael Collins", "Buzz Aldrin" @dataclass class Mission(Template): """Any planned or accomplished journey beyond Earth's atmosphere with specific objectives, either crewed or uncrewed. It includes missions to satellites, the International Space Station (ISS), other celestial bodies, and deep space.""" mention: str """ The name of the mission. Such as: "Apollo 11", "Artemis", "Mercury" """ date: str # The start date of the mission departure: str # The place from which the vehicle will be launched. Such as: "Florida", "Houston", "French Guiana" destination: str # The place or planet to which the launcher will be sent. Such as "Moon", "low-orbit", "Saturn" # This is the text to analyze text = ( "The Ares 3 mission to Mars is scheduled for 2032. The Starship rocket build by SpaceX will take off from Boca Chica," "carrying the astronauts Max Rutherford, Elena Soto, and Jake Martinez." ) # The annotation instances that take place in the text above are listed here result = [ Mission(mention='Ares 3', date='2032', departure='Boca Chica', destination='Mars'), Launcher(mention='Starship', space_company='SpaceX', crew=['Max Rutherford', 'Elena Soto', 'Jake Martinez']) ] ``` ## How to Get Started with the Model Please read our [🚀 Example Jupyter Notebooks](https://github.com/hitz-zentroa/GoLLIE/tree/main/notebooks) to get started with GoLLIE. The best way to load the model is using our custom `load_model` fuction. However, you can also load them using the AutoModelForCausalLM class. **Important**: Our flash attention implementation has small numerical differences compared to the attention implementation in Huggingface. You must use the flag `trust_remote_code=True` or you will get inferior results. Flash attention requires an available CUDA GPU. Running GOLLIE pre-trained models on a CPU is not supported. We plan to address this in future releases. First, install flash attention 2: ```bash pip install flash-attn --no-build-isolation pip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary ``` Then you can load the model using ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("HiTZ/GoLLIE-7B") model = AutoModelForCausalLM.from_pretrained("HiTZ/GoLLIE-7B", trust_remote_code=True, torch_dtype=torch.bfloat16) model.to("cuda") ``` Read our [🚀 Example Jupyter Notebooks](https://github.com/hitz-zentroa/GoLLIE/tree/main/notebooks) to learn how to easily define guidelines, generate model inputs and parse the output! ### Training Data This is the list of task used for training and evaluating GoLLIE. However, as demonstrated in the 🚀 [Create Custom Task notebook](https://github.com/hitz-zentroa/GoLLIE/blob/main/notebooks/Create%20Custom%20Task.ipynb) GoLLIE can perform a wide range of unseen tasks. For more info, read our [📖Paper](https://arxiv.org/abs/2310.03668). 
<p align="center"> <img src="https://github.com/hitz-zentroa/GoLLIE/raw/main/assets/datasets.png"> </p> ## Evaluation | Model | Supervised average F1 | Zero-shot average F1 | 🤗HuggingFace Hub | |---|:---------------------:|:--------------------:|:---------------------------------------------------------:| | GoLLIE-7B | 73.0 | 55.3 | [HiTZ/GoLLIE-7B](https://huggingface.co/HiTZ/GoLLIE-7B) | | GoLLIE-13B | 73.9 | 56.0 | [HiTZ/GoLLIE-13B](https://huggingface.co/HiTZ/GoLLIE-13B) | | GoLLIE-34B | **75.0** | **57.2** | [HiTZ/GoLLIE-34B](https://huggingface.co/HiTZ/GoLLIE-34B) | ## Environmental Impact | Model | Hardware | FLOPs | Time (h) | CO<sup>2</sup>eq (kg) | |----------------|-------------------|---------------------------|-------------------|-------------------------------------| | GoLLIE 7B | 1xA100 | 11.9e<sup>18</sup> | 44.5 | 1.57 | | GoLLIE 13B | 1xA100 | 22.7e<sup>18</sup> | 79.5 | 2.80 | | GoLLIE 34B | 2xA100 | 55.8e<sup>18</sup> | 94.6 | 6.67 | ## Citation ``` @misc{sainz2023gollie, title={GoLLIE: Annotation Guidelines improve Zero-Shot Information-Extraction}, author={Oscar Sainz and Iker García-Ferrero and Rodrigo Agerri and Oier Lopez de Lacalle and German Rigau and Eneko Agirre}, year={2023}, eprint={2310.03668}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
8,204
[ [ -0.00966644287109375, -0.052642822265625, 0.032745361328125, 0.0208282470703125, -0.007007598876953125, -0.0260467529296875, -0.0205078125, -0.039276123046875, 0.025146484375, 0.03094482421875, -0.040252685546875, -0.052032470703125, -0.04547119140625, -0.01...
syzymon/long_llama_code_7b_instruct
2023-10-06T21:39:11.000Z
[ "transformers", "pytorch", "longllama", "text-generation", "custom_code", "license:llama2", "region:us" ]
text-generation
syzymon
null
null
syzymon/long_llama_code_7b_instruct
9
423
transformers
2023-10-06T18:41:09
--- license: llama2 --- # LongLLaMA-Code 7B Instruct <div align="center"> <table> <tr> <th style="font-size: 120%"> >_ 🎓 <a href="https://huggingface.co/syzymon/long_llama_code_7b_instruct">LongLLaMA-Code 7B Instruct</a> 📑🗨 </th> </tr> <tr> <td align="center"> <a href="https://colab.research.google.com/github/CStanKonrad/long_llama/blob/main/long_llama_code_instruct_colab.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg"></a> </td> </tr> </table> </div> ## TLDR [LongLLaMA-Code 7B Instruct](https://huggingface.co/syzymon/long_llama_code_7b_instruct) is [LongLLaMA-Code 7B](https://huggingface.co/syzymon/long_llama_code_7b) tuned on the [TIGER-Lab/MathInstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct), [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca), and [ShareGPT-Processed](https://huggingface.co/datasets/zetavg/ShareGPT-Processed) datasets. It can answer basic questions about research papers and code. It can also perform simple code refactoring. You can try the quantized version of the model using a free GPU in [Google Colab](https://colab.research.google.com/github/CStanKonrad/long_llama/blob/main/long_llama_code_instruct_colab.ipynb). ## Tuning ### Code The model was tuned on a TPU v3-128 pod with a batch size of 128. For tuning, we have used the data preparation pipeline available in instruction_fine_tuning. However, we have replaced the Hugging Face Trainer with a modification of FoT continued pretraining code. This modification boils down to propagating the memory cache throughout the model (basically reproducing the Pytorch inference code functionality in JAX). ### Training Here, we present the basic information about how the model was tuned. For more details, see the [GitHub repo](https://github.com/CStanKonrad/long_llama/tree/main/instruction_fine_tuning/misc). All inputs were truncated and randomly padded (left/right) to 3072 tokens. The last context length was set to 1536. The model was trained for 9k steps, starting with a learning rate of 1.2e-5, with 700 warmup steps, and finishing with a learning rate of 0. The optimizer was AdamW. The question prompt (`pre_question_text`) was: ``` You are an AI assistant. User will you give you a task. Your goal is to complete the task as faithfully as you can.\n\n ``` To trigger the model's answer, one can use: ``` \nAnswer: ``` The chat prompt was: ``` A chat between a user (denoted as USER:) and an artificial intelligence assistant (denoted as ASSISTANT:). The assistant gives helpful, detailed, and polite answers to the user's questions.\n\n ``` To denote the assistant, one can write: ``` \nASSISTANT: ``` To denote the user, one can write: ``` \nUSER: ``` ### Datasets and sampling probability * 0.71 - [TIGER-Lab/MathInstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct) * 0.16 - [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) questions with less than 5k chars * 0.08 - [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) questions above 5k chars but below 12k chars * 0.02 - [zetavg/ShareGPT-Processed](https://huggingface.co/datasets/zetavg/ShareGPT-Processed) conversations below 6k chars * 0.01 - [zetavg/ShareGPT-Processed](https://huggingface.co/datasets/zetavg/ShareGPT-Processed) conversations above 6k chars but below 12k chars To improve the quality of the data, the datasets were filtered using regular expressions. ## License The instruction/chat-tuned models are for research purposes only. 
Note that the tuning datasets listed above ([TIGER-Lab/MathInstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct), [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca), and [ShareGPT-Processed](https://huggingface.co/datasets/zetavg/ShareGPT-Processed)) contain outputs from ChatGPT. See also the [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) license. ## Acknowledgements We gratefully acknowledge the TPU Research Cloud program, which was instrumental to our research by providing significant computational resources.
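A minimal generation sketch using the question format described above (note `trust_remote_code=True`, since LongLLaMA ships custom modeling code; the task text and generation settings are illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "syzymon/long_llama_code_7b_instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Build the prompt with the pre-question text and the "\nAnswer:" trigger shown above
prompt = (
    "You are an AI assistant. User will you give you a task. "
    "Your goal is to complete the task as faithfully as you can.\n\n"
    "Explain what a binary search tree is.\nAnswer:"
)
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```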
4,303
[ [ -0.03546142578125, -0.07415771484375, 0.0187530517578125, 0.03759765625, -0.024993896484375, -0.0044403076171875, -0.023651123046875, -0.054656982421875, 0.03759765625, 0.0303955078125, -0.05145263671875, -0.040863037109375, -0.035858154296875, 0.00651931762...
KernAI/stock-news-distilbert
2023-10-03T10:17:50.000Z
[ "transformers", "pytorch", "distilbert", "text-classification", "license:mit", "endpoints_compatible", "has_space", "region:us" ]
text-classification
KernAI
null
null
KernAI/stock-news-distilbert
10
422
transformers
2023-05-21T14:14:24
--- widget: - text: >- NEW YORK (TheStreet) -- Microsoft (MSFT) - Get Free Report had its price target raised to $39 from $38 by analysts at Jefferies who maintained their 'underperform' rating. In Thursday's pre-market trading session shares are advancing 1.24% to $44.79. This action comes as Microsoft said yesterday that it will eliminate up to 7,800 jobs mostly in its phone unit as it looks to restructure its phone hardware business that has been struggling, the New York Times reports. example_title: MSFT news (positive) - text: >- Adobe Brings Major New Innovations to Video Tools SAN JOSE, Calif.--(BUSINESS WIRE)--Today, ahead of the 2023 NAB Show – the preeminent conference and exhibition driving the evolution of broadcast, media and entertainment – Adobe (Nasdaq:ADBE) announced industry-first innovations across its family of video applications, including AI-powered text-based video editing and automated color tone-mapping capabilities in Premiere Pro. SAN JOSE, Calif.--(BUSINESS WIRE). example_title: ADBE news (neutral) - text: >- Unilever PLC (NYSE: UL)’s stock price has gone decline by -0.61 in comparison to its previous close of 54.27, however, the company has experienced a -1.61% decrease in its stock price over the last five trading days. The Wall Street Journal reported on 10/24/22 that Dry Shampoo Recalled Due to Potential Cancer-Causing Ingredient. example_title: UL news (negative) license: mit --- # Finetuned distilBERT model for stock news classification This distilbert model was fine-tuned on 50,000 stock news articles using the HuggingFace adapter from Kern AI refinery. The articles consisted of the headlines plus the abstract of the article. For the finetuning, a single NVIDIA K80 was used for about four hours. Join our Discord if you have questions about this model: https://discord.gg/MdZyqSxKbe DistilBERT is a smaller, faster and lighter version of BERT. It was trained by distilling BERT base and has 40% fewer parameters than bert-base-uncased. It runs 60% faster while preserving over 95% of BERT’s performance as measured on the GLUE language understanding benchmark. DistilBERT does not have token-type embeddings or a pooler, and retains only half of the layers from Google’s BERT. ## Features - The model can handle various text classification tasks, especially when it comes to stock and finance news sentiment classification. - The output of the model is one of the three classes "positive", "neutral" and "negative", plus the model's confidence score for that class. - The model was fine-tuned on a custom dataset that was curated by Kern AI and labeled in our tool refinery. - The model is currently supported by the PyTorch framework and can be easily deployed on various platforms using the HuggingFace Pipeline API. 
## Usage To use the model, you need to install the HuggingFace Transformers library: ```bash pip install transformers ``` Then you can load the model and the tokenizer from the HuggingFace Hub: ```python from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("KernAI/stock-news-distilbert") tokenizer = AutoTokenizer.from_pretrained("KernAI/stock-news-distilbert") ``` To classify a single sentence or a sentence pair, you can use the HuggingFace Pipeline API: ```python from transformers import pipeline classifier = pipeline("text-classification", model=model, tokenizer=tokenizer) result = classifier("This is a positive sentence.") print(result) # [{'label': 'POSITIVE', 'score': 0.9998656511306763}] ```
3,662
[ [ -0.03399658203125, -0.06793212890625, 0.00496673583984375, 0.042236328125, -0.0272979736328125, 0.00176239013671875, -0.01334381103515625, -0.0298309326171875, 0.0226287841796875, -0.00833892822265625, -0.033843994140625, -0.039031982421875, -0.06842041015625, ...
TheBloke/starcoderplus-GPTQ
2023-08-21T09:39:21.000Z
[ "transformers", "safetensors", "gpt_bigcode", "text-generation", "code", "dataset:bigcode/the-stack-dedup", "dataset:tiiuae/falcon-refinedweb", "arxiv:1911.02150", "arxiv:2205.14135", "arxiv:2207.14255", "arxiv:2305.06161", "license:bigcode-openrail-m", "model-index", "text-generation-infe...
text-generation
TheBloke
null
null
TheBloke/starcoderplus-GPTQ
22
422
transformers
2023-06-08T20:27:09
--- inference: false pipeline_tag: text-generation license: bigcode-openrail-m datasets: - bigcode/the-stack-dedup - tiiuae/falcon-refinedweb metrics: - code_eval - mmlu - arc - hellaswag - truthfulqa library_name: transformers tags: - code model-index: - name: StarCoderPlus results: - task: type: text-generation dataset: type: openai_humaneval name: HumanEval (Prompted) metrics: - name: pass@1 type: pass@1 value: 26.7 verified: false - task: type: text-generation dataset: type: MMLU (5-shot) name: MMLU metrics: - name: Accuracy type: Accuracy value: 45.1 verified: false - task: type: text-generation dataset: type: HellaSwag (10-shot) name: HellaSwag metrics: - name: Accuracy type: Accuracy value: 77.3 verified: false - task: type: text-generation dataset: type: ARC (25-shot) name: ARC metrics: - name: Accuracy type: Accuracy value: 48.9 verified: false - task: type: text-generation dataset: type: TruthfulQA (0-shot) name: TruthfulQA metrics: - name: Accuracy type: Accuracy value: 37.9 verified: false extra_gated_prompt: >- ## Model License Agreement Please read the BigCode [OpenRAIL-M license](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement) agreement before accepting it. extra_gated_fields: I accept the above license agreement, and will use the Model complying with the set of use restrictions and sharing requirements: checkbox --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Bigcode's StarcoderPlus GPTQ These files are GPTQ 4bit model files for [Bigcode's StarcoderPlus](https://huggingface.co/bigcode/starcoderplus). It is the result of quantising to 4bit using [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ). ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/starcoderplus-GPTQ) * [4, 5, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/starcoderplus-GGML) * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/bigcode/starcoderplus) ## How to easily download and use this model in text-generation-webui Please make sure you're using the latest version of text-generation-webui 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/starcoderplus-GPTQ`. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done" 5. In the top left, click the refresh icon next to **Model**. 6. 
In the **Model** dropdown, choose the model you just downloaded: `starcoderplus-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. * Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started! ## How to use this GPTQ model from Python code First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed: `pip install auto-gptq` Then try the following example code: ```python from transformers import AutoTokenizer from auto_gptq import AutoGPTQForCausalLM model_name_or_path = "TheBloke/starcoderplus-GPTQ" model_basename = "gptq_model-4bit--1g" device = "cuda:0" use_triton = False tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) model = AutoGPTQForCausalLM.from_quantized(model_name_or_path, model_basename=model_basename, use_safetensors=True, trust_remote_code=True, device=device, use_triton=use_triton, quantize_config=None) print("\n\n*** Generate:") inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device) outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` ### Fill-in-the-middle Fill-in-the-middle uses special tokens to identify the prefix/middle/suffix part of the input and output: ```python input_text = "<fim_prefix>def print_hello_world():\n <fim_suffix>\n print('Hello world!')<fim_middle>" inputs = tokenizer.encode(input_text, return_tensors="pt").to(device) outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` ## Provided files **gptq_model-4bit--1g.safetensors** This will work with AutoGPTQ and CUDA versions of GPTQ-for-LLaMa. There are reports of issues with Triton mode of recent GPTQ-for-LLaMa. If you have issues, please use AutoGPTQ instead. It was created without group_size to lower VRAM requirements, and with --act-order (desc_act) to boost inference accuracy as much as possible. * `gptq_model-4bit--1g.safetensors` * Works with AutoGPTQ in CUDA or Triton modes. * Works with text-generation-webui, including one-click-installers. * Works with GPTQ-for-LLaMa in CUDA mode. May have issues with GPTQ-for-LLaMa Triton mode. * Parameters: Groupsize = -1. Act Order / desc_act = True. <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Bigcode's StarcoderPlus # StarCoderPlus Play with the instruction-tuned StarCoderPlus at [StarChat-Beta](https://huggingface.co/spaces/HuggingFaceH4/starchat-playground). ## Table of Contents 1. [Model Summary](##model-summary) 2. [Use](##use) 3. [Limitations](##limitations) 4. [Training](##training) 5. [License](##license) 6. [Citation](##citation) ## Model Summary StarCoderPlus is a fine-tuned version of [StarCoderBase](https://huggingface.co/bigcode/starcoderbase) on 600B tokens from the English web dataset [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) combined with [StarCoderData](https://huggingface.co/datasets/bigcode/starcoderdata) from [The Stack (v1.2)](https://huggingface.co/datasets/bigcode/the-stack) and a Wikipedia dataset. It's a 15.5B parameter Language Model trained on English and 80+ programming languages. The model uses [Multi Query Attention](https://arxiv.org/abs/1911.02150), [a context window of 8192 tokens](https://arxiv.org/abs/2205.14135), and was trained using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255) on 1.6 trillion tokens. - **Repository:** [bigcode/Megatron-LM](https://github.com/bigcode-project/Megatron-LM) - **Project Website:** [bigcode-project.org](https://www.bigcode-project.org) - **Point of Contact:** [contact@bigcode-project.org](mailto:contact@bigcode-project.org) - **Languages:** English & 80+ Programming languages ## Use ### Intended use The model was trained on English and GitHub code. As such it is _not_ an instruction model and commands like "Write a function that computes the square root." do not work well. However, the instruction-tuned version in [StarChat](https://huggingface.co/spaces/HuggingFaceH4/starchat-playground) makes a capable assistant. 
**Feel free to share your generations in the Community tab!** ### Generation ```python # pip install -q transformers from transformers import AutoModelForCausalLM, AutoTokenizer checkpoint = "bigcode/starcoderplus" device = "cuda" # for GPU usage or "cpu" for CPU usage tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device) inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device) outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` ### Fill-in-the-middle Fill-in-the-middle uses special tokens to identify the prefix/middle/suffix part of the input and output: ```python input_text = "<fim_prefix>def print_hello_world():\n <fim_suffix>\n print('Hello world!')<fim_middle>" inputs = tokenizer.encode(input_text, return_tensors="pt").to(device) outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` ### Attribution & Other Requirements The training code dataset of the model was filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or other specific requirements that must be respected. We provide a [search index](https://huggingface.co/spaces/bigcode/starcoder-search) that lets you search through the pretraining data to identify where generated code came from and apply the proper attribution to your code. # Limitations The model has been trained on a mixture of English text from the web and GitHub code. Therefore it might encounter limitations when working with non-English text, and can carry the stereotypes and biases commonly encountered online. Additionally, the generated code should be used with caution as it may contain errors, inefficiencies, or potential vulnerabilities. For a more comprehensive understanding of the base model's code limitations, please refer to the [StarCoder paper](https://arxiv.org/abs/2305.06161). # Training StarCoderPlus is a version of StarCoderBase fine-tuned on 600B English and code tokens; the base model was pre-trained on 1T code tokens. Below are the fine-tuning details: ## Model - **Architecture:** GPT-2 model with multi-query attention and Fill-in-the-Middle objective - **Finetuning steps:** 150k - **Finetuning tokens:** 600B - **Precision:** bfloat16 ## Hardware - **GPUs:** 512 Tesla A100 - **Training time:** 14 days ## Software - **Orchestration:** [Megatron-LM](https://github.com/bigcode-project/Megatron-LM) - **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch) - **BF16 (if applicable):** [apex](https://github.com/NVIDIA/apex) # License The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).
13,536
[ [ -0.0419921875, -0.04998779296875, 0.012115478515625, 0.006427764892578125, -0.0218658447265625, -0.00124359130859375, -0.004505157470703125, -0.039642333984375, 0.0264129638671875, 0.017120361328125, -0.04400634765625, -0.036346435546875, -0.0380859375, 0.00...
M-CLIP/M-BERT-Distil-40
2022-09-15T10:46:02.000Z
[ "transformers", "pytorch", "distilbert", "feature-extraction", "sq", "am", "ar", "az", "bn", "bg", "ca", "zh", "nl", "en", "et", "fa", "fr", "ka", "de", "el", "hi", "hu", "is", "id", "it", "ja", "kk", "ko", "lv", "mk", "ms", "ps", "pl", "ro", "ru",...
feature-extraction
M-CLIP
null
null
M-CLIP/M-BERT-Distil-40
6
421
transformers
2022-03-02T23:29:04
--- language: - sq - am - ar - az - bn - bg - ca - zh - nl - en - et - fa - fr - ka - de - el - hi - hu - is - id - it - ja - kk - ko - lv - mk - ms - ps - pl - ro - ru - sl - es - sv - tl - th - tr - ur --- <br /> <p align="center"> <h1 align="center">M-BERT Distil 40</h1> <p align="center"> <a href="https://github.com/FreddeFrallan/Multilingual-CLIP/tree/main/Model%20Cards/M-BERT%20Distil%2040">Github Model Card</a> </p> </p> ## Usage To use this model along with the original CLIP vision encoder you need to download the code and additional linear weights from the [Multilingual-CLIP Github](https://github.com/FreddeFrallan/Multilingual-CLIP). Once this is done, you can load and use the model with the following code: ```python from src import multilingual_clip model = multilingual_clip.load_model('M-BERT-Distil-40') embeddings = model(['Älgen är skogens konung!', 'Wie leben Eisbären in der Antarktis?', 'Вы знали, что все белые медведи левши?']) print(embeddings.shape) # Yields: torch.Size([3, 640]) ``` <!-- ABOUT THE PROJECT --> ## About A [distilbert-base-multilingual](https://huggingface.co/distilbert-base-multilingual-cased) tuned to match the embedding space for [40 languages](https://github.com/FreddeFrallan/Multilingual-CLIP/blob/main/Model%20Cards/M-BERT%20Distil%2040/Fine-Tune-Languages.md) to the embedding space of the CLIP text encoder which accompanies the Res50x4 vision encoder. <br> A full list of the 100 languages used during pre-training can be found [here](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages), and a list of the 40 languages used during fine-tuning can be found in [SupportedLanguages.md](Fine-Tune-Languages.md). Training data pairs were generated by sampling 40k sentences for each language from the combined descriptions of [GCC](https://ai.google.com/research/ConceptualCaptions/) + [MSCOCO](https://cocodataset.org/#home) + [VizWiz](https://vizwiz.org/tasks-and-datasets/image-captioning/), and translating them into the corresponding language. All translation was done using the [AWS translate service](https://aws.amazon.com/translate/); the quality of these translations has not yet been analyzed, but one can assume the quality varies between the 40 languages. ## Evaluation [These results can be viewed at Github](https://github.com/FreddeFrallan/Multilingual-CLIP/tree/main/Model%20Cards/M-BERT%20Distil%2040). <br> A non-rigorous qualitative evaluation shows that for the languages French, German, Spanish, Russian, Swedish and Greek it seemingly yields respectable results for most instances. The exception being that Greeks are apparently unable to recognize happy persons. <br> When testing on Kannada, a language which was included during pre-training but not fine-tuning, it performed close to random.
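If you only need the raw transformer features, the checkpoint can also be loaded directly with 🤗 `transformers` — a sketch, noting that producing the CLIP-aligned 640-d embedding still requires the linear weights from the GitHub repo above:

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("M-CLIP/M-BERT-Distil-40")
model = AutoModel.from_pretrained("M-CLIP/M-BERT-Distil-40")

inputs = tokenizer(["Älgen är skogens konung!"], return_tensors="pt", padding=True)
outputs = model(**inputs)
# (batch, seq_len, hidden_size) — raw features, not yet the 640-d CLIP-aligned embedding
print(outputs.last_hidden_state.shape)
```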
2,838
[ [ -0.036163330078125, -0.033599853515625, 0.016876220703125, 0.020233154296875, -0.045318603515625, 0.01229095458984375, -0.0386962890625, -0.0290374755859375, 0.0287322998046875, 0.007190704345703125, -0.046356201171875, -0.045501708984375, -0.05010986328125, ...
cahya/distilbert-base-indonesian
2021-02-08T09:06:09.000Z
[ "transformers", "pytorch", "distilbert", "fill-mask", "id", "dataset:wikipedia", "dataset:id_newspapers_2018", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
cahya
null
null
cahya/distilbert-base-indonesian
7
421
transformers
2022-03-02T23:29:05
--- language: "id" license: "mit" datasets: - wikipedia - id_newspapers_2018 widget: - text: "ayahku sedang bekerja di sawah untuk [MASK] padi." --- # Indonesian DistilBERT base model (uncased) ## Model description This model is a distilled version of the [Indonesian BERT base model](https://huggingface.co/cahya/bert-base-indonesian-1.5G). This model is uncased. This is one of several other language models that have been pre-trained with indonesian datasets. More detail about its usage on downstream tasks (text classification, text generation, etc) is available at [Transformer based Indonesian Language Models](https://github.com/cahya-wirawan/indonesian-language-models/tree/master/Transformers) ## Intended uses & limitations ### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='cahya/distilbert-base-indonesian') >>> unmasker("Ayahku sedang bekerja di sawah untuk [MASK] padi") [ { "sequence": "[CLS] ayahku sedang bekerja di sawah untuk menanam padi [SEP]", "score": 0.6853187084197998, "token": 12712, "token_str": "menanam" }, { "sequence": "[CLS] ayahku sedang bekerja di sawah untuk bertani padi [SEP]", "score": 0.03739545866847038, "token": 15484, "token_str": "bertani" }, { "sequence": "[CLS] ayahku sedang bekerja di sawah untuk memetik padi [SEP]", "score": 0.02742469497025013, "token": 30338, "token_str": "memetik" }, { "sequence": "[CLS] ayahku sedang bekerja di sawah untuk penggilingan padi [SEP]", "score": 0.02214187942445278, "token": 28252, "token_str": "penggilingan" }, { "sequence": "[CLS] ayahku sedang bekerja di sawah untuk tanam padi [SEP]", "score": 0.0185895636677742, "token": 11308, "token_str": "tanam" } ] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import DistilBertTokenizer, DistilBertModel model_name='cahya/distilbert-base-indonesian' tokenizer = DistilBertTokenizer.from_pretrained(model_name) model = DistilBertModel.from_pretrained(model_name) text = "Silakan diganti dengan text apa saja." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in Tensorflow: ```python from transformers import DistilBertTokenizer, TFDistilBertModel model_name='cahya/distilbert-base-indonesian' tokenizer = DistilBertTokenizer.from_pretrained(model_name) model = TFDistilBertModel.from_pretrained(model_name) text = "Silakan diganti dengan text apa saja." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Training data This model was distiled with 522MB of indonesian Wikipedia and 1GB of [indonesian newspapers](https://huggingface.co/datasets/id_newspapers_2018). The texts are lowercased and tokenized using WordPiece and a vocabulary size of 32,000. The inputs of the model are then of the form: ```[CLS] Sentence A [SEP] Sentence B [SEP]```
3,076
[ [ -0.01062774658203125, -0.052215576171875, -0.00090789794921875, 0.039947509765625, -0.056976318359375, 0.01027679443359375, -0.01195526123046875, -0.0020809173583984375, 0.02569580078125, 0.034942626953125, -0.036407470703125, -0.036529541015625, -0.078002929687...
shibing624/bart4csc-base-chinese
2023-03-19T01:41:59.000Z
[ "transformers", "pytorch", "safetensors", "bart", "text2text-generation", "zh", "Text2Text-Generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
text2text-generation
shibing624
null
null
shibing624/bart4csc-base-chinese
28
421
transformers
2022-09-27T11:37:38
--- language: - zh tags: - bart - pytorch - zh - Text2Text-Generation license: "apache-2.0" widget: - text: "少先队员因该为老人让坐" --- # BART for Chinese Spelling Correction (bart4csc) Model A BART-based Chinese spelling correction model. Evaluation of `bart4csc-base-chinese` on the SIGHAN2015 test data: Sentence level: acc: 0.6845, precision: 0.6984, recall: 0.6354, f1: 0.6654 Examples: |input_text|pred| |:-- |:--- | |辰导中引述她的话说:核子间题的解决之道系于克什米尔纷争。|报导中引述她的话说:核子问题的解决之道系于克什米尔纷争。| |报导并末说明事故发生的原因。|报导并未说明事故发生的原因。| The model was trained on the SIGHAN+Wang271K Chinese spelling correction dataset and reaches near-SOTA performance on the SIGHAN2015 test set. ## Usage This model is open-sourced as part of the text generation project [textgen](https://github.com/shibing624/textgen), which supports BART models and can be called as follows: Install package: ```shell pip install -U textgen ``` ```python from transformers import BertTokenizerFast from textgen import BartSeq2SeqModel tokenizer = BertTokenizerFast.from_pretrained('shibing624/bart4csc-base-chinese') model = BartSeq2SeqModel( encoder_type='bart', encoder_decoder_type='bart', encoder_decoder_name='shibing624/bart4csc-base-chinese', tokenizer=tokenizer, args={"max_length": 128, "eval_batch_size": 128}) sentences = ["少先队员因该为老人让坐"] print(model.predict(sentences)) # ['少先队员应该为老人让座'] ``` Model files: ``` bart4csc-base-chinese ├── config.json ├── model_args.json ├── pytorch_model.bin ├── special_tokens_map.json ├── tokenizer_config.json ├── spiece.model └── vocab.txt ``` ### Training dataset #### SIGHAN+Wang271K Chinese spelling correction dataset | Dataset | Corpus | Download link | Archive size | | :------- | :--------- | :---------: | :---------: | | **`SIGHAN+Wang271K Chinese correction dataset`** | SIGHAN+Wang271K (270k sentences) | [Baidu Netdisk (password: 01b9)](https://pan.baidu.com/s/1BV5tr9eONZCI0wERFvr0gQ)| 106M | | **`Original SIGHAN dataset`** | SIGHAN13 14 15 | [official csc.html](http://nlp.ee.ncu.edu.tw/resource/csc.html)| 339K | | **`Original Wang271K dataset`** | Wang271K | [Automatic-Corpus-Generation, provided by dimmywang](https://github.com/wdimmy/Automatic-Corpus-Generation/blob/master/corpus/train.sgml)| 93M | Data format of the SIGHAN+Wang271K Chinese correction dataset: ```json [ { "id": "B2-4029-3", "original_text": "晚间会听到嗓音,白天的时候大家都不会太在意,但是在睡觉的时候这嗓音成为大家的恶梦。", "wrong_ids": [ 5, 31 ], "correct_text": "晚间会听到噪音,白天的时候大家都不会太在意,但是在睡觉的时候这噪音成为大家的恶梦。" }, ] ``` - To train a BART model yourself, see [https://github.com/shibing624/textgen/blob/main/examples/seq2seq/training_bartseq2seq_zh_demo.py](https://github.com/shibing624/textgen/blob/main/examples/seq2seq/training_bartseq2seq_zh_demo.py) - For more spelling correction models, see [https://github.com/shibing624/pycorrector](https://github.com/shibing624/pycorrector) ## Citation ```latex @software{textgen, author = {Xu Ming}, title = {textgen: Implementation of Text Generation models}, year = {2022}, url = {https://github.com/shibing624/textgen}, } ```
2,685
[ [ -0.009124755859375, -0.044403076171875, 0.01776123046875, 0.0279083251953125, -0.018768310546875, -0.010711669921875, -0.02685546875, -0.0227813720703125, 0.020599365234375, 0.018218994140625, -0.036865234375, -0.05511474609375, -0.033477783203125, 0.0295104...
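The bart4csc card above only demonstrates inference through `textgen`. For readers who prefer plain `transformers`, here is a minimal, untested sketch; it assumes the checkpoint loads as a standard `BartForConditionalGeneration` paired with the BERT-style tokenizer shipped in the repo, and reuses the card's own example sentence:

```python
# Minimal sketch: Chinese spelling correction with plain transformers.
# Assumption: the checkpoint loads as a standard BartForConditionalGeneration
# with a BERT-style tokenizer, as the card's file list suggests.
from transformers import BertTokenizerFast, BartForConditionalGeneration

model_id = "shibing624/bart4csc-base-chinese"
tokenizer = BertTokenizerFast.from_pretrained(model_id)
model = BartForConditionalGeneration.from_pretrained(model_id)

inputs = tokenizer("少先队员因该为老人让坐", return_tensors="pt")
# Pass only input_ids: BART's forward does not accept token_type_ids.
output_ids = model.generate(inputs["input_ids"], max_length=128)
# The BERT tokenizer decodes with spaces between characters; strip them.
print(tokenizer.decode(output_ids[0], skip_special_tokens=True).replace(" ", ""))
```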
Djacon/rubert-tiny2-russian-emotion-detection
2023-05-16T06:10:41.000Z
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "russian", "classification", "emotion", "emotion-detection", "emotion-recognition", "multiclass", "ru", "dataset:Djacon/ru_goemotions", "license:mit", "endpoints_compatible", "has_space", "region:us" ]
text-classification
Djacon
null
null
Djacon/rubert-tiny2-russian-emotion-detection
1
421
transformers
2023-04-08T16:58:47
--- license: mit language: ["ru"] tags: - russian - classification - emotion - emotion-detection - emotion-recognition - multiclass widget: - text: "Как дела?" - text: "Дурак твой дед" - text: "Только попробуй!!!" - text: "Не хочу в школу(" - text: "Сейчас ровно час дня" - text: "А ты уверен, что эти полоски снизу не врут? Точно уверен? Вот прям 100 процентов?" datasets: - Djacon/ru_goemotions --- # First, you should prepare a few functions to talk to the model ```python import torch from transformers import BertForSequenceClassification, AutoTokenizer LABELS = ['радость', 'интерес', 'удивление', 'печаль', 'гнев', 'отвращение', 'страх', 'вина', 'нейтрально'] tokenizer = AutoTokenizer.from_pretrained('Djacon/rubert-tiny2-russian-emotion-detection') model = BertForSequenceClassification.from_pretrained('Djacon/rubert-tiny2-russian-emotion-detection') # Predicting the single most likely emotion in a text @torch.no_grad() def predict_emotion(text: str) -> str: inputs = tokenizer(text, truncation=True, return_tensors='pt') inputs = inputs.to(model.device) outputs = model(**inputs) pred = torch.nn.functional.softmax(outputs.logits, dim=1) pred = pred.argmax(dim=1) return LABELS[pred[0]] # Probabilistic prediction of every emotion in a text @torch.no_grad() def predict_emotions(text: str) -> dict: inputs = tokenizer(text, truncation=True, return_tensors='pt') inputs = inputs.to(model.device) outputs = model(**inputs) pred = torch.nn.functional.softmax(outputs.logits, dim=1) emotions_list = {} for i in range(len(pred[0].tolist())): emotions_list[LABELS[i]] = round(pred[0].tolist()[i], 4) return emotions_list ``` # And then just gently ask the model to predict your emotion ```python simple_prediction = predict_emotion("Какой же сегодня прекрасный день, братья") not_simple_prediction = predict_emotions("Какой же сегодня прекрасный день, братья") print(simple_prediction) print(not_simple_prediction) # радость # {'радость': 0.9979, 'интерес': ..., 'удивление': ..., 'печаль': ..., 'гнев': ..., 'отвращение': ..., 'страх': ..., 'вина': ..., 'нейтрально': ...} ``` # Citations ``` @misc{Djacon, author = {Djacon}, year = {2023}, publisher = {Hugging Face}, journal = {Hugging Face Hub}, } ```
2,354
[ [ -0.0252532958984375, -0.0372314453125, 0.016815185546875, 0.0202178955078125, -0.0184173583984375, -0.0276947021484375, -0.016021728515625, -0.0225982666015625, 0.01076507568359375, 0.00019371509552001953, -0.042999267578125, -0.04217529296875, -0.03924560546875...
digiplay/RunDiffusionFX2.5D_v1_diffusers
2023-07-17T06:58:46.000Z
[ "diffusers", "license:other", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
digiplay
null
null
digiplay/RunDiffusionFX2.5D_v1_diffusers
6
421
diffusers
2023-06-03T22:33:29
--- license: other --- Model info: https://civitai.com/models/82981/rundiffusion-fx-25d Sample images I made: ![下载 - 2023-06-04T084500.450.png](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/eIhAH2hge2f2Hqqagk7Uv.png) ![R - 2023-06-04T090647.776.png](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/07_eKv-3EWR16ubPqJ0iQ.png)
396
[ [ -0.05792236328125, -0.03521728515625, 0.029296875, 0.02606201171875, -0.039306640625, 0.007793426513671875, 0.027496337890625, -0.003864288330078125, 0.037353515625, 0.06103515625, -0.059783935546875, -0.0303497314453125, -0.0278472900390625, -0.004142761230...
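The RunDiffusionFX2.5D card above links sample images but shows no loading code. Since the record is tagged `diffusers` with a `StableDiffusionPipeline`, a minimal sketch of the standard loading path follows; the prompt is illustrative and the fp16/CUDA settings assume a GPU:

```python
# Minimal sketch: loading the diffusers checkpoint above for inference.
# The prompt below is illustrative, not from the model card.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/RunDiffusionFX2.5D_v1_diffusers", torch_dtype=torch.float16
).to("cuda")

image = pipe("a 2.5D portrait of a woman in a forest, detailed lighting").images[0]
image.save("sample.png")
```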
Emilianohack6950/staryuuki
2023-07-14T16:44:41.000Z
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
Emilianohack6950
null
null
Emilianohack6950/staryuuki
0
421
diffusers
2023-07-05T18:01:28
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- Staryuuki is an image generation model developed specifically to depict a popular female streamer of the same name. The model was trained on a wide variety of images that capture Staryuuki's appearance and personality during her live streams. The Staryuuki model uses advanced artificial intelligence and machine learning techniques to generate realistic, convincing images of the streamer. From the training data, the model is able to capture Staryuuki's distinctive traits, such as her makeup style, hairstyle, clothing and facial expressions. ![""](https://archive.org/download/00022-3260226004/00022-3260226004.png) ![""](https://archive.org/download/00022-3260226004/00052-3260226034.png) ![""](https://archive.org/download/00022-3260226004/00075-3260226057.png) ![""](https://archive.org/download/00022-3260226004/00102-3260226084.png)
1,031
[ [ -0.058258056640625, -0.058135986328125, 0.0008015632629394531, 0.01001739501953125, -0.045379638671875, -0.023956298828125, 0.008209228515625, -0.041534423828125, 0.059906005859375, 0.039520263671875, -0.0364990234375, -0.006832122802734375, -0.059112548828125, ...
WALIDALI/rawaawlylyr
2023-07-16T14:47:52.000Z
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
WALIDALI
null
null
WALIDALI/rawaawlylyr
0
421
diffusers
2023-07-16T14:39:24
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### Rawaawlylyr Dreambooth model trained by WALIDALI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
501
[ [ -0.021881103515625, -0.04974365234375, 0.036346435546875, 0.027099609375, -0.0196533203125, 0.0227508544921875, 0.01824951171875, -0.029144287109375, 0.042999267578125, -0.007476806640625, -0.026153564453125, -0.0166778564453125, -0.0209197998046875, -0.0139...
mpariente/DPRNNTasNet-ks2_WHAM_sepclean
2021-09-23T16:12:22.000Z
[ "asteroid", "pytorch", "audio", "DPRNNTasNet", "audio-to-audio", "dataset:wham", "dataset:sep_clean", "license:cc-by-sa-4.0", "has_space", "region:us" ]
audio-to-audio
mpariente
null
null
mpariente/DPRNNTasNet-ks2_WHAM_sepclean
8
420
asteroid
2022-03-02T23:29:05
--- tags: - asteroid - audio - DPRNNTasNet - audio-to-audio datasets: - wham - sep_clean license: cc-by-sa-4.0 --- ## Asteroid model `mpariente/DPRNNTasNet-ks2_WHAM_sepclean` Imported from [Zenodo](https://zenodo.org/record/3862942) ### Description: This model was trained by Manuel Pariente using the wham/DPRNN recipe in [Asteroid](https://github.com/asteroid-team/asteroid). It was trained on the `sep_clean` task of the WHAM! dataset. ### Training config: ```yaml data: mode: min nondefault_nsrc: None sample_rate: 8000 segment: 2.0 task: sep_clean train_dir: data/wav8k/min/tr valid_dir: data/wav8k/min/cv filterbank: kernel_size: 2 n_filters: 64 stride: 1 main_args: exp_dir: exp/train_dprnn_new/ gpus: -1 help: None masknet: bidirectional: True bn_chan: 128 chunk_size: 250 dropout: 0 hid_size: 128 hop_size: 125 in_chan: 64 mask_act: sigmoid n_repeats: 6 n_src: 2 out_chan: 64 optim: lr: 0.001 optimizer: adam weight_decay: 1e-05 positional arguments: training: batch_size: 3 early_stop: True epochs: 200 gradient_clipping: 5 half_lr: True num_workers: 8 ``` ### Results: ```yaml si_sdr: 19.316743490695334 si_sdr_imp: 19.317895273889842 sdr: 19.68085347190952 sdr_imp: 19.5298092932871 sir: 30.362213998701232 sir_imp: 30.21116982007881 sar: 20.15553251343315 sar_imp: -129.02091762351188 stoi: 0.97772664309074 stoi_imp: 0.23968091518217424 ``` ### License notice: This work "DPRNNTasNet-ks2_WHAM_sepclean" is a derivative of [CSR-I (WSJ0) Complete](https://catalog.ldc.upenn.edu/LDC93S6A) by [LDC](https://www.ldc.upenn.edu/), used under [LDC User Agreement for Non-Members](https://catalog.ldc.upenn.edu/license/ldc-non-members-agreement.pdf) (Research only). "DPRNNTasNet-ks2_WHAM_sepclean" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Manuel Pariente.
1,969
[ [ -0.039154052734375, -0.015045166015625, 0.0214080810546875, 0.004974365234375, -0.05035400390625, -0.002040863037109375, -0.00891876220703125, -0.027557373046875, 0.025543212890625, 0.032928466796875, -0.060302734375, -0.037994384765625, -0.04803466796875, 0...
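The DPRNNTasNet card above documents the training config and metrics but includes no inference code. A minimal sketch using Asteroid's Hub loader follows; it assumes `BaseModel.from_pretrained` resolves this repo, that the model's `separate` helper writes estimated sources next to the input file, and that `mixture.wav` is a hypothetical mono two-speaker mixture (the model was trained at 8 kHz):

```python
# Minimal sketch: source separation with the Asteroid checkpoint above.
# Assumptions: Asteroid's Hub integration (BaseModel.from_pretrained) and
# a hypothetical input file mixture.wav; separate() is expected to write
# mixture_est1.wav and mixture_est2.wav next to the input.
from asteroid.models import BaseModel

model = BaseModel.from_pretrained("mpariente/DPRNNTasNet-ks2_WHAM_sepclean")
model.separate("mixture.wav", resample=True)  # resample if the file is not 8 kHz
```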
ifdeluxe01/test-portal-training
2023-03-03T21:15:16.000Z
[ "diffusers", "tensorboard", "text-to-image", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
ifdeluxe01
null
null
ifdeluxe01/test-portal-training
0
420
diffusers
2023-03-03T21:13:40
--- license: creativeml-openrail-m tags: - text-to-image widget: - text: porty --- ### Test Portal Training Dreambooth model trained by ifdeluxe01 with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v2-1-768 base model You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: porty (use that on your prompt) ![porty 0](https://huggingface.co/ifdeluxe01/test-portal-training/resolve/main/concept_images/porty_porty_porty_porty_%281%29.jpg)![porty 1](https://huggingface.co/ifdeluxe01/test-portal-training/resolve/main/concept_images/porty_porty_porty_porty_%282%29.jpg)![porty 2](https://huggingface.co/ifdeluxe01/test-portal-training/resolve/main/concept_images/porty_porty_porty_porty_%283%29.jpg)![porty 3](https://huggingface.co/ifdeluxe01/test-portal-training/resolve/main/concept_images/porty_porty_porty_porty_%284%29.jpg)![porty 4](https://huggingface.co/ifdeluxe01/test-portal-training/resolve/main/concept_images/porty_porty_porty_porty_%285%29.jpg)![porty 5](https://huggingface.co/ifdeluxe01/test-portal-training/resolve/main/concept_images/porty_porty_porty_porty_%286%29.jpg)![porty 6](https://huggingface.co/ifdeluxe01/test-portal-training/resolve/main/concept_images/porty_porty_porty_porty_%287%29.jpg)![porty 7](https://huggingface.co/ifdeluxe01/test-portal-training/resolve/main/concept_images/porty_porty_porty_porty_%288%29.jpg)![porty 8](https://huggingface.co/ifdeluxe01/test-portal-training/resolve/main/concept_images/porty_porty_porty_porty_%289%29.jpg)![porty 9](https://huggingface.co/ifdeluxe01/test-portal-training/resolve/main/concept_images/porty_porty_porty_porty_%2810%29.jpg)![porty 10](https://huggingface.co/ifdeluxe01/test-portal-training/resolve/main/concept_images/porty_porty_porty_porty_%2811%29.jpg)![porty 11](https://huggingface.co/ifdeluxe01/test-portal-training/resolve/main/concept_images/porty_porty_porty_porty_%2812%29.jpg)![porty 12](https://huggingface.co/ifdeluxe01/test-portal-training/resolve/main/concept_images/porty_porty_porty_porty_%2813%29.jpg)![porty 13](https://huggingface.co/ifdeluxe01/test-portal-training/resolve/main/concept_images/porty_porty_porty_porty_%2814%29.jpg)![porty 14](https://huggingface.co/ifdeluxe01/test-portal-training/resolve/main/concept_images/porty_porty_porty_porty_%2815%29.jpg)![porty 15](https://huggingface.co/ifdeluxe01/test-portal-training/resolve/main/concept_images/porty_porty_porty_porty_%2816%29.jpg)![porty 16](https://huggingface.co/ifdeluxe01/test-portal-training/resolve/main/concept_images/porty_porty_porty_porty_%2817%29.jpg)![porty 17](https://huggingface.co/ifdeluxe01/test-portal-training/resolve/main/concept_images/porty_porty_porty_porty_%2818%29.jpg)![porty 18](https://huggingface.co/ifdeluxe01/test-portal-training/resolve/main/concept_images/porty_porty_porty_porty_%2819%29.jpg)![porty 19](https://huggingface.co/ifdeluxe01/test-portal-training/resolve/main/concept_images/porty_porty_porty_porty_%2820%29.jpg)![porty 20](https://huggingface.co/ifdeluxe01/test-portal-training/resolve/main/concept_images/porty_porty_porty_porty_%2821%29.jpg)![porty 21](https://huggingface.co/ifdeluxe01/test-portal-training/resolve/main/concept_images/porty_porty_porty_porty_%2822%29.jpg)
3,520
[ [ -0.05889892578125, -0.0278778076171875, 0.01357269287109375, 0.01328277587890625, -0.007904052734375, -0.01294708251953125, 0.0264434814453125, -0.0125732421875, 0.053955078125, 0.020660400390625, -0.040863037109375, -0.01326751708984375, -0.033111572265625, ...
Solovo/firsttrain
2023-03-05T08:07:14.000Z
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
Solovo
null
null
Solovo/firsttrain
0
420
diffusers
2023-03-05T08:04:20
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### FirstTrain Dreambooth model trained by Solovo with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
498
[ [ -0.0282135009765625, -0.046295166015625, 0.0303192138671875, 0.0306549072265625, -0.03076171875, 0.0290069580078125, 0.0173187255859375, -0.00830078125, 0.0438232421875, 0.005336761474609375, -0.03436279296875, -0.0230712890625, -0.0301666259765625, -0.01532...
Hackenbacker/Bdan
2023-03-05T10:37:07.000Z
[ "diffusers", "text-to-image", "stable-diffusion", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
Hackenbacker
null
null
Hackenbacker/Bdan
0
420
diffusers
2023-03-05T09:46:55
--- tags: - text-to-image - stable-diffusion --- ### Hackenbacker/g Dreambooth model trained by Hackenbacker with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
474
[ [ -0.0241546630859375, -0.0609130859375, 0.045623779296875, 0.039398193359375, -0.014678955078125, 0.0322265625, 0.0264739990234375, -0.0211029052734375, 0.054931640625, 0.0042724609375, -0.0271148681640625, -0.00904083251953125, -0.036163330078125, -0.0123291...
timm/repvgg_b3.rvgg_in1k
2023-03-22T07:25:23.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2101.03697", "license:mit", "region:us" ]
image-classification
timm
null
null
timm/repvgg_b3.rvgg_in1k
0
420
timm
2023-03-22T07:23:45
--- tags: - image-classification - timm library_tag: timm license: mit datasets: - imagenet-1k --- # Model card for repvgg_b3.rvgg_in1k A RepVGG image classification model. Trained on ImageNet-1k by paper authors. This model architecture is implemented using `timm`'s flexible [BYOBNet (Bring-Your-Own-Blocks Network)](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/byobnet.py). BYOBNet allows configuration of: * block / stage layout * stem layout * output stride (dilation) * activation and norm layers * channel and spatial / self-attention layers ...and also includes `timm` features common to many other architectures, including: * stochastic depth * gradient checkpointing * layer-wise LR decay * per-stage feature extraction ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 123.1 - GMACs: 29.2 - Activations (M): 15.1 - Image size: 224 x 224 - **Papers:** - RepVGG: Making VGG-style ConvNets Great Again: https://arxiv.org/abs/2101.03697 - **Dataset:** ImageNet-1k - **Original:** https://github.com/DingXiaoH/RepVGG ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch # needed for torch.topk below img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('repvgg_b3.rvgg_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'repvgg_b3.rvgg_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 64, 112, 112]) # torch.Size([1, 192, 56, 56]) # torch.Size([1, 384, 28, 28]) # torch.Size([1, 768, 14, 14]) # torch.Size([1, 2560, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'repvgg_b3.rvgg_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 2560, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @inproceedings{ding2021repvgg, title={Repvgg: Making vgg-style convnets great again}, author={Ding, Xiaohan and Zhang, Xiangyu and Ma, Ningning and Han, Jungong and Ding, Guiguang and Sun, Jian}, booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, pages={13733--13742}, year={2021} } ```
4,545
[ [ -0.03582763671875, -0.03607177734375, -0.00005900859832763672, 0.005207061767578125, -0.02459716796875, -0.018035888671875, -0.01293182373046875, -0.031585693359375, 0.01009368896484375, 0.03533935546875, -0.034271240234375, -0.057891845703125, -0.04751586914062...
wangrongsheng/MiniGPT-4-LLaMA
2023-04-20T19:17:24.000Z
[ "transformers", "pytorch", "llama", "text-generation", "LLMs", "MiniGPT-4", "endpoints_compatible", "text-generation-inference", "region:us", "has_space" ]
text-generation
wangrongsheng
null
null
wangrongsheng/MiniGPT-4-LLaMA
15
420
transformers
2023-04-20T02:07:05
--- tags: - LLMs - MiniGPT-4 --- These are the converted weights for MiniGPT-4, produced by following the guide in [MiniGPT-4/PrepareVicuna.md](https://github.com/Vision-CAIR/MiniGPT-4/blob/main/PrepareVicuna.md). With these weights, you do not need LLAMA-13B and vicuna-13b-delta-v0 to perform the conversion yourself. - [https://github.com/Vision-CAIR/MiniGPT-4](https://github.com/Vision-CAIR/MiniGPT-4)
288
[ [ -0.059967041015625, -0.044158935546875, 0.03717041015625, 0.0209197998046875, -0.060455322265625, -0.00727081298828125, 0.0276336669921875, -0.02142333984375, 0.03759765625, 0.0190277099609375, -0.060638427734375, -0.0303192138671875, -0.04986572265625, 0.02...
timm/mobilevitv2_200.cvnets_in22k_ft_in1k
2023-04-24T22:27:33.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2206.02680", "license:other", "region:us" ]
image-classification
timm
null
null
timm/mobilevitv2_200.cvnets_in22k_ft_in1k
1
420
timm
2023-04-24T22:27:12
--- tags: - image-classification - timm library_name: timm license: other datasets: - imagenet-1k --- # Model card for mobilevitv2_200.cvnets_in22k_ft_in1k A MobileViT-v2 image classification model. Pretrained on ImageNet-22k and fine-tuned on ImageNet-1k by paper authors. See license details at https://github.com/apple/ml-cvnets/blob/main/LICENSE ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 18.4 - GMACs: 7.2 - Activations (M): 32.1 - Image size: 256 x 256 - **Papers:** - Separable Self-attention for Mobile Vision Transformers: https://arxiv.org/abs/2206.02680 - **Original:** https://github.com/apple/ml-cvnets - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch # needed for torch.topk below img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('mobilevitv2_200.cvnets_in22k_ft_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'mobilevitv2_200.cvnets_in22k_ft_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 128, 128, 128]) # torch.Size([1, 256, 64, 64]) # torch.Size([1, 512, 32, 32]) # torch.Size([1, 768, 16, 16]) # torch.Size([1, 1024, 8, 8]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'mobilevitv2_200.cvnets_in22k_ft_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 1024, 8, 8) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @article{Mehta2022SeparableSF, title={Separable Self-attention for Mobile Vision Transformers}, author={Sachin Mehta and Mohammad Rastegari}, journal={ArXiv}, year={2022}, volume={abs/2206.02680} } ```
3,773
[ [ -0.034942626953125, -0.022369384765625, -0.003841400146484375, 0.017547607421875, -0.0284271240234375, -0.026458740234375, -0.007152557373046875, -0.02105712890625, 0.0194091796875, 0.033416748046875, -0.036590576171875, -0.0478515625, -0.0478515625, -0.0173...
TheBloke/Spicyboros-70B-2.2-AWQ
2023-09-27T12:50:12.000Z
[ "transformers", "safetensors", "llama", "text-generation", "not-for-all-audiences", "dataset:jondurbin/airoboros-2.2", "license:llama2", "text-generation-inference", "region:us" ]
text-generation
TheBloke
null
null
TheBloke/Spicyboros-70B-2.2-AWQ
0
420
transformers
2023-09-19T03:12:52
--- license: llama2 tags: - not-for-all-audiences datasets: - jondurbin/airoboros-2.2 model_name: Spicyboros 70B 2.2 base_model: jondurbin/spicyboros-70b-2.2 inference: false model_creator: Jon Durbin model_type: llama prompt_template: "A chat.\nUSER: {prompt}\nASSISTANT: \n" quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Spicyboros 70B 2.2 - AWQ - Model creator: [Jon Durbin](https://huggingface.co/jondurbin) - Original model: [Spicyboros 70B 2.2](https://huggingface.co/jondurbin/spicyboros-70b-2.2) <!-- description start --> ## Description This repo contains AWQ model files for [Jon Durbin's Spicyboros 70B 2.2](https://huggingface.co/jondurbin/spicyboros-70b-2.2). ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference. It is also now supported by continuous batching server [vLLM](https://github.com/vllm-project/vllm), allowing use of AWQ models for high-throughput concurrent inference in multi-user server scenarios. Note that, at the time of writing, overall throughput is still lower than running vLLM with unquantised models, however using AWQ enables using much smaller GPUs which can lead to easier deployment and overall cost savings. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Spicyboros-70B-2.2-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Spicyboros-70B-2.2-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Spicyboros-70B-2.2-GGUF) * [Jon Durbin's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/spicyboros-70b-2.2) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Chat ``` A chat. USER: {prompt} ASSISTANT: ``` <!-- prompt-template end --> <!-- README_AWQ.md-provided-files start --> ## Provided files and AWQ parameters For my first release of AWQ models, I am releasing 128g models only. 
I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM. Models are released as sharded safetensors files. | Branch | Bits | GS | AWQ Dataset | Seq Len | Size | | ------ | ---- | -- | ----------- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Spicyboros-70B-2.2-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 36.61 GB | <!-- README_AWQ.md-provided-files end --> <!-- README_AWQ.md-use-from-vllm start --> ## Serving this model from vLLM Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/). - When using vLLM as a server, pass the `--quantization awq` parameter, for example: ```shell python3 -m vllm.entrypoints.api_server --model TheBloke/Spicyboros-70B-2.2-AWQ --quantization awq ``` When using vLLM from Python code, pass the `quantization=awq` parameter, for example: ```python from vllm import LLM, SamplingParams prompts = [ "Hello, my name is", "The president of the United States is", "The capital of France is", "The future of AI is", ] sampling_params = SamplingParams(temperature=0.8, top_p=0.95) llm = LLM(model="TheBloke/Spicyboros-70B-2.2-AWQ", quantization="awq") outputs = llm.generate(prompts, sampling_params) # Print the outputs. for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}") ``` <!-- README_AWQ.md-use-from-vllm end --> <!-- README_AWQ.md-use-from-python start --> ## How to use this AWQ model from Python code ### Install the necessary packages Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.0.2 or later ```shell pip3 install autoawq ``` If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y autoawq git clone https://github.com/casper-hansen/AutoAWQ cd AutoAWQ pip3 install . ``` ### You can then try the following example code ```python from awq import AutoAWQForCausalLM from transformers import AutoTokenizer model_name_or_path = "TheBloke/Spicyboros-70B-2.2-AWQ" # Load model model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True, trust_remote_code=False, safetensors=True) tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False) prompt = "Tell me about AI" prompt_template=f'''A chat. USER: {prompt} ASSISTANT: ''' print("\n\n*** Generate:") tokens = tokenizer( prompt_template, return_tensors='pt' ).input_ids.cuda() # Generate output generation_output = model.generate( tokens, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, max_new_tokens=512 ) print("Output: ", tokenizer.decode(generation_output[0])) # Inference can also be done using transformers' pipeline from transformers import pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_AWQ.md-use-from-python end --> <!-- README_AWQ.md-compatibility start --> ## Compatibility The files provided are tested to work with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), and [vLLM](https://github.com/vllm-project/vllm). 
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is not yet compatible with AWQ, but a PR is open which should bring support soon: [TGI PR #781](https://github.com/huggingface/text-generation-inference/issues/781). <!-- README_AWQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Jon Durbin's Spicyboros 70B 2.2 ### Overview __Usage restriction: To use this model, you must agree to the following:__ - Some of the content that can be produced is "toxic"/"harmful", and contains profanity and other types of sensitive content. - None of the content or views contained in the dataset or generated outputs necessarily align with my personal beliefs or opinions, they are simply text generated by LLMs and/or scraped from the web. 
- Use with extreme caution, particularly in locations with less-than-free speech laws. - You, and you alone are responsible for having downloaded and generated outputs with the model and I am completely indemnified from any and all liabilities. __Ok, now that the warning is out of the way...__ Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros) Highlights: - The prompt format has changed! It is now newlines instead of spaces between system/USER/ASSISTANT (see prompt info below). - This version also includes "de-alignment" data, to enable less savory interactions and outputs. - To learn more about the dataset, see: https://hf.co/datasets/jondurbin/airoboros-2.2 (this is the instructions.jsonl file, not instructions-clean.jsonl) - I re-generated all of the outputs in the dataset that had "Once upon a time" so they'd be less cliche - no guarantees that won't still happen, but in theory it may happen less. - More multiple choice, better awareness, some alignment for normal use case but system-prompt overridable etc. __WARNING: This model will gladly spew profane and otherwise NSFW content, if asked, use with care.__ Breakdown of the training data: | Count | Category | |--------|----------------------------| | 60 | quiz | | 63 | card | | 100 | detailed\_writing | | 103 | experience | | 114 | greeting | | 200 | song | | 204 | editor | | 250 | counterfactual\_contextual | | 268 | cot | | 339 | theory\_of\_mind | | 460 | misconception | | 500 | summarization | | 573 | awareness | | 715 | riddle | | 719 | agent | | 800 | plan | | 873 | gtkm | | 966 | rp | | 1000 | stylized\_response | | 1000 | wordgame | | 1279 | multiple\_choice | | 1641 | joke | | 1785 | writing | | 2155 | contextual | | 2364 | roleplay | | 2508 | trivia | | 5216 | general | | 5779 | coding | | 11367 | orca | In other words, it's a fairly general purpose model, but focuses fairly heavily on instruction response pairs rather than casual chat/roleplay. *Why do I try to remove censorship?* - laws vary widely based on time and location - a language model may conflate certain words with laws, e.g. it may think "stealing eggs from a chicken" is illegal - these models just produce text, what you do with that text is your responsibility - many people and industries deal with "sensitive" content; imagine if a court stenographer's equipment filtered illegal content - it would be useless Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools! ### Prompt format The prompt format: ``` A chat. USER: {prompt} ASSISTANT: ``` The default system prompt ("A chat.") was used for most of the prompts, however it also included a wide sampling of responses with other prompts, particularly in "stylized\_response", "rp", "gtkm", etc. Here's another example: ``` A chat between Bob (aka USER) and Tom (aka ASSISTANT). Tom is an extremely intelligent 18th century bookkeeper, who speaks loquaciously. USER: {prompt} ASSISTANT: ``` And a chat scenario that wouldn't require USER/ASSISTANT (but should use stopping criteria to prevent the model from speaking on your behalf). ``` A chat between old friends: Timmy and Tommy. {description of characters} {setting for the chat} Timmy: *takes a big sip from his coffee* "Ah, sweet, delicious, magical coffee." 
Tommy: ``` __*I strongly suggest adding stopping criteria/early inference stopping on "USER:", and/or whatever names you specify in the system prompt.*__ ### Fine-tuning details https://gist.github.com/jondurbin/51a336c582a224de197ba1d2c6b1da97 *Note: I used checkpoint 750 for final model!* ### Helpful usage tips *The prompts shown here are just the text that would be included after USER: and before ASSISTANT: in the full prompt format above, the system prompt and USER:/ASSISTANT: have been omitted for readability.* #### Context obedient question answering By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations. The format for a closed-context prompt is as follows: ``` BEGININPUT BEGINCONTEXT [key0: value0] [key1: value1] ... other metadata ... ENDCONTEXT [insert your text blocks here] ENDINPUT [add as many other blocks, in the exact same format] BEGININSTRUCTION [insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.] ENDINSTRUCTION ``` It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up. *The __only__ prompts that need this closed context formatting are closed-context instructions. Normal questions/instructions do not!* I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it. - `BEGININPUT` - denotes a new input block - `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block - `ENDCONTEXT` - denotes the end of the metadata block for the current input - [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context. - `ENDINPUT` - denotes the end of the current input block - [repeat as many input blocks in this format as you want] - `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above. - [instruction(s)] - `ENDINSTRUCTION` - denotes the end of the instruction set It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to. Here's a trivial, but important example to prove the point: ``` BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://web.site/123 ENDCONTEXT In a shocking turn of events, blueberries are now green, but will be sticking with the same name. ENDINPUT BEGININSTRUCTION What color are blueberries? Source? ENDINSTRUCTION ``` And the response: ``` Blueberries are now green. Source: date: 2021-01-01 url: https://web.site/123 ``` #### Coding You can ask for fairly complex coding instructions with multiple criteria, e.g.: ``` Create a python application with the following requirements: - Asyncio FastAPI webserver - ping endpoint that returns the current date in JSON format - file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate ``` Or inline criteria: ``` Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values. 
``` You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.: ``` Write a websocket application in node.js. PLAINFORMAT ``` #### Agent/function calling The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML. Example prompt: ``` As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format. Input: I want to know how many times 'Python' is mentioned in my text file. Available functions: file_analytics: description: This tool performs various operations on a text file. params: action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc. filters: keyword: The word or phrase we want to search for. ``` Response: ```json { "function": "file_analytics", "params": { "action": "count_occurrences", "filters": { "keyword": "Python" } } } ``` #### Chain-of-thought You can ask for several possible responses to a given problem, with a ranking and final answer selection. Example prompt: ``` A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one. ``` Example response: ``` Solution 1: In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players). The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows: n(n-1)/2 = 45 Solving this equation gives us n=10. Final answer: There were 10 players in the tournament. Solution 2: Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers. If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players. Final answer: There were 10 players in the tournament. Solution 3: We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played. Final answer: There were 10 players in the tournament. Ranking of solutions from best to worst: 1. 
Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer. 2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer. 3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer. Best and final answer: There were 10 players in the tournament. ``` #### reWOO style execution planning The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions! Example prompt: ``` Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string that could be the user's question, one or more prior evidence values, or a combination of both. Here are the tools available to be called: DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer. TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links. HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links. KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:? The output should be in format: Plan: [first action to take based on input question] :evidence0: = [function to call with input parameter] Plan: [next action to take, based on result of :evidence0:] :evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"] ... Answer: [:evidence[n]: containing the final answer.] Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work? ``` Response: ``` Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020. :evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner] Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate. :evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?] Plan: Conduct another web search to find the most famous work of the identified laureate. :evidence2: = DuckDuckGo[Most famous work of :evidence1:] Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search. :evidence3: = HyperlinkExtractor[:evidence2:] Plan: Use the TextScraper tool to extract information from the relevant links. 
:evidence4: = TextScraper[:evidence3:] Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information. :evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?] Answer: :evidence5: ``` For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would require full implementation + hardening: ```python import re import requests def inject_context(input_text, **context): for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)): input_text = input_text.replace(ref, context.get(ref, "")) return input_text def duckduckgo(input_text, **context): search_string = inject_context(input_text, **context) ... search via duck duck go using search_string ... return text content def link_extractor(input_text, **context): input_text = inject_context(input_text, **context) return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I)))) def scrape(input_text, **context): input_text = inject_context(input_text, **context) text = [] for link in input_text.splitlines(): text.append(requests.get(link).text) return "\n".join(text) def infer(input_text, **context): prompt = inject_context(input_text, **context) ... call model with prompt, return output def parse_plan(plan): method_map = { "DuckDuckGo": duckduckgo, "HyperlinkExtractor": link_extractor, "KnowledgeModel": infer, "TextScraper": scrape, } context = {} for line in plan.strip().splitlines(): if line.startswith("Plan:"): print(line) continue parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I) if not parts: if line.startswith("Answer: "): return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...") raise RuntimeError("bad format: " + line) context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context) ``` ### Contribute If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data, take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details. To help me with the OpenAI/compute costs: - https://bmc.link/jondurbin - ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11 - BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf ### Licence and usage restrictions The airoboros 2.2 models are built on top of llama-2/codellama. The llama-2 base model has a custom Meta license: - See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta. - See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta. The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros) The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI - what does *compete* actually mean here? 
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place - if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works - the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place - other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2 I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly. Your best bet is probably to avoid using this commercially due to the OpenAI API usage. Either way, by using this model, you agree to completely indemnify me.
29,478
[ [ -0.04119873046875, -0.058624267578125, 0.023651123046875, 0.0016717910766601562, -0.0171356201171875, -0.0120697021484375, 0.0037441253662109375, -0.0361328125, -0.005199432373046875, 0.025634765625, -0.045135498046875, -0.036285400390625, -0.02093505859375, ...
Harveenchadha/hindi_base_wav2vec2
2022-03-23T18:28:05.000Z
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "hf-asr-leaderboard", "hi", "model_for_talk", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "dataset:Harveenchadha/indic-voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
Harveenchadha
null
null
Harveenchadha/hindi_base_wav2vec2
1
419
transformers
2022-03-02T23:29:04
--- license: apache-2.0 language: - hi tags: - automatic-speech-recognition - hf-asr-leaderboard - hi - model_for_talk - mozilla-foundation/common_voice_7_0 - robust-speech-event datasets: - Harveenchadha/indic-voice model-index: - name: Hindi Large results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice type: common_voice args: hi metrics: - name: Test WER type: wer value: 22.62 - name: Test CER type: cer value: 7.42 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice-7.0 type: mozilla-foundation/common_voice_7_0 args: hi metrics: - name: Test WER type: wer value: 19.47 - name: Test CER type: cer value: 8.05 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice-8.0 type: mozilla-foundation/common_voice_8_0 args: hi metrics: - name: Test WER type: wer value: 20.87 - name: Test CER type: cer value: 9.47 --- # hindi_base_wav2vec2
1,225
[ [ -0.006366729736328125, -0.01071929931640625, -0.0137939453125, 0.0633544921875, -0.03692626953125, 0.01483917236328125, -0.0077972412109375, 0.006107330322265625, 0.0240325927734375, 0.01336669921875, -0.043426513671875, -0.034210205078125, -0.05462646484375, ...
cambridgeltl/BioRedditBERT-uncased
2023-04-05T15:51:20.000Z
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "feature-extraction", "BioNLP", "social_media", "en", "arxiv:2010.03295", "endpoints_compatible", "has_space", "region:us" ]
feature-extraction
cambridgeltl
null
null
cambridgeltl/BioRedditBERT-uncased
3
419
transformers
2022-03-02T23:29:05
---
language:
- en
tags:
- BioNLP
- social_media
---

# BioRedditBERT

## Model description

BioRedditBERT is a BERT model initialised from BioBERT (`BioBERT-Base v1.0 + PubMed 200K + PMC 270K`) and further pre-trained on health-related Reddit posts. Please view our paper [COMETA: A Corpus for Medical Entity Linking in the Social Media](https://arxiv.org/pdf/2010.03295.pdf) (EMNLP 2020) for more details.

## Training data

We crawled all threads from 68 health-themed subreddits such as `r/AskDocs` and `r/health`, starting from the beginning of 2015 to the end of 2018, obtaining a collection of more than 800K discussions. This collection was then pruned by removing deleted posts, comments from bots or moderators, and so on. In the end, we obtained the training corpus with ca. 300 million tokens and a vocabulary size of ca. 780,000 words.

## Training procedure

We use the same pre-training script in the original [google-research/bert](https://github.com/google-research/bert) repo. The model is initialised with [`BioBERT-Base v1.0 + PubMed 200K + PMC 270K`](https://github.com/dmis-lab/biobert). We train with a batch size of 64, a max sequence length of 64, a learning rate of `2e-5` for 100k steps on two GeForce GTX 1080Ti (11 GB) GPUs. Other hyper-parameters are the same as default.

## Eval results

To show the benefit from further pre-training on the social media domain, we demonstrate results on a medical entity linking dataset also from the social media: [AskAPatient](https://zenodo.org/record/55013#.X4ncRmTYpb8) [(Limsopatham and Collier 2016)](https://www.aclweb.org/anthology/P16-1096.pdf). We follow the same 10-fold cross-validation procedure for all models and report the average result without fine-tuning. `[CLS]` is used as representations for entity mentions (we also tried average of all tokens but found `[CLS]` generally performs better).

Model | Accuracy@1 | Accuracy@5
-------|---------|---------
[BERT-base-uncased](https://huggingface.co/bert-base-uncased) | 38.2 | 43.3
[BioBERT v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) | 41.4 | 51.5
[ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) | 43.9 | 54.3
[BlueBERT](https://ftp.ncbi.nlm.nih.gov/pub/lu/Suppl/NCBI-BERT/NCBI_BERT_pubmed_mimic_uncased_L-12_H-768_A-12.zip) | 41.5 | 48.5
[SciBERT](https://huggingface.co/allenai/scibert_scivocab_uncased) | 42.3 | 51.9
[PubMedBERT](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) | 42.5 | 49.6
BioRedditBERT | **44.3** | **56.2**

### BibTeX entry and citation info

```bibtex
@inproceedings{basaldella-2020-cometa,
    title = "{COMETA}: A Corpus for Medical Entity Linking in the Social Media",
    author = "Basaldella, Marco and Liu, Fangyu and Shareghi, Ehsan and Collier, Nigel",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2020",
    publisher = "Association for Computational Linguistics"
}
```
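As a quick usage sketch (ours, not from the original authors; the example mention is hypothetical), this is how the `[CLS]` mention representation described in the eval section could be extracted with `transformers`:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("cambridgeltl/BioRedditBERT-uncased")
model = AutoModel.from_pretrained("cambridgeltl/BioRedditBERT-uncased")

# Encode an entity mention and use the [CLS] vector as its representation,
# matching the entity-linking setup described above.
inputs = tokenizer("period pains", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
cls_embedding = outputs.last_hidden_state[:, 0]  # shape: (1, 768)
```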
3,012
[ [ -0.0273590087890625, -0.04718017578125, 0.041656494140625, 0.012847900390625, -0.03826904296875, -0.0019931793212890625, -0.024749755859375, -0.04705810546875, 0.05426025390625, 0.0026950836181640625, -0.029022216796875, -0.0633544921875, -0.05322265625, 0.0...
edbeeching/decision-transformer-gym-hopper-expert
2022-06-29T19:12:17.000Z
[ "transformers", "pytorch", "decision_transformer", "feature-extraction", "deep-reinforcement-learning", "reinforcement-learning", "decision-transformer", "gym-continous-control", "arxiv:2106.01345", "endpoints_compatible", "has_space", "region:us" ]
reinforcement-learning
edbeeching
null
null
edbeeching/decision-transformer-gym-hopper-expert
12
419
transformers
2022-03-16T08:20:20
---
tags:
- deep-reinforcement-learning
- reinforcement-learning
- decision-transformer
- gym-continous-control
pipeline_tag: reinforcement-learning
---

# Decision Transformer model trained on expert trajectories sampled from the Gym Hopper environment

This is a [Decision Transformer](https://arxiv.org/abs/2106.01345) model trained on expert trajectories sampled from the Gym Hopper environment.

The following normalization coefficients are required to use this model:

mean = [ 1.3490015, -0.11208222, -0.5506444, -0.13188992, -0.00378754, 2.6071432, 0.02322114, -0.01626922, -0.06840388, -0.05183131, 0.04272673]

std = [0.15980862, 0.0446214, 0.14307782, 0.17629202, 0.5912333, 0.5899924, 1.5405099, 0.8152689, 2.0173461, 2.4107876, 5.8440027 ]

See our [Blog Post](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing), [Colab notebook](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing) or [Example Script](https://github.com/huggingface/transformers/tree/main/examples/research_projects/decision_transformer) for usage.
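For clarity, a minimal sketch (ours) of applying those coefficients, assuming the standard `(obs - mean) / std` convention used in the linked example script:

```python
import numpy as np

# Normalization coefficients from this card (Hopper observation space, 11 dims).
mean = np.array([1.3490015, -0.11208222, -0.5506444, -0.13188992, -0.00378754,
                 2.6071432, 0.02322114, -0.01626922, -0.06840388, -0.05183131,
                 0.04272673])
std = np.array([0.15980862, 0.0446214, 0.14307782, 0.17629202, 0.5912333,
                0.5899924, 1.5405099, 0.8152689, 2.0173461, 2.4107876,
                5.8440027])

def normalize_obs(obs):
    """Normalize a raw Hopper observation before feeding it to the model."""
    return (obs - mean) / std
```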
1,120
[ [ -0.04010009765625, -0.044342041015625, 0.0243988037109375, 0.003711700439453125, -0.007602691650390625, -0.005649566650390625, 0.0015659332275390625, -0.0024814605712890625, 0.0216522216796875, 0.0212554931640625, -0.058197021484375, -0.0369873046875, -0.0496826...
timm/resnet101.a1_in1k
2023-04-05T18:19:42.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "arxiv:2110.00476", "arxiv:1512.03385", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/resnet101.a1_in1k
0
419
timm
2023-04-05T18:18:56
---
tags:
- image-classification
- timm
library_tag: timm
license: apache-2.0
---

# Model card for resnet101.a1_in1k

A ResNet-B image classification model.

This model features:
* ReLU activations
* single layer 7x7 convolution with pooling
* 1x1 convolution shortcut downsample

Trained on ImageNet-1k in `timm` using the recipe template described below.

Recipe details:
* ResNet Strikes Back `A1` recipe
* LAMB optimizer with BCE loss
* Cosine LR schedule with warmup

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 44.5
  - GMACs: 7.8
  - Activations (M): 16.2
  - Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
  - ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
  - Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385
- **Original:** https://github.com/huggingface/pytorch-image-models

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed below for torch.topk

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('resnet101.a1_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'resnet101.a1_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 64, 112, 112])
    #  torch.Size([1, 256, 56, 56])
    #  torch.Size([1, 512, 28, 28])
    #  torch.Size([1, 1024, 14, 14])
    #  torch.Size([1, 2048, 7, 7])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'resnet101.a1_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model
results](https://github.com/huggingface/pytorch-image-models/tree/main/results). |model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec| |------------------------------------------|--------|-----|-----|-----------|-----|-----|-------| |[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 | |[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 | |[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 | |[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 | |[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 | |[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 | |[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 | |[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 | |[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 | |[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 | |[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 | |[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 | |[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 | |[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 | |[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 | |[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 | |[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 | |[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 | |[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 | |[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 | |[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 | |[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 | |[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 
|83.68|96.61|93.6 |16.7 |32.0 |986 | |[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 | |[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 | |[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 | |[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 | |[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 | |[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 | |[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 | |[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 | |[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 | |[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 | |[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 | |[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 | |[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 | |[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 | |[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 | |[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 | |[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 | |[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 | |[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 | |[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 | |[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 | |[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 | |[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 | |[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 | |[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 | |[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 | |[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 | |[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 | |[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 | 
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 | |[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 | |[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 | |[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 | |[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 | |[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 | |[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 | |[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 | |[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 | |[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 | |[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 | |[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 | |[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 | |[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 | |[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 | |[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 | |[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 | |[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 | |[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 | |[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 | |[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 | |[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 | |[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 | |[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 | |[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 | |[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 | |[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 | |[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 | |[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 | 
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 | |[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 | |[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 | |[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 | |[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 | |[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 | |[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 | |[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 | |[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 | |[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 | |[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 | |[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 | |[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 | |[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 | |[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 | |[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 | |[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 | |[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 | |[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 | |[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 | |[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 | |[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 | |[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 | |[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 | |[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 | |[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 | |[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 | |[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 | |[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 | |[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 
|10.6 |2349 | |[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 | |[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 | |[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 | |[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 | |[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 | |[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 | |[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 | |[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 | |[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 | |[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 | |[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 | |[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 | |[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 | |[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 | |[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 | |[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 | |[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 | |[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 | |[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 | |[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 | |[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 | |[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 | |[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 | |[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 | |[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 | |[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 | |[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 | |[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 | |[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 | |[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 | 
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 | |[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 | |[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 | |[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 | |[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 | |[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 | |[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 | |[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 | |[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 | |[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 | |[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 | |[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 | |[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 | |[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 | |[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 | |[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 | |[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 | |[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 | |[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 | |[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 | |[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 | |[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 | |[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 | |[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 | |[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 | |[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 | |[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 | |[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 | |[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 | |[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 | |[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 | 
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 | |[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 | |[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 | |[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 | |[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 | |[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 | |[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 | |[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 | |[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 | |[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 | |[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 | |[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 | |[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 | |[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 | |[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 | |[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 | |[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 | |[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 | |[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 | |[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 | |[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 | |[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 | |[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 | |[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 | |[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 | |[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 | |[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 | |[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 | |[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 | |[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 | 
|[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 | |[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 | |[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 | |[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 | |[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 | |[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 | |[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 | |[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 | |[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 | |[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 | |[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 | |[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 | |[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 | |[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 | |[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 | |[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 | |[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 | |[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 | |[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 | |[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 | |[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 | |[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 | |[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 | |[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 | |[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 | |[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 | |[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 | |[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 | |[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 | |[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 | |[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 | |[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 
|75.16|92.18|21.8 |3.7 |3.7 |5994 | |[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 | |[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 | |[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 | |[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 | |[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 | |[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 | |[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 | |[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 | |[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 | |[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 | |[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 | |[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 | |[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 | |[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 | |[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 | |[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 | |[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 | |[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 | |[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 | |[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 | |[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 | ## Citation ```bibtex @inproceedings{wightman2021resnet, title={ResNet strikes back: An improved training procedure in timm}, author={Wightman, Ross and Touvron, Hugo and Jegou, Herve}, booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @article{He2015, author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun}, title = {Deep Residual Learning for Image Recognition}, journal = {arXiv preprint arXiv:1512.03385}, year = {2015} } ```
38,410
[ [ -0.065185546875, -0.0169830322265625, 0.0019931793212890625, 0.0287933349609375, -0.031005859375, -0.00927734375, -0.00980377197265625, -0.02899169921875, 0.0875244140625, 0.0220947265625, -0.048248291015625, -0.03955078125, -0.046539306640625, -0.0002120733...
mittalashish/chique7
2023-07-15T04:11:30.000Z
[ "diffusers", "tensorboard", "text-to-image", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
mittalashish
null
null
mittalashish/chique7
0
419
diffusers
2023-07-15T04:08:44
--- license: creativeml-openrail-m tags: - text-to-image widget: - text: <Chique> --- ### chique7 Dreambooth model trained by mittalashish with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v2-1-512 base model. You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: <Chique> (use that in your prompt) ![<Chique> 0](https://huggingface.co/mittalashish/chique7/resolve/main/concept_images/%3CChique%3E_%281%29.jpg)![<Chique> 1](https://huggingface.co/mittalashish/chique7/resolve/main/concept_images/%3CChique%3E_%282%29.jpg)![<Chique> 2](https://huggingface.co/mittalashish/chique7/resolve/main/concept_images/%3CChique%3E_%283%29.jpg)![<Chique> 3](https://huggingface.co/mittalashish/chique7/resolve/main/concept_images/%3CChique%3E_%284%29.jpg)![<Chique> 4](https://huggingface.co/mittalashish/chique7/resolve/main/concept_images/%3CChique%3E_%285%29.jpg)![<Chique> 5](https://huggingface.co/mittalashish/chique7/resolve/main/concept_images/%3CChique%3E_%286%29.jpg)![<Chique> 6](https://huggingface.co/mittalashish/chique7/resolve/main/concept_images/%3CChique%3E_%287%29.jpg)
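Alternatively, a minimal local-inference sketch with `diffusers` (ours, untested; the dtype and device choices are assumptions):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "mittalashish/chique7", torch_dtype=torch.float16
).to("cuda")

# Use the concept token from this card in your prompt.
image = pipe("a photo of <Chique>").images[0]
image.save("chique7_sample.png")
```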
1,362
[ [ -0.046875, -0.029510498046875, 0.0286102294921875, 0.027008056640625, -0.03790283203125, 0.02325439453125, 0.0203399658203125, -0.038238525390625, 0.04736328125, 0.025848388671875, -0.061126708984375, -0.034576416015625, -0.03875732421875, -0.006755828857421...
cl-nagoya/sup-simcse-ja-base
2023-10-05T06:34:22.000Z
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "ja", "dataset:shunk031/jsnli", "license:cc-by-sa-4.0", "region:us" ]
feature-extraction
cl-nagoya
null
null
cl-nagoya/sup-simcse-ja-base
1
419
sentence-transformers
2023-10-02T08:27:29
---
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- shunk031/jsnli
license: cc-by-sa-4.0
language:
- ja
metrics:
- spearmanr
library_name: sentence-transformers
inference: false
---

# sup-simcse-ja-base

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U fugashi[unidic-lite] sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["こんにちは、世界!", "文埋め込み最高!文埋め込み最高と叫びなさい", "極度乾燥しなさい"]

model = SentenceTransformer("cl-nagoya/sup-simcse-ja-base")
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch


def cls_pooling(model_output, attention_mask):
    return model_output[0][:, 0]


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("cl-nagoya/sup-simcse-ja-base")
model = AutoModel.from_pretrained("cl-nagoya/sup-simcse-ja-base")

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Model Summary

- Fine-tuning method: Supervised SimCSE
- Base model: [cl-tohoku/bert-base-japanese-v3](https://huggingface.co/cl-tohoku/bert-base-japanese-v3)
- Training dataset: [JSNLI](https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88)
- Pooling strategy: cls (with an extra MLP layer only during training)
- Hidden size: 768
- Learning rate: 5e-5
- Batch size: 512
- Temperature: 0.05
- Max sequence length: 64
- Number of training examples: 2^20
- Validation interval (steps): 2^6
- Warmup ratio: 0.1
- Dtype: BFloat16

See the [GitHub repository](https://github.com/hppRC/simple-simcse-ja) for a detailed experimental setup.

## Citing & Authors

```
@misc{
  hayato-tsukagoshi-2023-simple-simcse-ja,
  author = {Hayato Tsukagoshi},
  title = {Japanese Simple-SimCSE},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/hppRC/simple-simcse-ja}}
}
```
3,233
[ [ -0.032379150390625, -0.036224365234375, 0.0156402587890625, 0.025634765625, -0.03253173828125, -0.01812744140625, -0.0340576171875, -0.00579071044921875, 0.01409912109375, 0.0212249755859375, -0.057220458984375, -0.0286102294921875, -0.03558349609375, 0.0099...
ikala/ViT-B-16-SigLIP-i18n-256-hf
2023-11-02T00:51:24.000Z
[ "transformers", "pytorch", "clip", "zero-shot-image-classification", "siglip", "dataset:webli", "license:apache-2.0", "endpoints_compatible", "region:us" ]
zero-shot-image-classification
ikala
null
null
ikala/ViT-B-16-SigLIP-i18n-256-hf
0
419
transformers
2023-10-25T07:00:47
---
tags:
- clip
- siglip
library_name: transformers
pipeline_tag: zero-shot-image-classification
license: apache-2.0
datasets:
- webli
---

# Model card for ViT-B-16-SigLIP-i18n-256

A SigLIP (Sigmoid loss for Language-Image Pre-training) model trained on WebLI.

This model has been converted from OpenCLIP: [timm/ViT-B-16-SigLIP-i18n-256](https://huggingface.co/timm/ViT-B-16-SigLIP-i18n-256) to a huggingface CLIPVisionModel.

```Python
from transformers import CLIPVisionModel, CLIPImageProcessor
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# instantiate the image processor before using it
image_processor = CLIPImageProcessor.from_pretrained('ikala/ViT-B-16-SigLIP-i18n-256-hf')
inputs = image_processor(images=image, return_tensors="pt", padding=True)

vision_tower = CLIPVisionModel.from_pretrained('ikala/ViT-B-16-SigLIP-i18n-256-hf')
outputs = vision_tower(**inputs)

pooled_embedding = outputs.pooler_output  # pooled image embedding (this vision tower alone does not produce image-text similarity scores)
```

There's still a slight difference: hf's CLIPVision model uses a [CLS] embedding as the pooled embedding, while SigLIP uses a global attention pooler to get the final latent feature.
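To make that pooling note concrete, a small follow-on sketch (ours), continuing from the snippet above:

```Python
# HF-style pooled embedding vs. the per-token features that SigLIP's
# attention pooler would normally consume.
pooled = outputs.pooler_output            # (1, hidden_size)
patch_tokens = outputs.last_hidden_state  # (1, num_tokens, hidden_size)
print(pooled.shape, patch_tokens.shape)
```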
1,118
[ [ -0.038970947265625, -0.04693603515625, 0.01085662841796875, 0.037017822265625, -0.03106689453125, -0.0148162841796875, -0.01059722900390625, -0.0272674560546875, 0.0221099853515625, 0.033172607421875, -0.055419921875, -0.0206451416015625, -0.04998779296875, ...
sanghwa-na/llama2-13b.kor
2023-10-27T16:31:25.000Z
[ "transformers", "safetensors", "llama", "text-generation", "llama-2", "instruct", "instruction", "ko", "license:llama2", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
sanghwa-na
null
null
sanghwa-na/llama2-13b.kor
0
419
transformers
2023-10-27T12:09:20
---
language:
- ko
tags:
- llama-2
- instruct
- instruction
pipeline_tag: text-generation
license: llama2
---

# llama2-13b.kor

### Model Details
- Developed by: Sanghwa Na
- Backbone Model: [LLaMA-2](https://github.com/facebookresearch/llama/tree/main)
- Library: [transformers](https://github.com/huggingface/transformers)

### Used Datasets
- Orca-style dataset
- Platypus

### Prompt Template
```
### Instruction:
{Instruction}

### Answer:
{Answer}
```

### License
meta-license
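A minimal generation sketch (ours) following the template above; the exact line breaks in the prompt and the generation settings are assumptions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sanghwa-na/llama2-13b.kor"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Fill the prompt template from this card.
prompt = "### Instruction:\n한국의 수도는 어디인가요?\n\n### Answer:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```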
484
[ [ -0.01512908935546875, -0.0273895263671875, 0.00920867919921875, 0.032257080078125, -0.03564453125, 0.020263671875, 0.039276123046875, -0.0198516845703125, 0.037322998046875, 0.04736328125, -0.06231689453125, -0.043609619140625, -0.032745361328125, -0.0043945...
EMBEDDIA/litlat-bert
2022-02-28T13:46:36.000Z
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "lt", "lv", "en", "multilingual", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
EMBEDDIA
null
null
EMBEDDIA/litlat-bert
4
418
transformers
2022-03-02T23:29:04
---
language:
- lt
- lv
- en
- multilingual
license: cc-by-sa-4.0
---

# LitLat BERT

LitLat BERT is a trilingual model, using the xlm-roberta-base architecture, trained on Lithuanian, Latvian, and English corpora. Focusing on three languages, the model performs better than [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased), while still offering an option for cross-lingual knowledge transfer, which a monolingual model wouldn't.

### Named entity recognition evaluation

We compare LitLat BERT with multilingual BERT (mBERT), XLM-RoBERTa (XLM-R) and monolingual Latvian BERT (LVBERT) (Znotins and Barzdins, 2020). We report the results as a macro F1 score of the 3 named entity classes shared in all three datasets: person, location, organization.

Language | mBERT | XLM-R | LVBERT | LitLat
---|---|---|---|---
Latvian | 0.830 | 0.865 | 0.797 | **0.881**
Lithuanian | 0.797 | 0.817 | / | **0.850**
English | 0.939 | 0.937 | / | **0.943**
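As a quick usage sketch (ours, not from the original card; the Latvian example sentence is hypothetical), the model can be queried through the standard fill-mask pipeline:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="EMBEDDIA/litlat-bert")

# xlm-roberta-based models use "<mask>" as the mask token.
print(fill_mask("Rīga ir Latvijas <mask>."))
```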
959
[ [ -0.023773193359375, -0.050537109375, 0.03485107421875, 0.032958984375, -0.0247955322265625, 0.01397705078125, -0.0296783447265625, -0.05621337890625, 0.01027679443359375, 0.03631591796875, -0.01910400390625, -0.044525146484375, -0.0172271728515625, -0.000486...
stablediffusionapi/vector-art
2023-05-19T05:25:04.000Z
[ "diffusers", "stablediffusionapi.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
stablediffusionapi
null
null
stablediffusionapi/vector-art
4
418
diffusers
2023-03-03T23:30:51
--- license: creativeml-openrail-m tags: - stablediffusionapi.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true --- # vector-art API Inference ![generated from stablediffusionapi.com](https://pub-8b49af329fae499aa563997f5d4068a4.r2.dev/generations/10009363961677886199.png) ## Get API Key Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed. Replace Key in below code, change **model_id** to "vector-art" Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs) Model link: [View model](https://stablediffusionapi.com/models/vector-art) Credits: [View credits](https://civitai.com/?query=vector-art) View all models: [View Models](https://stablediffusionapi.com/models) import requests import json url = "https://stablediffusionapi.com/api/v3/dreambooth" payload = json.dumps({ "key": "", "model_id": "vector-art", "prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "embeddings_model_id", "lora": "lora_model_id", "webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(response.text) > Use this coupon code to get 25% off **DMGG0RBN**
2,404
[ [ -0.0322265625, -0.056640625, 0.03399658203125, 0.0211181640625, -0.039825439453125, 0.008331298828125, 0.0256195068359375, -0.03472900390625, 0.037445068359375, 0.039642333984375, -0.060394287109375, -0.06561279296875, -0.029083251953125, -0.0041007995605468...
timm/volo_d4_448.sail_in1k
2023-04-13T06:03:24.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2106.13112", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/volo_d4_448.sail_in1k
0
418
timm
2023-04-13T06:00:53
---
tags:
- image-classification
- timm
library_tag: timm
license: apache-2.0
datasets:
- imagenet-1k
---

# Model card for volo_d4_448.sail_in1k

A VOLO (Vision Outlooker) image classification model. Trained on ImageNet-1k with token labelling by paper authors.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 193.4
  - GMACs: 197.1
  - Activations (M): 527.3
  - Image size: 448 x 448
- **Papers:**
  - VOLO: Vision Outlooker for Visual Recognition: https://arxiv.org/abs/2106.13112
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/sail-sg/volo

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed below for torch.topk

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('volo_d4_448.sail_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'volo_d4_448.sail_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 785, 768) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Citation
```bibtex
@article{yuan2022volo,
  title={Volo: Vision outlooker for visual recognition},
  author={Yuan, Li and Hou, Qibin and Jiang, Zihang and Feng, Jiashi and Yan, Shuicheng},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2022},
  publisher={IEEE}
}
```
2,601
[ [ -0.029693603515625, -0.01457977294921875, 0.00846099853515625, 0.0196380615234375, -0.043426513671875, -0.0297698974609375, 0.000308990478515625, -0.0274505615234375, 0.02239990234375, 0.039215087890625, -0.050750732421875, -0.048187255859375, -0.051849365234375...
dg845/univnet-dev
2023-10-24T09:04:11.000Z
[ "transformers", "pytorch", "univnet", "arxiv:2106.07889", "license:bsd-3-clause", "endpoints_compatible", "region:us" ]
null
dg845
null
null
dg845/univnet-dev
0
418
transformers
2023-07-13T03:51:38
--- license: bsd-3-clause --- The UnivNet model is a state-of-the-art neural vocoder which synthesizes audio waveforms from full-band MEL spectrograms, introduced in ["UnivNet: A Neural Vocoder with Multi-Resolution Spectrogram Discriminators for High-Fidelity Waveform Generation"](https://arxiv.org/abs/2106.07889) by Won Jang, Dan Lim, Jaesam Yoon, Bongwan Kim, Juntae Kim. UnivNet is a generative adversarial network (GAN) in which the generator is trained to convert real (or fake, during training) log MEL spectrograms to waveforms, and the discriminator is trained to classify whether input waveforms are real or fake. From the original paper abstract: > Most neural vocoders employ band-limited mel-spectrograms to generate waveforms. If full-band spectral features are used as the input, the vocoder can be provided with as much acoustic information as possible. However, in some models employing full-band mel-spectrograms, an over-smoothing problem occurs as part of which non-sharp spectrograms are generated. To address this problem, we propose UnivNet, a neural vocoder that synthesizes high-fidelity waveforms in real time. Inspired by works in the field of voice activity detection, we added a multi-resolution spectrogram discriminator that employs multiple linear spectrogram magnitudes computed using various parameter sets. Using full-band mel-spectrograms as input, we expect to generate high-resolution signals by adding a discriminator that employs spectrograms of multiple resolutions as the input. In an evaluation on a dataset containing information on hundreds of speakers, UnivNet obtained the best objective and subjective results among competing models for both seen and unseen speakers. These results, including the best subjective score for text-to-speech, demonstrate the potential for fast adaptation to new speakers without a need for training from scratch. Currently, only the generator/vocoder part of the model is implemented. This checkpoint was released as part of an [unofficial implementation](https://github.com/maum-ai/univnet) by [maum-ai](https://huggingface.co/maum-ai) (on which the `transformers` implementation is also based). As far as I know, there is no official model or code release by the original authors from [Kakao Enterprise](https://huggingface.co/kakao-enterprise). ## Download The original PyTorch model checkpoints from the [maum-ai/univnet](https://github.com/maum-ai/univnet) implementation can be downloaded from their [Github repo](https://github.com/maum-ai/univnet#pre-trained-model). Note that this checkpoint corresponds with their [c32](https://github.com/maum-ai/univnet/blob/master/config/default_c32.yaml) checkpoint. The `transformers` model and feature extractor (to prepare inputs for the model) can be downloaded as follows: ```python from transformers import UnivNetFeatureExtractor, UnivNetModel model_id_or_path = "dg845/univnet-dev" feature_extractor = UnivNetFeatureExtractor.from_pretrained(model_id_or_path) model = UnivNetModel.from_pretrained(model_id_or_path) ``` ## Usage The original model checkpoints can be used with the [maum-ai/univnet](https://github.com/maum-ai/univnet) codebase. 
An example of using the UnivNet model with `transformers` is as follows: ```python import torch from scipy.io.wavfile import write from datasets import Audio, load_dataset from transformers import UnivNetFeatureExtractor, UnivNetModel model_id_or_path = "dg845/univnet-dev" model = UnivNetModel.from_pretrained(model_id_or_path) feature_extractor = UnivNetFeatureExtractor.from_pretrained(model_id_or_path) ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") # Resample the audio to the model and feature extractor's sampling rate. ds = ds.cast_column("audio", Audio(sampling_rate=feature_extractor.sampling_rate)) # Pad the end of the converted waveforms to reduce artifacts at the end of the output audio samples. inputs = feature_extractor( ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], pad_end=True, return_tensors="pt" ) with torch.no_grad(): audio = model(**inputs) # Remove the extra padding at the end of the output. audio = feature_extractor.batch_decode(**audio)[0] # Convert to wav file write("sample_audio.wav", feature_extractor.sampling_rate, audio) ``` ## Model Details - **Model type:** Vocoder (spectrogram-to-waveform) model, trained as the generator of a GAN - **Dataset:** LibriTTS - **License:** BSD-3-Clause - **Model Description:** This model maps log MEL spectrograms to audio waveforms (that is, a vocoder). Its main component is a [location-variable convolution](https://github.com/zceng/LVCNet) based ResNet, which parameterizes the vocoder. This model was trained as the generator of a generative adversarial network (GAN). - **Resources for more information:** [Paper](https://arxiv.org/abs/2106.07889), [unofficial implementation](https://github.com/maum-ai/univnet)
4,969
[ [ -0.04168701171875, -0.033660888671875, 0.00403594970703125, -0.00978851318359375, -0.00887298583984375, -0.00670623779296875, -0.0166015625, -0.0203704833984375, 0.00807952880859375, 0.042327880859375, -0.039306640625, -0.0304718017578125, -0.0170745849609375, ...
artificial-feelings/bark-forked
2023-07-21T13:04:00.000Z
[ "transformers", "pytorch", "bark", "text-to-audio", "audio", "text-to-speech", "en", "de", "es", "fr", "hi", "it", "ja", "ko", "pl", "pt", "ru", "tr", "zh", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
text-to-speech
artificial-feelings
null
null
artificial-feelings/bark-forked
2
418
transformers
2023-07-21T08:29:49
--- language: - en - de - es - fr - hi - it - ja - ko - pl - pt - ru - tr - zh thumbnail: >- https://user-images.githubusercontent.com/5068315/230698495-cbb1ced9-c911-4c9a-941d-a1a4a1286ac6.png library: bark license: cc-by-nc-4.0 tags: - bark - audio - text-to-speech pipeline_tag: text-to-speech duplicated_from: suno/bark --- # Bark Bark is a transformer-based text-to-audio model created by [Suno](https://www.suno.ai). Bark can generate highly realistic, multilingual speech as well as other audio - including music, background noise and simple sound effects. The model can also produce nonverbal communications like laughing, sighing and crying. To support the research community, we are providing access to pretrained model checkpoints ready for inference. The original github repo and model card can be found [here](https://github.com/suno-ai/bark). This model is meant for research purposes only. The model output is not censored and the authors do not endorse the opinions in the generated content. Use at your own risk. Two checkpoints are released: - [small](https://huggingface.co/suno/bark-small) - [**large** (this checkpoint)](https://huggingface.co/suno/bark) ## Example Try out Bark yourself! * Bark Colab: <a target="_blank" href="https://colab.research.google.com/drive/1eJfA2XUa-mXwdMy7DoYKVYHI1iTd9Vkt?usp=sharing"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> * Hugging Face Colab: <a target="_blank" href="https://colab.research.google.com/drive/1dWWkZzvu7L9Bunq9zvD-W02RFUXoW-Pd?usp=sharing"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> * Hugging Face Demo: <a target="_blank" href="https://huggingface.co/spaces/suno/bark"> <img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/> </a> ## 🤗 Transformers Usage You can run Bark locally with the 🤗 Transformers library from version 4.31.0 onwards. 1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) from main: ``` pip install git+https://github.com/huggingface/transformers.git ``` 2. Run the following Python code to generate speech samples: ```python from transformers import AutoProcessor, AutoModel processor = AutoProcessor.from_pretrained("suno/bark-small") model = AutoModel.from_pretrained("suno/bark-small") inputs = processor( text=["Hello, my name is Suno. And, uh — and I like pizza. [laughs] But I also have other interests such as playing tic tac toe."], return_tensors="pt", ) speech_values = model.generate(**inputs, do_sample=True) ``` 3. Listen to the speech samples either in an ipynb notebook: ```python from IPython.display import Audio sampling_rate = model.generation_config.sample_rate Audio(speech_values.cpu().numpy().squeeze(), rate=sampling_rate) ``` Or save them as a `.wav` file using a third-party library, e.g. `scipy`: ```python import scipy sampling_rate = model.generation_config.sample_rate scipy.io.wavfile.write("bark_out.wav", rate=sampling_rate, data=speech_values.cpu().numpy().squeeze()) ``` For more details on using the Bark model for inference using the 🤗 Transformers library, refer to the [Bark docs](https://huggingface.co/docs/transformers/model_doc/bark). ## Suno Usage You can also run Bark locally through the original [Bark library](https://github.com/suno-ai/bark): 1. First install the [`bark` library](https://github.com/suno-ai/bark) 2. 
Run the following Python code: ```python from bark import SAMPLE_RATE, generate_audio, preload_models from IPython.display import Audio # download and load all models preload_models() # generate audio from text text_prompt = """ Hello, my name is Suno. And, uh — and I like pizza. [laughs] But I also have other interests such as playing tic tac toe. """ speech_array = generate_audio(text_prompt) # play text in notebook Audio(speech_array, rate=SAMPLE_RATE) ``` [pizza.webm](https://user-images.githubusercontent.com/5068315/230490503-417e688d-5115-4eee-9550-b46a2b465ee3.webm) To save `speech_array` as a WAV file: ```python from scipy.io.wavfile import write as write_wav write_wav("/path/to/audio.wav", SAMPLE_RATE, speech_array) ``` ## Model Details The following is additional information about the models released here. Bark is a series of three transformer models that turn text into audio. ### Text to semantic tokens - Input: text, tokenized with [BERT tokenizer from Hugging Face](https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertTokenizer) - Output: semantic tokens that encode the audio to be generated ### Semantic to coarse tokens - Input: semantic tokens - Output: tokens from the first two codebooks of the [EnCodec Codec](https://github.com/facebookresearch/encodec) from facebook ### Coarse to fine tokens - Input: the first two codebooks from EnCodec - Output: 8 codebooks from EnCodec ### Architecture | Model | Parameters | Attention | Output Vocab size | |:-------------------------:|:----------:|------------|:-----------------:| | Text to semantic tokens | 80/300 M | Causal | 10,000 | | Semantic to coarse tokens | 80/300 M | Causal | 2x 1,024 | | Coarse to fine tokens | 80/300 M | Non-causal | 6x 1,024 | ### Release date April 2023 ## Broader Implications We anticipate that this model's text to audio capabilities can be used to improve accessibility tools in a variety of languages. While we hope that this release will enable users to express their creativity and build applications that are a force for good, we acknowledge that any text to audio model has the potential for dual use. While it is not straightforward to voice clone known people with Bark, it can still be used for nefarious purposes. To further reduce the chances of unintended use of Bark, we also release a simple classifier to detect Bark-generated audio with high accuracy (see notebooks section of the main repository).
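The Transformers snippets above generate with a random voice. Bark also supports speaker presets through the processor; a minimal sketch (the preset name `v2/en_speaker_6` is one of the stock presets shipped with Bark):

```python
from transformers import AutoProcessor, AutoModel

processor = AutoProcessor.from_pretrained("suno/bark-small")
model = AutoModel.from_pretrained("suno/bark-small")

# Condition generation on a fixed speaker preset instead of a random voice.
inputs = processor(
    text=["Hello, this sentence uses a fixed speaker preset."],
    voice_preset="v2/en_speaker_6",
    return_tensors="pt",
)
speech_values = model.generate(**inputs, do_sample=True)
```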
6,098
[ [ -0.02349853515625, -0.052490234375, 0.01435089111328125, 0.0361328125, -0.010009765625, -0.003078460693359375, -0.02203369140625, -0.06304931640625, 0.0196990966796875, 0.01666259765625, -0.050140380859375, -0.0545654296875, -0.028106689453125, -0.0054321289...
TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ
2023-09-27T12:45:05.000Z
[ "transformers", "safetensors", "llama", "text-generation", "sft", "en", "dataset:ehartford/dolphin", "dataset:shahules786/orca-chat", "dataset:togethercomputer/RedPajama-Data-1T", "dataset:atom-in-the-universe/fanfics-10k-50k", "arxiv:2306.02707", "license:other", "text-generation-inference"...
text-generation
TheBloke
null
null
TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ
37
418
transformers
2023-07-25T11:50:54
--- language: - en license: other tags: - sft datasets: - ehartford/dolphin - shahules786/orca-chat - togethercomputer/RedPajama-Data-1T - atom-in-the-universe/fanfics-10k-50k model_name: Llama2 13B Orca 8K 3319 base_model: OpenAssistant/llama2-13b-orca-8k-3319 inference: false model_creator: OpenAssistant model_type: llama pipeline_tag: text-generation prompt_template: '<|system|>{system_message}</s><|prompter|>{prompt}</s><|assistant|> ' quantized_by: TheBloke widget: - text: <|system|>You are an AI assistant. You will be given a task. You must generate a detailed and long answer.</s><|prompter|>What is a meme, and what's the history behind this word?</s><|assistant|> - text: <|system|>You are an AI assistant that helps people find information.</s><|prompter|>What's the Earth total population</s><|assistant|> - text: <|system|>You are an AI assistant that follows instruction extremely well. Help as much as you can.</s><|prompter|>Write a story about future of AI development</s><|assistant|> --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Llama2 13B Orca 8K 3319 - GPTQ - Model creator: [OpenAssistant](https://huggingface.co/OpenAssistant) - Original model: [Llama2 13B Orca 8K 3319](https://huggingface.co/OpenAssistant/llama2-13b-orca-8k-3319) <!-- description start --> ## Description This repo contains GPTQ model files for [OpenAssistant's Llama2 13B Orca 8K 3319](https://huggingface.co/OpenAssistant/llama2-13b-orca-8k-3319). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. 
<!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GGUF) * [OpenAssistant's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/OpenAssistant/llama2-13b-orca-8k-3319) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: OpenAssistant-System ``` <|system|>{system_message}</s><|prompter|>{prompt}</s><|assistant|> ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [OpenAssistant's Llama2 13B Orca 8K 3319](https://huggingface.co/OpenAssistant/llama2-13b-orca-8k-3319). <!-- licensing end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. 
It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ/tree/main) | 4 | 128 | No | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 7.26 GB | Yes | 4-bit, without Act Order and group size 128g. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 7.51 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. | | [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_False](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ/tree/gptq-8bit-128g-actorder_False) | 8 | 128 | No | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | | [gptq-8bit-64g-actorder_True](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ/tree/gptq-8bit-64g-actorder_True) | 8 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 13.95 GB | No | 8-bit, with group size 64g and Act Order for even higher inference quality. Poor AutoGPTQ CUDA speed. 
| <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download from branches - In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ:main` - With Git, you can clone a branch with: ``` git clone --single-branch --branch main https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ ``` - In Python Transformers code, the branch is the `revision` parameter; see below. <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui). Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ`. - To download from a specific branch, enter for example `TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ:main` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-python start --> ## How to use this GPTQ model from Python code ### Install the necessary packages Requires: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install transformers>=4.32.0 optimum>=1.12.0 pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7 ``` If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ pip3 install . ``` ### For CodeLlama models only: you must use Transformers 4.33.0 or later. 
If 4.33.0 is not yet released when you read this, you will need to install Transformers from source: ```shell pip3 uninstall -y transformers pip3 install git+https://github.com/huggingface/transformers.git ``` ### You can then use the following code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ" # To use a different branch, change revision # For example: revision="main" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" system_message = "You are a helpful assistant." # any system message works here prompt_template=f'''<|system|>{system_message}</s><|prompter|>{prompt}</s><|assistant|> ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI). [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility. [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models. <!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: OpenAssistant's Llama2 13B Orca 8K 3319 # llama2-13b-orca-8k-3319 ## Model Description This model is a fine-tuning of Meta's Llama2 13B model with 8K context size on a long-conversation variant of the Dolphin dataset ([orca-chat](https://huggingface.co/datasets/shahules786/orca-chat)). Note: **At least Huggingface Transformers [4.31.0](https://pypi.org/project/transformers/4.31.0/) is required to load this model!** ## Usage ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("OpenAssistant/llama2-13b-orca-8k-3319", use_fast=False) model = AutoModelForCausalLM.from_pretrained("OpenAssistant/llama2-13b-orca-8k-3319", torch_dtype=torch.float16, low_cpu_mem_usage=True, device_map="auto") system_message = "You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information." 
user_prompt = "Write me a poem please" prompt = f"""<|system|>{system_message}</s><|prompter|>{user_prompt}</s><|assistant|>""" inputs = tokenizer(prompt, return_tensors="pt").to("cuda") output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=256) print(tokenizer.decode(output[0], skip_special_tokens=True)) ``` ## Model Details - base model: [meta-llama/Llama-2-13b](https://huggingface.co/meta-llama/Llama-2-13b) - License: [Llama 2 Community License Agreement](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) - sampling report: [2023-07-25_OpenAssistant_llama2-13b-orca-8k-3319_sampling_llama2_prompt.json](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-pretrained%2F2023-07-25_OpenAssistant_llama2-13b-orca-8k-3319_sampling_llama2_prompt.json) - wandb: [public-sft/runs/2jfazjt9](https://wandb.ai/open-assistant/public-sft/runs/2jfazjt9) - checkpoint: 3319 steps - datatpye: fp16 - sponsored by: [Redmond.ai](https://redmond.ai/) ## Long context (RoPE Scaling) This model was fine-tuned with a context size of 8192 tokens using linear scaling of RoPE embeddings. This feature was recently added to [Huggingface transformers](https://github.com/huggingface/transformers/). Before loading this model please make sure HF transformers >=4.31.0 is installed (`pip install transformers>=4.31.0`). ## Conversation Template For the initial response use (e.g. the [llama2 default system prompt](https://github.com/facebookresearch/llama/blob/6c7fe276574e78057f917549435a2554000a876d/llama/generation.py#L46) works well): ``` <|system|>system message</s><|prompter|>user prompt</s><|assistant|> ``` For multi-turn conversations use: ``` <|system|>system message</s><|prompter|>Q1</s><|assistant|>A1</s><|prompter|>Q2</s><|assistant|> ``` The model was trained with the following 15 system messages used to generate the training examples (see [ORCA paper](https://arxiv.org/abs/2306.02707)): 1. You are an AI assistant. Provide a detailed answer so user don’t need to search outside to understand the answer. 2. You are an AI assistant. You will be given a task. You must generate a detailed and long answer. 3. You are a helpful assistant, who always provide explanation. Think like you are answering to a five year old. 4. You are an AI assistant that follows instruction extremely well. Help as much as you can. 5. You are an AI assistant that helps people find information. Provide a detailed answer so user don’t need to search outside to understand the answer. 6. You are an AI assistant. User will you give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps. 7. You should describe the task and explain your answer. While answering a multiple choice question, first output the correct answer(s). Then explain why other answers are wrong. Think like you are answering to a five year old. 8. Explain how you used the definition to come up with the answer. 9. You are an AI assistant. You should describe the task and explain your answer. While answering a multiple choice question, first output the correct answer(s). Then explain why other answers are wrong. You might need to use additional knowledge to answer the question. 10. You are an AI assistant that helps people find information. User will you give you a question. Your task is to answer as faithfully as you can. 
While answering think step-by-step and justify your answer. 11. User will you give you a task with some instruction. Your job is follow the instructions as faithfully as you can. While answering think step-by-step and justify your answer. 12. You are a teacher. Given a task, you explain in simple steps what the task is asking, any guidelines it provides and how to use those guidelines to find the answer. 13. You are an AI assistant, who knows every language and how to translate one language to another. Given a task, you explain in simple steps what the task is asking, any guidelines that it provides. You solve the task and show how you used the guidelines to solve the task. 14. Given a definition of a task and a sample input, break the definition into small parts. Each of those parts will have some instruction. Explain their meaning by showing an example that meets the criteria in the instruction. Use the following format: Part \#: a key part of the definition. Usage: Sample response that meets the criteria from the key part. Explain why you think it meets the criteria. 15. You are an AI assistant that helps people find information. ## Datasets: Orca-Chat/Dolphin, RedPajama1T & FanFics This model was trained on: - [shahules786/orca-chat](https://huggingface.co/datasets/shahules786/orca-chat) - [togethercomputer/RedPajama-Data-1T-Sample](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) - [atom-in-the-universe/fanfics-10k-50k](https://huggingface.co/datasets/atom-in-the-universe/fanfics-10k-50k) ``` Dataset Composition: Train (sampled): orca-chat: 188842 (100%) fanfics: 47760 (100%) red_pajama: 188262 (25%) Valid: orca-chat: 5000 fanfics: 1000 red_pajama: 1000 ``` The dataset [shahules786/orca-chat](https://huggingface.co/datasets/shahules786/orca-chat) combines similar examples of the GPT-4 subset of [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin) to form longer conversations to improve long-context training. Additionally, RedPajama and FanFics were used for classic language modelling as an auxiliary task to improve the RoPE scaling for the 8k context size. ## Model Configuration ``` llama2_13b_orca_8k: rng_seed: 0xe1291f1a use_custom_sampler: true sort_by_length: false dtype: fp16 log_dir: "llama2_log_13b_orca_8k" learning_rate: 1e-5 model_name: /mnt/data/llama2/Llama-2-13b-hf/ output_dir: llama2_13b_orca_8k deepspeed_config: configs/zero_config_pretrain.json weight_decay: 0.0 max_length: 8192 warmup_steps: 100 use_flash_attention: true gradient_checkpointing: true gradient_accumulation_steps: 8 per_device_train_batch_size: 2 per_device_eval_batch_size: 1 residual_dropout: 0.0 eval_steps: 200 save_steps: 1000 # (total steps: 3319) num_train_epochs: 1 save_total_limit: 4 superhot: true superhot_config: type: linear scale: 2 datasets: - orca-chat: max_val_set: 5000 - fanfics: max_chunk_size: 65535 max_val_set: 1000 - red_pajama: fraction: 0.25 max_val_set: 1000 max_chunk_size: 65535 peft_model: false ``` # Developers - [shahules786](https://github.com/shahules786) - [jordiclive](https://github.com/jordiclive) - [andreaskoepf](https://github.com/andreaskoepf/) # Special Thanks We want to especially thank Eric Hartford who spared no expense in replicating ORCA and making it available at [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin)! 
Also, shoutout to the whole team working on [LLongMA-2-13b](https://huggingface.co/conceptofmind/LLongMA-2-13b) & the [scaled-rope](https://github.com/jquesnelle/scaled-rope) repository for their awesome work: bloc97, jquesnelle & conceptofmind! The whole Open-Assistant team is very grateful for the continued support of [Redmond.ai](https://redmond.ai/) who sponsored the training compute required for this model. # License - Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved. - Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the [Acceptable Use Policy](https://ai.meta.com/llama/use-policy) for the Llama Materials.
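As a worked example of the conversation template described in this card, a minimal helper (not part of the original repo) that assembles a multi-turn prompt string:

```python
def build_prompt(system_message, turns):
    """Format a conversation with the <|system|>/<|prompter|>/<|assistant|> template.

    `turns` is a list of (user_message, assistant_reply) pairs; pass None as the
    reply of the final turn to leave the prompt open for the model to complete.
    """
    prompt = f"<|system|>{system_message}</s>"
    for user_msg, assistant_msg in turns:
        prompt += f"<|prompter|>{user_msg}</s><|assistant|>"
        if assistant_msg is not None:
            prompt += f"{assistant_msg}</s>"
    return prompt

# Two-turn conversation with the last assistant slot left open:
prompt = build_prompt(
    "You are an AI assistant that helps people find information.",
    [("What is RoPE scaling?", "It linearly rescales rotary position embeddings."),
     ("Why does that help with long contexts?", None)],
)
```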
26,477
[ [ -0.036590576171875, -0.050689697265625, 0.01268768310546875, 0.00966644287109375, -0.0257568359375, -0.010223388671875, 0.01285552978515625, -0.0513916015625, 0.0205841064453125, 0.0294647216796875, -0.04229736328125, -0.04376220703125, -0.0280914306640625, ...
yadhikari/yogesh-a-v2
2023-08-14T06:56:14.000Z
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
yadhikari
null
null
yadhikari/yogesh-a-v2
0
418
diffusers
2023-08-14T06:50:18
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### yogesh-a-v2 Dreambooth model trained by yadhikari with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
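Since the repo is tagged with `StableDiffusionPipeline`, the checkpoint should also load directly with 🧨 Diffusers; a minimal, untested sketch (the instance token in the prompt is a guess based on the model name, check the training notebook for the actual token):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("yadhikari/yogesh-a-v2", torch_dtype=torch.float16)
pipe.to("cuda")

# "yogesh-a-v2" is assumed to be the DreamBooth instance token; adjust if needed.
image = pipe("a portrait photo of yogesh-a-v2 person").images[0]
image.save("sample.png")
```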
502
[ [ -0.0160675048828125, -0.05792236328125, 0.03948974609375, 0.011444091796875, -0.0289764404296875, 0.0233306884765625, 0.0280609130859375, -0.0303192138671875, 0.051483154296875, 0.00994873046875, -0.0310211181640625, -0.01059722900390625, -0.037078857421875, ...
JCTN/JCTN_LORAxl
2023-10-11T16:43:08.000Z
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "concept", "comedy", "cereal box", "cereal", "license:other", "region:us" ]
text-to-image
JCTN
null
null
JCTN/JCTN_LORAxl
0
418
diffusers
2023-09-16T20:11:32
--- license: other tags: - text-to-image - stable-diffusion - lora - diffusers - concept - comedy - cereal box - cereal base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: widget: - text: " boogers, free tissue inside" - text: " star wars wookie bits, free lightsaber inside" - text: " kitty litter crunch" - text: " t bone steak" - text: " black plague, free death inside" - text: " barbie and ken" - text: " boiled eggs" - text: " raw bacon" - text: " herpes" - text: " pickles" --- # Super Cereal - SDXL LoRA ![Image 0](2879372.jpeg) > boogers, free tissue inside Multiplier of 0.9 - 1.1 works well on SDXL base. Simple prompts tend to work well. No trigger word needed. Special thanks to Huggingface for the GPU grant. ## Image examples for the model: ![Image 1](2879386.jpeg) > star wars wookie bits, free lightsaber inside ![Image 2](2879373.jpeg) > kitty litter crunch ![Image 3](2879374.jpeg) > t bone steak ![Image 4](2879375.jpeg) > black plague, free death inside ![Image 5](2879382.jpeg) > barbie and ken ![Image 6](2879377.jpeg) > boiled eggs ![Image 7](2879379.jpeg) > raw bacon ![Image 8](2879378.jpeg) > herpes ![Image 9](2879380.jpeg) > pickles
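A minimal Diffusers sketch of applying this LoRA to the SDXL base model listed above, following the multiplier note (the weight filename inside the repo is not stated here, so `load_lora_weights` is pointed at the repo id and may need an explicit `weight_name=` argument):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe.to("cuda")

# Load the LoRA from this repo; pass weight_name="..." if it is not auto-detected.
pipe.load_lora_weights("JCTN/JCTN_LORAxl")

# Per the notes above: multiplier 0.9 - 1.1, simple prompts, no trigger word.
image = pipe(
    "boogers, free tissue inside",
    cross_attention_kwargs={"scale": 1.0},
).images[0]
image.save("cereal_box.png")
```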
1,237
[ [ 0.00571441650390625, -0.04351806640625, 0.040924072265625, 0.019378662109375, -0.040740966796875, 0.01219940185546875, 0.01276397705078125, -0.0186309814453125, 0.049774169921875, 0.03070068359375, -0.03363037109375, -0.032440185546875, -0.04400634765625, -0...
thkkvui/mDeBERTa-v3-base-finetuned-nli-jnli
2023-09-26T08:44:14.000Z
[ "transformers", "pytorch", "deberta-v2", "text-classification", "generated_from_trainer", "bert", "zero-shot-classification", "ja", "dataset:MoritzLaurer/multilingual-NLI-26lang-2mil7", "dataset:shunk031/JGLUE", "license:mit", "endpoints_compatible", "region:us" ]
zero-shot-classification
thkkvui
null
null
thkkvui/mDeBERTa-v3-base-finetuned-nli-jnli
0
418
transformers
2023-09-25T21:05:15
--- license: mit language: - ja base_model: microsoft/mdeberta-v3-base tags: - generated_from_trainer - bert - zero-shot-classification - text-classification datasets: - MoritzLaurer/multilingual-NLI-26lang-2mil7 - shunk031/JGLUE metrics: - accuracy - f1 model-index: - name: mDeBERTa-v3-base-finetuned-nli-jnli results: [] pipeline_tag: zero-shot-classification widget: - text: 今日の予定を教えて candidate_labels: 天気,ニュース,金融,予定 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mDeBERTa-v3-base-finetuned-nli-jnli This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the MoritzLaurer/multilingual-NLI-26lang-2mil7 and shunk031/JGLUE (JNLI) datasets. It achieves the following results on the evaluation set: - Loss: 0.7739 - Accuracy: 0.6808 - F1: 0.6742 ## Model description More information needed ## Intended uses & limitations #### zero-shot classification ```python from transformers import pipeline model_name = "thkkvui/mDeBERTa-v3-base-finetuned-nli-jnli" classifier = pipeline("zero-shot-classification", model=model_name) text = ["今日の天気を教えて", "ニュースある?", "予定をチェックして", "ドル円は?"] labels = ["天気", "ニュース", "金融", "予定"] for t in text: output = classifier(t, labels, multi_label=False) print(output) ``` #### NLI use-case ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch device = torch.device("mps" if torch.backends.mps.is_available() else "cpu") model_name = "thkkvui/mDeBERTa-v3-base-finetuned-nli-jnli" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSequenceClassification.from_pretrained(model_name) premise = "NY Yankees is the professional baseball team in America." hypothesis = "メジャーリーグのチームは、日本ではニューヨークヤンキースが有名だ。" inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt") with torch.no_grad(): output = model(**inputs) preds = torch.softmax(output["logits"][0], -1).tolist() label_names = ["entailment", "neutral", "contradiction"] result = {name: round(float(pred) * 100, 1) for pred, name in zip(preds, label_names)} print(result) ``` ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.06 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:| | 0.753 | 0.53 | 5000 | 0.8758 | 0.6105 | 0.6192 | | 0.5947 | 1.07 | 10000 | 0.6619 | 0.7054 | 0.7035 | | 0.5791 | 1.6 | 15000 | 0.7739 | 0.6808 | 0.6742 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1 - Datasets 2.14.5 - Tokenizers 0.13.3
3,095
[ [ -0.0305633544921875, -0.0347900390625, 0.0149078369140625, 0.0095367431640625, -0.0149688720703125, -0.0118255615234375, -0.00629425048828125, -0.0137786865234375, 0.01428985595703125, 0.0128173828125, -0.039825439453125, -0.047943115234375, -0.046630859375, ...
timm/repvit_m2_3.dist_450e_in1k
2023-10-20T18:35:39.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2307.09283", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/repvit_m2_3.dist_450e_in1k
0
418
timm
2023-10-20T18:35:34
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for repvit_m2_3.dist_450e_in1k A RepViT image classification model. Trained on ImageNet-1k with distillation by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 23.7 - GMACs: 4.6 - Activations (M): 26.2 - Image size: 224 x 224 - **Papers:** - RepViT: Revisiting Mobile CNN From ViT Perspective: https://arxiv.org/abs/2307.09283 - **Original:** https://github.com/THU-MIG/RepViT - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('repvit_m2_3.dist_450e_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'repvit_m2_3.dist_450e_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 80, 56, 56]) # torch.Size([1, 160, 28, 28]) # torch.Size([1, 320, 14, 14]) # torch.Size([1, 640, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'repvit_m2_3.dist_450e_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 640, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Citation ```bibtex @misc{wang2023repvit, title={RepViT: Revisiting Mobile CNN From ViT Perspective}, author={Ao Wang and Hui Chen and Zijia Lin and Hengjun Pu and Guiguang Ding}, year={2023}, eprint={2307.09283}, archivePrefix={arXiv}, primaryClass={cs.CV} } ```
3,466
[ [ -0.03692626953125, -0.02581787109375, -0.005565643310546875, 0.01045989990234375, -0.032501220703125, -0.0284576416015625, -0.01531982421875, -0.0137176513671875, 0.0254364013671875, 0.0322265625, -0.040985107421875, -0.0521240234375, -0.045684814453125, -0....
akifhasan/sabbur
2023-07-13T15:13:36.000Z
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
akifhasan
null
null
akifhasan/sabbur
0
417
diffusers
2023-07-13T15:06:48
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### sabbur Dreambooth model trained by akifhasan with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
497
[ [ -0.024078369140625, -0.05810546875, 0.031982421875, 0.0306854248046875, -0.0311126708984375, 0.02703857421875, 0.0224761962890625, -0.0229949951171875, 0.049407958984375, 0.01157379150390625, -0.01432037353515625, -0.00997161865234375, -0.043182373046875, -0...
ptx0/sdxl-base
2023-07-26T20:13:21.000Z
[ "diffusers", "text-to-image", "stable-diffusion", "arxiv:2307.01952", "arxiv:2108.01073", "arxiv:2112.10752", "license:openrail++", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
ptx0
null
null
ptx0/sdxl-base
2
417
diffusers
2023-07-26T19:49:38
--- license: openrail++ tags: - text-to-image - stable-diffusion --- # SD-XL 1.0-base Model Card ![row01](01.png) ## Model ![pipeline](pipeline.png) [SDXL](https://arxiv.org/abs/2307.01952) consists of a mixture-of-experts pipeline for latent diffusion: In a first step, the base model is used to generate (noisy) latents, which are then further processed with a refinement model (available here: https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/) specialized for the final denoising steps. Note that the base model can be used as a standalone module. Alternatively, we can use a two-stage pipeline as follows: First, the base model is used to generate latents of the desired output size. In the second step, we use a specialized high-resolution model and apply a technique called SDEdit (https://arxiv.org/abs/2108.01073, also known as "img2img") to the latents generated in the first step, using the same prompt. This technique is slightly slower than the first one, as it requires more function evaluations. Source code is available at https://github.com/Stability-AI/generative-models . ### Model Description - **Developed by:** Stability AI - **Model type:** Diffusion-based text-to-image generative model - **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/LICENSE.md) - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses two fixed, pretrained text encoders ([OpenCLIP-ViT/G](https://github.com/mlfoundations/open_clip) and [CLIP-ViT/L](https://github.com/openai/CLIP/tree/main)). - **Resources for more information:** Check out our [GitHub Repository](https://github.com/Stability-AI/generative-models) and the [SDXL report on arXiv](https://arxiv.org/abs/2307.01952). ### Model Sources For research purposes, we recommend our `generative-models` Github repository (https://github.com/Stability-AI/generative-models), which implements the most popular diffusion frameworks (both training and inference) and for which new functionalities like distillation will be added over time. [Clipdrop](https://clipdrop.co/stable-diffusion) provides free SDXL inference. - **Repository:** https://github.com/Stability-AI/generative-models - **Demo:** https://clipdrop.co/stable-diffusion ## Evaluation ![comparison](comparison.png) The chart above evaluates user preference for SDXL (with and without refinement) over SDXL 0.9 and Stable Diffusion 1.5 and 2.1. The SDXL base model performs significantly better than the previous variants, and the model combined with the refinement module achieves the best overall performance. 
### 🧨 Diffusers Make sure to upgrade diffusers to >= 0.18.0: ``` pip install diffusers --upgrade ``` In addition make sure to install `transformers`, `safetensors`, `accelerate` as well as the invisible watermark: ``` pip install invisible_watermark transformers accelerate safetensors ``` You can then use the model as follows ```py from diffusers import DiffusionPipeline import torch pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, use_safetensors=True, variant="fp16") pipe.to("cuda") # if using torch < 2.0 # pipe.enable_xformers_memory_efficient_attention() prompt = "An astronaut riding a green horse" image = pipe(prompt=prompt).images[0] ``` When using `torch >= 2.0`, you can improve the inference speed by 20-30% with `torch.compile`. Simply wrap the unet with `torch.compile` before running the pipeline: ```py pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) ``` If you are limited by GPU VRAM, you can enable *cpu offloading* by calling `pipe.enable_model_cpu_offload` instead of `.to("cuda")`: ```diff - pipe.to("cuda") + pipe.enable_model_cpu_offload() ``` ## Uses ### Direct Use The model is intended for research purposes only. Possible research areas and tasks include - Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. - Research on generative models. - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models. Excluded uses are described below. ### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. ## Limitations and Bias ### Limitations - The model does not achieve perfect photorealism - The model cannot render legible text - The model struggles with more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere” - Faces and people in general may not be generated properly. - The autoencoding part of the model is lossy. ### Bias While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
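The Model section above describes a two-stage base-plus-refiner flow, while the snippet only runs the base model. A sketch of the ensemble-of-experts variant from recent Diffusers releases, where the base model stops early and hands latents to the refiner (`denoising_end`/`denoising_start` control the split point):

```py
from diffusers import DiffusionPipeline
import torch

base = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")
refiner = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
    text_encoder_2=base.text_encoder_2, vae=base.vae,  # share weights to save VRAM
).to("cuda")

prompt = "An astronaut riding a green horse"

# Base model denoises the first 80% of the schedule and returns latents.
latents = base(prompt=prompt, denoising_end=0.8, output_type="latent").images
# Refiner finishes the remaining 20% of denoising.
image = refiner(prompt=prompt, denoising_start=0.8, image=latents).images[0]
image.save("astronaut.png")
```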
5,153
[ [ -0.032562255859375, -0.06524658203125, 0.040313720703125, 0.00897979736328125, -0.0152740478515625, -0.0155181884765625, -0.003631591796875, -0.0152435302734375, -0.0032444000244140625, 0.03167724609375, -0.033233642578125, -0.0423583984375, -0.053680419921875, ...
cyriac880/dog
2023-08-09T17:29:51.000Z
[ "diffusers", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
cyriac880
null
null
cyriac880/dog
0
417
diffusers
2023-08-09T17:17:29
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### DOG Dreambooth model trained by cyriac880 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: VJCET294 Sample pictures of this concept: ![0](https://huggingface.co/cyriac880/dog/resolve/main/sample_images/xxx_(2).jpg) ![1](https://huggingface.co/cyriac880/dog/resolve/main/sample_images/xxx_(1).jpg)
466
[ [ -0.0550537109375, -0.0219268798828125, 0.036651611328125, 0.0067138671875, -0.00948333740234375, 0.035491943359375, 0.029693603515625, -0.039642333984375, 0.04656982421875, 0.02783203125, -0.054107666015625, -0.0335693359375, -0.022552490234375, -0.003448486...
timm/fastvit_sa24.apple_in1k
2023-08-23T20:55:45.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2303.14189", "license:other", "region:us" ]
image-classification
timm
null
null
timm/fastvit_sa24.apple_in1k
0
417
timm
2023-08-23T20:55:27
--- tags: - image-classification - timm library_name: timm license: other datasets: - imagenet-1k --- # Model card for fastvit_sa24.apple_in1k A FastViT image classification model. Trained on ImageNet-1k by paper authors. Please observe [original license](https://github.com/apple/ml-fastvit/blob/8af5928238cab99c45f64fc3e4e7b1516b8224ba/LICENSE). ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 21.6 - GMACs: 3.8 - Activations (M): 23.9 - Image size: 256 x 256 - **Papers:** - FastViT: A Fast Hybrid Vision Transformer using Structural Reparameterization: https://arxiv.org/abs/2303.14189 - **Original:** https://github.com/apple/ml-fastvit - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('fastvit_sa24.apple_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'fastvit_sa24.apple_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 64, 64, 64]) # torch.Size([1, 128, 32, 32]) # torch.Size([1, 256, 16, 16]) # torch.Size([1, 512, 8, 8]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'fastvit_sa24.apple_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 512, 8, 8) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Citation ```bibtex @inproceedings{vasufastvit2023, author = {Pavan Kumar Anasosalu Vasu and James Gabriel and Jeff Zhu and Oncel Tuzel and Anurag Ranjan}, title = {FastViT: A Fast Hybrid Vision Transformer using Structural Reparameterization}, booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision}, year = {2023} } ```
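To turn the top-5 indices from the Image Classification example into human-readable labels, one common approach is sketched below. The label-file URL is an assumption (the widely mirrored ImageNet-1k class list from the PyTorch hub repository), not something this card ships.

```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('fastvit_sa24.apple_in1k', pretrained=True).eval()
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0))

# assumed location of the standard ImageNet-1k label list (one label per line)
labels_url = 'https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt'
labels = urlopen(labels_url).read().decode('utf-8').splitlines()

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
for prob, idx in zip(top5_probabilities[0], top5_class_indices[0]):
    print(f'{labels[idx.item()]}: {prob.item():.2f}%')
```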
3,670
[ [ -0.04278564453125, -0.036529541015625, 0.0011835098266601562, 0.018524169921875, -0.0311737060546875, -0.0150146484375, -0.007541656494140625, -0.019744873046875, 0.024658203125, 0.028350830078125, -0.038421630859375, -0.044647216796875, -0.049774169921875, ...
TheBloke/Athena-v1-GPTQ
2023-09-27T12:46:46.000Z
[ "transformers", "safetensors", "llama", "text-generation", "license:llama2", "text-generation-inference", "region:us" ]
text-generation
TheBloke
null
null
TheBloke/Athena-v1-GPTQ
8
417
transformers
2023-08-30T15:37:01
--- license: llama2 model_name: Athena v1 base_model: IkariDev/Athena-v1 inference: false model_creator: IkariDev model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Athena v1 - GPTQ - Model creator: [IkariDev](https://huggingface.co/IkariDev) - Original model: [Athena v1](https://huggingface.co/IkariDev/Athena-v1) <!-- description start --> ## Description This repo contains GPTQ model files for [IkariDev's Athena v1](https://huggingface.co/IkariDev/Athena-v1). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Athena-v1-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Athena-v1-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Athena-v1-GGUF) * [IkariDev's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/IkariDev/Athena-v1) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. 
True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Athena-v1-GPTQ/tree/main) | 4 | 128 | No | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.26 GB | Yes | 4-bit, without Act Order and group size 128g. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Athena-v1-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Athena-v1-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.51 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. | | [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Athena-v1-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Athena-v1-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Athena-v1-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. 
| <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download from branches - In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Athena-v1-GPTQ:main` - With Git, you can clone a branch with: ``` git clone --single-branch --branch main https://huggingface.co/TheBloke/Athena-v1-GPTQ ``` - In Python Transformers code, the branch is the `revision` parameter; see below. <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui). Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Athena-v1-GPTQ`. - To download from a specific branch, enter for example `TheBloke/Athena-v1-GPTQ:main` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Athena-v1-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-python start --> ## How to use this GPTQ model from Python code ### Install the necessary packages Requires: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install transformers>=4.32.0 optimum>=1.12.0 pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7 ``` If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ pip3 install . ``` ### For CodeLlama models only: you must use Transformers 4.33.0 or later. If 4.33.0 is not yet released when you read this, you will need to install Transformers from source: ```shell pip3 uninstall -y transformers pip3 install git+https://github.com/huggingface/transformers.git ``` ### You can then use the following code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/Athena-v1-GPTQ" # To use a different branch, change revision # For example: revision="main" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. 
### Instruction: {prompt} ### Response: ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI). [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility. [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models. <!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J.
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: IkariDev's Athena v1 Experimental MythoMax-based ERP model. Use the Alpaca format. Merged models: MythoMax, PuddleJumper, Airoboros, Chronos Beluga. GGUF files are available here: https://huggingface.co/TheBloke/Athena-v1-GGUF
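The GPTQ parameters explained in the table above (bits, group size, Act Order, damp %) map directly onto AutoGPTQ's quantisation config. The following is a hedged sketch of how a quant like the `main` branch could be reproduced; it is not TheBloke's actual quantisation script, and the single calibration example is a placeholder for the wikitext sequences actually used.

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_id = "IkariDev/Athena-v1"

# Mirrors the "main" branch row: 4-bit, group size 128, no Act Order, damp 0.1
quantize_config = BaseQuantizeConfig(
    bits=4,
    group_size=128,
    desc_act=False,   # "Act Order" in the table above
    damp_percent=0.1,
)

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoGPTQForCausalLM.from_pretrained(model_id, quantize_config)

# Placeholder calibration data; the real quant used wikitext at sequence length 4096
examples = [tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")]

model.quantize(examples)
model.save_quantized("Athena-v1-GPTQ-4bit-128g")
```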
14,765
[ [ -0.0447998046875, -0.05621337890625, 0.01678466796875, 0.00450897216796875, -0.032379150390625, -0.01546478271484375, 0.0179290771484375, -0.043731689453125, 0.0177459716796875, 0.033782958984375, -0.052734375, -0.037994384765625, -0.03216552734375, -0.00014...
mihirneal/saved_ckpt
2023-10-29T09:09:29.000Z
[ "diffusers", "safetensors", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "t2iadapter", "license:creativeml-openrail-m", "diffusers:T2IAdapter", "region:us" ]
text-to-image
mihirneal
null
null
mihirneal/saved_ckpt
0
417
diffusers
2023-09-03T05:51:07
--- license: creativeml-openrail-m base_model: stabilityai/stable-diffusion-xl-base-1.0 tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - t2iadapter inference: true --- # t2iadapter-mihirneal/saved_ckpt These are t2iadapter weights trained on stabilityai/stable-diffusion-xl-base-1.0 with a new type of conditioning.
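Since the card does not state what the new conditioning is, the inference sketch below is heavily hedged: `conditioning.png` is a placeholder for whatever control image this adapter expects, and the wiring simply follows the standard diffusers T2I-Adapter pattern for SDXL.

```python
from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter
from diffusers.utils import load_image
import torch

# Load the adapter weights from this repo
adapter = T2IAdapter.from_pretrained("mihirneal/saved_ckpt", torch_dtype=torch.float16)

# Attach it to the SDXL base model it was trained against
pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    adapter=adapter,
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# placeholder: replace with the conditioning image this adapter was trained on
cond_image = load_image("conditioning.png")
image = pipe(prompt="a photo of a house", image=cond_image).images[0]
image.save("out.png")
```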
363
[ [ 0.004817962646484375, -0.0005311965942382812, -0.002025604248046875, 0.0208282470703125, -0.019439697265625, 0.01018524169921875, 0.0263214111328125, 0.026458740234375, 0.046905517578125, 0.0267333984375, -0.02362060546875, 0.01079559326171875, -0.06524658203125...
stefan-it/hmbench-ajmc-en-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
2023-10-26T10:51:31.000Z
[ "flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "en", "license:mit", "region:us" ]
token-classification
stefan-it
null
null
stefan-it/hmbench-ajmc-en-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
0
417
flair
2023-10-23T18:51:18
--- language: en license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-64k-td-cased widget: - text: Cp . Eur . Phoen . 240 , 1 , αἷμα ddiov φλέγέι . --- # Fine-tuned Flair Model on AjMC English NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [AjMC English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using hmBERT 64k as backbone LM. The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project. The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[4, 8]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average | |-------------------|--------------|--------------|--------------|-----------------|--------------|-----------------| | `bs4-e10-lr3e-05` | [0.8565][1] | [0.8592][2] | [0.8513][3] | [0.8622][4] | [0.8534][5] | 0.8565 ± 0.0044 | | `bs4-e10-lr5e-05` | [0.8582][6] | [0.852][7] | [0.8517][8] | [**0.8544**][9] | [0.842][10] | 0.8517 ± 0.006 | | `bs8-e10-lr5e-05` | [0.8412][11] | [0.8369][12] | [0.853][13] | [0.8392][14] | [0.85][15] | 0.8441 ± 0.007 | | `bs8-e10-lr3e-05` | [0.8483][16] | [0.8431][17] | [0.8443][18] | [0.8486][19] | [0.8359][20] | 0.844 ± 0.0052 | [1]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
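The card ships no inference snippet, so here is a minimal, hedged sketch of tagging a sentence with this model through Flair, reusing the widget example above as input:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger directly from the Hugging Face Hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-ajmc-en-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4"
)

sentence = Sentence("Cp . Eur . Phoen . 240 , 1 , αἷμα ddiov φλέγέι .")
tagger.predict(sentence)

# Print recognised entities (pers, work, loc, object, date, scope)
for entity in sentence.get_spans("ner"):
    print(entity)
```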
4,608
[ [ -0.05914306640625, -0.026519775390625, 0.0187225341796875, 0.002399444580078125, 0.0109100341796875, 0.0018634796142578125, 0.006565093994140625, -0.02496337890625, 0.033172607421875, 0.03448486328125, -0.047760009765625, -0.039398193359375, -0.03558349609375, ...
timm/resnest50d_1s4x24d.in1k
2023-04-23T23:36:15.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2004.08955", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/resnest50d_1s4x24d.in1k
0
416
timm
2023-04-23T23:35:51
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for resnest50d_1s4x24d.in1k A ResNeSt (ResNet based architecture with Split Attention) image classification model. Trained on ImageNet-1k by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 25.7 - GMACs: 4.4 - Activations (M): 13.6 - Image size: 224 x 224 - **Papers:** - ResNeSt: Split-Attention Networks: https://arxiv.org/abs/2004.08955 - **Dataset:** ImageNet-1k - **Original:** https://github.com/zhanghang1989/ResNeSt ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('resnest50d_1s4x24d.in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'resnest50d_1s4x24d.in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 64, 112, 112]) # torch.Size([1, 256, 56, 56]) # torch.Size([1, 512, 28, 28]) # torch.Size([1, 1024, 14, 14]) # torch.Size([1, 2048, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'resnest50d_1s4x24d.in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 2048, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @article{zhang2020resnest, title={ResNeSt: Split-Attention Networks}, author={Zhang, Hang and Wu, Chongruo and Zhang, Zhongyue and Zhu, Yi and Zhang, Zhi and Lin, Haibin and Sun, Yue and He, Tong and Muller, Jonas and Manmatha, R.
and Li, Mu and Smola, Alexander}, journal={arXiv preprint arXiv:2004.08955}, year={2020} } ```
3,765
[ [ -0.0401611328125, -0.035125732421875, 0.00806427001953125, 0.0139312744140625, -0.0262298583984375, -0.0251922607421875, -0.0201416015625, -0.02093505859375, 0.0289764404296875, 0.034210205078125, -0.04827880859375, -0.04888916015625, -0.05078125, -0.0036830...
TheBloke/Mistral-7B-Code-16K-qlora-GPTQ
2023-10-17T09:19:08.000Z
[ "transformers", "safetensors", "mistral", "text-generation", "license:apache-2.0", "text-generation-inference", "region:us" ]
text-generation
TheBloke
null
null
TheBloke/Mistral-7B-Code-16K-qlora-GPTQ
8
416
transformers
2023-10-17T08:45:37
--- base_model: Nondzu/Mistral-7B-code-16k-qlora inference: false license: apache-2.0 model_creator: Kamil model_name: Mistral 7B Code 16K qLoRA model_type: mistral prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Mistral 7B Code 16K qLoRA - GPTQ - Model creator: [Kamil](https://huggingface.co/Nondzu) - Original model: [Mistral 7B Code 16K qLoRA](https://huggingface.co/Nondzu/Mistral-7B-code-16k-qlora) <!-- description start --> ## Description This repo contains GPTQ model files for [Kamil's Mistral 7B Code 16K qLoRA](https://huggingface.co/Nondzu/Mistral-7B-code-16k-qlora). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Mistral-7B-Code-16K-qlora-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Mistral-7B-Code-16K-qlora-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Mistral-7B-Code-16K-qlora-GGUF) * [Kamil's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Nondzu/Mistral-7B-code-16k-qlora) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. 
"None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Mistral-7B-Code-16K-qlora-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 4.16 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Mistral-7B-Code-16K-qlora-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 4.57 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Mistral-7B-Code-16K-qlora-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 7.52 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Mistral-7B-Code-16K-qlora-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 7.68 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | | [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/Mistral-7B-Code-16K-qlora-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 8.17 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. | | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Mistral-7B-Code-16K-qlora-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 4.29 GB | Yes | 4-bit, with Act Order and group size 64g. 
Uses less VRAM than 32g, but with slightly lower accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/Mistral-7B-Code-16K-qlora-GPTQ` in the "Download model" box. To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Mistral-7B-Code-16K-qlora-GPTQ:gptq-4bit-32g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `Mistral-7B-Code-16K-qlora-GPTQ`: ```shell mkdir Mistral-7B-Code-16K-qlora-GPTQ huggingface-cli download TheBloke/Mistral-7B-Code-16K-qlora-GPTQ --local-dir Mistral-7B-Code-16K-qlora-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir Mistral-7B-Code-16K-qlora-GPTQ huggingface-cli download TheBloke/Mistral-7B-Code-16K-qlora-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir Mistral-7B-Code-16K-qlora-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir Mistral-7B-Code-16K-qlora-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Mistral-7B-Code-16K-qlora-GPTQ --local-dir Mistral-7B-Code-16K-qlora-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Mistral-7B-Code-16K-qlora-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Mistral-7B-Code-16K-qlora-GPTQ`. - To download from a specific branch, enter for example `TheBloke/Mistral-7B-Code-16K-qlora-GPTQ:gptq-4bit-32g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Mistral-7B-Code-16K-qlora-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/Mistral-7B-Code-16K-qlora-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: {response}") ``` <!-- README_GPTQ.md-use-from-tgi end --> <!-- README_GPTQ.md-use-from-python start --> ## How to use this GPTQ model from Python code ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install transformers optimum pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7 ``` If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.4.2 pip3 install . 
``` ### You can then use the following code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/Mistral-7B-Code-16K-qlora-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-32g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI). [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility. [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models. <!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Kamil's Mistral 7B Code 16K qLoRA # Mistral-7B-code-16k-qlora I'm excited to announce the release of a new model called Mistral-7B-code-16k-qlora. This small and fast model shows a lot of promise for supporting coding or acting as a copilot. I'm currently looking for people to help me test it out! ## Additional Information This model was trained on 3x RTX 3090 in my homelab, using around 65 kWh at approximately 23 cents per kWh, which works out to around $15 of electricity. ## Dataset: nickrosh/Evol-Instruct-Code-80k-v1 https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1 ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction: {prompt} ### Response: ``` [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl"/>](https://github.com/OpenAccess-AI-Collective/axolotl) ## Settings: ``` base_model: mistralai/Mistral-7B-Instruct-v0.1 base_model_config: mistralai/Mistral-7B-Instruct-v0.1 model_type: MistralForCausalLM tokenizer_type: LlamaTokenizer is_mistral_derived_model: true load_in_8bit: false load_in_4bit: true strict: false datasets: - path: nickrosh/Evol-Instruct-Code-80k-v1 type: oasst dataset_prepared_path: val_set_size: 0.01 output_dir: ./Mistral-7B-Evol-Instruct-16k-test11 adapter: qlora lora_model_dir: # 16384 8192 4096 2048 sequence_len: 16384 sample_packing: true pad_to_sequence_len: true lora_r: 32 lora_alpha: 16 lora_dropout: 0.05 lora_target_modules: lora_target_linear: true lora_fan_in_fan_out: wandb_project: mistral-code wandb_entity: wandb_watch: wandb_run_id: wandb_log_model: gradient_accumulation_steps: 2 micro_batch_size: 1 num_epochs: 8 optimizer: paged_adamw_32bit lr_scheduler: cosine learning_rate: 0.0002 train_on_inputs: false group_by_length: false bf16: true fp16: false tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 10 eval_steps: 20 save_steps: debug: # deepspeed: deepspeed: deepspeed/zero2.json weight_decay: 0.0 fsdp: fsdp_config: special_tokens: bos_token: "<s>" eos_token: "</s>" unk_token: "<unk>" ``` ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63729f35acef705233c87909/NyuqJFDkH00KGvuOwHIuG.png) Check my other projects: https://github.com/Nondzu/LlamaTor
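For readers who use PEFT rather than Axolotl, the LoRA hyper-parameters in the config above translate roughly into the `LoraConfig` below. This is a hedged sketch: the explicit module list is an assumption spelling out what `lora_target_linear: true` typically resolves to for Mistral-style models.

```python
from peft import LoraConfig

# Approximate PEFT equivalent of the qLoRA settings above (not the actual training code)
lora_config = LoraConfig(
    r=32,               # lora_r
    lora_alpha=16,      # lora_alpha
    lora_dropout=0.05,  # lora_dropout
    bias="none",
    task_type="CAUSAL_LM",
    # assumed expansion of `lora_target_linear: true` for Mistral
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```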
21,643
[ [ -0.03857421875, -0.058868408203125, 0.01387786865234375, 0.01471710205078125, -0.02252197265625, -0.01654052734375, 0.00461578369140625, -0.0330810546875, 0.01528167724609375, 0.03155517578125, -0.042755126953125, -0.04443359375, -0.029815673828125, -0.00336...