modelId: string (length 4 to 111)
lastModified: string (length 24)
tags: list
pipeline_tag: string (length 5 to 30)
author: string (length 2 to 34)
config: null
securityStatus: null
id: string (length 4 to 111)
likes: int64 (0 to 9.53k)
downloads: int64 (2 to 73.6M)
library_name: string (length 2 to 84)
created: timestamp[us]
card: string (length 101 to 901k)
card_len: int64 (101 to 901k)
embeddings: list
microsoft/BiomedNLP-KRISSBERT-PubMed-UMLS-EL
2022-05-25T02:45:36.000Z
[ "transformers", "pytorch", "bert", "exbert", "feature-extraction", "en", "arxiv:2112.07887", "license:mit", "endpoints_compatible", "region:us" ]
feature-extraction
microsoft
null
null
microsoft/BiomedNLP-KRISSBERT-PubMed-UMLS-EL
18
2,686
transformers
2022-04-15T17:50:38
--- language: en tags: - exbert license: mit pipeline_tag: feature-extraction widget: - text: "<ENT> ER </ENT> crowding has become a wide-spread problem." --- ## KRISSBERT [https://arxiv.org/pdf/2112.07887.pdf](https://arxiv.org/pdf/2112.07887.pdf) Entity linking faces significant challenges such as prolific variations and prevalent ambiguities, especially in high-value domains with myriad entities. Standard classification approaches suffer from the annotation bottleneck and cannot effectively handle unseen entities. Zero-shot entity linking has emerged as a promising direction for generalizing to new entities, but it still requires example gold entity mentions during training and canonical descriptions for all entities, both of which are rarely available outside of Wikipedia ([Logeswaran et al., 2019](https://aclanthology.org/P19-1335.pdf); [Wu et al., 2020](https://aclanthology.org/2020.emnlp-main.519.pdf)). We explore Knowledge-RIch Self-Supervision (KRISS) and train a contextual encoder (KRISSBERT) for entity linking, by leveraging readily available unlabeled text and domain knowledge. Specifically, the KRISSBERT model is initialized with [PubMedBERT](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract) parameters, and then continuously pretrained using biomedical entity names from the [UMLS](https://www.nlm.nih.gov/research/umls/index.html) ontology to self-supervise entity linking examples from [PubMed](https://pubmed.ncbi.nlm.nih.gov/) abstracts. Experiments on seven standard biomedical entity linking datasets show that KRISSBERT attains new state of the art, outperforming prior self-supervised methods by as much as 20 absolute points in accuracy. See [Zhang et al., 2021](https://arxiv.org/abs/2112.07887) for the details. Note that some prior systems like [BioSyn](https://aclanthology.org/2020.acl-main.335.pdf), [SapBERT](https://aclanthology.org/2021.naacl-main.334.pdf), and their follow-up work (e.g., [Lai et al., 2021](https://aclanthology.org/2021.findings-emnlp.140.pdf)) claimed to do entity linking, but their systems completely ignore the context of an entity mention, and can only predict a surface form in the entity dictionary (See Figure 1 in [BioSyn](https://aclanthology.org/2020.acl-main.335.pdf)), _**not the canonical entity ID (e.g., CUI in UMLS)**_. Therefore, they can't disambiguate ambiguous mentions. For instance, given the entity mention "_ER_" in the sentence "*ER crowding has become a wide-spread problem*", their systems ignore the sentence context, and simply predict the closest surface form, which is just "ER". Multiple entities share this surface form as a potential name or alias, such as *Emergency Room (C0562508)*, *Estrogen Receptor Gene (C1414461)*, and *Endoplasmic Reticulum(C0014239)*. Without using the context information, their systems can't resolve such ambiguity and pinpoint the correct entity *Emergency Room (C0562508)*. More problematically, their evaluation would deem such an ambiguous prediction as correct. Consequently, the reported results in their papers do not reflect true performance on entity linking. ## Usage for Entity Linking Here, we use the [MedMentions](https://github.com/chanzuckerberg/MedMentions) data to show you how to 1) **generate prototype embeddings**, and 2) **run entity linking**. (We are currently unable to release the self-supervised mention examples, because they require the UMLS and PubMed licenses.) #### 1. 
Create conda environment and install requirements ```bash conda create -n kriss -y python=3.8 && conda activate kriss pip install -r requirements.txt ``` #### 2. Switch the root dir to [usage](https://huggingface.co/microsoft/BiomedNLP-KRISSBERT-PubMed-UMLS-EL/tree/main/usage) ```bash cd usage ``` #### 3. Download the MedMentions dataset ```bash git clone https://github.com/chanzuckerberg/MedMentions.git ``` #### 4. Generate prototype embeddings ```bash python generate_prototypes.py ``` #### 5. Run entity linking ```bash python run_entity_linking.py ``` This will give you about `58.3%` top-1 accuracy. ## Citation If you find KRISSBERT useful in your research, please cite the following paper: ```latex @article{krissbert, author = {Sheng Zhang, Hao Cheng, Shikhar Vashishth, Cliff Wong, Jinfeng Xiao, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, Hoifung Poon}, title = {Knowledge-Rich Self-Supervision for Biomedical Entity Linking}, year = {2021}, url = {https://arxiv.org/abs/2112.07887}, eprinttype = {arXiv}, eprint = {2112.07887}, } ```
4,545
[ [ -0.02252197265625, -0.056182861328125, 0.055755615234375, -0.005096435546875, -0.00872802734375, -0.0186767578125, -0.00905609130859375, -0.05560302734375, 0.037750244140625, 0.0269012451171875, -0.022796630859375, -0.051849365234375, -0.03472900390625, 0.03...
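The KRISSBERT card above drives prototype generation and entity linking entirely through the provided scripts. As a rough sketch of the underlying encoder usage, the snippet below loads the model with `transformers` and embeds a mention marked with the `<ENT>`/`</ENT>` tags from the card's widget example; the choice of the [CLS] vector as the mention embedding is an assumption for illustration, not necessarily what `generate_prototypes.py` does.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Load the KRISSBERT contextual encoder (a BERT model used for feature extraction)
tokenizer = AutoTokenizer.from_pretrained("microsoft/BiomedNLP-KRISSBERT-PubMed-UMLS-EL")
model = AutoModel.from_pretrained("microsoft/BiomedNLP-KRISSBERT-PubMed-UMLS-EL")

# Mention marked with the <ENT> ... </ENT> tags shown in the card's widget example
text = "<ENT> ER </ENT> crowding has become a wide-spread problem."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Assumption: take the [CLS] vector as the mention embedding for illustration;
# the official scripts may pool the entity span tokens instead.
mention_embedding = outputs.last_hidden_state[:, 0]
print(mention_embedding.shape)  # (1, hidden_size)
```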
Yntec/NeverEndingDream768
2023-09-01T10:36:27.000Z
[ "diffusers", "stable-diffusion", "text-to-image", "art", "artistic", "Lykon", "en", "license:other", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
Yntec
null
null
Yntec/NeverEndingDream768
0
2,686
diffusers
2023-09-01T09:01:53
--- language: - en license: other library_name: diffusers pipeline_tag: text-to-image tags: - stable-diffusion - text-to-image - art - artistic - Lykon --- # Never Ending Dream 768 768x768 version of this model for the inference API. Also consider supporting Lykon on Patreon - https://www.patreon.com/Lykon275 Sample and prompt: ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/BZiskc0CNpOnJudutbPaz.png) pretty CUTE girl, 1940, Magazine ad, Iconic. hyperrealistic, octane render, Painterly soft brush by yoshitomo nara ( 2 0 1 2 ), painting detailed pastel from fantasia ( 1 9 4 1 ) Official Repository: https://huggingface.co/Lykon/NeverEnding-Dream
704
[ [ -0.0282135009765625, -0.03741455078125, 0.0479736328125, 0.0244293212890625, -0.021331787109375, 0.009185791015625, 0.01482391357421875, -0.07281494140625, 0.058868408203125, 0.060028076171875, -0.06768798828125, -0.041900634765625, -0.0369873046875, -0.0075...
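The NeverEndingDream768 card above gives a sample prompt but no loading code. A minimal sketch with `diffusers` might look like the following; the fp16 dtype, step count, and CUDA device are assumptions, and the 768x768 resolution follows the card's description.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the 768x768 checkpoint described in the card
pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/NeverEndingDream768",
    torch_dtype=torch.float16,
).to("cuda")

# Prompt adapted from the card's sample
prompt = "pretty CUTE girl, 1940, Magazine ad, Iconic"
image = pipe(prompt, width=768, height=768, num_inference_steps=30).images[0]
image.save("never_ending_dream_sample.png")
```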
keremberke/yolov8n-painting-classification
2023-02-22T13:01:15.000Z
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "image-classification", "pytorch", "awesome-yolov8-models", "dataset:keremberke/painting-style-classification", "model-index", "region:us" ]
image-classification
keremberke
null
null
keremberke/yolov8n-painting-classification
0
2,685
ultralytics
2023-01-27T16:49:23
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - image-classification - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.21 inference: false datasets: - keremberke/painting-style-classification model-index: - name: keremberke/yolov8n-painting-classification results: - task: type: image-classification dataset: type: keremberke/painting-style-classification name: painting-style-classification split: validation metrics: - type: accuracy value: 0.04928 # min: 0.0 - max: 1.0 name: top1 accuracy - type: accuracy value: 0.23688 # min: 0.0 - max: 1.0 name: top5 accuracy --- <div align="center"> <img width="640" alt="keremberke/yolov8n-painting-classification" src="https://huggingface.co/keremberke/yolov8n-painting-classification/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['Abstract_Expressionism', 'Action_painting', 'Analytical_Cubism', 'Art_Nouveau_Modern', 'Baroque', 'Color_Field_Painting', 'Contemporary_Realism', 'Cubism', 'Early_Renaissance', 'Expressionism', 'Fauvism', 'High_Renaissance', 'Impressionism', 'Mannerism_Late_Renaissance', 'Minimalism', 'Naive_Art_Primitivism', 'New_Realism', 'Northern_Renaissance', 'Pointillism', 'Pop_Art', 'Post_Impressionism', 'Realism', 'Rococo', 'Romanticism', 'Symbolism', 'Synthetic_Cubism', 'Ukiyo_e'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.23 ultralytics==8.0.21 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, postprocess_classify_output # load model model = YOLO('keremberke/yolov8n-painting-classification') # set model parameters model.overrides['conf'] = 0.25 # model confidence threshold # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].probs) # [0.1, 0.2, 0.3, 0.4] processed_result = postprocess_classify_output(model, result=results[0]) print(processed_result) # {"cat": 0.4, "dog": 0.6} ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
2,262
[ [ -0.0369873046875, -0.0230865478515625, 0.03521728515625, -0.012451171875, -0.0233001708984375, -0.0044708251953125, 0.0038280487060546875, -0.03424072265625, 0.01328277587890625, 0.02813720703125, -0.0278472900390625, -0.0452880859375, -0.04315185546875, -0....
nvidia/mit-b3
2022-08-06T10:24:57.000Z
[ "transformers", "pytorch", "tf", "segformer", "image-classification", "vision", "dataset:imagenet_1k", "arxiv:2105.15203", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
nvidia
null
null
nvidia/mit-b3
2
2,684
transformers
2022-03-02T23:29:05
--- license: other tags: - vision datasets: - imagenet_1k widget: - src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg example_title: House - src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg example_title: Castle --- # SegFormer (b3-sized) encoder pre-trained-only SegFormer encoder fine-tuned on Imagenet-1k. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer). Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset. This repository only contains the pre-trained hierarchical Transformer, hence it can be used for fine-tuning purposes. ## Intended uses & limitations You can use the model for fine-tuning of semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import SegformerFeatureExtractor, SegformerForImageClassification from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/mit-b3") model = SegformerForImageClassification.from_pretrained("nvidia/mit-b3") inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#). ### License The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2105-15203, author = {Enze Xie and Wenhai Wang and Zhiding Yu and Anima Anandkumar and Jose M. Alvarez and Ping Luo}, title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers}, journal = {CoRR}, volume = {abs/2105.15203}, year = {2021}, url = {https://arxiv.org/abs/2105.15203}, eprinttype = {arXiv}, eprint = {2105.15203}, timestamp = {Wed, 02 Jun 2021 11:46:42 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
3,354
[ [ -0.06805419921875, -0.05224609375, 0.00667572021484375, 0.01226043701171875, -0.02508544921875, -0.0269927978515625, 0.003635406494140625, -0.049041748046875, 0.0177154541015625, 0.0440673828125, -0.059722900390625, -0.03985595703125, -0.057098388671875, 0.0...
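The mit-b3 card above points out that this repository holds only the pre-trained hierarchical encoder and is meant for fine-tuning on semantic segmentation. A minimal sketch of attaching a segmentation decode head is shown below; the label set is a hypothetical placeholder, and the training loop itself (Trainer or custom) is left out.

```python
from transformers import SegformerForSemanticSegmentation

# Hypothetical label set for a downstream segmentation dataset
id2label = {0: "background", 1: "road", 2: "building"}
label2id = {v: k for k, v in id2label.items()}

# Load the pre-trained MiT-b3 encoder; the all-MLP decode head is newly
# initialized for the downstream labels and must be fine-tuned.
model = SegformerForSemanticSegmentation.from_pretrained(
    "nvidia/mit-b3",
    num_labels=len(id2label),
    id2label=id2label,
    label2id=label2id,
)
```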
EleutherAI/polyglot-ko-3.8b
2023-06-07T05:03:23.000Z
[ "transformers", "pytorch", "safetensors", "gpt_neox", "text-generation", "causal-lm", "ko", "arxiv:2104.09864", "arxiv:2204.04541", "arxiv:2306.02254", "license:apache-2.0", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
EleutherAI
null
null
EleutherAI/polyglot-ko-3.8b
19
2,684
transformers
2022-09-09T14:15:36
--- language: - ko tags: - pytorch - causal-lm license: apache-2.0 --- # Polyglot-Ko-3.8B ## Model Description Polyglot-Ko is a series of large-scale Korean autoregressive language models made by the EleutherAI polyglot team. | Hyperparameter | Value | |----------------------|----------------------------------------------------------------------------------------------------------------------------------------| | \\(n_{parameters}\\) | 3,809,974,272 | | \\(n_{layers}\\) | 32 | | \\(d_{model}\\) | 3,072 | | \\(d_{ff}\\) | 12,288 | | \\(n_{heads}\\) | 24 | | \\(d_{head}\\) | 128 | | \\(n_{ctx}\\) | 2,048 | | \\(n_{vocab}\\) | 30,003 / 30,080 | | Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) | | RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) | The model consists of 32 transformer layers with a model dimension of 3072, and a feedforward dimension of 12288. The model dimension is split into 24 heads, each with a dimension of 128. Rotary Position Embedding (RoPE) is applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 30003. ## Training data Polyglot-Ko-3.8B was trained on 863 GB of Korean language data (1.2TB before processing), a large-scale dataset curated by [TUNiB](https://tunib.ai/). The data collection process has abided by South Korean laws. This dataset was collected for the purpose of training Polyglot-Ko models, so it will not be released for public use. | Source |Size (GB) | Link | |-------------------------------------|---------|------------------------------------------| | Korean blog posts | 682.3 | - | | Korean news dataset | 87.0 | - | | Modu corpus | 26.4 |corpus.korean.go.kr | | Korean patent dataset | 19.0 | - | | Korean Q & A dataset | 18.1 | - | | KcBert dataset | 12.7 | github.com/Beomi/KcBERT | | Korean fiction dataset | 6.1 | - | | Korean online comments | 4.2 | - | | Korean wikipedia | 1.4 | ko.wikipedia.org | | Clova call | < 1.0 | github.com/clovaai/ClovaCall | | Naver sentiment movie corpus | < 1.0 | github.com/e9t/nsmc | | Korean hate speech dataset | < 1.0 | - | | Open subtitles | < 1.0 | opus.nlpl.eu/OpenSubtitles.php | | AIHub various tasks datasets | < 1.0 |aihub.or.kr | | Standard Korean language dictionary | < 1.0 | stdict.korean.go.kr/main/main.do | Furthermore, in order to avoid the model memorizing and generating personally identifiable information (PII) in the training data, we masked out the following sensitive information in the pre-processing stage: * `<|acc|>` : bank account number * `<|rrn|>` : resident registration number * `<|tell|>` : phone number ## Training procedure Polyglot-Ko-3.8B was trained for 219 billion tokens over 105,000 steps on 256 A100 GPUs with the [GPT-NeoX framework](https://github.com/EleutherAI/gpt-neox). It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token. 
## How to use This model can be easily loaded using the `AutoModelForCausalLM` class: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("EleutherAI/polyglot-ko-3.8b") model = AutoModelForCausalLM.from_pretrained("EleutherAI/polyglot-ko-3.8b") ``` ## Evaluation results We evaluate Polyglot-Ko-3.8B on [KOBEST dataset](https://arxiv.org/abs/2204.04541), a benchmark with 5 downstream tasks, against comparable models such as skt/ko-gpt-trinity-1.2B-v0.5, kakaobrain/kogpt and facebook/xglm-7.5B, using the prompts provided in the paper. The following tables show the results when the number of few-shot examples differ. You can reproduce these results using the [polyglot branch of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot) and the following scripts. For a fair comparison, all models were run under the same conditions and using the same prompts. In the tables, `n` refers to the number of few-shot examples. In case of WiC dataset, all models show random performance. ```console python main.py \ --model gpt2 \ --model_args pretrained='EleutherAI/polyglot-ko-3.8b' \ --tasks kobest_copa,kobest_hellaswag \ --num_fewshot $YOUR_NUM_FEWSHOT \ --batch_size $YOUR_BATCH_SIZE \ --device $YOUR_DEVICE \ --output_path $/path/to/output/ ``` ### COPA (F1) | Model | params | 0-shot | 5-shot | 10-shot | 50-shot | |----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------| | [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.6696 | 0.6477 | 0.6419 | 0.6514 | | [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.7345 | 0.7287 | 0.7277 | 0.7479 | | [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.6723 | 0.6731 | 0.6769 | 0.7119 | | [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.7196 | 0.7193 | 0.7204 | 0.7206 | | **[EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) (this)** | **3.8B** | **0.7595** | **0.7608** | **0.7638** | **0.7788** | | [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.7745 | 0.7676 | 0.7775 | 0.7887 | | [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 12.8B | 0.7937 | 0.8108 | 0.8037 | 0.8369 | <img src="https://github.com/EleutherAI/polyglot/assets/19511788/d5b49364-aed5-4467-bae2-5a322c8e2ceb" width="800px"> ### HellaSwag (F1) | Model | params | 0-shot | 5-shot | 10-shot | 50-shot | |----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------| | [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.5243 | 0.5272 | 0.5166 | 0.5352 | | [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.5590 | 0.5833 | 0.5828 | 0.5907 | | [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.5665 | 0.5689 | 0.5565 | 0.5622 | | [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.5247 | 0.5260 | 0.5278 | 0.5427 | | **[EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) (this)** | **3.8B** | **0.5707** | **0.5830** | **0.5670** | **0.5787** | | [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.5976 | 0.5998 | 0.5979 | 
0.6208 | | [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 12.8B | 0.5954 | 0.6306 | 0.6098 | 0.6118 | <img src="https://github.com/EleutherAI/polyglot/assets/19511788/5acb60ac-161a-4ab3-a296-db4442e08b7f" width="800px"> ### BoolQ (F1) | Model | params | 0-shot | 5-shot | 10-shot | 50-shot | |----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------| | [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.3356 | 0.4014 | 0.3640 | 0.3560 | | [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.4514 | 0.5981 | 0.5499 | 0.5202 | | [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.4464 | 0.3324 | 0.3324 | 0.3324 | | [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.3552 | 0.4751 | 0.4109 | 0.4038 | | **[EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) (this)** | **3.8B** | **0.4320** | **0.5263** | **0.4930** | **0.4038** | | [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.4356 | 0.5698 | 0.5187 | 0.5236 | | [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 12.8B | 0.4818 | 0.6041 | 0.6289 | 0.6448 | <img src="https://github.com/EleutherAI/polyglot/assets/19511788/b74c23c0-01f3-4b68-9e10-a48e9aa052ab" width="800px"> ### SentiNeg (F1) | Model | params | 0-shot | 5-shot | 10-shot | 50-shot | |----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------| | [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.6065 | 0.6878 | 0.7280 | 0.8413 | | [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.3747 | 0.8942 | 0.9294 | 0.9698 | | [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.3578 | 0.4471 | 0.3964 | 0.5271 | | [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.6790 | 0.6257 | 0.5514 | 0.7851 | | **[EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) (this)** | **3.8B** | **0.4858** | **0.7950** | **0.7320** | **0.7851** | | [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.3394 | 0.8841 | 0.8808 | 0.9521 | | [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 12.8B | 0.9117 | 0.9015 | 0.9345 | 0.9723 | <img src="https://github.com/EleutherAI/polyglot/assets/19511788/95b56b19-d349-4b70-9ff9-94a5560f89ee" width="800px"> ### WiC (F1) | Model | params | 0-shot | 5-shot | 10-shot | 50-shot | |----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------| | [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.3290 | 0.4313 | 0.4001 | 0.3621 | | [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.3526 | 0.4775 | 0.4358 | 0.4061 | | [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.3280 | 0.4903 | 0.4945 | 0.3656 | | [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.3297 | 0.4850 | 0.4650 | 0.3290 | | **[EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) (this)** | **3.8B** | **0.3390** | **0.4944** | 
**0.4203** | **0.3835** | | [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.3913 | 0.4688 | 0.4189 | 0.3910 | | [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 12.8B | 0.3985 | 0.3683 | 0.3307 | 0.3273 | <img src="https://github.com/EleutherAI/polyglot/assets/19511788/4de4a4c3-d7ac-4e04-8b0c-0d533fe88294" width="800px"> ## Limitations and Biases Polyglot-Ko has been trained to optimize next token prediction. Language models such as this are often used for a wide variety of tasks and it is important to be aware of possible unexpected outcomes. For instance, Polyglot-Ko will not always return the most factual or accurate response but the most statistically likely one. In addition, Polyglot may produce socially unacceptable or offensive content. We recommend having a human curator or other filtering mechanism to censor sensitive content. ## Citation and Related Information ### BibTeX entry If you find our work useful, please consider citing: ```bibtex @misc{ko2023technical, title={A Technical Report for Polyglot-Ko: Open-Source Large-Scale Korean Language Models}, author={Hyunwoong Ko and Kichang Yang and Minho Ryu and Taekyoon Choi and Seungmu Yang and jiwung Hyun and Sungho Park}, year={2023}, eprint={2306.02254}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Licensing All our models are licensed under the terms of the Apache License 2.0. ``` Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ``` ### Acknowledgement This project was made possible thanks to the computing resources from [Stability.ai](https://stability.ai), and thanks to [TUNiB](https://tunib.ai) for providing a large-scale Korean dataset for this work.
15,475
[ [ -0.049163818359375, -0.051849365234375, 0.021270751953125, 0.00511932373046875, -0.038177490234375, 0.0007643699645996094, -0.00862884521484375, -0.04022216796875, 0.031280517578125, 0.01335906982421875, -0.033966064453125, -0.0489501953125, -0.05426025390625, ...
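The Polyglot-Ko-3.8B card above shows how to load the model but stops short of generation. A minimal generation sketch is given below; the Korean prompt, sampling settings, fp16 dtype, and `device_map="auto"` (which requires `accelerate`) are illustrative assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/polyglot-ko-3.8b")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/polyglot-ko-3.8b",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Illustrative Korean prompt ("Artificial intelligence is")
prompt = "인공지능은"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=64,
        do_sample=True,
        top_p=0.95,
        temperature=0.8,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```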
keremberke/yolov8n-shoe-classification
2023-02-22T13:05:06.000Z
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "image-classification", "pytorch", "awesome-yolov8-models", "dataset:keremberke/shoe-classification", "model-index", "region:us" ]
image-classification
keremberke
null
null
keremberke/yolov8n-shoe-classification
0
2,684
ultralytics
2023-01-29T11:51:08
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - image-classification - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.23 inference: false datasets: - keremberke/shoe-classification model-index: - name: keremberke/yolov8n-shoe-classification results: - task: type: image-classification dataset: type: keremberke/shoe-classification name: shoe-classification split: validation metrics: - type: accuracy value: 0.68675 # min: 0.0 - max: 1.0 name: top1 accuracy - type: accuracy value: 1 # min: 0.0 - max: 1.0 name: top5 accuracy --- <div align="center"> <img width="640" alt="keremberke/yolov8n-shoe-classification" src="https://huggingface.co/keremberke/yolov8n-shoe-classification/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['adidas', 'converse', 'nike'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.24 ultralytics==8.0.23 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, postprocess_classify_output # load model model = YOLO('keremberke/yolov8n-shoe-classification') # set model parameters model.overrides['conf'] = 0.25 # model confidence threshold # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].probs) # [0.1, 0.2, 0.3, 0.4] processed_result = postprocess_classify_output(model, result=results[0]) print(processed_result) # {"cat": 0.4, "dog": 0.6} ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
1,761
[ [ -0.032196044921875, -0.0144500732421875, 0.030242919921875, -0.010467529296875, -0.038665771484375, -0.01126861572265625, -0.0007877349853515625, -0.042816162109375, 0.010040283203125, 0.007518768310546875, -0.036163330078125, -0.045989990234375, -0.039459228515...
keremberke/yolov8n-valorant-detection
2023-02-22T13:02:22.000Z
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "object-detection", "pytorch", "awesome-yolov8-models", "dataset:keremberke/valorant-object-detection", "model-index", "region:us" ]
object-detection
keremberke
null
null
keremberke/yolov8n-valorant-detection
1
2,683
ultralytics
2023-01-28T08:44:49
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - object-detection - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.21 inference: false datasets: - keremberke/valorant-object-detection model-index: - name: keremberke/yolov8n-valorant-detection results: - task: type: object-detection dataset: type: keremberke/valorant-object-detection name: valorant-object-detection split: validation metrics: - type: precision # since mAP@0.5 is not available on hf.co/metrics value: 0.93688 # min: 0.0 - max: 1.0 name: mAP@0.5(box) --- <div align="center"> <img width="640" alt="keremberke/yolov8n-valorant-detection" src="https://huggingface.co/keremberke/yolov8n-valorant-detection/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['dropped spike', 'enemy', 'planted spike', 'teammate'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.23 ultralytics==8.0.21 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, render_result # load model model = YOLO('keremberke/yolov8n-valorant-detection') # set model parameters model.overrides['conf'] = 0.25 # NMS confidence threshold model.overrides['iou'] = 0.45 # NMS IoU threshold model.overrides['agnostic_nms'] = False # NMS class-agnostic model.overrides['max_det'] = 1000 # maximum number of detections per image # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].boxes) render = render_result(model=model, image=image, result=results[0]) render.show() ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
1,860
[ [ -0.031341552734375, -0.0264129638671875, 0.03179931640625, -0.01436614990234375, -0.022979736328125, -0.0146331787109375, 0.01050567626953125, -0.026763916015625, 0.030181884765625, 0.01532745361328125, -0.044464111328125, -0.051971435546875, -0.03179931640625, ...
theintuitiveye/HARDblend
2023-08-24T12:44:24.000Z
[ "diffusers", "stable-diffusion", "text-to-image", "art", "en", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
theintuitiveye
null
null
theintuitiveye/HARDblend
79
2,681
diffusers
2023-02-01T11:10:05
--- title: HARDblend colorFrom: green colorTo: indigo sdk: gradio sdk_version: 3.11.0 pinned: false license: creativeml-openrail-m tags: - stable-diffusion - text-to-image - art inference: true language: - en library_name: diffusers --- # **HARDblend** A versatile, photorealistic, NSFW-capable model which is great at generating high quality portraits. It is a finetuned model, trained on ~500 portrait images, merged with Hassanblend, Aeros, RealisticVision1.2, Deliberate, SxD, and f222. ## Usage Use the Stability AI VAE or the baked-in VAE version for better results. *RAW samples* ![image](https://drive.google.com/uc?export=view&id=1iRai5itkHI-zlLsk5Hig5eK0AMM9NdKl) Help us create models of professional standard. Consider supporting us on [Patreon](https://www.patreon.com/intuitiveai) / [Ko-fi](https://ko-fi.com/intuitiveai) / [Paypal](https://www.paypal.com/paypalme/theintuitiveye). ## *Demo* We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run HARDblend: [![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/theintuitiveye/HARDblend) ## *License* This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: - You can't use the model to deliberately produce or share illegal or harmful outputs or content - The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license - You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully) Please read the full license [here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
2,173
[ [ -0.0318603515625, -0.035186767578125, 0.030914306640625, 0.0303192138671875, -0.0184326171875, -0.033233642578125, 0.004665374755859375, -0.02581787109375, -0.0017957687377929688, 0.06622314453125, -0.05975341796875, -0.0574951171875, -0.038909912109375, -0....
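The HARDblend card above recommends using the Stability AI VAE for better results but includes no code. The sketch below pairs the checkpoint with an external VAE in `diffusers`; the specific VAE repository (`stabilityai/sd-vae-ft-mse`), the prompt, and the fp16/CUDA settings are assumptions, since the card does not name them.

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

# Assumption: a commonly used fine-tuned Stable Diffusion VAE; the card only
# says to use the Stability AI VAE without naming a repository.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)

pipe = StableDiffusionPipeline.from_pretrained(
    "theintuitiveye/HARDblend",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

# Illustrative prompt; the card describes the model as strong at portraits
image = pipe("photorealistic portrait of a woman, studio lighting").images[0]
image.save("hardblend_portrait.png")
```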
vesteinn/vit-mae-cub
2023-08-01T08:28:42.000Z
[ "transformers", "pytorch", "safetensors", "vit", "image-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
vesteinn
null
null
vesteinn/vit-mae-cub
0
2,681
transformers
2023-07-24T13:12:26
Note that this model does not work directly with HF, a modification that does mean pooling before the layernorm and classification head is needed. ```python from transformers import ( ViTForImageClassification, pipeline, AutoImageProcessor, ViTConfig, ViTModel, ) from transformers.modeling_outputs import ( ImageClassifierOutput, BaseModelOutputWithPooling, ) from PIL import Image import torch from torch import nn from typing import Optional, Union, Tuple class CustomViTModel(ViTModel): def forward( self, pixel_values: Optional[torch.Tensor] = None, bool_masked_pos: Optional[torch.BoolTensor] = None, head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, interpolate_pos_encoding: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple, BaseModelOutputWithPooling]: r""" bool_masked_pos (`torch.BoolTensor` of shape `(batch_size, num_patches)`, *optional*): Boolean masked positions. Indicates which patches are masked (1) and which aren't (0). """ output_attentions = ( output_attentions if output_attentions is not None else self.config.output_attentions ) output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = ( return_dict if return_dict is not None else self.config.use_return_dict ) if pixel_values is None: raise ValueError("You have to specify pixel_values") # Prepare head mask if needed # 1.0 in head_mask indicate we keep the head # attention_probs has shape bsz x n_heads x N x N # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) # TODO: maybe have a cleaner way to cast the input (from `ImageProcessor` side?) 
expected_dtype = self.embeddings.patch_embeddings.projection.weight.dtype if pixel_values.dtype != expected_dtype: pixel_values = pixel_values.to(expected_dtype) embedding_output = self.embeddings( pixel_values, bool_masked_pos=bool_masked_pos, interpolate_pos_encoding=interpolate_pos_encoding, ) encoder_outputs = self.encoder( embedding_output, head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) sequence_output = encoder_outputs[0] sequence_output = sequence_output[:, 1:, :].mean(dim=1) sequence_output = self.layernorm(sequence_output) pooled_output = ( self.pooler(sequence_output) if self.pooler is not None else None ) if not return_dict: head_outputs = ( (sequence_output, pooled_output) if pooled_output is not None else (sequence_output,) ) return head_outputs + encoder_outputs[1:] return BaseModelOutputWithPooling( last_hidden_state=sequence_output, pooler_output=pooled_output, hidden_states=encoder_outputs.hidden_states, attentions=encoder_outputs.attentions, ) class CustomViTForImageClassification(ViTForImageClassification): def __init__(self, config: ViTConfig) -> None: super().__init__(config) self.num_labels = config.num_labels self.vit = CustomViTModel(config, add_pooling_layer=False) # Classifier head self.classifier = ( nn.Linear(config.hidden_size, config.num_labels) if config.num_labels > 0 else nn.Identity() ) # Initialize weights and apply final processing self.post_init() def forward( self, pixel_values: Optional[torch.Tensor] = None, head_mask: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, interpolate_pos_encoding: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[tuple, ImageClassifierOutput]: r""" labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): Labels for computing the image classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If `config.num_labels > 1` a classification loss is computed (Cross-Entropy). """ return_dict = ( return_dict if return_dict is not None else self.config.use_return_dict ) outputs = self.vit( pixel_values, head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, interpolate_pos_encoding=interpolate_pos_encoding, return_dict=return_dict, ) sequence_output = outputs[0] logits = self.classifier(sequence_output) loss = None return ImageClassifierOutput( loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) model = CustomViTForImageClassification.from_pretrained("vesteinn/vit-mae-cub") image_processor = AutoImageProcessor.from_pretrained("vesteinn/vit-mae-cub") classifier = pipeline( "image-classification", model=model, image_processor=image_processor ) ```
6,052
[ [ -0.039093017578125, -0.037689208984375, 0.0175018310546875, 0.01474761962890625, -0.0212249755859375, -0.018035888671875, -0.00540924072265625, -0.01360321044921875, 0.01593017578125, 0.024810791015625, -0.04669189453125, -0.03912353515625, -0.062286376953125, ...
ramsrigouthamg/t5_paraphraser
2020-12-11T22:00:04.000Z
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text2text-generation
ramsrigouthamg
null
null
ramsrigouthamg/t5_paraphraser
11
2,680
transformers
2022-03-02T23:29:05
## Model in Action 🚀 ```python import torch from transformers import T5ForConditionalGeneration,T5Tokenizer def set_seed(seed): torch.manual_seed(seed) if torch.cuda.is_available(): torch.cuda.manual_seed_all(seed) set_seed(42) model = T5ForConditionalGeneration.from_pretrained('ramsrigouthamg/t5_paraphraser') tokenizer = T5Tokenizer.from_pretrained('ramsrigouthamg/t5_paraphraser') device = torch.device("cuda" if torch.cuda.is_available() else "cpu") print ("device ",device) model = model.to(device) sentence = "Which course should I take to get started in data science?" # sentence = "What are the ingredients required to bake a perfect cake?" # sentence = "What is the best possible approach to learn aeronautical engineering?" # sentence = "Do apples taste better than oranges in general?" text = "paraphrase: " + sentence + " </s>" max_len = 256 encoding = tokenizer.encode_plus(text,pad_to_max_length=True, return_tensors="pt") input_ids, attention_masks = encoding["input_ids"].to(device), encoding["attention_mask"].to(device) # set top_k = 50 and set top_p = 0.95 and num_return_sequences = 3 beam_outputs = model.generate( input_ids=input_ids, attention_mask=attention_masks, do_sample=True, max_length=256, top_k=120, top_p=0.98, early_stopping=True, num_return_sequences=10 ) print ("\nOriginal Question ::") print (sentence) print ("\n") print ("Paraphrased Questions :: ") final_outputs =[] for beam_output in beam_outputs: sent = tokenizer.decode(beam_output, skip_special_tokens=True,clean_up_tokenization_spaces=True) if sent.lower() != sentence.lower() and sent not in final_outputs: final_outputs.append(sent) for i, final_output in enumerate(final_outputs): print("{}: {}".format(i, final_output)) ``` ## Output ``` Original Question :: Which course should I take to get started in data science? Paraphrased Questions :: 0: What should I learn to become a data scientist? 1: How do I get started with data science? 2: How would you start a data science career? 3: How can I start learning data science? 4: How do you get started in data science? 5: What's the best course for data science? 6: Which course should I start with for data science? 7: What courses should I follow to get started in data science? 8: What degree should be taken by a data scientist? 9: Which course should I follow to become a Data Scientist? ``` ## Detailed blog post available here : https://towardsdatascience.com/paraphrase-any-question-with-t5-text-to-text-transfer-transformer-pretrained-model-and-cbb9e35f1555
2,598
[ [ -0.016845703125, -0.06036376953125, 0.03472900390625, 0.004909515380859375, -0.0140228271484375, 0.0118408203125, -0.0046234130859375, -0.0002663135528564453, -0.0096893310546875, 0.03326416015625, -0.0438232421875, -0.0445556640625, -0.0447998046875, 0.0157...
keremberke/yolov8s-pokemon-classification
2023-02-22T13:02:11.000Z
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "image-classification", "pytorch", "awesome-yolov8-models", "dataset:keremberke/pokemon-classification", "model-index", "region:us" ]
image-classification
keremberke
null
null
keremberke/yolov8s-pokemon-classification
0
2,678
ultralytics
2023-01-28T04:48:41
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - image-classification - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.21 inference: false datasets: - keremberke/pokemon-classification model-index: - name: keremberke/yolov8s-pokemon-classification results: - task: type: image-classification dataset: type: keremberke/pokemon-classification name: pokemon-classification split: validation metrics: - type: accuracy value: 0.02459 # min: 0.0 - max: 1.0 name: top1 accuracy - type: accuracy value: 0.0806 # min: 0.0 - max: 1.0 name: top5 accuracy --- <div align="center"> <img width="640" alt="keremberke/yolov8s-pokemon-classification" src="https://huggingface.co/keremberke/yolov8s-pokemon-classification/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['Abra', 'Aerodactyl', 'Alakazam', 'Alolan Sandslash', 'Arbok', 'Arcanine', 'Articuno', 'Beedrill', 'Bellsprout', 'Blastoise', 'Bulbasaur', 'Butterfree', 'Caterpie', 'Chansey', 'Charizard', 'Charmander', 'Charmeleon', 'Clefable', 'Clefairy', 'Cloyster', 'Cubone', 'Dewgong', 'Diglett', 'Ditto', 'Dodrio', 'Doduo', 'Dragonair', 'Dragonite', 'Dratini', 'Drowzee', 'Dugtrio', 'Eevee', 'Ekans', 'Electabuzz', 'Electrode', 'Exeggcute', 'Exeggutor', 'Farfetchd', 'Fearow', 'Flareon', 'Gastly', 'Gengar', 'Geodude', 'Gloom', 'Golbat', 'Goldeen', 'Golduck', 'Golem', 'Graveler', 'Grimer', 'Growlithe', 'Gyarados', 'Haunter', 'Hitmonchan', 'Hitmonlee', 'Horsea', 'Hypno', 'Ivysaur', 'Jigglypuff', 'Jolteon', 'Jynx', 'Kabuto', 'Kabutops', 'Kadabra', 'Kakuna', 'Kangaskhan', 'Kingler', 'Koffing', 'Krabby', 'Lapras', 'Lickitung', 'Machamp', 'Machoke', 'Machop', 'Magikarp', 'Magmar', 'Magnemite', 'Magneton', 'Mankey', 'Marowak', 'Meowth', 'Metapod', 'Mew', 'Mewtwo', 'Moltres', 'MrMime', 'Muk', 'Nidoking', 'Nidoqueen', 'Nidorina', 'Nidorino', 'Ninetales', 'Oddish', 'Omanyte', 'Omastar', 'Onix', 'Paras', 'Parasect', 'Persian', 'Pidgeot', 'Pidgeotto', 'Pidgey', 'Pikachu', 'Pinsir', 'Poliwag', 'Poliwhirl', 'Poliwrath', 'Wigglytuff', 'Zapdos', 'Zubat'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.23 ultralytics==8.0.21 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, postprocess_classify_output # load model model = YOLO('keremberke/yolov8s-pokemon-classification') # set model parameters model.overrides['conf'] = 0.25 # model confidence threshold # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].probs) # [0.1, 0.2, 0.3, 0.4] processed_result = postprocess_classify_output(model, result=results[0]) print(processed_result) # {"cat": 0.4, "dog": 0.6} ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
3,000
[ [ -0.03936767578125, -0.01153564453125, 0.01904296875, -0.005725860595703125, -0.00901031494140625, 0.0141448974609375, 0.0126953125, -0.02197265625, 0.040435791015625, 0.0163116455078125, -0.0267181396484375, -0.037567138671875, -0.047821044921875, 0.01763916...
livingbox/model-test-oct-23
2023-10-24T19:23:32.000Z
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us", "has_space" ]
text-to-image
livingbox
null
null
livingbox/model-test-oct-23
0
2,678
diffusers
2023-10-24T19:17:58
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### Model-test-oct-23 Dreambooth model trained by livingbox with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook. Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb). Sample pictures of this concept:
508
[ [ -0.033782958984375, -0.0740966796875, 0.034332275390625, 0.034912109375, -0.0268402099609375, 0.033538818359375, 0.032379150390625, -0.0294342041015625, 0.0472412109375, 0.00872039794921875, -0.029693603515625, -0.01983642578125, -0.022796630859375, -0.00427...
ilovebots/bert-sdg-french
2023-08-11T17:57:30.000Z
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "Objectifs de développement durable (ODD)", "SDG", "BERT Classification", "fr", "license:mit", "endpoints_compatible", "region:us" ]
text-classification
ilovebots
null
null
ilovebots/bert-sdg-french
0
2,677
transformers
2023-08-03T22:00:26
--- license: mit language: - fr tags: - Objectifs de développement durable (ODD) - SDG - BERT Classification --- # ilovebots/bert-sdg-french <!-- Provide a quick summary of what the model is/does. --> This model classifies texts according to the United Nations Sustainable Development Goals (SDGs). <img src="https://www.ulaval.ca/sites/default/files/DD/ODD/Tableau%20ODD.jpg" alt="image" width="600"/> Source: [https://www.un.org/development/desa/disabilities/about-us/sustainable-development-goals-sdgs-and-disability.html](https://www.un.org/development/desa/disabilities/about-us/sustainable-development-goals-sdgs-and-disability.html) ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This text classification model was developed by fine-tuning the pre-trained dbmdz/bert-base-french-europeana-cased model. The training data for this fine-tuned model comes from the publicly available OSDG Community Dataset (OSDG-CD) at https://zenodo.org/record/5550238#.ZBulfcJByF4. This model was built as part of academic research at [Université Laval](https://www.ulaval.ca/developpement-durable/objectifs-de-developpement-durable-de-lonu).<br> The goal was to create a transformers-based SDG text classification model for French.<br> The main details of the model are highlighted below: - **Model type:** Text classification - **Language(s) (NLP):** French - **License:** mit - **Finetuned from model:** dbmdz/bert-base-french-europeana-cased ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** https://huggingface.co/ilovebots/bert-sdg-french ## How to Get Started with the Model Use the code below to get started with the model. ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("ilovebots/bert-sdg-french") model = AutoModelForSequenceClassification.from_pretrained("ilovebots/bert-sdg-french") ``` ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> The dataset available at https://zenodo.org/record/5550238#.ZBulfcJByF4 was enriched with the United Nations Sustainable Development Goals and translated into French. ## Training Hyperparameters - Num_epoch = 4 - Learning rate = 2e-5 - Epsilon = 1e-8 - Optimizer = AdamW - Batch size = 32 - Seed random = 42 ## Evaluation #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> - Accuracy = 0.84 <img src="https://raw.githubusercontent.com/I-Love-Bots/public/main/BertFinetuning.png" alt="image" width="600"/> ## Citation Martinez, D.F. (2023). SDG classification with BERT. https://huggingface.co/ilovebots/bert-sdg-french <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> <!--## Model Card Contact -->
3,169
[ [ -0.043182373046875, -0.04986572265625, 0.0251617431640625, 0.014068603515625, -0.0270233154296875, -0.01490020751953125, -0.010986328125, -0.0291595458984375, 0.0118255615234375, 0.027496337890625, -0.046722412109375, -0.04730224609375, -0.056243896484375, 0...
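The bert-sdg-french card above shows how to load the tokenizer and model but not how to run a prediction. A minimal classification sketch follows; the example sentence is illustrative, and the label names are read from the model's own config.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("ilovebots/bert-sdg-french")
model = AutoModelForSequenceClassification.from_pretrained("ilovebots/bert-sdg-french")

# Illustrative French sentence about access to clean water (SDG 6)
text = "Garantir l'accès de tous à l'eau potable et à l'assainissement."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

predicted = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted])
```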
pszemraj/led-base-book-summary
2023-10-05T06:57:14.000Z
[ "transformers", "pytorch", "safetensors", "led", "text2text-generation", "summarization", "summary", "longformer", "booksum", "long-document", "long-form", "dataset:kmfoda/booksum", "license:apache-2.0", "license:bsd-3-clause", "model-index", "autotrain_compatible", "endpoints_compat...
summarization
pszemraj
null
null
pszemraj/led-base-book-summary
41
2,675
transformers
2022-03-02T23:29:05
--- license: - apache-2.0 - bsd-3-clause tags: - summarization - led - summary - longformer - booksum - long-document - long-form datasets: - kmfoda/booksum metrics: - rouge widget: - text: large earthquakes along a given fault segment do not occur at random intervals because it takes time to accumulate the strain energy for the rupture. The rates at which tectonic plates move and accumulate strain at their boundaries are approximately uniform. Therefore, in first approximation, one may expect that large ruptures of the same fault segment will occur at approximately constant time intervals. If subsequent main shocks have different amounts of slip across the fault, then the recurrence time may vary, and the basic idea of periodic mainshocks must be modified. For great plate boundary ruptures the length and slip often vary by a factor of 2. Along the southern segment of the San Andreas fault the recurrence interval is 145 years with variations of several decades. The smaller the standard deviation of the average recurrence interval, the more specific could be the long term prediction of a future mainshock. example_title: earthquakes - text: ' A typical feed-forward neural field algorithm. Spatiotemporal coordinates are fed into a neural network that predicts values in the reconstructed domain. Then, this domain is mapped to the sensor domain where sensor measurements are available as supervision. Class and Section Problems Addressed Generalization (Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid Representations (Section 3) Computation & memory efficiency, representation capacity, editability: Forward Maps (Section 4) Inverse problems Network Architecture (Section 5) Spectral bias, integration & derivatives. Manipulating Neural Fields (Section 6) Edit ability, constraints, regularization. Table 2: The five classes of techniques in the neural field toolbox each addresses problems that arise in learning, inference, and control. (Section 3). We can supervise reconstruction via differentiable forward maps that transform Or project our domain (e.g, 3D reconstruction via 2D images; Section 4) With appropriate network architecture choices, we can overcome neural network spectral biases (blurriness) and efficiently compute derivatives and integrals (Section 5). Finally, we can manipulate neural fields to add constraints and regularizations, and to achieve editable representations (Section 6). Collectively, these classes constitute a ''toolbox'' of techniques to help solve problems with neural fields There are three components in a conditional neural field: (1) An encoder or inference function € that outputs the conditioning latent variable 2 given an observation 0 E(0) =2. 2 is typically a low-dimensional vector, and is often referred to aS a latent code Or feature code_ (2) A mapping function 4 between Z and neural field parameters O: Y(z) = O; (3) The neural field itself $. The encoder € finds the most probable z given the observations O: argmaxz P(2/0). The decoder maximizes the inverse conditional probability to find the most probable 0 given Z: arg- max P(Olz). We discuss different encoding schemes with different optimality guarantees (Section 2.1.1), both global and local conditioning (Section 2.1.2), and different mapping functions Y (Section 2.1.3) 2. Generalization Suppose we wish to estimate a plausible 3D surface shape given a partial or noisy point cloud. 
We need a suitable prior over the sur- face in its reconstruction domain to generalize to the partial observations. A neural network expresses a prior via the function space of its architecture and parameters 0, and generalization is influenced by the inductive bias of this function space (Section 5).' example_title: scientific paper - text: ' the big variety of data coming from diverse sources is one of the key properties of the big data phenomenon. It is, therefore, beneficial to understand how data is generated in various environments and scenarios, before looking at what should be done with this data and how to design the best possible architecture to accomplish this The evolution of IT architectures, described in Chapter 2, means that the data is no longer processed by a few big monolith systems, but rather by a group of services In parallel to the processing layer, the underlying data storage has also changed and became more distributed This, in turn, required a significant paradigm shift as the traditional approach to transactions (ACID) could no longer be supported. On top of this, cloud computing is becoming a major approach with the benefits of reducing costs and providing on-demand scalability but at the same time introducing concerns about privacy, data ownership, etc In the meantime the Internet continues its exponential growth: Every day both structured and unstructured data is published and available for processing: To achieve competitive advantage companies have to relate their corporate resources to external services, e.g. financial markets, weather forecasts, social media, etc While several of the sites provide some sort of API to access the data in a more orderly fashion; countless sources require advanced web mining and Natural Language Processing (NLP) processing techniques: Advances in science push researchers to construct new instruments for observing the universe O conducting experiments to understand even better the laws of physics and other domains. Every year humans have at their disposal new telescopes, space probes, particle accelerators, etc These instruments generate huge streams of data, which need to be stored and analyzed. The constant drive for efficiency in the industry motivates the introduction of new automation techniques and process optimization: This could not be done without analyzing the precise data that describe these processes. As more and more human tasks are automated, machines provide rich data sets, which can be analyzed in real-time to drive efficiency to new levels. Finally, it is now evident that the growth of the Internet of Things is becoming a major source of data. More and more of the devices are equipped with significant computational power and can generate a continuous data stream from their sensors. In the subsequent sections of this chapter, we will look at the domains described above to see what they generate in terms of data sets. We will compare the volumes but will also look at what is characteristic and important from their respective points of view. 3.1 The Internet is undoubtedly the largest database ever created by humans. While several well described; cleaned, and structured data sets have been made available through this medium, most of the resources are of an ambiguous, unstructured, incomplete or even erroneous nature. Still, several examples in the areas such as opinion mining, social media analysis, e-governance, etc, clearly show the potential lying in these resources. 
Those who can successfully mine and interpret the Internet data can gain unique insight and competitive advantage in their business An important area of data analytics on the edge of corporate IT and the Internet is Web Analytics.' example_title: data science textbook - text: 'Transformer-based models have shown to be very useful for many NLP tasks. However, a major limitation of transformers-based models is its O(n^2)O(n 2) time & memory complexity (where nn is sequence length). Hence, it''s computationally very expensive to apply transformer-based models on long sequences n > 512n>512. Several recent papers, e.g. Longformer, Performer, Reformer, Clustered attention try to remedy this problem by approximating the full attention matrix. You can checkout 🤗''s recent blog post in case you are unfamiliar with these models. BigBird (introduced in paper) is one of such recent models to address this issue. BigBird relies on block sparse attention instead of normal attention (i.e. BERT''s attention) and can handle sequences up to a length of 4096 at a much lower computational cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences such as long documents summarization, question-answering with long contexts. BigBird RoBERTa-like model is now available in 🤗Transformers. The goal of this post is to give the reader an in-depth understanding of big bird implementation & ease one''s life in using BigBird with 🤗Transformers. But, before going into more depth, it is important to remember that the BigBird''s attention is an approximation of BERT''s full attention and therefore does not strive to be better than BERT''s full attention, but rather to be more efficient. It simply allows to apply transformer-based models to much longer sequences since BERT''s quadratic memory requirement quickly becomes unbearable. Simply put, if we would have ∞ compute & ∞ time, BERT''s attention would be preferred over block sparse attention (which we are going to discuss in this post). If you wonder why we need more compute when working with longer sequences, this blog post is just right for you! Some of the main questions one might have when working with standard BERT-like attention include: Do all tokens really have to attend to all other tokens? Why not compute attention only over important tokens? How to decide what tokens are important? How to attend to just a few tokens in a very efficient way? In this blog post, we will try to answer those questions. What tokens should be attended to? We will give a practical example of how attention works by considering the sentence ''BigBird is now available in HuggingFace for extractive question answering''. In BERT-like attention, every word would simply attend to all other tokens. Let''s think about a sensible choice of key tokens that a queried token actually only should attend to by writing some pseudo-code. Will will assume that the token available is queried and build a sensible list of key tokens to attend to. >>> # let''s consider following sentence as an example >>> example = [''BigBird'', ''is'', ''now'', ''available'', ''in'', ''HuggingFace'', ''for'', ''extractive'', ''question'', ''answering''] >>> # further let''s assume, we''re trying to understand the representation of ''available'' i.e. >>> query_token = ''available'' >>> # We will initialize an empty `set` and fill up the tokens of our interest as we proceed in this section. 
>>> key_tokens = [] # => currently ''available'' token doesn''t have anything to attend Nearby tokens should be important because, in a sentence (sequence of words), the current word is highly dependent on neighboring past & future tokens. This intuition is the idea behind the concept of sliding attention.' example_title: bigbird blog intro - text: 'The majority of available text summarization datasets include short-form source documents that lack long-range causal and temporal dependencies, and often contain strong layout and stylistic biases. While relevant, such datasets will offer limited challenges for future generations of text summarization systems. We address these issues by introducing BookSum, a collection of datasets for long-form narrative summarization. Our dataset covers source documents from the literature domain, such as novels, plays and stories, and includes highly abstractive, human written summaries on three levels of granularity of increasing difficulty: paragraph-, chapter-, and book-level. The domain and structure of our dataset poses a unique set of challenges for summarization systems, which include: processing very long documents, non-trivial causal and temporal dependencies, and rich discourse structures. To facilitate future work, we trained and evaluated multiple extractive and abstractive summarization models as baselines for our dataset.' example_title: BookSum Abstract inference: parameters: max_length: 96 min_length: 8 no_repeat_ngram_size: 3 early_stopping: true repetition_penalty: 3.5 length_penalty: 0.3 encoder_no_repeat_ngram_size: 3 num_beams: 4 model-index: - name: pszemraj/led-base-book-summary results: - task: type: summarization name: Summarization dataset: name: kmfoda/booksum type: kmfoda/booksum config: kmfoda--booksum split: test metrics: - type: rouge value: 33.4536 name: ROUGE-1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYmEzYjNkZTUxZjA0YTdmNTJkMjVkMTg2NDRjNTkzN2ZlNDlhNTBhMWQ5MTNiYWE4Mzg5YTMyMTM5YmZjNDI3OSIsInZlcnNpb24iOjF9.OWjM_HCQLQHK4AV4em70QGT3lrVk25WyZdcXA8ywest_XSx9KehJbsIMDKtXxOOMwxvkogKnScy4tbskYMQqDg - type: rouge value: 5.2232 name: ROUGE-2 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTVhOTdjZjc5YTdhMmVjZGE1NTA5MmJkYmM3Y2U3OGVlMjZmOGVlMTUzYTdiZGRhM2NmZjAzMjFkZjlkMzJmOCIsInZlcnNpb24iOjF9.qOlwWEe8dfBunmwImhbkcxzUW3ml-ESsuxjWN1fjn_o36zaUlDqlrXovMcL9GX9mVdvZDhx9W82rAR8h6410AQ - type: rouge value: 16.2044 name: ROUGE-L verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzkwOTEwYjkxYzlhMWE4ZjhlZDVjZWEwMWY2YzgwY2Q2YzJkYWFhMTQ4ODFlZmVkY2I1OWVhMTFmZThlOGY4NCIsInZlcnNpb24iOjF9.fJSr9wRQ07YIPMpb2_xv14EkHRz3gsPdZH-4LzpdviLOjVhlK1Y4gSZjp3PTEbu4Hua0umvNTMrhii8hp3DFBA - type: rouge value: 29.9765 name: ROUGE-LSUM verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYWRkYjcwMTYwODRjN2E4MDliZWQyNjczNDU1NGZkMDRkNDlhNDA1YzZiOTk1MWJjZDkyMDg3MGMxYmVhOTA5MyIsInZlcnNpb24iOjF9.tUkVmhT0bl9eY_BzAzdzEI1lo3Iyfv6HBrrsVsRHqPFh4C0Q9Zk3IXbR-F_gMDx9vDiZIkpfG7SfsIZXwhDkBw - type: loss value: 3.1985862255096436 name: loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2RmYzQ1NTFiYjk3YTZjMTI3NDJlMDY0MTgyZDZlZDRmZDcwOWE1YjU0OGYyZTJlY2RkZTEzZDFlNDk2ZjgyNSIsInZlcnNpb24iOjF9.Pc5Tfu8IXYeB5ETK2JMIL4gpRIvvYXVS6w1AZdfq9dD1dm9Te2xaNhzGBHviqgEfFI9APNSJB28wna1OpYP0Dg - type: gen_len value: 191.9783 name: gen_len verified: true verifyToken: 
eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNmMyMDI5MzFlNzNjODNmOWQ0ZTM3MzVkNTNkYzIxNTIwZDQzMTU2MTM0YjYzNjJiMGRhOTQ0OWFhN2U4N2NjYyIsInZlcnNpb24iOjF9.AfsX-O1YwfbPxUwAD7rd1Ub7SXth7FFpTo2iNSOUWFhYmDUECkf6qtJ5pVHXXZwnpidAlfPTPg-5y3dx_BBGCA - task: type: summarization name: Summarization dataset: name: samsum type: samsum config: samsum split: test metrics: - type: rouge value: 32 name: ROUGE-1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYmNhZjk3NjFlZDBhZjU2YzgzOTdhZTNkZjBkYjNjZDk2YjE2NDBmMDhiY2Y5M2EwNGI5Njk1NWU3ZDYyMzk2ZSIsInZlcnNpb24iOjF9.htkMQQLjIeFFjnpAJOwwxAdgzGZX10Und6RONubeeydXqQqb562EHqAw0K1ZlqltC4GBGKK3xslGOWXQ5AV6CA - type: rouge value: 10.0781 name: ROUGE-2 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMWYzZDA1YmU5YTkzMjEwN2IzMTNhZmZmOTU2ZGUyNzdlNWQ0OGQ1Y2UxOGQ0NWUyOWVmZmZkYzFkODE3OTliNiIsInZlcnNpb24iOjF9.WVE3fmYLkOW32_neYYj4TNJ5lhrG-27DnoJd4YDUzpHYvGWGoFU9CUuIFraQFnojRr02f3KqVY7T33DG5mpzBg - type: rouge value: 23.6331 name: ROUGE-L verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTYyOTE0ODY2Mjk0YTk5ZTY5NTZkM2JkOGZhNjQ3NjNiMjVhNTc4ZmMwYzg1ZGIxOTA2MDQxNmU3Yjc5YWY0MSIsInZlcnNpb24iOjF9.yQ8WpdsyGKSuTG8MxHXqujEAYOIrt_hoUbuHc8HnS-GjS9xJ-rKO6pP6HYbi0LC9Xqh2_QPveCpNqr9ZQMGRCg - type: rouge value: 28.7831 name: ROUGE-LSUM verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzVkMDNlODA4NWI3OGI1OGFlNjFlNWE4YzY5ZDE1NDdhMjIwYjlkNDIxNDZjOGRiNTI1MGJkMmE0YWZiMDNhMiIsInZlcnNpb24iOjF9.qoxn2g70rbbX6sVCvm_cXzvYZf1UdTDU44vvEVdZL-4h36cJRCOx5--O1tZEVdyvlMVi-tYz1RSxLRwQd72FAw - type: loss value: 2.903024673461914 name: loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGM2M2NlY2Q3NjYxY2EyM2FkYmM5OGVhYzcyNjA3ZTFlYzc3M2M2ODNmNWVjNjZmMGNiODc4MWY5NWE2ZDMyNyIsInZlcnNpb24iOjF9.pC4UK75LbyVFFm0-fcStMtdQhbuHE37wkZHoVbSQOYSyxjI8yA46bQkPmgg5znby9FK_wIgGxC_4KOdEeN4jBw - type: gen_len value: 60.7411 name: gen_len verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWEwMDFiYjgyNzRhZDVmOWIzYzZlZWU5OTFkYmU4YzI2Mjk2OTg1ZDVlNzU0YzNhOWI1MmU2NTAxZWUzZmFlOCIsInZlcnNpb24iOjF9.Zepow4AFj1sQ6zyJGoy_Dl4ICKRtzZI2nVYWlTsDnGrBDT42ak9mFUuw-BjHR8dEVHJKmOZlLk6GJ09bL7tGAA - task: type: summarization name: Summarization dataset: name: cnn_dailymail type: cnn_dailymail config: 3.0.0 split: test metrics: - type: rouge value: 30.5036 name: ROUGE-1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMmFkM2M4YTcyODEwMzY1MWViYTY0NmEzNjYwNGM4OTI4MmY1ZTk2ZjVjZjMwOGUwM2JiYTA0YjdkMWRkZTQ5MyIsInZlcnNpb24iOjF9.GatKuC1oPoD1HT9pA9lGAj6GNjhe3ADSNgZ5apntAFCHETlNV1mNf1zQ-rgFH2FP-lF3qS56Jn54pFp6FMwaBw - type: rouge value: 13.2558 name: ROUGE-2 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjUwZjBmMTUzNmM3ZTRjODQ0MGFiM2I3Y2ViMDRkODQzNGI3YzM0MmJiNzU1N2UwOTZmMGFkOTQwMzNjNmFiMSIsInZlcnNpb24iOjF9.kOWpg36sB5GdPVYUZpWlS0pSKu5mKmHcLmJO1I3oUzMSiwDeUpAPLXNC0u_gJMFaFdsaNTywepDuttLdB2oBBg - type: rouge value: 19.0284 name: ROUGE-L verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTJmYzZmZWJiNTljYmJiZTllODk0NjdmNGNkZWZlZjMwMGE5YTAzMjMwNTcyNGM4MWE4MDUzYjM3NzQ5NzA2ZCIsInZlcnNpb24iOjF9.ooUqXvZC6ci_XxKrIcox2R2A0C8qyN0HP5djFMMb9SfoAaJAgdM0j6qsVQj9ccr0AgeRRIPNH_vI3gg-_lvaDw - type: rouge value: 28.3404 name: ROUGE-LSUM verified: true verifyToken: 
eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTcxMDg5ZGI1MDRmNzM0ZmEyZmNiZGYxZTg0NzA4N2U0YTY3MGYxMjgzMzI0NjVlNWNiYTZmNWZjMzZkMmYzNiIsInZlcnNpb24iOjF9.RbEZQB2-IPb-l6Z1xeOE42NGwX1KQjlr2wNL9VH75L1gmMxKGTPMR_Yazma84ZKK-Ai7s2YPNh-MDanNU_4GCw - type: loss value: 3.9438512325286865 name: loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjQ2YmE1OTE5NDJlMTBhZGMzNDE5OThmNzMzOTRlYjEzMjc2ZDgyMDliNGY1NjFhOGQ0N2NkYmUzZGUwOGVlZiIsInZlcnNpb24iOjF9.FAwbzK-XJc-oEBFO7m8p4hkDCZDEhmU0ZSytrim-uHHcSFjRvbL-dF8rIvKVcxw5QeZ6QKZ7EkjDT7Ltt8KyCA - type: gen_len value: 231.0935 name: gen_len verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTMzMTMyYjhhNjFiYjMyNDlhYzQzODM0MWNhNjkwMDVjNmFjYTk2NmQ4NzJlZjlhZjM2MGMwNWI1MjIxMGNiZCIsInZlcnNpb24iOjF9.mHDxhA2wVj6FDx7un4028-A8iGMFcPlSb5vH2DPGLPzQHBhSlvNac4-OELZf0PRmsXSb1nIqHqU-S_WUs8OSBg - task: type: summarization name: Summarization dataset: name: billsum type: billsum config: default split: test metrics: - type: rouge value: 36.8502 name: ROUGE-1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYmE2ZjI4YmJkZGVjZDkzNzU5ZmI2MDYzNGZkNjE2OGM0Y2Y0Nzk1NTc1ZmUyZmFhYjIwY2RhMDVkMzQ1MWIxYyIsInZlcnNpb24iOjF9.SZjhhFkKwvRrI-Yl29psn17u1RCISsmmLVXxo2kxCjkhtMOma-EzC5YidjPDGQLb-J2nvqUworaC2pL_oeHxDQ - type: rouge value: 15.9147 name: ROUGE-2 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODgwOTJhOWIyZDQ4ZDA5YWMzYTJkZWFmMzlkNWYxNTg5OGFiNzY0MTExNTgyMTdlMTQ1N2EwYWY4OGZkNWY5YyIsInZlcnNpb24iOjF9.DS-X3eA1tGhVSuUL8uSPtJMNijODF3ugaKEtBglmPqF1OQZwIwQs-NExNYP4d6Y4Pa9d-DujD5yfyl9C8HBGCw - type: rouge value: 23.4762 name: ROUGE-L verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTYxNTA4YzhmYTQ0YmRjMWU5ZDliZWFhMjM4ZmUyNGUyOWJhNzA1MDBhZDliYmYyYzY3NjBmZTZlYWY3YTY3ZCIsInZlcnNpb24iOjF9.o0W7dqdz0sqMPKtJbXSRpyVNsREEUypW-bGv7TW5lfJFkijfDKhVITEClFLWu5n2tIV-sXAYxgQHDf5_hpY-Dw - type: rouge value: 30.9597 name: ROUGE-LSUM verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzEzOGNiYjk4NDkxNTFmMjA5YjM1YTQzZTk2N2JiZDgxNzAxYzFlYjliZjA3NmRjMzZlNGYyODBkNTI1NzVjNiIsInZlcnNpb24iOjF9.C_hobTR0ZY958oUZcGEKj2RoPOkyfMCTznwi4mUx-bfGRRAecMyn45bWVwwRq12glk1vThDetCjOMHA6jgSDCw - type: loss value: 3.878790855407715 name: loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNmYyOWM0YWQ0MjAxZDg5ZWQyNDk3MGUwNzdkOWIwZDc0OGJjYTU3YjZmOWY0YTljNDI0OWRlNTI0ZDMwZWEzOCIsInZlcnNpb24iOjF9.P01Jzfa-5jyMeoEqEsEluKOydNmtRtNy8YhwfJuYHVJTVDzCIfzY8b7iNfqTfKFKwKkZ4eTwmA6vmsPZeASDAw - type: gen_len value: 131.3622 name: gen_len verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYmJjN2Q5ZGNlZjQ2ODJiYTZlMzZmNWVmMzRlMGQ0ZTkxZWM3ZDQ4ZmQ1NmUyZjY4MTVhZGE5NDFiZTBhNDZiYSIsInZlcnNpb24iOjF9.DqYNc0ZCX_EqRi4zbSBAtb-js_JBHSWZkeGR9gSwEkJletKYFxPGZWd-B1ez88aj6PO775-qHd98xx3IWCHECQ - task: type: summarization name: Summarization dataset: name: big_patent type: big_patent config: y split: test metrics: - type: rouge value: 33.7585 name: ROUGE-1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2VmMGU5YWJlZWFlNjA3MDY2NTBmZWU3YWQxYTk3OGYzZmU5NmFmMTQ1NTVmNDQyZTJkNDMwY2E5NGRjMGU3MSIsInZlcnNpb24iOjF9.P6Rt9c3Xi_B-u8B1ug4paeZDoAO4ErGeNM0gELHGeOMj4XMjeSvyAW_-30cA9Wf23-0jGPOSZbN5pME4JpxfDA - type: rouge value: 9.4101 name: ROUGE-2 verified: true verifyToken: 
eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDA0NzUxMjIwYTFjNGQ5YTA4YjE1NGU5YWMzYjhiOTk2NWE3ZGQxNDY4YTI3ZmI0ODBjYmJkZjcwYTM2OTg2MCIsInZlcnNpb24iOjF9.23hd2SuLoX3_Rygj2ykcSQccPeFsf4yLDAgvS189jx6JNln0MVR6YI2-3Yzo5g8LJk0MCbgkOp0my-nf7nMaDw - type: rouge value: 18.8927 name: ROUGE-L verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODhhMGZiZWFlNmZkYmYxZjJmODE1NWRiZjI2OGU1MTc4MDkyYjk1Mzk5ODFkYWVhY2ExNTViYjJmYzkzNWJhYiIsInZlcnNpb24iOjF9.SkKhf-l2cl2KcuC17oPrBtkBlZJaj2ujCgzRlfZy76rU9JtlW7N9bcy1ugnw-vRVUVVR6wUK08T45YorfuxqBg - type: rouge value: 28.5051 name: ROUGE-LSUM verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTgzYzA0NmQ0OTZmNzJkNGZiNTdmMzFmOTljMWE3YzM0NDg2MDY1ZDY5ZTE4MmQ5YzU1ZDFiNmE2ZjkwMjRjMiIsInZlcnNpb24iOjF9.p1TQINRxMatNe77_BMnusSg1K5FOD9f1_N4TBJDjJHNhYnyQDE4pKHfK8j6fsHGg58DHVQjmm8g96SK4uMF6DA - type: loss value: 5.162865161895752 name: loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWM1YTQ4MjVmMDkyZDI3OWJmODhmOWE2MDYyMDA4OGRmYzhiY2YzZjVmMTZkMTI4NjBlY2MwMDY3ZDE5ZjlmMyIsInZlcnNpb24iOjF9.Czh4TOG-QIqyc_-GJ3wc1TLuxc-KLwPelV5tiwEjNhZFyUZkjLH__ccOxBk9TYy2vunvh2AwdY3Mt6Fr8LhaDA - type: gen_len value: 222.6626 name: gen_len verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2JjNzVkODhmOWQ5NWMwNDdlNzhkYjE5NjY3NTgwNWVmZDZlMzc4NDdmZjdlN2M2ODBkZGU5NGU0ZjMzM2Q5OCIsInZlcnNpb24iOjF9.z4hZ-uXg8PPn-THRHFrsWZpS3jgE8URk5yoLenwWtev5toTrZ2Y-DP8O30nPnzMkzA4yzo_NUKIACxoUdMqfCQ - task: type: summarization name: Summarization dataset: name: multi_news type: multi_news config: default split: test metrics: - type: rouge value: 38.7332 name: ROUGE-1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGViMThhNTdlZDRiMTg5NTZjNGVmOThiMjI5NDEyZDMxYjU4MTU2ZTliZjZmMzAzMmRhNDIxYjViYjZmNWYwNSIsInZlcnNpb24iOjF9.SK_1Q9WlkNhu3mfsyir1l72pddjURZvJV3mcJ4jhBxS2k2q1NAR8JT_iT8v1thLiv8NUDmDr2o9Dig4A8svDBw - type: rouge value: 11.0072 name: ROUGE-2 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzkzMDU1ZGZlOWUwOGQyY2UwMWFjZTY1MDBmNzcyZGYzZTliNGVkNDZjZDVjZjA4NmE3OWVhMGIyZmE3NGE0NSIsInZlcnNpb24iOjF9.j0wvR0NPw0lqxW3ASbmBvxAbFHGikXw-Y7FjutojhzTfSs3BIs5Z8s5_h6eesvSGT5fS_qUrbnl9EEBwjrXqDg - type: rouge value: 18.6018 name: ROUGE-L verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjIwNTUzN2ZhZjU5OGFhYzRmZmEwY2NkZWVjYmYzZjRjMGIxNzNjZDY5YzIyMTg2NDJkMGYxYmViNTcwOTc5NCIsInZlcnNpb24iOjF9.rD_tFYRyb-o6VX7Z52fULvP_HQjqqshqnvbjAxWjuCM9hCn1J6oh0zAASPw0k1lWiURbiMCiaxIHxe_5BN_rAQ - type: rouge value: 34.5911 name: ROUGE-LSUM verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2Q4MWY3NGFhNjE5YjE5NzIyODVhNTYxNWFmZDE5NjNiZTM1M2M3ZmIwNTZiOWEyMTc2MzQ0MWQ5YTdjYThlNyIsInZlcnNpb24iOjF9.R789HgYsv_k6OrjocVi0ywx0aCRlgOKpEWUiSUDca-AfoDS8ADJBtLYoEKg1wnRlR9yWoD4vtEWdKbyOOln1CA - type: loss value: 3.5744354724884033 name: loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzBjZTk0YWMwMzQxNDRlY2UxZDc4NTE1MmEzNDkwM2M3ZGZhNGMzNmI4ZDU2ZTVhZDkwMjNhYTkxZTIwN2E4MyIsInZlcnNpb24iOjF9.bDQ_3-CumosWKroMwBEMwKnDAj4ENQbUnbS387hU0zAY1K5g1NOy7fKBohxYZnRVolEfiuhszifUMW9zcLjqCA - type: gen_len value: 192.0014 name: gen_len verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDQxZmEwYmU5MGI1ZWE5NTIyMmM1MTVlMjVjNTg4MDQyMjJhNGE5NDJhNmZiN2Y4ZDc4ZmExNjBkMjQzMjQxMyIsInZlcnNpb24iOjF9.o3WblPY-iL1vT66xPwyyi1VMPhI53qs9GJ5HsHGbglOALwZT4n2-6IRxRNcL2lLj9qUehWUKkhruUyDM5-4RBg - task: type: summarization name: 
Summarization dataset: name: xsum type: xsum config: default split: test metrics: - type: rouge value: 16.3186 name: ROUGE-1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjNiYzkxNTc1M2ZiYzY4NmVhY2U4MGU0YWE1NzQ4YzQxNjM1ZThmOWU3ZjUwMWUxMWM1NTQyYzc0OWQ5MzQyZSIsInZlcnNpb24iOjF9.cDZzbzxrXaM4n-Fa-vBpUgq7ildtHg9hlO5p9pt58VYLGK3rsid3oUE2qsFH6Qk63j2cF4_hzgq93xoVlnR3Dg - type: rouge value: 3.0261 name: ROUGE-2 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjkzNzA0ODk3NWJjOGM2ZWFlY2MyZWM4NzZlYzZiMGQ2ODc0NzgzNDYzYmVlZjg2ZjBmNDMwOGViYTljYWQ2NSIsInZlcnNpb24iOjF9.ohBfAUhEktfITK6j_NusN5SOmF4XUHZWPNMpGrsGXRHTf1bUl6_UEQ0S3w58WQsgIuV3MkxWNRBU1oZAm3fbBQ - type: rouge value: 10.4045 name: ROUGE-L verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDM2ZDZhYzBiNGM3NDdhODlmNjJhMTNlZDE3ZTZmYjM1MWU5YmE0ODMyZGFhMmM0YmMwMzNiZWU4ZDAzMDFlNiIsInZlcnNpb24iOjF9.653PFaov_0t8g_fVyVxm8DBx7uV4646yK0rtxOxC7qsnRdljdThSOklw9tND5-44WdkzipzuLyVzq1qe-TbKBA - type: rouge value: 12.612 name: ROUGE-LSUM verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNmY5YzU2ZjE2OWM0ZGQwZmVjZjQwZTQ0MDNkZmNiMTdhZjFkMDA5OGFhYWQ0Y2QwZDY0YWJlNWUxZGQ0YTUwZiIsInZlcnNpb24iOjF9.RXyu1jIj_gV26WCHSGHZufWXKFEexuRaLD4gkOvlBcaXJrFoE11tttB6mYzN6Tk8qx5cvV5L_ZIUfDmOqunkAA - type: loss value: 3.323798179626465 name: loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjU5ZWUxMjIwMWYwNDY1YzUwMzUxNGFiZWI3ZDVhZDFlYzJhNzk3MjA1OGExNTg0NjZlOGQyYzBiZjdhN2E2YSIsInZlcnNpb24iOjF9.vFxH1vHAACKE4XcgBhuoaV38yUZuYJuNm23V3nWVbF4FwyN79srV3Y9CqPGoOiIoUSQJ9fdKZXZub5j0GuUJAA - type: gen_len value: 149.7551 name: gen_len verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzg1ZjY5MTJkMTgzMjhiYzMxNjkyZjlmNmI2ZGU0YTRhZjU5NjQwOWE5MjczZDIxNGI1MGI4YzhhOGVkZDFkYSIsInZlcnNpb24iOjF9.S7W5-vqldJuqtC5MweC3iCK6uy-uTRe4kGqoApMl2Sn6w9sVHnY7u905yNLXzFLrLYMgjlct5LB7AAirHeEJBw --- # LED-Based Summarization Model: Condensing Long and Technical Information <a href="https://colab.research.google.com/gist/pszemraj/36950064ca76161d9d258e5cdbfa6833/led-base-demo-token-batching.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> The Longformer Encoder-Decoder (LED) for Narrative-Esque Long Text Summarization is a model I fine-tuned from [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) to condense extensive technical, academic, and narrative content in a fairly generalizable way. ## Key Features and Use Cases - Ideal for summarizing long narratives, articles, papers, textbooks, and other documents. - the sparknotes-esque style leads to 'explanations' in the summarized content, offering insightful output. - High capacity: Handles up to 16,384 tokens per batch. - demos: try it out in the notebook linked above or in the [demo on Spaces](https://huggingface.co/spaces/pszemraj/summarize-long-text) > **Note:** The API widget has a max length of ~96 tokens due to inference timeout constraints. ## Training Details The model was trained on the BookSum dataset released by SalesForce, which leads to the `bsd-3-clause` license. The training process involved 16 epochs with parameters tweaked to facilitate very fine-tuning-type training (super low learning rate). Model checkpoint: [`pszemraj/led-base-16384-finetuned-booksum`](https://huggingface.co/pszemraj/led-base-16384-finetuned-booksum). 
## Other Related Checkpoints This model is the smallest/fastest booksum-tuned model I have worked on. If you're looking for higher-quality summaries, check out: - [Long-T5-tglobal-base](https://huggingface.co/pszemraj/long-t5-tglobal-base-16384-book-summary) - [BigBird-Pegasus-Large-K](https://huggingface.co/pszemraj/bigbird-pegasus-large-K-booksum) - [Pegasus-X-Large](https://huggingface.co/pszemraj/pegasus-x-large-book-summary) - [Long-T5-tglobal-XL](https://huggingface.co/pszemraj/long-t5-tglobal-xl-16384-book-summary) There are also other variants trained on other datasets on my hf profile; feel free to try them out :) --- ## Basic Usage I recommend using `encoder_no_repeat_ngram_size=3` when calling the pipeline object, as it enhances the summary quality by encouraging the use of new vocabulary and crafting an abstractive summary. Create the pipeline object: ```python import torch from transformers import pipeline hf_name = "pszemraj/led-base-book-summary" summarizer = pipeline( "summarization", hf_name, device=0 if torch.cuda.is_available() else -1, ) ``` Feed the text into the pipeline object: ```python wall_of_text = "your words here" result = summarizer( wall_of_text, min_length=8, max_length=256, no_repeat_ngram_size=3, encoder_no_repeat_ngram_size=3, repetition_penalty=3.5, num_beams=4, do_sample=False, early_stopping=True, ) print(result[0]["summary_text"]) # the summarization pipeline returns dicts keyed by "summary_text" ``` ## Simplified Usage with TextSum To streamline the process of using this and other models, I've developed [a Python package utility](https://github.com/pszemraj/textsum) named `textsum`. This package offers simple interfaces for applying summarization models to text documents of arbitrary length. Install TextSum: ```bash pip install textsum ``` Then use it in Python with this model: ```python from textsum.summarize import Summarizer model_name = "pszemraj/led-base-book-summary" summarizer = Summarizer( model_name_or_path=model_name, # you can use any Seq2Seq model on the Hub token_batch_length=4096, # how many tokens to batch summarize at a time ) long_string = "This is a long string of text that will be summarized." out_str = summarizer.summarize_string(long_string) print(f"summary: {out_str}") ``` Currently implemented interfaces include a Python API, a Command-Line Interface (CLI), and a shareable demo/web UI. For detailed explanations and documentation, check the [README](https://github.com/pszemraj/textsum) or the [wiki](https://github.com/pszemraj/textsum/wiki). ---
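If you would rather not add the `textsum` dependency, the token batching it performs can be approximated with plain `transformers`. The sketch below splits a long document into non-overlapping chunks, summarizes each, and joins the partial summaries; the 4096-token chunk size mirrors the `token_batch_length` example above, and the simple split is an illustration, not the exact logic `textsum` uses.

```python
import torch
from transformers import AutoTokenizer, pipeline

hf_name = "pszemraj/led-base-book-summary"
tokenizer = AutoTokenizer.from_pretrained(hf_name)
summarizer = pipeline(
    "summarization",
    hf_name,
    device=0 if torch.cuda.is_available() else -1,
)

def summarize_long(text: str, chunk_tokens: int = 4096) -> str:
    # tokenize the whole document, then cut it into chunks of at most `chunk_tokens` tokens
    ids = tokenizer(text, truncation=False)["input_ids"]
    chunks = [
        tokenizer.decode(ids[i : i + chunk_tokens], skip_special_tokens=True)
        for i in range(0, len(ids), chunk_tokens)
    ]
    # summarize each chunk with the same generation settings as the basic example
    partial = summarizer(
        chunks,
        min_length=8,
        max_length=256,
        no_repeat_ngram_size=3,
        encoder_no_repeat_ngram_size=3,
        repetition_penalty=3.5,
        num_beams=4,
        early_stopping=True,
    )
    # join the per-chunk summaries into one output string
    return "\n".join(p["summary_text"] for p in partial)
```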
32,335
[ [ -0.020111083984375, -0.049560546875, 0.02099609375, 0.0188446044921875, -0.035064697265625, -0.005268096923828125, -0.030914306640625, -0.0166778564453125, 0.0217742919921875, 0.0311737060546875, -0.02569580078125, -0.04876708984375, -0.03900146484375, 0.022...
timm/vit_base_patch16_rpn_224.sw_in1k
2023-05-06T00:03:08.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2111.09883", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/vit_base_patch16_rpn_224.sw_in1k
0
2,675
timm
2022-12-22T07:31:51
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for vit_base_patch16_rpn_224.sw_in1k A Vision Transformer (ViT) image classification model. This is a `timm`-specific variation of the architecture with residual post-normalization blocks. Trained on ImageNet-1k in `timm` using the recipe template described below. Recipe details: * Based on Swin Transformer train / pretrain recipe with modifications (related to both DeiT and ConvNeXt recipes) * AdamW optimizer, gradient clipping, EMA weight averaging * Cosine LR schedule with warmup ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 86.5 - GMACs: 16.8 - Activations (M): 16.4 - Image size: 224 x 224 - **Papers:** - Swin Transformer V2: Scaling Up Capacity and Resolution: https://arxiv.org/abs/2111.09883 - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 - **Dataset:** ImageNet-1k - **Original:** https://github.com/huggingface/pytorch-image-models ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('vit_base_patch16_rpn_224.sw_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_base_patch16_rpn_224.sw_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 196, 768) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @inproceedings{liu2021swinv2, title={Swin Transformer V2: Scaling Up Capacity and Resolution}, author={Ze Liu and Han Hu and Yutong Lin and Zhuliang Yao and Zhenda Xie and Yixuan Wei and Jia Ning and Yue Cao and Zheng Zhang and Li Dong and Furu Wei and Baining Guo}, booktitle={International Conference on Computer Vision and Pattern Recognition (CVPR)}, year={2022} } ``` ```bibtex @article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ```
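Returning to the Image Classification example above: `top5_class_indices` are raw ImageNet-1k indices. A minimal sketch for turning them into human-readable labels is below; the label-file URL (the class list used in the PyTorch hub examples) is an assumption, and any ImageNet-1k index-to-name mapping would work the same way.

```python
# continues from the Image Classification example above
from urllib.request import urlopen

# assumed label source: ImageNet-1k class names as published with the PyTorch hub examples
LABELS_URL = "https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt"
class_names = urlopen(LABELS_URL).read().decode("utf-8").splitlines()

# top5_probabilities / top5_class_indices have shape (1, 5); index into the first (only) image
for prob, idx in zip(top5_probabilities[0], top5_class_indices[0]):
    print(f"{class_names[idx.item()]}: {prob.item():.2f}%")
```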
4,086
[ [ -0.038055419921875, -0.0269012451171875, -0.005771636962890625, 0.01377105712890625, -0.028045654296875, -0.0287628173828125, -0.0209197998046875, -0.038482666015625, 0.016571044921875, 0.0305938720703125, -0.041534423828125, -0.03778076171875, -0.05191040039062...
keremberke/yolov8n-plane-detection
2023-02-22T13:03:17.000Z
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "object-detection", "pytorch", "awesome-yolov8-models", "dataset:keremberke/plane-detection", "model-index", "region:us" ]
object-detection
keremberke
null
null
keremberke/yolov8n-plane-detection
2
2,675
ultralytics
2023-01-29T06:22:06
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - object-detection - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.21 inference: false datasets: - keremberke/plane-detection model-index: - name: keremberke/yolov8n-plane-detection results: - task: type: object-detection dataset: type: keremberke/plane-detection name: plane-detection split: validation metrics: - type: precision # since mAP@0.5 is not available on hf.co/metrics value: 0.995 # min: 0.0 - max: 1.0 name: mAP@0.5(box) --- <div align="center"> <img width="640" alt="keremberke/yolov8n-plane-detection" src="https://huggingface.co/keremberke/yolov8n-plane-detection/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['planes'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.23 ultralytics==8.0.21 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, render_result # load model model = YOLO('keremberke/yolov8n-plane-detection') # set model parameters model.overrides['conf'] = 0.25 # NMS confidence threshold model.overrides['iou'] = 0.45 # NMS IoU threshold model.overrides['agnostic_nms'] = False # NMS class-agnostic model.overrides['max_det'] = 1000 # maximum number of detections per image # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].boxes) render = render_result(model=model, image=image, result=results[0]) render.show() ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
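Beyond printing the `Boxes` object, the individual detections can be unpacked roughly as in the sketch below. This is written against the ultralytics 8.0.x result API pinned above; the attribute names (`xyxy`, `conf`, `cls`) are assumptions based on that version and may differ in later releases. Class index 0 corresponds to the single supported label, `planes`.

```python
# continues from the prediction example above
boxes = results[0].boxes
for xyxy, conf, cls in zip(boxes.xyxy, boxes.conf, boxes.cls):
    x1, y1, x2, y2 = xyxy.tolist()  # pixel coordinates of the box corners
    print(
        f"class {int(cls)} ('planes'): conf={float(conf):.2f} "
        f"box=[{x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f}]"
    )
```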
1,771
[ [ -0.038665771484375, -0.0206451416015625, 0.040130615234375, -0.0159149169921875, -0.02630615234375, -0.02044677734375, 0.019561767578125, -0.025604248046875, 0.02728271484375, 0.020355224609375, -0.044281005859375, -0.044677734375, -0.0275421142578125, -0.01...
keremberke/yolov8s-plane-detection
2023-02-22T13:03:24.000Z
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "object-detection", "pytorch", "awesome-yolov8-models", "dataset:keremberke/plane-detection", "model-index", "region:us" ]
object-detection
keremberke
null
null
keremberke/yolov8s-plane-detection
3
2,675
ultralytics
2023-01-29T06:42:07
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - object-detection - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.21 inference: false datasets: - keremberke/plane-detection model-index: - name: keremberke/yolov8s-plane-detection results: - task: type: object-detection dataset: type: keremberke/plane-detection name: plane-detection split: validation metrics: - type: precision # since mAP@0.5 is not available on hf.co/metrics value: 0.995 # min: 0.0 - max: 1.0 name: mAP@0.5(box) --- <div align="center"> <img width="640" alt="keremberke/yolov8s-plane-detection" src="https://huggingface.co/keremberke/yolov8s-plane-detection/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['planes'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.23 ultralytics==8.0.21 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, render_result # load model model = YOLO('keremberke/yolov8s-plane-detection') # set model parameters model.overrides['conf'] = 0.25 # NMS confidence threshold model.overrides['iou'] = 0.45 # NMS IoU threshold model.overrides['agnostic_nms'] = False # NMS class-agnostic model.overrides['max_det'] = 1000 # maximum number of detections per image # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].boxes) render = render_result(model=model, image=image, result=results[0]) render.show() ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
1,771
[ [ -0.03826904296875, -0.019134521484375, 0.0413818359375, -0.01654052734375, -0.02667236328125, -0.0194854736328125, 0.0196533203125, -0.0252838134765625, 0.0263824462890625, 0.0193328857421875, -0.043304443359375, -0.044647216796875, -0.028350830078125, -0.01...
keremberke/yolov8s-pcb-defect-segmentation
2023-02-22T13:02:28.000Z
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "image-segmentation", "pytorch", "awesome-yolov8-models", "dataset:keremberke/pcb-defect-segmentation", "model-index", "region:us" ]
image-segmentation
keremberke
null
null
keremberke/yolov8s-pcb-defect-segmentation
1
2,674
ultralytics
2023-01-28T07:39:17
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - image-segmentation - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.21 inference: false datasets: - keremberke/pcb-defect-segmentation model-index: - name: keremberke/yolov8s-pcb-defect-segmentation results: - task: type: image-segmentation dataset: type: keremberke/pcb-defect-segmentation name: pcb-defect-segmentation split: validation metrics: - type: precision # since mAP@0.5 is not available on hf.co/metrics value: 0.51452 # min: 0.0 - max: 1.0 name: mAP@0.5(box) - type: precision # since mAP@0.5 is not available on hf.co/metrics value: 0.49054 # min: 0.0 - max: 1.0 name: mAP@0.5(mask) --- <div align="center"> <img width="640" alt="keremberke/yolov8s-pcb-defect-segmentation" src="https://huggingface.co/keremberke/yolov8s-pcb-defect-segmentation/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['Dry_joint', 'Incorrect_installation', 'PCB_damage', 'Short_circuit'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.23 ultralytics==8.0.21 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, render_result # load model model = YOLO('keremberke/yolov8s-pcb-defect-segmentation') # set model parameters model.overrides['conf'] = 0.25 # NMS confidence threshold model.overrides['iou'] = 0.45 # NMS IoU threshold model.overrides['agnostic_nms'] = False # NMS class-agnostic model.overrides['max_det'] = 1000 # maximum number of detections per image # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].boxes) print(results[0].masks) render = render_result(model=model, image=image, result=results[0]) render.show() ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
2,066
[ [ -0.026153564453125, -0.03790283203125, 0.049835205078125, -0.00890350341796875, -0.0350341796875, -0.011688232421875, 0.0234527587890625, -0.034576416015625, 0.02545166015625, 0.01416015625, -0.051300048828125, -0.04705810546875, -0.0208892822265625, -0.0171...
Hvijapuram22/my-pet-dog
2023-11-06T14:01:26.000Z
[ "diffusers", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
Hvijapuram22
null
null
Hvijapuram22/my-pet-dog
0
2,673
diffusers
2023-11-06T13:47:39
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-Dog Dreambooth model trained by Hvijapuram22 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: MITS-1185 Sample pictures of this concept: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6544cb0d0df73a16829ac840/8N9GUjtciAQdUkSym6WY9.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6544cb0d0df73a16829ac840/fQqk5oJdLkcNDHZJq7pKg.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6544cb0d0df73a16829ac840/hbC4QJI9-ldgPm_XY1TFr.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6544cb0d0df73a16829ac840/OgVJAt6HSwElY5QchHi4u.png)
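The card ships no usage snippet, so here is a minimal sketch for loading this DreamBooth checkpoint with the standard `diffusers` `StableDiffusionPipeline` API. The trigger phrase for the concept is not stated in the card, so the prompt below is only a guess; check the training configuration for the real instance prompt.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Hvijapuram22/my-pet-dog",
    torch_dtype=torch.float16,  # float16 assumes a CUDA GPU; drop this argument for CPU
)
pipe = pipe.to("cuda")

# NOTE: "my-pet-dog" as the trigger phrase is an assumption, not taken from the card
prompt = "a photo of my-pet-dog sitting on a beach"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("my_pet_dog.png")
```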
775
[ [ -0.06268310546875, -0.032470703125, 0.02337646484375, 0.0207672119140625, -0.017303466796875, 0.036712646484375, 0.0206146240234375, -0.0267333984375, 0.0282745361328125, 0.010101318359375, -0.039031982421875, -0.0303802490234375, -0.0245361328125, 0.0084533...
deepset/gbert-base-germandpr-question_encoder
2023-05-05T06:59:31.000Z
[ "transformers", "pytorch", "safetensors", "dpr", "feature-extraction", "exbert", "de", "dataset:deepset/germandpr", "license:mit", "endpoints_compatible", "region:us" ]
feature-extraction
deepset
null
null
deepset/gbert-base-germandpr-question_encoder
5
2,672
transformers
2022-03-02T23:29:05
--- language: de datasets: - deepset/germandpr license: mit thumbnail: https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg tags: - exbert --- ![bert_image](https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg) ## Overview **Language model:** gbert-base-germandpr **Language:** German **Training data:** GermanDPR train set (~ 56MB) **Eval data:** GermanDPR test set (~ 6MB) **Infrastructure**: 4x V100 GPU **Published**: Apr 26th, 2021 ## Details - We trained a dense passage retrieval model with two gbert-base models as encoders of questions and passages. - The dataset is GermanDPR, a new, German-language dataset, which we hand-annotated and published [online](https://deepset.ai/germanquad). - It comprises 9275 question/answer pairs in the training set and 1025 pairs in the test set. For each pair, there is one positive context and three hard negative contexts. - As the basis of the training data, we used our hand-annotated GermanQuAD dataset as positive samples and generated hard negative samples from the latest German Wikipedia dump (6GB of raw txt files). - The data dump was cleaned with tailored scripts, leading to 2.8 million indexed passages from German Wikipedia. See https://deepset.ai/germanquad for more details and dataset download. ## Hyperparameters ``` batch_size = 40 n_epochs = 20 num_training_steps = 4640 num_warmup_steps = 460 max_seq_len = 32 tokens for question encoder and 300 tokens for passage encoder learning_rate = 1e-6 lr_schedule = LinearWarmup embeds_dropout_prob = 0.1 num_hard_negatives = 2 ``` ## Performance During training, we monitored the in-batch average rank and the loss and evaluated different batch sizes, numbers of epochs, and numbers of hard negatives on a dev set split from the train set. The dev split contained 1030 question/answer pairs. Even without thorough hyperparameter tuning, we observed quite stable learning. Multiple restarts with different seeds produced quite similar results. Note that the in-batch average rank is influenced by settings for batch size and number of hard negatives. A smaller number of hard negatives makes the task easier. After fixing the hyperparameters, we trained the model on the full GermanDPR train set. We further evaluated the retrieval performance of the trained model on the full German Wikipedia with the GermanDPR test set as labels. To this end, we converted the GermanDPR test set to SQuAD format. The DPR model drastically outperforms the BM25 baseline with regard to recall@k. ![performancetable](https://lh3.google.com/u/0/d/1lX6G0cp4NTx1yUWs74LI0Gcs41sYy_Fb=w2880-h1578-iv1) ## Usage ### In haystack You can load the model in [haystack](https://github.com/deepset-ai/haystack/) as a retriever for doing QA at scale: ```python retriever = DensePassageRetriever( document_store=document_store, query_embedding_model="deepset/gbert-base-germandpr-question_encoder", passage_embedding_model="deepset/gbert-base-germandpr-ctx_encoder" ) ``` ## Authors - Timo Möller: `timo.moeller [at] deepset.ai` - Julian Risch: `julian.risch [at] deepset.ai` - Malte Pietsch: `malte.pietsch [at] deepset.ai` ## About us ![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo) We bring NLP to the industry via open source! Our focus: Industry-specific language models & large-scale QA systems.
Some of our work: - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) - [FARM](https://github.com/deepset-ai/FARM) - [Haystack](https://github.com/deepset-ai/haystack/) Get in touch: [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
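Outside haystack, the Usage section above can also be reproduced directly with the `transformers` DPR classes. The sketch below pairs this question encoder with the companion context encoder to score a passage by dot product; treating both repos as standard DPR checkpoints is an assumption based on the model type, and the German question/passage pair is only example data.

```python
import torch
from transformers import (
    DPRContextEncoder,
    DPRContextEncoderTokenizer,
    DPRQuestionEncoder,
    DPRQuestionEncoderTokenizer,
)

q_name = "deepset/gbert-base-germandpr-question_encoder"
c_name = "deepset/gbert-base-germandpr-ctx_encoder"

q_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained(q_name)
q_encoder = DPRQuestionEncoder.from_pretrained(q_name).eval()
c_tokenizer = DPRContextEncoderTokenizer.from_pretrained(c_name)
c_encoder = DPRContextEncoder.from_pretrained(c_name).eval()

question = "Wie viele Einwohner hat Berlin?"
passage = "Berlin hat rund 3,7 Millionen Einwohner und ist die Hauptstadt Deutschlands."

with torch.no_grad():
    # pooled CLS embeddings, as used for DPR retrieval
    q_emb = q_encoder(**q_tokenizer(question, return_tensors="pt")).pooler_output
    c_emb = c_encoder(
        **c_tokenizer(passage, return_tensors="pt", truncation=True, max_length=300)
    ).pooler_output

# DPR ranks passages by the dot product between question and passage embeddings
score = torch.matmul(q_emb, c_emb.T).item()
print(f"dot-product score: {score:.2f}")
```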
4,001
[ [ -0.03619384765625, -0.06414794921875, 0.0250091552734375, 0.00354766845703125, -0.007678985595703125, -0.0199432373046875, -0.03399658203125, -0.0261688232421875, -0.007381439208984375, 0.0255584716796875, -0.0284576416015625, -0.04632568359375, -0.0300750732421...
keremberke/yolov8s-valorant-detection
2023-02-22T13:02:34.000Z
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "object-detection", "pytorch", "awesome-yolov8-models", "dataset:keremberke/valorant-object-detection", "model-index", "region:us" ]
object-detection
keremberke
null
null
keremberke/yolov8s-valorant-detection
0
2,669
ultralytics
2023-01-28T09:17:46
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - object-detection - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.21 inference: false datasets: - keremberke/valorant-object-detection model-index: - name: keremberke/yolov8s-valorant-detection results: - task: type: object-detection dataset: type: keremberke/valorant-object-detection name: valorant-object-detection split: validation metrics: - type: precision # since mAP@0.5 is not available on hf.co/metrics value: 0.97138 # min: 0.0 - max: 1.0 name: mAP@0.5(box) --- <div align="center"> <img width="640" alt="keremberke/yolov8s-valorant-detection" src="https://huggingface.co/keremberke/yolov8s-valorant-detection/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['dropped spike', 'enemy', 'planted spike', 'teammate'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.23 ultralytics==8.0.21 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, render_result # load model model = YOLO('keremberke/yolov8s-valorant-detection') # set model parameters model.overrides['conf'] = 0.25 # NMS confidence threshold model.overrides['iou'] = 0.45 # NMS IoU threshold model.overrides['agnostic_nms'] = False # NMS class-agnostic model.overrides['max_det'] = 1000 # maximum number of detections per image # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].boxes) render = render_result(model=model, image=image, result=results[0]) render.show() ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
1,860
[ [ -0.0306396484375, -0.0245819091796875, 0.033172607421875, -0.0143585205078125, -0.0234222412109375, -0.01294708251953125, 0.0113677978515625, -0.0267181396484375, 0.0297698974609375, 0.01509857177734375, -0.043792724609375, -0.05169677734375, -0.032928466796875,...
saattrupdan/nbailab-base-ner-scandi
2023-05-16T13:02:06.000Z
[ "transformers", "pytorch", "safetensors", "bert", "token-classification", "da", "no", "nb", "nn", "sv", "fo", "is", "dataset:dane", "dataset:norne", "dataset:wikiann", "dataset:suc3.0", "arxiv:1911.12146", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_sp...
token-classification
saattrupdan
null
null
saattrupdan/nbailab-base-ner-scandi
14
2,666
transformers
2022-03-02T23:29:05
--- language: - da - no - nb - nn - sv - fo - is license: mit datasets: - dane - norne - wikiann - suc3.0 model-index: - name: nbailab-base-ner-scandi results: [] widget: - "Hans er en professor på Københavns Universitetet i København, og han er en rigtig københavner. Hans kat, altså Hans' kat, Lisa, er supersød. Han fik købt en Mona Lisa på tilbud i Netto og gav den til sin kat, og nu er Mona Lisa'en Lisa's kæreste eje. Hans bror Peter og Hans besluttede, at Peterskirken skulle have fint besøg. Men nu har de begge Corona." inference: parameters: aggregation_strategy: "first" --- # ScandiNER - Named Entity Recognition model for Scandinavian Languages This model is a fine-tuned version of [NbAiLab/nb-bert-base](https://huggingface.co/NbAiLab/nb-bert-base) for Named Entity Recognition for Danish, Norwegian (both Bokmål and Nynorsk), Swedish, Icelandic and Faroese. It has been fine-tuned on the concatenation of [DaNE](https://aclanthology.org/2020.lrec-1.565/), [NorNE](https://arxiv.org/abs/1911.12146), [SUC 3.0](https://spraakbanken.gu.se/en/resources/suc3) and the Icelandic and Faroese parts of the [WikiANN](https://aclanthology.org/P17-1178/) dataset. It also works reasonably well on English sentences, given the fact that the pretrained model is also trained on English data along with Scandinavian languages. The model will predict the following four entities: | **Tag** | **Name** | **Description** | | :------ | :------- | :-------------- | | `PER` | Person | The name of a person (e.g., *Birgitte* and *Mohammed*) | | `LOC` | Location | The name of a location (e.g., *Tyskland* and *Djurgården*) | | `ORG` | Organisation | The name of an organisation (e.g., *Bunnpris* and *Landsbankinn*) | | `MISC` | Miscellaneous | A named entity of a different kind (e.g., *Ūjķnustu pund* and *Mona Lisa*) | ## Quick start You can use this model in your scripts as follows: ```python >>> from transformers import pipeline >>> import pandas as pd >>> ner = pipeline(task='ner', ... model='saattrupdan/nbailab-base-ner-scandi', ... aggregation_strategy='first') >>> result = ner('Borghild kjøper seg inn i Bunnpris') >>> pd.DataFrame.from_records(result) entity_group score word start end 0 PER 0.981257 Borghild 0 8 1 ORG 0.974099 Bunnpris 26 34 ``` ## Performance The following is the Micro-F1 NER performance on Scandinavian NER test datasets, compared with the current state-of-the-art. 
The models have been evaluated on the test set along with 9 bootstrapped versions of it, with the mean and 95% confidence interval shown here: | **Model ID** | **DaNE** | **NorNE-NB** | **NorNE-NN** | **SUC 3.0** | **WikiANN-IS** | **WikiANN-FO** | **Average** | | :----------- | -------: | -----------: | -----------: | ----------: | -------------: | -------------: | ----------: | | saattrupdan/nbailab-base-ner-scandi | **87.44 ± 0.81** | **91.06 ± 0.26** | **90.42 ± 0.61** | **88.37 ± 0.17** | **88.61 ± 0.41** | **90.22 ± 0.46** | **89.08 ± 0.46** | | chcaa/da\_dacy\_large\_trf | 83.61 ± 1.18 | 78.90 ± 0.49 | 72.62 ± 0.58 | 53.35 ± 0.17 | 50.57 ± 0.46 | 51.72 ± 0.52 | 63.00 ± 0.57 | | RecordedFuture/Swedish-NER | 64.09 ± 0.97 | 61.74 ± 0.50 | 56.67 ± 0.79 | 66.60 ± 0.27 | 34.54 ± 0.73 | 42.16 ± 0.83 | 53.32 ± 0.69 | | Maltehb/danish-bert-botxo-ner-dane | 69.25 ± 1.17 | 60.57 ± 0.27 | 35.60 ± 1.19 | 38.37 ± 0.26 | 21.00 ± 0.57 | 27.88 ± 0.48 | 40.92 ± 0.64 | | Maltehb/-l-ctra-danish-electra-small-uncased-ner-dane | 70.41 ± 1.19 | 48.76 ± 0.70 | 27.58 ± 0.61 | 35.39 ± 0.38 | 26.22 ± 0.52 | 28.30 ± 0.29 | 39.70 ± 0.61 | | radbrt/nb\_nocy\_trf | 56.82 ± 1.63 | 68.20 ± 0.75 | 69.22 ± 1.04 | 31.63 ± 0.29 | 20.32 ± 0.45 | 12.91 ± 0.50 | 38.08 ± 0.75 | Aside from its high accuracy, it's also substantially **smaller** and **faster** than the previous state-of-the-art: | **Model ID** | **Samples/second** | **Model size** | | :----------- | -----------------: | -------------: | | saattrupdan/nbailab-base-ner-scandi | 4.16 ± 0.18 | 676 MB | | chcaa/da\_dacy\_large\_trf | 0.65 ± 0.01 | 2,090 MB | ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 90135.90000000001 - num_epochs: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Micro F1 | Micro F1 No Misc | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------------:| | 0.6682 | 1.0 | 2816 | 0.0872 | 0.6916 | 0.7306 | | 0.0684 | 2.0 | 5632 | 0.0464 | 0.8167 | 0.8538 | | 0.0444 | 3.0 | 8448 | 0.0367 | 0.8485 | 0.8783 | | 0.0349 | 4.0 | 11264 | 0.0316 | 0.8684 | 0.8920 | | 0.0282 | 5.0 | 14080 | 0.0290 | 0.8820 | 0.9033 | | 0.0231 | 6.0 | 16896 | 0.0283 | 0.8854 | 0.9060 | | 0.0189 | 7.0 | 19712 | 0.0253 | 0.8964 | 0.9156 | | 0.0155 | 8.0 | 22528 | 0.0260 | 0.9016 | 0.9201 | | 0.0123 | 9.0 | 25344 | 0.0266 | 0.9059 | 0.9233 | | 0.0098 | 10.0 | 28160 | 0.0280 | 0.9091 | 0.9279 | | 0.008 | 11.0 | 30976 | 0.0309 | 0.9093 | 0.9287 | | 0.0065 | 12.0 | 33792 | 0.0313 | 0.9103 | 0.9284 | | 0.0053 | 13.0 | 36608 | 0.0322 | 0.9078 | 0.9257 | | 0.0046 | 14.0 | 39424 | 0.0343 | 0.9075 | 0.9256 | ### Framework versions - Transformers 4.10.3 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
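As a small follow-up to the Quick start section above: the pipeline output (with `aggregation_strategy='first'`) can be regrouped by entity type with a few lines of plain Python. A sketch, using only the fields shown in the DataFrame above:

```python
from collections import defaultdict
from transformers import pipeline

ner = pipeline(
    task="ner",
    model="saattrupdan/nbailab-base-ner-scandi",
    aggregation_strategy="first",
)

text = "Borghild kjøper seg inn i Bunnpris"
entities_by_type = defaultdict(list)
for ent in ner(text):
    # each dict carries entity_group, score, word, start, end
    entities_by_type[ent["entity_group"]].append(ent["word"])

print(dict(entities_by_type))
# e.g. {'PER': ['Borghild'], 'ORG': ['Bunnpris']}
```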
5,962
[ [ -0.0501708984375, -0.0355224609375, 0.0168304443359375, -0.0009493827819824219, -0.009368896484375, -0.00930023193359375, -0.00652313232421875, -0.0222930908203125, 0.046630859375, 0.01186370849609375, -0.036376953125, -0.048583984375, -0.03759765625, 0.0082...
keremberke/yolov8s-csgo-player-detection
2023-02-22T13:02:50.000Z
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "object-detection", "pytorch", "awesome-yolov8-models", "dataset:keremberke/csgo-object-detection", "model-index", "region:us" ]
object-detection
keremberke
null
null
keremberke/yolov8s-csgo-player-detection
2
2,665
ultralytics
2023-01-29T01:55:30
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - object-detection - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.21 inference: false datasets: - keremberke/csgo-object-detection model-index: - name: keremberke/yolov8s-csgo-player-detection results: - task: type: object-detection dataset: type: keremberke/csgo-object-detection name: csgo-object-detection split: validation metrics: - type: precision # since mAP@0.5 is not available on hf.co/metrics value: 0.88561 # min: 0.0 - max: 1.0 name: mAP@0.5(box) --- <div align="center"> <img width="640" alt="keremberke/yolov8s-csgo-player-detection" src="https://huggingface.co/keremberke/yolov8s-csgo-player-detection/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['ct', 'cthead', 't', 'thead'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.23 ultralytics==8.0.21 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, render_result # load model model = YOLO('keremberke/yolov8s-csgo-player-detection') # set model parameters model.overrides['conf'] = 0.25 # NMS confidence threshold model.overrides['iou'] = 0.45 # NMS IoU threshold model.overrides['agnostic_nms'] = False # NMS class-agnostic model.overrides['max_det'] = 1000 # maximum number of detections per image # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].boxes) render = render_result(model=model, image=image, result=results[0]) render.show() ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
1,835
[ [ -0.037506103515625, -0.027496337890625, 0.036285400390625, -0.0199737548828125, -0.024749755859375, -0.00533294677734375, -0.0007238388061523438, -0.039398193359375, 0.02178955078125, 0.00931549072265625, -0.048065185546875, -0.0419921875, -0.03106689453125, ...
timm/convnext_atto_ols.a2_in1k
2023-03-31T21:54:47.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2201.03545", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/convnext_atto_ols.a2_in1k
0
2,663
timm
2022-12-13T07:06:15
--- tags: - image-classification - timm library_tag: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for convnext_atto_ols.a2_in1k A ConvNeXt image classification model. Trained in `timm` on ImageNet-1k by Ross Wightman. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 3.7 - GMACs: 0.6 - Activations (M): 4.1 - Image size: train = 224 x 224, test = 288 x 288 - **Papers:** - A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545 - **Original:** https://github.com/huggingface/pytorch-image-models - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('convnext_atto_ols.a2_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'convnext_atto_ols.a2_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 40, 56, 56]) # torch.Size([1, 80, 28, 28]) # torch.Size([1, 160, 14, 14]) # torch.Size([1, 320, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'convnext_atto_ols.a2_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 320, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.
| model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size| |------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------| | [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 | | [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 | | [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 | | [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 | | [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 | | [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 | | [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 | | [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 | | [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 | | [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 | | [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 | | [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 | | [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 | | [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 | | [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 | | [convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 | | [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 | | [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 | | [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 | | 
[convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 | | [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 | | [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 | | [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 | | [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 | | [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 | | [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 | | [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 | | [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 | | [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 | | [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 | | [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 | | [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 | | [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 | | [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 | | [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 | | [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 | | [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 | | [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 | | [convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 | | [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 | | [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 | | [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 | | [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 
|95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 | | [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 | | [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 | | [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 | | [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 | | [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 | | [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 | | [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 | | [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 | | [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 | | [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 | | [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 | ## Citation ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @article{liu2022convnet, author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, year = {2022}, } ```
15,642
[ [ -0.0670166015625, -0.032501220703125, -0.00386810302734375, 0.035919189453125, -0.03216552734375, -0.0156402587890625, -0.01200103759765625, -0.03485107421875, 0.0660400390625, 0.017669677734375, -0.042877197265625, -0.041717529296875, -0.05133056640625, -0....
internlm/internlm-chat-7b-v1_1
2023-10-20T14:23:08.000Z
[ "transformers", "pytorch", "internlm", "text-generation", "custom_code", "region:us" ]
text-generation
internlm
null
null
internlm/internlm-chat-7b-v1_1
23
2,663
transformers
2023-08-21T14:24:51
--- pipeline_tag: text-generation --- # InternLM <div align="center"> <img src="https://github.com/InternLM/InternLM/assets/22529082/b9788105-8892-4398-8b47-b513a292378e" width="200"/> <div>&nbsp;</div> <div align="center"> <b><font size="5">InternLM</font></b> <sup> <a href="https://internlm.intern-ai.org.cn/"> <i><font size="4">HOT</font></i> </a> </sup> <div>&nbsp;</div> </div> [![evaluation](https://github.com/InternLM/InternLM/assets/22529082/f80a2a58-5ddf-471a-8da4-32ab65c8fd3b)](https://github.com/internLM/OpenCompass/) [💻Github Repo](https://github.com/InternLM/InternLM) • [🤔Reporting Issues](https://github.com/InternLM/InternLM/issues/new) </div> ## Introduction InternLM has open-sourced a 7 billion parameter base model and a chat model tailored for practical scenarios. The model has the following characteristics: - It leverages trillions of high-quality tokens for training to establish a powerful knowledge base. - It supports an 8k context window length, enabling longer input sequences and stronger reasoning capabilities. - It provides a versatile toolset for users to flexibly build their own workflows. ## InternLM-7B ### Performance Evaluation We conducted a comprehensive evaluation of InternLM using the open-source evaluation tool [OpenCompass](https://github.com/internLM/OpenCompass/). The evaluation covered five dimensions of capabilities: disciplinary competence, language competence, knowledge competence, inference competence, and comprehension competence. Here are some of the evaluation results, and you can visit the [OpenCompass leaderboard](https://opencompass.org.cn/rank) for more evaluation results. | Datasets\Models | **InternLM-Chat-7B** | **InternLM-7B** | LLaMA-7B | Baichuan-7B | ChatGLM2-6B | Alpaca-7B | Vicuna-7B | | -------------------- | --------------------- | ---------------- | --------- | --------- | ------------ | --------- | ---------- | | C-Eval(Val) | 53.2 | 53.4 | 24.2 | 42.7 | 50.9 | 28.9 | 31.2 | | MMLU | 50.8 | 51.0 | 35.2* | 41.5 | 46.0 | 39.7 | 47.3 | | AGIEval | 42.5 | 37.6 | 20.8 | 24.6 | 39.0 | 24.1 | 26.4 | | CommonSenseQA | 75.2 | 59.5 | 65.0 | 58.8 | 60.0 | 68.7 | 66.7 | | BUSTM | 74.3 | 50.6 | 48.5 | 51.3 | 55.0 | 48.8 | 62.5 | | CLUEWSC | 78.6 | 59.1 | 50.3 | 52.8 | 59.8 | 50.3 | 52.2 | | MATH | 6.4 | 7.1 | 2.8 | 3.0 | 6.6 | 2.2 | 2.8 | | GSM8K | 34.5 | 31.2 | 10.1 | 9.7 | 29.2 | 6.0 | 15.3 | | HumanEval | 14.0 | 10.4 | 14.0 | 9.2 | 9.2 | 9.2 | 11.0 | | RACE(High) | 76.3 | 57.4 | 46.9* | 28.1 | 66.3 | 40.7 | 54.0 | - The evaluation results were obtained from [OpenCompass 20230706](https://github.com/internLM/OpenCompass/) (some data marked with *, which means come from the original papers), and evaluation configuration can be found in the configuration files provided by [OpenCompass](https://github.com/internLM/OpenCompass/). - The evaluation data may have numerical differences due to the version iteration of [OpenCompass](https://github.com/internLM/OpenCompass/), so please refer to the latest evaluation results of [OpenCompass](https://github.com/internLM/OpenCompass/). **Limitations:** Although we have made efforts to ensure the safety of the model during the training process and to encourage the model to generate text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and probabilistic generation paradigm. For example, the generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. 
We are not responsible for any consequences resulting from the dissemination of harmful information. ### Import from Transformers To load the InternLM 7B Chat model using Transformers, use the following code: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-chat-7b-v1.1", trust_remote_code=True) >>> model = AutoModelForCausalLM.from_pretrained("internlm/internlm-chat-7b-v1.1", trust_remote_code=True).cuda() >>> model = model.eval() >>> response, history = model.chat(tokenizer, "hello", history=[]) >>> print(response) Hello! How can I help you today? >>> response, history = model.chat(tokenizer, "please provide three suggestions about time management", history=history) >>> print(response) Sure, here are three tips for effective time management: 1. Prioritize tasks based on importance and urgency: Make a list of all your tasks and categorize them into "important and urgent," "important but not urgent," and "not important but urgent." Focus on completing the tasks in the first category before moving on to the others. 2. Use a calendar or planner: Write down deadlines and appointments in a calendar or planner so you don't forget them. This will also help you schedule your time more effectively and avoid overbooking yourself. 3. Minimize distractions: Try to eliminate any potential distractions when working on important tasks. Turn off notifications on your phone, close unnecessary tabs on your computer, and find a quiet place to work if possible. Remember, good time management skills take practice and patience. Start with small steps and gradually incorporate these habits into your daily routine. ``` ### Dialogue You can interact with the InternLM Chat 7B model through a frontend interface by running the following code: ```bash pip install streamlit==1.24.0 pip install transformers==4.30.2 streamlit run web_demo.py ``` The effect is as follows ![demo](https://github.com/InternLM/InternLM/assets/9102141/11b60ee0-47e4-42c0-8278-3051b2f17fe4) ## Open Source License The code is licensed under Apache-2.0, while model weights are fully open for academic research and also allow **free** commercial usage. To apply for a commercial license, please fill in the [application form (English)](https://wj.qq.com/s2/12727483/5dba/)/[申请表(中文)](https://wj.qq.com/s2/12725412/f7c1/). For other questions or collaborations, please contact <internlm@pjlab.org.cn>. 
## 简介 InternLM ,即书生·浦语大模型,包含面向实用场景的70亿参数基础模型与对话模型 (InternLM-7B)。模型具有以下特点: - 使用上万亿高质量预料,建立模型超强知识体系; - 支持8k语境窗口长度,实现更长输入与更强推理体验; - 通用工具调用能力,支持用户灵活自助搭建流程; ## InternLM-7B ### 性能评测 我们使用开源评测工具 [OpenCompass](https://github.com/internLM/OpenCompass/) 从学科综合能力、语言能力、知识能力、推理能力、理解能力五大能力维度对InternLM开展全面评测,部分评测结果如下表所示,欢迎访问[ OpenCompass 榜单 ](https://opencompass.org.cn/rank)获取更多的评测结果。 | 数据集\模型 | **InternLM-Chat-7B** | **InternLM-7B** | LLaMA-7B | Baichuan-7B | ChatGLM2-6B | Alpaca-7B | Vicuna-7B | | -------------------- | --------------------- | ---------------- | --------- | --------- | ------------ | --------- | ---------- | | C-Eval(Val) | 53.2 | 53.4 | 24.2 | 42.7 | 50.9 | 28.9 | 31.2 | | MMLU | 50.8 | 51.0 | 35.2* | 41.5 | 46.0 | 39.7 | 47.3 | | AGIEval | 42.5 | 37.6 | 20.8 | 24.6 | 39.0 | 24.1 | 26.4 | | CommonSenseQA | 75.2 | 59.5 | 65.0 | 58.8 | 60.0 | 68.7 | 66.7 | | BUSTM | 74.3 | 50.6 | 48.5 | 51.3 | 55.0 | 48.8 | 62.5 | | CLUEWSC | 78.6 | 59.1 | 50.3 | 52.8 | 59.8 | 50.3 | 52.2 | | MATH | 6.4 | 7.1 | 2.8 | 3.0 | 6.6 | 2.2 | 2.8 | | GSM8K | 34.5 | 31.2 | 10.1 | 9.7 | 29.2 | 6.0 | 15.3 | | HumanEval | 14.0 | 10.4 | 14.0 | 9.2 | 9.2 | 9.2 | 11.0 | | RACE(High) | 76.3 | 57.4 | 46.9* | 28.1 | 66.3 | 40.7 | 54.0 | - 以上评测结果基于 [OpenCompass 20230706](https://github.com/internLM/OpenCompass/) 获得(部分数据标注`*`代表数据来自原始论文),具体测试细节可参见 [OpenCompass](https://github.com/internLM/OpenCompass/) 中提供的配置文件。 - 评测数据会因 [OpenCompass](https://github.com/internLM/OpenCompass/) 的版本迭代而存在数值差异,请以 [OpenCompass](https://github.com/internLM/OpenCompass/) 最新版的评测结果为主。 **局限性:** 尽管在训练过程中我们非常注重模型的安全性,尽力促使模型输出符合伦理和法律要求的文本,但受限于模型大小以及概率生成范式,模型可能会产生各种不符合预期的输出,例如回复内容包含偏见、歧视等有害内容,请勿传播这些内容。由于传播不良信息导致的任何后果,本项目不承担责任。 ### 通过 Transformers 加载 通过以下的代码加载 InternLM 7B Chat 模型 ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-chat-7b-v1.1", trust_remote_code=True) >>> model = AutoModelForCausalLM.from_pretrained("internlm/internlm-chat-7b-v1.1", trust_remote_code=True).cuda() >>> model = model.eval() >>> response, history = model.chat(tokenizer, "你好", history=[]) >>> print(response) 你好!有什么我可以帮助你的吗? >>> response, history = model.chat(tokenizer, "请提供三个管理时间的建议。", history=history) >>> print(response) 当然可以!以下是三个管理时间的建议: 1. 制定计划:制定一个详细的计划,包括每天要完成的任务和活动。这将有助于您更好地组织时间,并确保您能够按时完成任务。 2. 优先级:将任务按照优先级排序,先完成最重要的任务。这将确保您能够在最短的时间内完成最重要的任务,从而节省时间。 3. 集中注意力:避免分心,集中注意力完成任务。关闭社交媒体和电子邮件通知,专注于任务,这将帮助您更快地完成任务,并减少错误的可能性。 ``` ### 通过前端网页对话 可以通过以下代码启动一个前端的界面来与 InternLM Chat 7B 模型进行交互 ```bash pip install streamlit==1.24.0 pip install transformers==4.30.2 streamlit run web_demo.py ``` 效果如下 ![效果](https://github.com/InternLM/InternLM/assets/9102141/11b60ee0-47e4-42c0-8278-3051b2f17fe4) ## 开源许可证 本仓库的代码依照 Apache-2.0 协议开源。模型权重对学术研究完全开放,也可申请免费的商业使用授权([申请表](https://wj.qq.com/s2/12725412/f7c1/))。其他问题与合作请联系 <internlm@pjlab.org.cn>。
10,351
[ [ -0.0264892578125, -0.054718017578125, 0.0027179718017578125, 0.03173828125, -0.0144500732421875, 0.004150390625, -0.012054443359375, -0.0277862548828125, -0.003353118896484375, 0.0010061264038085938, -0.019439697265625, -0.05242919921875, -0.036102294921875, ...
KappaNeuro/studio-ghibli-style
2023-09-14T10:52:17.000Z
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "studio ghibli", "art", "ghibli", "style", "painting", "films", "license:other", "region:us", "has_space" ]
text-to-image
KappaNeuro
null
null
KappaNeuro/studio-ghibli-style
8
2,658
diffusers
2023-09-14T10:52:13
--- license: other tags: - text-to-image - stable-diffusion - lora - diffusers - studio ghibli - art - ghibli - style - painting - films base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: Studio Ghibli Style widget: - text: "Studio Ghibli Style - Japanese soccer player Mina Tanaka overcomes crushing pressure. Studio Ghibli animation style. surreal." - text: "Studio Ghibli Style - Anime style image like window xp background. This image contains hills with covered grasses. On the one of the hills there is earth tiny path. Left side of the image, there are a tiny wooden one-story house with a roof. One of the hills, top of the hill there is a white sheep. Sunny day, Noon time." - text: "Studio Ghibli Style - a man and a woman standing in front of a cartoon character, a storybook illustration by Studio Ghibli, cgsociety, magical realism, official art, anime, movie still The background is a picture of a train running next to a river, two sides are yellow flowers 3d 4k official art" - text: "Studio Ghibli Style - As the unwitting young guardian of a perimeter, explore a unspoiled nature reserve, piece together the history and discover that the fate of the planet depends on a truth to be unveiled.Studio Ghibli Cel Style" - text: "Studio Ghibli Style - Studio ghibli style, big cute black cat is looking out of big wood paned window at a big pink dog wood tree, rolling green hills in background, aesthetic furniture in foreground" - text: "Studio Ghibli Style - an amazing image that shows that Mistakes help me learn and improve; they are a natural part of the learning process, in the style of Ghibli 4k 8k 16k 32k 64k" - text: "Studio Ghibli Style - same image, same image, plantation, yellow and green, traditional chinese houses, distant mountain in the background ghibli design" - text: "Studio Ghibli Style - wales flying in the sky, fantastic ambiance, moons and mountains in backgrounds - Ghibli animation studio rendering" - text: "Studio Ghibli Style - Design a poster that showcases the beautiful landscapes and scenery from Studio Ghibli films" --- # Studio Ghibli Style ([CivitAI](https://civitai.com/models/106712)) ![Image 0](2331978.jpeg) > Studio Ghibli Style - Japanese soccer player Mina Tanaka overcomes crushing pressure. Studio Ghibli animation style. surreal. <p>The Studio Ghibli style refers to the distinctive artistic and storytelling approach seen in the animated films produced by Studio Ghibli. It is characterized by its attention to detail, hand-drawn animation, richly crafted worlds, and emotionally resonant storytelling.</p><p>Visually, the Studio Ghibli style often features lush and vibrant environments, meticulously designed backgrounds, and intricate character designs. The attention to detail is remarkable, with carefully rendered textures, naturalistic movements, and expressive facial expressions. The animation captures a sense of fluidity and grace, immersing viewers in a visually stunning cinematic experience.</p><p>Storytelling is at the heart of the Studio Ghibli style. The films often explore themes of nature, the environment, coming-of-age, and the power of human connections. They possess a unique ability to blend fantasy elements with grounded, relatable narratives, resulting in stories that are both whimsical and deeply resonant. 
Studio Ghibli films often celebrate the imagination and the spirit of adventure, while also grappling with deeper philosophical questions and social commentary.</p><p>The studio's films also feature strong and complex characters, particularly young protagonists who embark on transformative journeys of self-discovery and personal growth. These characters often face challenges and conflicts that allow for exploration of universal themes such as identity, love, loss, and the duality of human nature.</p><p>Music plays an integral role in the Studio Ghibli style, with beautiful and emotive scores composed by Joe Hisaishi. The music enhances the storytelling, evoking a wide range of emotions and further immersing viewers in the enchanting worlds created by the studio.</p><p>The Studio Ghibli style has captivated audiences worldwide, transcending language and cultural barriers. The films' artistry, imagination, and universal themes have earned them a devoted following and critical acclaim. The studio's commitment to craftsmanship, creativity, and storytelling continues to inspire both animators and film enthusiasts, leaving a lasting impact on the world of animation.</p> ## Image examples for the model: ![Image 1](2331954.jpeg) > Studio Ghibli Style - Anime style image like window xp background. This image contains hills with covered grasses. On the one of the hills there is earth tiny path. Left side of the image, there are a tiny wooden one-story house with a roof. One of the hills, top of the hill there is a white sheep. Sunny day, Noon time. ![Image 2](2331959.jpeg) > Studio Ghibli Style - a man and a woman standing in front of a cartoon character, a storybook illustration by Studio Ghibli, cgsociety, magical realism, official art, anime, movie still The background is a picture of a train running next to a river, two sides are yellow flowers 3d 4k official art ![Image 3](2331948.jpeg) > ![Image 4](2331951.jpeg) > Studio Ghibli Style - As the unwitting young guardian of a perimeter, explore a unspoiled nature reserve, piece together the history and discover that the fate of the planet depends on a truth to be unveiled.Studio Ghibli Cel Style ![Image 5](2331967.jpeg) > Studio Ghibli Style - Studio ghibli style, big cute black cat is looking out of big wood paned window at a big pink dog wood tree, rolling green hills in background, aesthetic furniture in foreground ![Image 6](2331968.jpeg) > Studio Ghibli Style - an amazing image that shows that Mistakes help me learn and improve; they are a natural part of the learning process, in the style of Ghibli 4k 8k 16k 32k 64k ![Image 7](2331969.jpeg) > Studio Ghibli Style - same image, same image, plantation, yellow and green, traditional chinese houses, distant mountain in the background ghibli design ![Image 8](2331972.jpeg) > Studio Ghibli Style - wales flying in the sky, fantastic ambiance, moons and mountains in backgrounds - Ghibli animation studio rendering ![Image 9](2331992.jpeg) > Studio Ghibli Style - Design a poster that showcases the beautiful landscapes and scenery from Studio Ghibli films
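The card does not include a code snippet, so the following is a minimal, hedged sketch of how an SDXL LoRA like this one is typically loaded with `diffusers`. It assumes a recent `diffusers` release with SDXL LoRA support, that the LoRA weights can be resolved directly from the repo id, and it uses the instance prompt `Studio Ghibli Style` from the card's front matter as the trigger phrase.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Base model named in the card's front matter (base_model field)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Assumption: the LoRA safetensors file is resolvable directly from this repo id
pipe.load_lora_weights("KappaNeuro/studio-ghibli-style")

# Trigger the style with the instance prompt from the card
prompt = ("Studio Ghibli Style - rolling green hills, a small wooden house, "
          "a white sheep on a hilltop, sunny day")
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("studio_ghibli_style.png")
```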
6,440
[ [ -0.039093017578125, -0.03289794921875, 0.0223541259765625, 0.0264739990234375, -0.015716552734375, 0.04547119140625, -0.00457763671875, -0.051544189453125, 0.061553955078125, 0.00838470458984375, -0.08392333984375, -0.02740478515625, -0.032745361328125, -0.0...
Yntec/NovelAIRemix
2023-09-24T08:54:37.000Z
[ "diffusers", "Anime", "text-to-image", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
Yntec
null
null
Yntec/NovelAIRemix
3
2,655
diffusers
2023-09-03T14:31:16
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - Anime --- # NovelAIRemix NovelAI mixed with SD1.5. Sample and prompt: ![sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/yykKpqu2aNrnihASE5Evx.png) sitting elementary girl, Pretty CUTE, gorgeous hair, Magazine ad, iconic, 1943, Cartoon, sharp focus, 4k. beautiful art on canvas by kyoani and ROSSDRAWS and ross tran. DETAILED CHIBI Check out: https://huggingface.co/Yntec/NovelAI # Recipe SD1.4Full + fp16 - no-ema = SD1.4 (https://huggingface.co/Yntec/NovelAIRemix/resolve/main/sd-v1-4-fp16-no-ema.safetensors) SD1.5Full + fp16 - no-ema = SD1.5 (https://huggingface.co/Yntec/DreamLikeRemix/resolve/main/v1-5-pruned-fp16-no-ema.safetensors) Add Difference (SD1.4 + (SD1.4 - SD1.5)*1)=SD1.5Essence (https://huggingface.co/Yntec/NovelAIRemix/resolve/main/SD1.5Essence.safetensors) Weighted Sum (SD1.5Essence * (1 - 0.7) + NovelAIFull * 0.7) = NovelAISD1.5 Weighted Sum (NovelAISD1.5 * (1 - 0.7) + NovelAISFW * 0.7) = NovelAIRemix
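A hedged loading sketch, assuming the repo ships standard diffusers-format Stable Diffusion 1.x weights (the repo carries the `StableDiffusionPipeline` tag); the prompt below is the card's own sample prompt.

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumption: standard diffusers-format weights are available in this repo
pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/NovelAIRemix",
    torch_dtype=torch.float16,
).to("cuda")

prompt = ("sitting elementary girl, Pretty CUTE, gorgeous hair, Magazine ad, iconic, 1943, "
          "Cartoon, sharp focus, 4k. beautiful art on canvas by kyoani and ROSSDRAWS "
          "and ross tran. DETAILED CHIBI")
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("novelairemix_sample.png")
```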
1,078
[ [ -0.035186767578125, -0.03021240234375, 0.026123046875, 0.057769775390625, -0.008026123046875, -0.0023670196533203125, 0.0112762451171875, -0.0295867919921875, 0.08367919921875, 0.042999267578125, -0.05865478515625, -0.034393310546875, -0.043670654296875, -0....
defog/sqlcoder-7b
2023-10-04T13:46:35.000Z
[ "transformers", "pytorch", "mistral", "text-generation", "code", "en", "license:cc-by-sa-4.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
defog
null
null
defog/sqlcoder-7b
30
2,653
transformers
2023-10-03T04:23:34
--- license: cc-by-sa-4.0 language: - en pipeline_tag: text-generation tags: - code --- # Defog SQLCoder Defog's SQLCoder is a state-of-the-art LLM for converting natural language questions to SQL queries. [Interactive Demo](https://defog.ai/sqlcoder-demo/) | [🤗 HF Repo](https://huggingface.co/defog/sqlcoder2) | [♾️ Colab](https://colab.research.google.com/drive/1z4rmOEiFkxkMiecAWeTUlPl0OmKgfEu7?usp=sharing) | [🐦 Twitter](https://twitter.com/defogdata) ## TL;DR SQLCoder-7B is a 7B parameter model that outperforms `gpt-3.5-turbo` for natural language to SQL generation tasks on our [sql-eval](https://github.com/defog-ai/sql-eval) framework, and significantly outperforms all popular open-source models. When fine-tuned on a given schema, it also outperforms `gpt-4` SQLCoder-7B is fine-tuned on a base Mistral-7B model. ## Results on novel datasets not seen in training | model | perc_correct | |-|-| | gpt4-2023-10-04 | 82.0 | | defog-sqlcoder2 | 74.5 | | gpt4-2023-08-28 | 74.0 | | defog-sqlcoder-7b | 71.0 | | gpt-3.5-2023-10-04 | 66.0 | | claude-2 | 64.5 | | gpt-3.5-2023-08-28 | 61.0 | | claude_instant_1 | 61.0 | | text-davinci-003 | 52.5 | ## License The code in this repo (what little there is of it) is Apache-2 licensed. The model weights have a `CC BY-SA 4.0` license. The TL;DR is that you can use and modify the model for any purpose – including commercial use. However, if you modify the weights (for example, by fine-tuning), you must open-source your modified weights under the same license terms. ## Training SQLCoder was trained on more than 20,000 human-curated questions. These questions were based on 10 different schemas. None of the schemas in the training data were included in our evaluation framework. You can read more about our [training approach](https://defog.ai/blog/open-sourcing-sqlcoder2-7b/) and [evaluation framework](https://defog.ai/blog/open-sourcing-sqleval/). ## Results by question category We classified each generated question into one of 5 categories. The table displays the percentage of questions answered correctly by each model, broken down by category. | query_category | gpt-4 | sqlcoder2-15b | sqlcoder-7b | gpt-3.5 | claude-2 | claude-instant | gpt-3 | |:-----------------|--------:|----------------:|--------------:|----------:|-----------:|-----------------:|--------:| | date | 72 | 76 | 64 | 68 | 52 | 48 | 32 | | group_by | 91.4 | 80 | 82.9 | 77.1 | 71.4 | 71.4 | 71.4 | | order_by | 82.9 | 77.1 | 74.3 | 68.6 | 74.3 | 74.3 | 68.6 | | ratio | 80 | 60 | 54.3 | 37.1 | 57.1 | 45.7 | 25.7 | | join | 82.9 | 77.1 | 74.3 | 71.4 | 65.7 | 62.9 | 57.1 | | where | 80 | 77.1 | 74.3 | 74.3 | 62.9 | 60 | 54.3 | ## Using SQLCoder You can use SQLCoder via the `transformers` library by downloading our model weights from the Hugging Face repo. We have added sample code for [inference](./inference.py) on a [sample database schema](./metadata.sql). ```bash python inference.py -q "Question about the sample database goes here" # Sample question: # Do we get more revenue from customers in New York compared to customers in San Francisco? Give me the total revenue for each city, and the difference between the two. ``` You can also use a demo on our website [here](https://defog.ai/sqlcoder-demo), or run SQLCoder in Colab [here](https://colab.research.google.com/drive/13BIKsqHnPOBcQ-ba2p77L5saiepTIwu0#scrollTo=ZpbVgVHMkJvC) ## Hardware Requirements SQLCoder has been tested on an A100 40GB GPU with `bfloat16` weights. 
You can also load 8-bit and 4-bit quantized versions of the model on consumer GPUs with 20GB or more of memory – like the RTX 4090, RTX 3090, and Apple M2 Pro, M2 Max, or M2 Ultra chips.

## Todo

- [x] Open-source the v1 model weights
- [x] Train the model on more data, with higher data variance
- [ ] Tune the model further with Reward Modelling and RLHF
- [ ] Pretrain a model from scratch that specializes in SQL analysis
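For readers who prefer plain `transformers` over the provided `inference.py`, here is a rough loading sketch using the card's tested `bfloat16` configuration. For good results the prompt should still be assembled from your question plus the database schema exactly as `inference.py` in the repo does; that template is not reproduced here, and the bare question below is only a placeholder.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "defog/sqlcoder-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # tested configuration per the card
    device_map="auto",
)

# NOTE: the full prompt template (question + schema from metadata.sql) lives in
# the repo's inference.py; this bare question is only a stand-in for the sketch.
prompt = ("Do we get more revenue from customers in New York compared to "
          "customers in San Francisco?")

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=300, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```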
4,364
[ [ -0.0275421142578125, -0.07379150390625, 0.0144500732421875, 0.0029754638671875, -0.0187835693359375, -0.019012451171875, -0.007266998291015625, -0.02557373046875, 0.005329132080078125, 0.039703369140625, -0.0396728515625, -0.04150390625, -0.0284423828125, 0....
cryptoman/converted-llama-2-70b
2023-07-21T15:43:53.000Z
[ "transformers", "pytorch", "llama", "text-generation", "facebook", "meta", "llama-2", "en", "arxiv:2307.09288", "license:other", "text-generation-inference", "region:us" ]
text-generation
cryptoman
null
null
cryptoman/converted-llama-2-70b
1
2,652
transformers
2023-07-20T08:49:12
--- inference: false language: - en license: other model_type: llama pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-2 --- # **Llama-2-70b converted to HF format** These are the original weights of the LLaMA 70B models that have just been converted to Hugging Face Transformers format using the [transformation script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py). Original model page: https://huggingface.co/meta-llama/Llama-2-70b # Original model card: Meta's Llama 2 70B # **Llama 2** Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 70B pretrained model. Links to other models can be found in the index at the bottom. ## Model Details *Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.* Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. **Model Developers** Meta **Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations. **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. ||Training Data|Params|Content Length|GQA|Tokens|LR| |---|---|---|---|---|---|---| |Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>| *Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models - 70B -- use Grouped-Query Attention (GQA) for improved inference scalability. **Model Dates** Llama 2 was trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) **Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](arxiv.org/abs/2307.09288) ## Intended Use **Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. 
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212). **Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws).Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program. ||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)| |---|---|---|---| |Llama 2 7B|184320|400|31.22| |Llama 2 13B|368640|400|62.44| |Llama 2 70B|1720320|400|291.42| |Total|3311616||539.00| **CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023. ## Evaluation Results In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks.For all the evaluations, we use our internal evaluations library. |Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval| |---|---|---|---|---|---|---|---|---|---| |Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9| |Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9| |Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7| |Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6| |Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3| |Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1| |Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**| **Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. 
*Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1. |||TruthfulQA|Toxigen| |---|---|---|---| |Llama 1|7B|27.42|23.00| |Llama 1|13B|41.74|23.08| |Llama 1|33B|44.19|22.57| |Llama 1|65B|48.71|21.77| |Llama 2|7B|33.29|**21.25**| |Llama 2|13B|41.86|26.10| |Llama 2|70B|**50.18**|24.60| **Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better). |||TruthfulQA|Toxigen| |---|---|---|---| |Llama-2-Chat|7B|57.04|**0.00**| |Llama-2-Chat|13B|62.18|**0.00**| |Llama-2-Chat|70B|**64.14**|0.01| **Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above. ## Ethical Considerations and Limitations Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide) ## Reporting Issues Please report any software “bug,” or other problems with the models through one of the following means: - Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) - Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Llama Model Index |Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf| |---|---|---|---|---| |7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)| |13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)| |70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
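Since the point of this repo is that the weights are already in Hugging Face Transformers format, a generic loading sketch may help; it assumes `accelerate` is installed and that enough GPU memory is available for a 70B model, sharded across devices via `device_map="auto"`.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cryptoman/converted-llama-2-70b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# A 70B model in fp16 needs on the order of 140 GB of memory;
# device_map="auto" shards it across available GPUs (with offload if necessary).
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```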
9,981
[ [ -0.016326904296875, -0.05126953125, 0.0266571044921875, 0.0161590576171875, -0.027252197265625, 0.01580810546875, -0.005588531494140625, -0.0545654296875, 0.00782012939453125, 0.0232391357421875, -0.05438232421875, -0.04132080078125, -0.051605224609375, 0.00...
timm/vit_small_patch16_224.augreg_in21k
2023-05-06T00:28:07.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-21k", "arxiv:2106.10270", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/vit_small_patch16_224.augreg_in21k
0
2,651
timm
2022-12-22T07:53:43
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-21k
---
# Model card for vit_small_patch16_224.augreg_in21k

A Vision Transformer (ViT) image classification model. Trained on ImageNet-21k (with additional augmentation and regularization) in JAX by paper authors, ported to PyTorch by Ross Wightman.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 30.1
  - GMACs: 4.3
  - Activations (M): 8.3
  - Image size: 224 x 224
- **Papers:**
  - How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers: https://arxiv.org/abs/2106.10270
  - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-21k
- **Original:** https://github.com/google-research/vision_transformer

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('vit_small_patch16_224.augreg_in21k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'vit_small_patch16_224.augreg_in21k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 197, 384) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).

## Citation
```bibtex
@article{steiner2021augreg,
  title={How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers},
  author={Steiner, Andreas and Kolesnikov, Alexander and Zhai, Xiaohua and Wightman, Ross and Uszkoreit, Jakob and Beyer, Lucas},
  journal={arXiv preprint arXiv:2106.10270},
  year={2021}
}
```
```bibtex
@article{dosovitskiy2020vit,
  title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
  author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
  journal={ICLR},
  year={2021}
}
```
```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
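As a small, hedged addendum to the Image Embeddings section above: one common use of the pooled embedding is image-to-image similarity. The sketch below reuses the card's sample image and, purely for illustration, compares it against a rotated copy of itself.

```python
from urllib.request import urlopen
from PIL import Image
import torch
import torch.nn.functional as F
import timm

url = 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
img_a = Image.open(urlopen(url))
img_b = img_a.rotate(90)  # stand-in second image for illustration

model = timm.create_model('vit_small_patch16_224.augreg_in21k', pretrained=True, num_classes=0)
model = model.eval()

data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

with torch.no_grad():
    emb_a = model(transforms(img_a).unsqueeze(0))  # (1, num_features)
    emb_b = model(transforms(img_b).unsqueeze(0))

print(F.cosine_similarity(emb_a, emb_b).item())
```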
3,803
[ [ -0.0396728515625, -0.029693603515625, -0.0011091232299804688, 0.0044708251953125, -0.0265655517578125, -0.0260009765625, -0.024017333984375, -0.036865234375, 0.01397705078125, 0.0223541259765625, -0.0389404296875, -0.03466796875, -0.04638671875, 0.0020122528...
facebook/muppet-roberta-base
2021-06-28T21:44:23.000Z
[ "transformers", "pytorch", "roberta", "fill-mask", "exbert", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2101.11038", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
facebook
null
null
facebook/muppet-roberta-base
7
2,650
transformers
2022-03-02T23:29:05
---
language: en
tags:
- exbert
license: mit
datasets:
- bookcorpus
- wikipedia
---

# Muppet: Massive Multi-task Representations with Pre-Finetuning

# RoBERTa base model

This is a Massive Multi-task Pre-finetuned version of Roberta base. It was introduced in [this paper](https://arxiv.org/abs/2101.11038). The model improves over roberta-base on a wide range of GLUE and QA tasks (details can be found in the paper). The gains on smaller datasets are significant.

Note: This checkpoint does not contain the classification/MRC heads used during pre-finetuning due to compatibility issues, and hence you might get slightly lower performance than that reported in the paper on some datasets.

## Model description

RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.

More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the RoBERTa model as inputs.

## Intended uses & limitations

You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=roberta) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2.

## Evaluation results

When fine-tuned on downstream tasks, this model achieves the following results:

Glue test results:

| Model | MNLI | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | SQuAD|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:----:|
| Roberta-base | 87.6 | 91.9 | 92.8 | 94.8 | 63.6 | 91.2 | 90.2 | 78.7 | 82.6|
| MUPPET Roberta-base | 88.1 | 91.9 | 93.3 | 96.7 | - | - | 91.7 | 87.8 | 86.6|

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2101-11038,
  author    = {Armen Aghajanyan and
               Anchit Gupta and
               Akshat Shrivastava and
               Xilun Chen and
               Luke Zettlemoyer and
               Sonal Gupta},
  title     = {Muppet: Massive Multi-task Representations with Pre-Finetuning},
  journal   = {CoRR},
  volume    = {abs/2101.11038},
  year      = {2021},
  url       = {https://arxiv.org/abs/2101.11038},
  archivePrefix = {arXiv},
  eprint    = {2101.11038},
  timestamp = {Sun, 31 Jan 2021 17:23:50 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2101-11038.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
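A hedged usage sketch (not from the original card): since this is a standard RoBERTa-base checkpoint, masked-token prediction should work with the stock `fill-mask` pipeline.

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="facebook/muppet-roberta-base")
# RoBERTa uses <mask> as its mask token
print(unmasker("The goal of life is <mask>."))
```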
3,627
[ [ -0.03131103515625, -0.07159423828125, 0.01959228515625, 0.0106353759765625, -0.00408172607421875, -0.00331878662109375, -0.036041259765625, -0.031402587890625, 0.0167694091796875, 0.03271484375, -0.048797607421875, -0.022369384765625, -0.054962158203125, -0....
keremberke/yolov8s-pothole-segmentation
2023-02-22T13:01:08.000Z
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "image-segmentation", "pytorch", "awesome-yolov8-models", "dataset:keremberke/pothole-segmentation", "model-index", "region:us" ]
image-segmentation
keremberke
null
null
keremberke/yolov8s-pothole-segmentation
2
2,649
ultralytics
2023-01-26T03:12:06
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - image-segmentation - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.21 inference: false datasets: - keremberke/pothole-segmentation model-index: - name: keremberke/yolov8s-pothole-segmentation results: - task: type: image-segmentation dataset: type: keremberke/pothole-segmentation name: pothole-segmentation split: validation metrics: - type: precision # since mAP@0.5 is not available on hf.co/metrics value: 0.92833 # min: 0.0 - max: 1.0 name: mAP@0.5(box) - type: precision # since mAP@0.5 is not available on hf.co/metrics value: 0.92833 # min: 0.0 - max: 1.0 name: mAP@0.5(mask) --- <div align="center"> <img width="640" alt="keremberke/yolov8s-pothole-segmentation" src="https://huggingface.co/keremberke/yolov8s-pothole-segmentation/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['pothole'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.23 ultralytics==8.0.21 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, render_result # load model model = YOLO('keremberke/yolov8s-pothole-segmentation') # set model parameters model.overrides['conf'] = 0.25 # NMS confidence threshold model.overrides['iou'] = 0.45 # NMS IoU threshold model.overrides['agnostic_nms'] = False # NMS class-agnostic model.overrides['max_det'] = 1000 # maximum number of detections per image # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].boxes) print(results[0].masks) render = render_result(model=model, image=image, result=results[0]) render.show() ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
1,986
[ [ -0.033721923828125, -0.03204345703125, 0.053619384765625, -0.01157379150390625, -0.038818359375, -0.013519287109375, 0.01544952392578125, -0.02862548828125, 0.0141754150390625, 0.0232696533203125, -0.04180908203125, -0.050811767578125, -0.038055419921875, -0...
TheBloke/CodeLlama-7B-Instruct-GPTQ
2023-09-27T12:46:05.000Z
[ "transformers", "safetensors", "llama", "text-generation", "llama-2", "custom_code", "code", "arxiv:2308.12950", "license:llama2", "has_space", "text-generation-inference", "region:us" ]
text-generation
TheBloke
null
null
TheBloke/CodeLlama-7B-Instruct-GPTQ
40
2,648
transformers
2023-08-24T20:27:24
--- language: - code license: llama2 tags: - llama-2 model_name: CodeLlama 7B Instruct base_model: codellama/CodeLlama-7b-instruct-hf inference: false model_creator: Meta model_type: llama pipeline_tag: text-generation prompt_template: '[INST] Write code to solve the following coding problem that obeys the constraints and passes the example test cases. Please wrap your code answer using ```: {prompt} [/INST] ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # CodeLlama 7B Instruct - GPTQ - Model creator: [Meta](https://huggingface.co/meta-llama) - Original model: [CodeLlama 7B Instruct](https://huggingface.co/codellama/CodeLlama-7b-instruct-hf) <!-- description start --> ## Description This repo contains GPTQ model files for [Meta's CodeLlama 7B Instruct](https://huggingface.co/codellama/CodeLlama-7b-instruct-hf). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GGUF) * [Meta's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/codellama/CodeLlama-7b-instruct-hf) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: CodeLlama ``` [INST] Write code to solve the following coding problem that obeys the constraints and passes the example test cases. Please wrap your code answer using ```: {prompt} [/INST] ``` <!-- prompt-template end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa. 
<details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GPTQ/tree/main) | 4 | 128 | No | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 3.90 GB | Yes | 4-bit, without Act Order and group size 128g. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 4.28 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 4.02 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. | | [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 3.90 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 7.01 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. 
| | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 7.16 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download from branches - In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/CodeLlama-7B-Instruct-GPTQ:main` - With Git, you can clone a branch with: ``` git clone --single-branch --branch main https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GPTQ ``` - In Python Transformers code, the branch is the `revision` parameter; see below. <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui). Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/CodeLlama-7B-Instruct-GPTQ`. - To download from a specific branch, enter for example `TheBloke/CodeLlama-7B-Instruct-GPTQ:main` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `CodeLlama-7B-Instruct-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-python start --> ## How to use this GPTQ model from Python code ### Install the necessary packages Requires: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install transformers>=4.32.0 optimum>=1.12.0 pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7 ``` If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ pip3 install . ``` ### For CodeLlama models only: you must use Transformers 4.33.0 or later. 
If 4.33.0 is not yet released when you read this, you will need to install Transformers from source:
```shell
pip3 uninstall -y transformers
pip3 install git+https://github.com/huggingface/transformers.git
```

### You can then use the following code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name_or_path = "TheBloke/CodeLlama-7B-Instruct-GPTQ"
# To use a different branch, change revision
# For example: revision="main"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=True, revision="main")

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

prompt = "Write a Python function that checks whether a string is a palindrome."
prompt_template=f'''[INST] Write code to solve the following coding problem that obeys the constraints and passes the example test cases. Please wrap your code answer using ```:
{prompt}
[/INST]
'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->

<!-- README_GPTQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).

[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.

[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Meta's CodeLlama 7B Instruct # **Code Llama** Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the 7B instruct-tuned version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom. 
| | Base Model | Python | Instruct | | --- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- | | 7B | [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) | | 13B | [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf) | [codellama/CodeLlama-13b-Python-hf](https://huggingface.co/codellama/CodeLlama-13b-Python-hf) | [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) | | 34B | [codellama/CodeLlama-34b-hf](https://huggingface.co/codellama/CodeLlama-34b-hf) | [codellama/CodeLlama-34b-Python-hf](https://huggingface.co/codellama/CodeLlama-34b-Python-hf) | [codellama/CodeLlama-34b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf) | ## Model Use To use this model, please make sure to install transformers from `main` until the next version is released: ```bash pip install git+https://github.com/huggingface/transformers.git@main accelerate ``` Model capabilities: - [x] Code completion. - [x] Infilling. - [x] Instructions / chat. - [ ] Python specialist. ## Model Details *Note: Use of this model is governed by the Meta license. Meta developed and publicly released the Code Llama family of large language models (LLMs). **Model Developers** Meta **Variations** Code Llama comes in three model sizes, and three variants: * Code Llama: base models designed for general code synthesis and understanding * Code Llama - Python: designed specifically for Python * Code Llama - Instruct: for instruction following and safer deployment All variants are available in sizes of 7B, 13B and 34B parameters. **This repository contains the Instruct version of the 7B parameters model.** **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture. **Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) **Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950). ## Intended Use **Intended Use Cases** Code Llama and its variants is intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications. 
**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.

## Hardware and Software
**Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta’s Research Super Cluster.

**Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program.

## Training Data

All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details).

## Evaluation Results

See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.

## Ethical Considerations and Limitations

Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-user-guide](https://ai.meta.com/llama/responsible-user-guide).
21,148
[ [ -0.0347900390625, -0.0604248046875, 0.01168060302734375, 0.0092010498046875, -0.02508544921875, -0.009552001953125, 0.002094268798828125, -0.035552978515625, 0.01224517822265625, 0.0279388427734375, -0.043609619140625, -0.04345703125, -0.0244293212890625, -0...
keremberke/yolov8s-chest-xray-classification
2023-02-22T13:02:05.000Z
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "image-classification", "pytorch", "awesome-yolov8-models", "dataset:keremberke/chest-xray-classification", "model-index", "region:us" ]
image-classification
keremberke
null
null
keremberke/yolov8s-chest-xray-classification
0
2,647
ultralytics
2023-01-27T22:59:04
---
tags:
- ultralyticsplus
- yolov8
- ultralytics
- yolo
- vision
- image-classification
- pytorch
- awesome-yolov8-models
library_name: ultralytics
library_version: 8.0.21
inference: false
datasets:
- keremberke/chest-xray-classification
model-index:
- name: keremberke/yolov8s-chest-xray-classification
  results:
  - task:
      type: image-classification
    dataset:
      type: keremberke/chest-xray-classification
      name: chest-xray-classification
      split: validation
    metrics:
      - type: accuracy
        value: 0.94158  # min: 0.0 - max: 1.0
        name: top1 accuracy
      - type: accuracy
        value: 1  # min: 0.0 - max: 1.0
        name: top5 accuracy
---

<div align="center">
  <img width="640" alt="keremberke/yolov8s-chest-xray-classification" src="https://huggingface.co/keremberke/yolov8s-chest-xray-classification/resolve/main/thumbnail.jpg">
</div>

### Supported Labels

```
['NORMAL', 'PNEUMONIA']
```

### How to use

- Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus):

```bash
pip install ultralyticsplus==0.0.23 ultralytics==8.0.21
```

- Load model and perform prediction:

```python
from ultralyticsplus import YOLO, postprocess_classify_output

# load model
model = YOLO('keremberke/yolov8s-chest-xray-classification')

# set model parameters
model.overrides['conf'] = 0.25  # model confidence threshold

# set image (replace with a chest X-ray image for meaningful predictions)
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'

# perform inference
results = model.predict(image)

# observe results
print(results[0].probs)  # tensor with one probability per supported label
processed_result = postprocess_classify_output(model, result=results[0])
print(processed_result)  # e.g. {"NORMAL": 0.1, "PNEUMONIA": 0.9}
```

**More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
1,796
[ [ -0.02294921875, -0.012542724609375, 0.042694091796875, -0.022064208984375, -0.03558349609375, -0.02313232421875, 0.012298583984375, -0.03045654296875, 0.01483917236328125, 0.02740478515625, -0.0299530029296875, -0.04833984375, -0.046630859375, -0.00640869140...
kaiyuy/leandojo-lean3-tacgen-byt5-small
2023-06-23T18:50:20.000Z
[ "transformers", "pytorch", "t5", "text2text-generation", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text2text-generation
kaiyuy
null
null
kaiyuy/leandojo-lean3-tacgen-byt5-small
1
2,647
transformers
2023-06-17T04:39:19
--- license: mit inference: parameters: max_length: 1024 widget: - text: "a b : ℕ\n⊢ a + b = b + a" example_title: "Example" --- [LeanDojo: Theorem Proving with Retrieval-Augmented Language Models](https://arxiv.org/abs/xxxx.xxxxx) Under review, NeurIPS (Datasets and Benchmarks Track), 2023 [Kaiyu Yang](https://yangky11.github.io/), [Aidan Swope](https://aidanswope.com/about), [Alex Gu](https://minimario.github.io/), [Rahul Chalamala](https://www.linkedin.com/in/rchalamala), [Peiyang Song](https://www.linkedin.com/in/peiyang-song-3279b3251/), [Shixing Yu](https://billysx.github.io/), [Saad Godil](https://www.linkedin.com/in/saad-godil-9728353/), [Ryan Prenger](https://www.linkedin.com/in/ryan-prenger-18797ba1/), [Anima Anandkumar](http://tensorlab.cms.caltech.edu/users/anima/) ```bibtex @article{yang2023leandojo, title={{LeanDojo}: Theorem Proving with Retrieval-Augmented Language Models}, author={Yang, Kaiyu and Swope, Aidan and Gu, Alex and Chalamala, Rahul and Song, Peiyang and Yu, Shixing and Godil, Saad and Prenger, Ryan and Anandkumar, Anima}, journal={arXiv preprint arXiv:xxxx.xxxxx}, year={2023} } ``` Please visit [LeanDojo Website](https://leandojo.org/) for details.
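The card itself does not include a usage snippet. The following is a minimal sketch (not from the authors) of querying the tactic generator through the standard Transformers seq2seq interface, using the goal state from the widget example above; the beam-search settings are illustrative assumptions, not recommendations from the paper.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "kaiyuy/leandojo-lean3-tacgen-byt5-small"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Lean goal state, formatted as in the widget example above
state = "a b : ℕ\n⊢ a + b = b + a"

inputs = tokenizer(state, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=1024,          # matches the inference setting in the front matter
    num_beams=4,              # illustrative assumption
    num_return_sequences=4,   # return several candidate tactics
)
for tactic in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(tactic)
```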
1,223
[ [ -0.0186767578125, -0.017913818359375, 0.04193115234375, 0.018463134765625, 0.0052947998046875, -0.00983428955078125, -0.0233001708984375, -0.037933349609375, 0.01441192626953125, 0.026580810546875, -0.00701141357421875, -0.047332763671875, -0.048065185546875, ...
artificialguybr/analogredmond
2023-10-07T06:25:45.000Z
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "license:creativeml-openrail-m", "has_space", "region:us" ]
text-to-image
artificialguybr
null
null
artificialguybr/analogredmond
4
2,645
diffusers
2023-08-17T01:21:20
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: AnalogRedmAF
widget:
- text: AnalogRedmAF
---

# Analog.Redmond

![row01](00147-2176616884.png)

Analog.Redmond is here!

V2 HERE: https://huggingface.co/artificialguybr/analogredmond-v2

TEST ALL MY LORAS HERE: https://huggingface.co/spaces/artificialguybr/artificialguybr-demo-lora?logs=build

Introducing AnalogRedmond, the ultimate LoRA for creating stunning analog photography!

I'm grateful for the GPU time from Redmond.AI that allowed me to make this LoRA! If you need GPU, then you need the great services from Redmond.AI.

It is based on SD XL 1.0 and fine-tuned on a large dataset of analog photographs.

The LoRA has a high capacity to generate analog photographs. You can use detailed, minimalist, colorful, or black and white as tags to control the results.

The trigger tag for the model: AnalogRedmAF

The LoRA is not perfect and sometimes needs more than one generation to create good images. It is inspired by the good DreamBooth model Nitro made for SD 1.5!

I really hope you like the LoRA and use it.

If you like the model and think it's worth it, you can make a donation to my Patreon or Ko-fi.

Follow me on Twitter to be the first to know about new models: https://twitter.com/artificialguybr/
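The card ships no code, so here is a minimal, unofficial sketch of loading the LoRA on top of SD XL 1.0 with 🧨 diffusers. The prompt is an invented example, and the call to `load_lora_weights` assumes the repository contains a single standard LoRA weight file; pass `weight_name=...` explicitly if automatic resolution fails.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Base model named in the card's front matter
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA weights from this repository
pipe.load_lora_weights("artificialguybr/analogredmond")

# "AnalogRedmAF" is the trigger tag documented above
prompt = "AnalogRedmAF, portrait of a woman on a rainy street, colorful, detailed"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("analog_photo.png")
```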
1,360
[ [ -0.05841064453125, -0.0689697265625, 0.0245513916015625, 0.0093536376953125, -0.038848876953125, 0.0003867149353027344, 0.0279388427734375, -0.0638427734375, 0.086669921875, 0.018310546875, -0.055206298828125, -0.0259552001953125, -0.014801025390625, -0.0129...
Yntec/RainbowDreams
2023-07-12T12:33:08.000Z
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "yntec", "dreamlike", "rainbowpatch", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
Yntec
null
null
Yntec/RainbowDreams
1
2,644
diffusers
2023-07-11T16:20:50
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - stable-diffusion - stable-diffusion-diffusers - diffusers - text-to-image - yntec - dreamlike - rainbowpatch - diffusers --- # RainbowDreams A mix of Rainbowpatch 1.0 by Patchmonk at https://civitai.com/models/5528/rainbowpatch and my favorite models. Use "Rainbowpatch" at the beginning of the prompt to enhance the effect.
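No usage example is provided in the card; a minimal diffusers sketch (mine, not the author's) could look like the following, with the trigger word placed at the beginning of the prompt as suggested above. Step count and guidance scale are arbitrary choices.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/RainbowDreams", torch_dtype=torch.float16
).to("cuda")

# "Rainbowpatch" at the start of the prompt enhances the effect, per the card
prompt = "Rainbowpatch, dreamy portrait of a girl under a rainbow sky, highly detailed"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("rainbowdreams.png")
```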
422
[ [ -0.06439208984375, -0.0244293212890625, -0.002758026123046875, 0.09600830078125, -0.0266571044921875, 0.00067138671875, 0.0445556640625, -0.047698974609375, 0.06488037109375, 0.0516357421875, -0.0926513671875, -0.00579071044921875, -0.021759033203125, -0.016...
keremberke/yolov8n-csgo-player-detection
2023-02-22T13:02:39.000Z
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "object-detection", "pytorch", "awesome-yolov8-models", "dataset:keremberke/csgo-object-detection", "model-index", "region:us" ]
object-detection
keremberke
null
null
keremberke/yolov8n-csgo-player-detection
7
2,643
ultralytics
2023-01-29T01:17:24
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - object-detection - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.21 inference: false datasets: - keremberke/csgo-object-detection model-index: - name: keremberke/yolov8n-csgo-player-detection results: - task: type: object-detection dataset: type: keremberke/csgo-object-detection name: csgo-object-detection split: validation metrics: - type: precision # since mAP@0.5 is not available on hf.co/metrics value: 0.84441 # min: 0.0 - max: 1.0 name: mAP@0.5(box) --- <div align="center"> <img width="640" alt="keremberke/yolov8n-csgo-player-detection" src="https://huggingface.co/keremberke/yolov8n-csgo-player-detection/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['ct', 'cthead', 't', 'thead'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.23 ultralytics==8.0.21 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, render_result # load model model = YOLO('keremberke/yolov8n-csgo-player-detection') # set model parameters model.overrides['conf'] = 0.25 # NMS confidence threshold model.overrides['iou'] = 0.45 # NMS IoU threshold model.overrides['agnostic_nms'] = False # NMS class-agnostic model.overrides['max_det'] = 1000 # maximum number of detections per image # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].boxes) render = render_result(model=model, image=image, result=results[0]) render.show() ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
1,835
[ [ -0.03515625, -0.028717041015625, 0.0367431640625, -0.018157958984375, -0.024871826171875, -0.002857208251953125, -0.003154754638671875, -0.03662109375, 0.020782470703125, 0.0098876953125, -0.047882080078125, -0.044097900390625, -0.031951904296875, -0.0145568...
facebook/timesformer-base-finetuned-ssv2
2022-12-12T12:53:06.000Z
[ "transformers", "pytorch", "timesformer", "video-classification", "vision", "arxiv:2102.05095", "license:cc-by-nc-4.0", "endpoints_compatible", "has_space", "region:us" ]
video-classification
facebook
null
null
facebook/timesformer-base-finetuned-ssv2
2
2,642
transformers
2022-10-07T20:36:48
---
license: "cc-by-nc-4.0"
tags:
- vision
- video-classification
---

# TimeSformer (base-sized model, fine-tuned on Something Something v2)

TimeSformer model pre-trained on [Something Something v2](https://developer.qualcomm.com/software/ai-datasets/something-something). It was introduced in the paper [TimeSformer: Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Bertasius et al. and first released in [this repository](https://github.com/facebookresearch/TimeSformer).

Disclaimer: The team releasing TimeSformer did not write a model card for this model so this model card has been written by [fcakyon](https://github.com/fcakyon).

## Intended uses & limitations

You can use the raw model for video classification into one of the 174 possible Something Something v2 labels.

### How to use

Here is how to use this model to classify a video:

```python
from transformers import AutoImageProcessor, TimesformerForVideoClassification
import numpy as np
import torch

video = list(np.random.randn(8, 3, 224, 224))

processor = AutoImageProcessor.from_pretrained("facebook/timesformer-base-finetuned-ssv2")
model = TimesformerForVideoClassification.from_pretrained("facebook/timesformer-base-finetuned-ssv2")

inputs = processor(images=video, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits

predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```

For more code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/timesformer.html#).

### BibTeX entry and citation info

```bibtex
@inproceedings{bertasius2021space,
  title={Is Space-Time Attention All You Need for Video Understanding?},
  author={Bertasius, Gedas and Wang, Heng and Torresani, Lorenzo},
  booktitle={International Conference on Machine Learning},
  pages={813--824},
  year={2021},
  organization={PMLR}
}
```
1,989
[ [ -0.0181121826171875, -0.05224609375, 0.015594482421875, 0.01235198974609375, -0.01302337646484375, -0.000010848045349121094, 0.001941680908203125, -0.0121612548828125, -0.005584716796875, -0.0025348663330078125, -0.056121826171875, -0.0194244384765625, -0.062133...
openai/shap-e
2023-07-20T16:02:25.000Z
[ "diffusers", "text-to-image", "text-to-3d", "shap-e", "arxiv:2305.02463", "license:mit", "has_space", "diffusers:ShapEPipeline", "region:us" ]
text-to-image
openai
null
null
openai/shap-e
27
2,642
diffusers
2023-07-04T13:25:35
---
license: mit
tags:
- text-to-image
- text-to-3d
- shap-e
- diffusers
---

# Shap-E

Shap-E introduces a diffusion process that can generate a 3D image from a text prompt. It was introduced in [Shap-E: Generating Conditional 3D Implicit Functions](https://arxiv.org/abs/2305.02463) by Heewoo Jun and Alex Nichol from OpenAI.

Original repository of Shap-E can be found here: https://github.com/openai/shap-e.

_The authors of Shap-E didn't author this model card. They provide a separate model card [here](https://github.com/openai/shap-e/blob/main/model-card.md)._

## Introduction

The abstract of the Shap-E paper:

*We present Shap-E, a conditional generative model for 3D assets. Unlike recent work on 3D generative models which produce a single output representation, Shap-E directly generates the parameters of implicit functions that can be rendered as both textured meshes and neural radiance fields. We train Shap-E in two stages: first, we train an encoder that deterministically maps 3D assets into the parameters of an implicit function; second, we train a conditional diffusion model on outputs of the encoder. When trained on a large dataset of paired 3D and text data, our resulting models are capable of generating complex and diverse 3D assets in a matter of seconds. When compared to Point-E, an explicit generative model over point clouds, Shap-E converges faster and reaches comparable or better sample quality despite modeling a higher-dimensional, multi-representation output space. We release model weights, inference code, and samples at [this https URL](https://github.com/openai/shap-e).*

## Released checkpoints

The authors released the following checkpoints:

* [openai/shap-e](https://hf.co/openai/shap-e): produces a 3D image from a text input prompt
* [openai/shap-e-img2img](https://hf.co/openai/shap-e-img2img): samples a 3D image from a synthetic 2D image

## Usage examples in 🧨 diffusers

First make sure you have installed all the dependencies (Shap-E support is included in recent diffusers releases):

```bash
pip install -q transformers accelerate diffusers
```

Once the dependencies are installed, use the code below:

```python
import torch
from diffusers import ShapEPipeline
from diffusers.utils import export_to_gif

ckpt_id = "openai/shap-e"
pipe = ShapEPipeline.from_pretrained(ckpt_id).to("cuda")

guidance_scale = 15.0
prompt = "a shark"
images = pipe(
    prompt,
    guidance_scale=guidance_scale,
    num_inference_steps=64,
    frame_size=256,
).images

gif_path = export_to_gif(images[0], "shark_3d.gif")
```

## Results

<table>
    <tbody>
        <tr>
            <td align="center">
                <img src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/shap-e/bird_3d.gif" alt="a bird">
            </td>
            <td align="center">
                <img src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/shap-e/shark_3d.gif" alt="a shark">
            </td>
            <td align="center">
                <img src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/shap-e/veg_3d.gif" alt="A bowl of vegetables">
            </td>
        </tr>
        <tr>
            <td align="center">A bird</td>
            <td align="center">A shark</td>
            <td align="center">A bowl of vegetables</td>
        </tr>
    </tbody>
</table>

## Training details

Refer to the [original paper](https://arxiv.org/abs/2305.02463).

## Known limitations and potential biases

Refer to the [original model card](https://github.com/openai/shap-e/blob/main/model-card.md).
## Citation ```bibtex @misc{jun2023shape, title={Shap-E: Generating Conditional 3D Implicit Functions}, author={Heewoo Jun and Alex Nichol}, year={2023}, eprint={2305.02463}, archivePrefix={arXiv}, primaryClass={cs.CV} } ```
3,884
[ [ -0.039794921875, -0.06658935546875, 0.044921875, 0.01334381103515625, -0.01268768310546875, -0.035400390625, 0.01316070556640625, -0.047149658203125, 0.018768310546875, 0.0168914794921875, -0.036224365234375, -0.0304718017578125, -0.0447998046875, 0.00175762...
keremberke/yolov8n-hard-hat-detection
2023-02-22T13:04:34.000Z
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "object-detection", "pytorch", "awesome-yolov8-models", "dataset:keremberke/hard-hat-detection", "model-index", "region:us" ]
object-detection
keremberke
null
null
keremberke/yolov8n-hard-hat-detection
1
2,640
ultralytics
2023-01-29T22:41:13
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - object-detection - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.23 inference: false datasets: - keremberke/hard-hat-detection model-index: - name: keremberke/yolov8n-hard-hat-detection results: - task: type: object-detection dataset: type: keremberke/hard-hat-detection name: hard-hat-detection split: validation metrics: - type: precision # since mAP@0.5 is not available on hf.co/metrics value: 0.83633 # min: 0.0 - max: 1.0 name: mAP@0.5(box) --- <div align="center"> <img width="640" alt="keremberke/yolov8n-hard-hat-detection" src="https://huggingface.co/keremberke/yolov8n-hard-hat-detection/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['Hardhat', 'NO-Hardhat'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.24 ultralytics==8.0.23 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, render_result # load model model = YOLO('keremberke/yolov8n-hard-hat-detection') # set model parameters model.overrides['conf'] = 0.25 # NMS confidence threshold model.overrides['iou'] = 0.45 # NMS IoU threshold model.overrides['agnostic_nms'] = False # NMS class-agnostic model.overrides['max_det'] = 1000 # maximum number of detections per image # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].boxes) render = render_result(model=model, image=image, result=results[0]) render.show() ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
1,809
[ [ -0.034942626953125, -0.027252197265625, 0.040374755859375, -0.0186920166015625, -0.0292205810546875, -0.0106201171875, -0.00586700439453125, -0.034912109375, 0.023040771484375, 0.0174560546875, -0.055145263671875, -0.05487060546875, -0.0295562744140625, -0.0...
nvidia/mit-b4
2022-08-06T10:28:21.000Z
[ "transformers", "pytorch", "tf", "segformer", "image-classification", "vision", "dataset:imagenet_1k", "arxiv:2105.15203", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
nvidia
null
null
nvidia/mit-b4
1
2,639
transformers
2022-03-02T23:29:05
--- license: other tags: - vision datasets: - imagenet_1k widget: - src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg example_title: House - src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg example_title: Castle --- # SegFormer (b4-sized) encoder pre-trained-only SegFormer encoder fine-tuned on Imagenet-1k. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer). Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset. This repository only contains the pre-trained hierarchical Transformer, hence it can be used for fine-tuning purposes. ## Intended uses & limitations You can use the model for fine-tuning of semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import SegformerFeatureExtractor, SegformerForImageClassification from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/mit-b4") model = SegformerForImageClassification.from_pretrained("nvidia/mit-b4") inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#). ### License The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2105-15203, author = {Enze Xie and Wenhai Wang and Zhiding Yu and Anima Anandkumar and Jose M. Alvarez and Ping Luo}, title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers}, journal = {CoRR}, volume = {abs/2105.15203}, year = {2021}, url = {https://arxiv.org/abs/2105.15203}, eprinttype = {arXiv}, eprint = {2105.15203}, timestamp = {Wed, 02 Jun 2021 11:46:42 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
3,354
[ [ -0.06866455078125, -0.051025390625, 0.006443023681640625, 0.01100921630859375, -0.0247344970703125, -0.0259246826171875, 0.003246307373046875, -0.04815673828125, 0.01885986328125, 0.043609619140625, -0.060546875, -0.03985595703125, -0.057220458984375, 0.0091...
sadakmed/distiluse-base-multilingual-cased-v2
2021-09-22T09:37:21.000Z
[ "sentence-transformers", "pytorch", "distilbert", "DistilBert", "Universal Sentence Encoder", "sentence-embeddings", "sentence-similarity", "multilingual", "license:apache-2.0", "endpoints_compatible", "region:us" ]
sentence-similarity
sadakmed
null
null
sadakmed/distiluse-base-multilingual-cased-v2
1
2,636
sentence-transformers
2022-03-02T23:29:05
---
language: multilingual
tags:
- DistilBert
- Universal Sentence Encoder
- sentence-embeddings
- sentence-transformers
- sentence-similarity
license: apache-2.0
---

While the v1 model supports 15 languages, this version supports 50+ languages. However, performance on the 15 languages covered by v1 is reported to be slightly lower.

Note that Sentence-Transformers adds two extra layers (Pooling, Linear) that cannot be saved in any predefined Hugging Face Transformers model class.
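As a quick illustration, the model can be loaded through the Sentence-Transformers library; the snippet below is a sketch under the assumption that this repository follows the standard Sentence-Transformers layout (including the Pooling and Linear modules mentioned above).

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sadakmed/distiluse-base-multilingual-cased-v2")

sentences = ["This is an example sentence.", "Ceci est une phrase d'exemple."]
embeddings = model.encode(sentences)  # one vector per sentence

# Multilingual embeddings share one space, so cross-lingual similarity works
print(util.cos_sim(embeddings[0], embeddings[1]))
```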
441
[ [ -0.030426025390625, -0.0567626953125, 0.024444580078125, 0.03765869140625, -0.0275115966796875, 0.00035881996154785156, -0.00328826904296875, -0.067138671875, -0.01129150390625, 0.053924560546875, -0.052734375, -0.0257568359375, -0.021392822265625, 0.0089874...
keremberke/yolov8n-forklift-detection
2023-02-22T13:00:05.000Z
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "object-detection", "pytorch", "awesome-yolov8-models", "dataset:keremberke/forklift-object-detection", "model-index", "region:us" ]
object-detection
keremberke
null
null
keremberke/yolov8n-forklift-detection
2
2,636
ultralytics
2023-01-15T15:49:05
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - object-detection - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.21 inference: false datasets: - keremberke/forklift-object-detection model-index: - name: keremberke/yolov8n-forklift-detection results: - task: type: object-detection dataset: type: keremberke/forklift-object-detection name: forklift-object-detection split: validation metrics: - type: precision # since mAP@0.5 is not available on hf.co/metrics value: 0.83794 # min: 0.0 - max: 1.0 name: mAP@0.5(box) --- <div align="center"> <img width="640" alt="keremberke/yolov8n-forklift-detection" src="https://huggingface.co/keremberke/yolov8n-forklift-detection/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['forklift', 'person'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.23 ultralytics==8.0.21 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, render_result # load model model = YOLO('keremberke/yolov8n-forklift-detection') # set model parameters model.overrides['conf'] = 0.25 # NMS confidence threshold model.overrides['iou'] = 0.45 # NMS IoU threshold model.overrides['agnostic_nms'] = False # NMS class-agnostic model.overrides['max_det'] = 1000 # maximum number of detections per image # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].boxes) render = render_result(model=model, image=image, result=results[0]) render.show() ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
1,827
[ [ -0.034332275390625, -0.01611328125, 0.03564453125, -0.0284576416015625, -0.030426025390625, -0.0237274169921875, 0.02044677734375, -0.0379638671875, 0.0244598388671875, 0.013702392578125, -0.04937744140625, -0.043060302734375, -0.030242919921875, -0.00152206...
Yntec/Dreamlike
2023-09-14T05:07:37.000Z
[ "diffusers", "photorealistic", "photoreal", "art", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "en", "license:other", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
Yntec
null
null
Yntec/Dreamlike
1
2,635
diffusers
2023-08-23T23:52:12
--- license: other library_name: diffusers pipeline_tag: text-to-image tags: - photorealistic - photoreal - art - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers language: - en inference: false --- # Dreamlike What happens when in the process of making your mix you have to create an intermediate "temporary" model, and it ends up looking better than your mix? You get Dreamlike. Samples and prompt: ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/FzseugUAQVglDXRqY14nC.png) ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/wWu_Wi3QHwLJanHoRt4su.png) ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/gpTVeweHpWT5QQHHfWLdO.png) Close up of a pretty CUTE girl wearing a colourful octopus as a hat, fantasy, intricate, elegant, highly detailed, digital painting, artstation, concept art, smooth, 8 k, sharp focus, illustration, drawing by ROSSDRAWS and Clay Mann and artgerm and greg rutkowski and alphonse mucha Full story: https://huggingface.co/Yntec/dreamlike-photoreal-remix/ Full recipe: https://huggingface.co/Yntec/dreamlike-photoreal-remix/discussions/3 Original page: https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0
1,295
[ [ -0.04168701171875, -0.05657958984375, 0.0460205078125, 0.020782470703125, -0.0243377685546875, 0.0286712646484375, 0.009246826171875, -0.05657958984375, 0.0826416015625, 0.06134033203125, -0.0770263671875, -0.01090240478515625, -0.032470703125, -0.0067367553...
SaiCharan7829/my-pet-dog
2023-10-07T07:17:24.000Z
[ "diffusers", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
SaiCharan7829
null
null
SaiCharan7829/my-pet-dog
0
2,634
diffusers
2023-10-07T07:13:20
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-Dog Dreambooth model trained by SaiCharan7829 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: GoX19932gAS Sample pictures of this concept: ![0](https://huggingface.co/SaiCharan7829/my-pet-dog/resolve/main/sample_images/1695971468989.jpg)
406
[ [ -0.059478759765625, -0.01605224609375, 0.0297393798828125, 0.01004791259765625, -0.0099639892578125, 0.03131103515625, 0.0269775390625, -0.0310516357421875, 0.045196533203125, 0.02838134765625, -0.03985595703125, -0.0173492431640625, -0.01393890380859375, 0....
keremberke/yolov8n-nlf-head-detection
2023-02-22T13:04:55.000Z
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "object-detection", "pytorch", "awesome-yolov8-models", "dataset:keremberke/nfl-object-detection", "model-index", "region:us" ]
object-detection
keremberke
null
null
keremberke/yolov8n-nlf-head-detection
1
2,632
ultralytics
2023-01-30T06:27:18
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - object-detection - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.23 inference: false datasets: - keremberke/nfl-object-detection model-index: - name: keremberke/yolov8n-nlf-head-detection results: - task: type: object-detection dataset: type: keremberke/nfl-object-detection name: nfl-object-detection split: validation metrics: - type: precision # since mAP@0.5 is not available on hf.co/metrics value: 0.20933 # min: 0.0 - max: 1.0 name: mAP@0.5(box) --- <div align="center"> <img width="640" alt="keremberke/yolov8n-nlf-head-detection" src="https://huggingface.co/keremberke/yolov8n-nlf-head-detection/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['Helmet', 'Helmet-Blurred', 'Helmet-Difficult', 'Helmet-Partial', 'Helmet-Sideline'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.24 ultralytics==8.0.23 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, render_result # load model model = YOLO('keremberke/yolov8n-nlf-head-detection') # set model parameters model.overrides['conf'] = 0.25 # NMS confidence threshold model.overrides['iou'] = 0.45 # NMS IoU threshold model.overrides['agnostic_nms'] = False # NMS class-agnostic model.overrides['max_det'] = 1000 # maximum number of detections per image # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].boxes) render = render_result(model=model, image=image, result=results[0]) render.show() ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
1,875
[ [ -0.04119873046875, -0.029632568359375, 0.03521728515625, -0.0112152099609375, -0.029296875, -0.01171112060546875, 0.005947113037109375, -0.042022705078125, 0.02935791015625, 0.01849365234375, -0.059417724609375, -0.052520751953125, -0.0338134765625, -0.00006...
sebastian-hofstaetter/distilbert-dot-tas_b-b256-msmarco
2021-04-15T08:54:28.000Z
[ "transformers", "pytorch", "distilbert", "feature-extraction", "dpr", "dense-passage-retrieval", "knowledge-distillation", "en", "dataset:ms_marco", "arxiv:2104.06967", "endpoints_compatible", "has_space", "region:us" ]
feature-extraction
sebastian-hofstaetter
null
null
sebastian-hofstaetter/distilbert-dot-tas_b-b256-msmarco
20
2,631
transformers
2022-03-02T23:29:05
--- language: "en" tags: - dpr - dense-passage-retrieval - knowledge-distillation datasets: - ms_marco --- # DistilBert for Dense Passage Retrieval trained with Balanced Topic Aware Sampling (TAS-B) We provide a retrieval trained DistilBert-based model (we call the *dual-encoder then dot-product scoring* architecture BERT_Dot) trained with Balanced Topic Aware Sampling on MSMARCO-Passage. This instance was trained with a batch size of 256 and can be used to **re-rank a candidate set** or **directly for a vector index based dense retrieval**. The architecture is a 6-layer DistilBERT, without architecture additions or modifications (we only change the weights during training) - to receive a query/passage representation we pool the CLS vector. We use the same BERT layers for both query and passage encoding (yields better results, and lowers memory requirements). If you want to know more about our efficient (can be done on a single consumer GPU in 48 hours) batch composition procedure and dual supervision for dense retrieval training, check out our paper: https://arxiv.org/abs/2104.06967 🎉 For more information and a minimal usage example please visit: https://github.com/sebastian-hofstaetter/tas-balanced-dense-retrieval ## Effectiveness on MSMARCO Passage & TREC-DL'19 We trained our model on the MSMARCO standard ("small"-400K query) training triples re-sampled with our TAS-B method. As teacher models we used the BERT_CAT pairwise scores as well as the ColBERT model for in-batch-negative signals published here: https://github.com/sebastian-hofstaetter/neural-ranking-kd ### MSMARCO-DEV (7K) | | MRR@10 | NDCG@10 | Recall@1K | |----------------------------------|--------|---------|-----------------------------| | BM25 | .194 | .241 | .857 | | **TAS-B BERT_Dot** (Retrieval) | .347 | .410 | .978 | ### TREC-DL'19 For MRR and Recall we use the recommended binarization point of the graded relevance of 2. This might skew the results when compared to other binarization point numbers. | | MRR@10 | NDCG@10 | Recall@1K | |----------------------------------|--------|---------|-----------------------------| | BM25 | .689 | .501 | .739 | | **TAS-B BERT_Dot** (Retrieval) | .883 | .717 | .843 | ### TREC-DL'20 For MRR and Recall we use the recommended binarization point of the graded relevance of 2. This might skew the results when compared to other binarization point numbers. | | MRR@10 | NDCG@10 | Recall@1K | |----------------------------------|--------|---------|-----------------------------| | BM25 | .649 | .475 | .806 | | **TAS-B BERT_Dot** (Retrieval) | .843 | .686 | .875 | For more baselines, info and analysis, please see the paper: https://arxiv.org/abs/2104.06967 ## Limitations & Bias - The model inherits social biases from both DistilBERT and MSMARCO. - The model is only trained on relatively short passages of MSMARCO (avg. 60 words length), so it might struggle with longer text. ## Citation If you use our model checkpoint please cite our work as: ``` @inproceedings{Hofstaetter2021_tasb_dense_retrieval, author = {Sebastian Hofst{\"a}tter and Sheng-Chieh Lin and Jheng-Hong Yang and Jimmy Lin and Allan Hanbury}, title = {{Efficiently Teaching an Effective Dense Retriever with Balanced Topic Aware Sampling}}, booktitle = {Proc. of SIGIR}, year = {2021}, } ```
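For a self-contained illustration (a sketch, not taken from the linked repository), the dual-encoder scoring described above can be reproduced with plain Transformers: pool the CLS vector from the shared encoder and score query-passage pairs with a dot product. The example texts are invented.

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "sebastian-hofstaetter/distilbert-dot-tas_b-b256-msmarco"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
model.eval()

def encode(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        output = model(**batch)
    return output.last_hidden_state[:, 0, :]  # pool the CLS vector, as described above

query_vec = encode(["what is dense passage retrieval"])
passage_vecs = encode([
    "Dense passage retrieval encodes queries and passages as vectors and ranks by similarity.",
    "BM25 is a classical sparse lexical ranking function.",
])

scores = query_vec @ passage_vecs.T  # dot-product scoring
print(scores)
```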
3,746
[ [ -0.039825439453125, -0.0703125, 0.029510498046875, 0.02008056640625, -0.026153564453125, -0.00910186767578125, -0.01251220703125, -0.0195465087890625, 0.0210723876953125, 0.021484375, -0.006591796875, -0.044921875, -0.0509033203125, -0.003936767578125, -...
keremberke/yolov8n-protective-equipment-detection
2023-02-22T13:03:41.000Z
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "object-detection", "pytorch", "awesome-yolov8-models", "dataset:keremberke/protective-equipment-detection", "model-index", "region:us" ]
object-detection
keremberke
null
null
keremberke/yolov8n-protective-equipment-detection
0
2,629
ultralytics
2023-01-29T09:47:40
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - object-detection - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.21 inference: false datasets: - keremberke/protective-equipment-detection model-index: - name: keremberke/yolov8n-protective-equipment-detection results: - task: type: object-detection dataset: type: keremberke/protective-equipment-detection name: protective-equipment-detection split: validation metrics: - type: precision # since mAP@0.5 is not available on hf.co/metrics value: 0.24713 # min: 0.0 - max: 1.0 name: mAP@0.5(box) --- <div align="center"> <img width="640" alt="keremberke/yolov8n-protective-equipment-detection" src="https://huggingface.co/keremberke/yolov8n-protective-equipment-detection/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['glove', 'goggles', 'helmet', 'mask', 'no_glove', 'no_goggles', 'no_helmet', 'no_mask', 'no_shoes', 'shoes'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.23 ultralytics==8.0.21 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, render_result # load model model = YOLO('keremberke/yolov8n-protective-equipment-detection') # set model parameters model.overrides['conf'] = 0.25 # NMS confidence threshold model.overrides['iou'] = 0.45 # NMS IoU threshold model.overrides['agnostic_nms'] = False # NMS class-agnostic model.overrides['max_det'] = 1000 # maximum number of detections per image # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].boxes) render = render_result(model=model, image=image, result=results[0]) render.show() ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
1,977
[ [ -0.0293121337890625, -0.0201873779296875, 0.0328369140625, -0.028839111328125, -0.033172607421875, -0.01422882080078125, 0.0161590576171875, -0.03631591796875, 0.0196990966796875, 0.017303466796875, -0.04949951171875, -0.053680419921875, -0.028076171875, -0....
HooshvareLab/bert-fa-zwnj-base
2021-05-18T21:05:42.000Z
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "fa", "arxiv:2005.12515", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us", "has_space" ]
fill-mask
HooshvareLab
null
null
HooshvareLab/bert-fa-zwnj-base
6
2,626
transformers
2022-03-02T23:29:04
--- language: fa license: apache-2.0 --- # ParsBERT (v3.0) A Transformer-based Model for Persian Language Understanding The new version of BERT v3.0 for Persian is available today and can tackle the zero-width non-joiner character for Persian writing. Also, the model was trained on new multi-types corpora with a new set of vocabulary. ## Introduction ParsBERT is a monolingual language model based on Google’s BERT architecture. This model is pre-trained on large Persian corpora with various writing styles from numerous subjects (e.g., scientific, novels, news). Paper presenting ParsBERT: [arXiv:2005.12515](https://arxiv.org/abs/2005.12515) ### BibTeX entry and citation info Please cite in publications as the following: ```bibtex @article{ParsBERT, title={ParsBERT: Transformer-based Model for Persian Language Understanding}, author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri}, journal={ArXiv}, year={2020}, volume={abs/2005.12515} } ``` ## Questions? Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo.
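The card does not show a usage snippet; a minimal fill-mask sketch (the Persian example sentence is an illustrative assumption, not from the authors) could look like this:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="HooshvareLab/bert-fa-zwnj-base")

# "این یک [MASK] است." roughly means "This is a [MASK]."
for prediction in fill_mask("این یک [MASK] است."):
    print(prediction["token_str"], round(prediction["score"], 3))
```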
1,124
[ [ -0.02117919921875, -0.0596923828125, 0.036102294921875, 0.0199737548828125, -0.027130126953125, 0.00888824462890625, -0.033447265625, -0.0249176025390625, 0.017333984375, 0.044647216796875, -0.036407470703125, -0.03143310546875, -0.023345947265625, -0.005237...
KM4STfulltext/SSCI-BERT-e2
2022-12-24T03:20:32.000Z
[ "transformers", "pytorch", "bert", "fill-mask", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
KM4STfulltext
null
null
KM4STfulltext/SSCI-BERT-e2
1
2,626
transformers
2022-06-01T08:59:09
---
license: apache-2.0
---

# SSCI-BERT: A pretrained language model for social scientific text

## Introduction

Research on social science texts needs the support of natural language processing tools.

Pre-trained language models have greatly improved the accuracy of text mining on general texts. At present, there is an urgent need for a pre-trained language model specifically for the automatic processing of scientific texts in social science.

We used the abstracts of social science research articles as the training set. Based on the deep language model framework of BERT, we constructed the [SSCI-BERT and SSCI-SciBERT](https://github.com/S-T-Full-Text-Knowledge-Mining/SSCI-BERT) pre-trained language models with [transformers/run_mlm.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py).

We designed four downstream text classification tasks on different social scientific article corpora to verify the performance of the models.

- SSCI-BERT and SSCI-SciBERT are trained on the abstracts of articles published in SSCI journals from 1986 to 2021. The training set involved in the experiment included a total of `503910614 words`.
- Based on the idea of Domain-Adaptive Pretraining, `SSCI-BERT` and `SSCI-SciBERT` combine a large number of abstracts of scientific articles with the BERT structure, continuing to train the BERT and SciBERT models respectively to obtain pre-trained models for the automatic processing of social science research texts.

## News

- 2022-03-24: SSCI-BERT and SSCI-SciBERT were released for the first time.

## How to use

### Huggingface Transformers

The `from_pretrained` method of [Huggingface Transformers](https://github.com/huggingface/transformers) can directly obtain the SSCI-BERT and SSCI-SciBERT models online.

- SSCI-BERT

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("KM4STfulltext/SSCI-BERT-e2")
model = AutoModel.from_pretrained("KM4STfulltext/SSCI-BERT-e2")
```

- SSCI-SciBERT

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("KM4STfulltext/SSCI-SciBERT-e2")
model = AutoModel.from_pretrained("KM4STfulltext/SSCI-SciBERT-e2")
```

### Download Models

- The models we provide are in `PyTorch` format.

### From Huggingface

- Download directly through Huggingface's official website.
- [KM4STfulltext/SSCI-BERT-e2](https://huggingface.co/KM4STfulltext/SSCI-BERT-e2)
- [KM4STfulltext/SSCI-SciBERT-e2](https://huggingface.co/KM4STfulltext/SSCI-SciBERT-e2)
- [KM4STfulltext/SSCI-BERT-e4](https://huggingface.co/KM4STfulltext/SSCI-BERT-e4)
- [KM4STfulltext/SSCI-SciBERT-e4](https://huggingface.co/KM4STfulltext/SSCI-SciBERT-e4)

### From Google Drive

We have put the models on Google Drive for users.
| Model | DATASET(year) | Base Model | | ------------------------------------------------------------ | ------------- | ---------------------- | | [SSCI-BERT-e2](https://drive.google.com/drive/folders/1xEDnovlwGO2JxqCaf3rdjS2cB6DOxhj4?usp=sharing) | 1986-2021 | Bert-base-cased | | [SSCI-SciBERT-e2](https://drive.google.com/drive/folders/16DtIvnHvbrR_92MwgthRRsULW6An9te1?usp=sharing) (recommended) | 1986-2021 | Scibert-scivocab-cased | | [SSCI-BERT-e4](https://drive.google.com/drive/folders/1sr6Av8p904Jrjps37g7E8aj4HnAHXSxW?usp=sharing) | 1986-2021 | Bert-base-cased | | [SSCI-SciBERT-e4](https://drive.google.com/drive/folders/1ty-b4TIFu8FbilgC4VcI7Bgn_O5MDMVe?usp=sharing) | 1986-2021 | Scibert-scivocab-cased | ## Evaluation & Results - We use SSCI-BERT and SSCI-SciBERT to perform Text Classificationon different social science research corpus. The experimental results are as follows. Relevant data sets are available for download in the **Verification task datasets** folder of this project. #### JCR Title Classify Dataset | Model | accuracy | macro avg | weighted avg | | ---------------------- | -------- | --------- | ------------ | | Bert-base-cased | 28.43 | 22.06 | 21.86 | | Scibert-scivocab-cased | 38.48 | 33.89 | 33.92 | | SSCI-BERT-e2 | 40.43 | 35.37 | 35.33 | | SSCI-SciBERT-e2 | 41.35 | 37.27 | 37.25 | | SSCI-BERT-e4 | 40.65 | 35.49 | 35.40 | | SSCI-SciBERT-e4 | 41.13 | 36.96 | 36.94 | | Support | 2300 | 2300 | 2300 | #### JCR Abstract Classify Dataset | Model | accuracy | macro avg | weighted avg | | ---------------------- | -------- | --------- | ------------ | | Bert-base-cased | 48.59 | 42.8 | 42.82 | | Scibert-scivocab-cased | 55.59 | 51.4 | 51.81 | | SSCI-BERT-e2 | 58.05 | 53.31 | 53.73 | | SSCI-SciBERT-e2 | 59.95 | 56.51 | 57.12 | | SSCI-BERT-e4 | 59.00 | 54.97 | 55.59 | | SSCI-SciBERT-e4 | 60.00 | 56.38 | 56.90 | | Support | 2200 | 2200 | 2200 | #### JCR Mixed Titles and Abstracts Dataset | **Model** | **accuracy** | **macro avg** | **weighted avg** | | ---------------------- | ------------ | -------------- | ----------------- | | Bert-base-cased | 58.24 | 57.27 | 57.25 | | Scibert-scivocab-cased | 59.58 | 58.65 | 58.68 | | SSCI-BERT-e2 | 60.89 | 60.24 | 60.30 | | SSCI-SciBERT-e2 | 60.96 | 60.54 | 60.51 | | SSCI-BERT-e4 | 61.00 | 60.48 | 60.43 | | SSCI-SciBERT-e4 | 61.24 | 60.71 | 60.75 | | Support | 4500 | 4500 | 4500 | #### SSCI Abstract Structural Function Recognition (Classify Dataset) | | Bert-base-cased | SSCI-BERT-e2 | SSCI-BERT-e4 | support | | ------------ | -------------------------- | ------------------- | ------------------- | ----------- | | B | 63.77 | 64.29 | 64.63 | 224 | | P | 53.66 | 57.14 | 57.99 | 95 | | M | 87.63 | 88.43 | 89.06 | 323 | | R | 86.81 | 88.28 | **88.47** | 419 | | C | 78.32 | 79.82 | 78.95 | 316 | | accuracy | 79.59 | 80.9 | 80.97 | 1377 | | macro avg | 74.04 | 75.59 | 75.82 | 1377 | | weighted avg | 79.02 | 80.32 | 80.44 | 1377 | | | **Scibert-scivocab-cased** | **SSCI-SciBERT-e2** | **SSCI-SciBERT-e4** | **support** | | B | 69.98 | **70.95** | **70.95** | 224 | | P | 58.89 | **60.12** | 58.96 | 95 | | M | 89.37 | **90.12** | 88.11 | 323 | | R | 87.66 | 88.07 | 87.44 | 419 | | C | 80.7 | 82.61 | **82.94** | 316 | | accuracy | 81.63 | **82.72** | 82.06 | 1377 | | macro avg | 77.32 | **78.37** | 77.68 | 1377 | | weighted avg | 81.6 | **82.58** | 81.92 | 1377 | ## Cited - If our content is helpful for your research work, please quote our research in your article. 
- https://link.springer.com/article/10.1007/s11192-022-04602-4 ## Disclaimer - The experimental results presented in the report only show the performance under a specific dataset and hyperparameter combination and do not reflect the full capability of each model. The experimental results may change due to random number seeds and computing equipment. - **Users can use the model arbitrarily within the scope of the license, but we are not responsible for the direct or indirect losses caused by using the content of the project.** ## Acknowledgment - SSCI-BERT was trained based on [BERT-Base-Cased](https://github.com/google-research/bert). - SSCI-SciBERT was trained based on [scibert-scivocab-cased](https://github.com/allenai/scibert).
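For readers who want to reproduce the text classification setups evaluated above, here is a minimal fine-tuning sketch with Huggingface Transformers. It is an illustration only: the label count of 5 matches the abstract structural-function task (B/P/M/R/C), and the example sentence is invented, not taken from the evaluation data.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical sketch: attach a 5-way classification head to SSCI-SciBERT for the
# structural-function task above (labels B/P/M/R/C). The head is randomly
# initialized and must be fine-tuned before its predictions mean anything.
tokenizer = AutoTokenizer.from_pretrained("KM4STfulltext/SSCI-SciBERT-e2")
model = AutoModelForSequenceClassification.from_pretrained(
    "KM4STfulltext/SSCI-SciBERT-e2", num_labels=5
)

inputs = tokenizer(
    "We surveyed 1,200 households to examine the effect of remote work on commuting.",
    return_tensors="pt", truncation=True, max_length=512,
)
outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 5])
```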
9,021
[ [ -0.0228271484375, -0.0209197998046875, 0.016998291015625, 0.010406494140625, -0.0165863037109375, 0.0032558441162109375, -0.00632476806640625, -0.0223388671875, 0.03570556640625, 0.005626678466796875, -0.034637451171875, -0.05450439453125, -0.062164306640625, ...
keremberke/yolov8n-blood-cell-detection
2023-02-22T13:03:12.000Z
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "object-detection", "pytorch", "awesome-yolov8-models", "dataset:keremberke/blood-cell-object-detection", "model-index", "region:us" ]
object-detection
keremberke
null
null
keremberke/yolov8n-blood-cell-detection
2
2,624
ultralytics
2023-01-29T05:06:49
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - object-detection - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.21 inference: false datasets: - keremberke/blood-cell-object-detection model-index: - name: keremberke/yolov8n-blood-cell-detection results: - task: type: object-detection dataset: type: keremberke/blood-cell-object-detection name: blood-cell-object-detection split: validation metrics: - type: precision # since mAP@0.5 is not available on hf.co/metrics value: 0.89265 # min: 0.0 - max: 1.0 name: mAP@0.5(box) --- <div align="center"> <img width="640" alt="keremberke/yolov8n-blood-cell-detection" src="https://huggingface.co/keremberke/yolov8n-blood-cell-detection/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['Platelets', 'RBC', 'WBC'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.23 ultralytics==8.0.21 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, render_result # load model model = YOLO('keremberke/yolov8n-blood-cell-detection') # set model parameters model.overrides['conf'] = 0.25 # NMS confidence threshold model.overrides['iou'] = 0.45 # NMS IoU threshold model.overrides['agnostic_nms'] = False # NMS class-agnostic model.overrides['max_det'] = 1000 # maximum number of detections per image # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].boxes) render = render_result(model=model, image=image, result=results[0]) render.show() ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
1,846
[ [ -0.0299530029296875, -0.0185089111328125, 0.03387451171875, -0.026458740234375, -0.041595458984375, -0.005939483642578125, 0.02337646484375, -0.037322998046875, 0.031280517578125, 0.02154541015625, -0.038604736328125, -0.048828125, -0.0234222412109375, 0.003...
keremberke/yolov8n-scene-classification
2023-02-22T13:00:14.000Z
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "image-classification", "pytorch", "awesome-yolov8-models", "dataset:keremberke/indoor-scene-classification", "model-index", "region:us" ]
image-classification
keremberke
null
null
keremberke/yolov8n-scene-classification
1
2,623
ultralytics
2023-01-27T01:35:34
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - image-classification - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.20 inference: false datasets: - keremberke/indoor-scene-classification model-index: - name: keremberke/yolov8n-scene-classification results: - task: type: image-classification dataset: type: keremberke/indoor-scene-classification name: indoor-scene-classification split: validation metrics: - type: accuracy value: 0.01605 # min: 0.0 - max: 1.0 name: top1 accuracy - type: accuracy value: 0.08793 # min: 0.0 - max: 1.0 name: top5 accuracy --- <div align="center"> <img width="640" alt="keremberke/yolov8n-scene-classification" src="https://huggingface.co/keremberke/yolov8n-scene-classification/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['airport_inside', 'artstudio', 'auditorium', 'bakery', 'bookstore', 'bowling', 'buffet', 'casino', 'children_room', 'church_inside', 'classroom', 'cloister', 'closet', 'clothingstore', 'computerroom', 'concert_hall', 'corridor', 'deli', 'dentaloffice', 'dining_room', 'elevator', 'fastfood_restaurant', 'florist', 'gameroom', 'garage', 'greenhouse', 'grocerystore', 'gym', 'hairsalon', 'hospitalroom', 'inside_bus', 'inside_subway', 'jewelleryshop', 'kindergarden', 'kitchen', 'laboratorywet', 'laundromat', 'library', 'livingroom', 'lobby', 'locker_room', 'mall', 'meeting_room', 'movietheater', 'museum', 'nursery', 'office', 'operating_room', 'pantry', 'poolinside', 'prisoncell', 'restaurant', 'restaurant_kitchen', 'shoeshop', 'stairscase', 'studiomusic', 'subway', 'toystore', 'trainstation', 'tv_studio', 'videostore', 'waitingroom', 'warehouse', 'winecellar'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.21 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, postprocess_classify_output # load model model = YOLO('keremberke/yolov8n-scene-classification') # set model parameters model.overrides['conf'] = 0.25 # model confidence threshold # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].probs) # [0.1, 0.2, 0.3, 0.4] processed_result = postprocess_classify_output(model, result=results[0]) print(processed_result) # {"cat": 0.4, "dog": 0.6} ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
2,613
[ [ -0.03521728515625, -0.0281829833984375, 0.0272064208984375, -0.0226593017578125, -0.0062103271484375, -0.007099151611328125, 0.01317596435546875, -0.027099609375, 0.01202392578125, 0.0301666259765625, -0.04254150390625, -0.052947998046875, -0.034881591796875, ...
biu-nlp/abstract-sim-sentence
2023-05-26T08:20:51.000Z
[ "transformers", "pytorch", "mpnet", "fill-mask", "feature-extraction", "sentence-similarity", "en", "dataset:biu-nlp/abstract-sim", "autotrain_compatible", "endpoints_compatible", "region:us" ]
feature-extraction
biu-nlp
null
null
biu-nlp/abstract-sim-sentence
15
2,623
transformers
2023-05-13T14:06:28
--- language: - en tags: - feature-extraction - sentence-similarity datasets: - biu-nlp/abstract-sim widgets: - sentence-similarity - feature-extraction --- A model for mapping abstract sentence descriptions to sentences that fit the descriptions. Trained on Wikipedia. Use ```load_finetuned_model``` to load the query and sentence encoder, and ```encode_batch()``` to encode a sentence with the model. **Note**: the method uses a dual encoder architecture. This is the **sentence encoder**; it should be used alongside the [**Query encoder**](https://huggingface.co/biu-nlp/abstract-sim-query). ```python from transformers import AutoTokenizer, AutoModel import torch from typing import List from sklearn.metrics.pairwise import cosine_similarity def load_finetuned_model(): sentence_encoder = AutoModel.from_pretrained("biu-nlp/abstract-sim-sentence") query_encoder = AutoModel.from_pretrained("biu-nlp/abstract-sim-query") tokenizer = AutoTokenizer.from_pretrained("biu-nlp/abstract-sim-sentence") return tokenizer, query_encoder, sentence_encoder def encode_batch(model, tokenizer, sentences: List[str], device: str): input_ids = tokenizer(sentences, padding=True, max_length=512, truncation=True, return_tensors="pt", add_special_tokens=True).to(device) features = model(**input_ids)[0] features = torch.sum(features[:,1:,:] * input_ids["attention_mask"][:,1:].unsqueeze(-1), dim=1) / torch.clamp(torch.sum(input_ids["attention_mask"][:,1:], dim=1, keepdims=True), min=1e-9) return features ``` Usage example: ```python tokenizer, query_encoder, sentence_encoder = load_finetuned_model() relevant_sentences = ["Fingersoft's parent company is the Finger Group.", "WHIRC – a subsidiary company of Wright-Hennepin", "CK Life Sciences International (Holdings) Inc. (), or CK Life Sciences, is a subsidiary of CK Hutchison Holdings", "EM Microelectronic-Marin (subsidiary of The Swatch Group).", "The company is currently a division of the corporate group Jam Industries.", "Volt Technical Resources is a business unit of Volt Workforce Solutions, a subsidiary of Volt Information Sciences (currently trading over-the-counter as VISI.)." ] irrelevant_sentences = ["The second company is deemed to be a subsidiary of the parent company.", "The company has gone through more than one incarnation.", "The company is owned by its employees.", "Larger companies compete for market share by acquiring smaller companies that may own a particular market sector.", "A parent company is a company that owns 51% or more voting stock in another firm (or subsidiary).", "It is a holding company that provides services through its subsidiaries in the following areas: oil and gas, industrial and infrastructure, government and power.", "RXVT Technologies is no longer a subsidiary of the parent company." ] all_sentences = relevant_sentences + irrelevant_sentences query = "<query>: A company is a part of a larger company." embeddings = encode_batch(sentence_encoder, tokenizer, all_sentences, "cpu").detach().cpu().numpy() query_embedding = encode_batch(query_encoder, tokenizer, [query], "cpu").detach().cpu().numpy() sims = cosine_similarity(query_embedding, embeddings)[0] sentences_sims = list(zip(all_sentences, sims)) sentences_sims.sort(key=lambda x: x[1], reverse=True) for s, sim in sentences_sims: print(s, sim) ``` Expected output: ``` WHIRC – a subsidiary company of Wright-Hennepin 0.9396286 EM Microelectronic-Marin (subsidiary of The Swatch Group). 0.93929046 Fingersoft's parent company is the Finger Group. 
0.936247 CK Life Sciences International (Holdings) Inc. (), or CK Life Sciences, is a subsidiary of CK Hutchison Holdings 0.9350312 The company is currently a division of the corporate group Jam Industries. 0.9273489 Volt Technical Resources is a business unit of Volt Workforce Solutions, a subsidiary of Volt Information Sciences (currently trading over-the-counter as VISI.). 0.9005086 The second company is deemed to be a subsidiary of the parent company. 0.6723645 It is a holding company that provides services through its subsidiaries in the following areas: oil and gas, industrial and infrastructure, government and power. 0.60081375 A parent company is a company that owns 51% or more voting stock in another firm (or subsidiary). 0.59490484 The company is owned by its employees. 0.55286574 RXVT Technologies is no longer a subsidiary of the parent company. 0.4321953 The company has gone through more than one incarnation. 0.38889483 Larger companies compete for market share by acquiring smaller companies that may own a particular market sector. 0.25472647 ```
4,981
[ [ -0.013214111328125, -0.042510986328125, 0.0283966064453125, 0.0142974853515625, -0.0186767578125, -0.007965087890625, -0.01081085205078125, -0.0146636962890625, 0.0279998779296875, 0.036376953125, -0.053009033203125, -0.0313720703125, -0.0234375, 0.011306762...
Yntec/ArcticFowl
2023-08-10T22:59:14.000Z
[ "diffusers", "anime", "art", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "ArcticFlamingo", "en", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
Yntec
null
null
Yntec/ArcticFowl
3
2,621
diffusers
2023-08-09T20:38:20
--- license: creativeml-openrail-m language: - en library_name: diffusers pipeline_tag: text-to-image tags: - anime - art - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - ArcticFlamingo --- This model has the Blessed2 VAE baked in. Demo image by digiplay: ![demo](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/C4phA3NdYMK1U66tQoyIf.jpeg) Samples and prompts: ![sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/lTnjBMcm-ClX_iiosvjCv.png) ![sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/Zi4adyzhuUGIi_SyysYlj.png) Pretty cute girl. Thumbs up. Thumbs up. Thumbs up. Thumbs up. Thumbs up. Thumbs up. Acrylic art on canvas by ROSSDRAWS and Clay Mann and tyler edlin Original pages: https://civitai.com/models/16164?modelVersionId=84783 https://huggingface.co/NoCrypt/blessed_vae/tree/main
932
[ [ -0.007297515869140625, -0.049713134765625, 0.0283050537109375, 0.018157958984375, -0.0259857177734375, -0.005565643310546875, 0.031402587890625, -0.0318603515625, 0.03594970703125, 0.0562744140625, -0.032928466796875, -0.016937255859375, -0.045989990234375, ...
keremberke/yolov8s-blood-cell-detection
2023-02-22T13:03:07.000Z
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "object-detection", "pytorch", "awesome-yolov8-models", "dataset:keremberke/blood-cell-object-detection", "model-index", "region:us" ]
object-detection
keremberke
null
null
keremberke/yolov8s-blood-cell-detection
1
2,616
ultralytics
2023-01-29T05:24:20
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - object-detection - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.21 inference: false datasets: - keremberke/blood-cell-object-detection model-index: - name: keremberke/yolov8s-blood-cell-detection results: - task: type: object-detection dataset: type: keremberke/blood-cell-object-detection name: blood-cell-object-detection split: validation metrics: - type: precision # since mAP@0.5 is not available on hf.co/metrics value: 0.91681 # min: 0.0 - max: 1.0 name: mAP@0.5(box) --- <div align="center"> <img width="640" alt="keremberke/yolov8s-blood-cell-detection" src="https://huggingface.co/keremberke/yolov8s-blood-cell-detection/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['Platelets', 'RBC', 'WBC'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.23 ultralytics==8.0.21 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, render_result # load model model = YOLO('keremberke/yolov8s-blood-cell-detection') # set model parameters model.overrides['conf'] = 0.25 # NMS confidence threshold model.overrides['iou'] = 0.45 # NMS IoU threshold model.overrides['agnostic_nms'] = False # NMS class-agnostic model.overrides['max_det'] = 1000 # maximum number of detections per image # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].boxes) render = render_result(model=model, image=image, result=results[0]) render.show() ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
1,846
[ [ -0.0298309326171875, -0.0167083740234375, 0.03619384765625, -0.026031494140625, -0.041229248046875, -0.00472259521484375, 0.024078369140625, -0.03680419921875, 0.030364990234375, 0.0214996337890625, -0.037567138671875, -0.049224853515625, -0.0227813720703125, ...
KoboldAI/GPT-Neo-2.7B-Janeway
2022-03-20T12:57:50.000Z
[ "transformers", "pytorch", "gpt_neo", "text-generation", "en", "license:mit", "endpoints_compatible", "region:us" ]
text-generation
KoboldAI
null
null
KoboldAI/GPT-Neo-2.7B-Janeway
6
2,614
transformers
2022-03-02T23:29:04
--- language: en license: mit --- # GPT-Neo 2.7B - Janeway ## Model Description GPT-Neo 2.7B-Janeway is a finetune created using EleutherAI's GPT-Neo 2.7B model. ## Training data The training data contains around 2210 ebooks, mostly in the sci-fi and fantasy genres. The dataset is based on the same dataset used by GPT-Neo-2.7B-Picard, with 20% more data in various genres. Some parts of the dataset have been prepended with the following text: `[Genre: <genre1>,<genre2>]` ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py >>> from transformers import pipeline >>> generator = pipeline('text-generation', model='KoboldAI/GPT-Neo-2.7B-Janeway') >>> generator("Welcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50) [{'generated_text': 'Welcome Captain Janeway, I apologize for the delay."\nIt's all right," Janeway said. "I'm certain that you're doing your best to keep me informed of what\'s going on."'}] ``` ### Limitations and Biases GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your use case, GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. ### BibTeX entry and citation info The model is made using the following software: ```bibtex @software{gpt-neo, author = {Black, Sid and Leo, Gao and Wang, Phil and Leahy, Connor and Biderman, Stella}, title = {{GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow}}, month = mar, year = 2021, note = {{If you use this software, please cite it using these metadata.}}, publisher = {Zenodo}, version = {1.0}, doi = {10.5281/zenodo.5297715}, url = {https://doi.org/10.5281/zenodo.5297715} } ```
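Because parts of the training data carry the `[Genre: ...]` prefix described under Training data, the same format can be used to nudge generation toward a genre. The sketch below is speculative: the genre names and prompt text are illustrative, and only the bracketed tag format comes from the card.

```python
from transformers import pipeline

generator = pipeline('text-generation', model='KoboldAI/GPT-Neo-2.7B-Janeway')

# The bracketed prefix mirrors the format used in parts of the training data.
prompt = "[Genre: sci-fi, adventure] The shuttle dropped out of warp above an uncharted planet."
print(generator(prompt, do_sample=True, max_length=120)[0]['generated_text'])
```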
2,681
[ [ -0.0135955810546875, -0.056915283203125, 0.0301666259765625, 0.00016009807586669922, -0.02081298828125, -0.018829345703125, -0.005580902099609375, -0.0243682861328125, 0.0017023086547851562, 0.043304443359375, -0.04046630859375, -0.033660888671875, -0.0532531738...
Yntec/AgarthaChadstyle
2023-11-04T03:31:27.000Z
[ "diffusers", "Style", "Abstract", "Surrealism", "ChadUltraF3", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us", "has_space" ]
text-to-image
Yntec
null
null
Yntec/AgarthaChadstyle
0
2,611
diffusers
2023-11-04T02:50:41
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - Style - Abstract - Surrealism - ChadUltraF3 - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers --- # 🌈🧬🍭🍄👁️ Agartha 👁️🍄🍭🧬🌈(ChadStyle) Check the many trigger words of this model at the original page: https://civitai.com/models/69808/agartha-chadstyle Sample and prompt: ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/XjXVRIcTs_xOB8xkd-RCc.png) bedroom, DETAILED CHIBI Cartoon, BLUE EYES, Pretty CUTE Girl, beautiful detailed PONYTAIL, seifuku clothes, gorgeous detailed hair, Magazine ad, 1949, iconic. acrylic art on canvas By KlaysMoji and artgerm and Clay Mann and and leyendecker and Dave Rapoza
781
[ [ -0.033935546875, -0.07586669921875, 0.00453948974609375, 0.0288848876953125, -0.027374267578125, 0.003238677978515625, 0.0101165771484375, -0.039215087890625, 0.0728759765625, 0.01091766357421875, -0.058929443359375, -0.0303192138671875, -0.05743408203125, -...
Twitter/twhin-bert-base
2023-07-07T03:38:25.000Z
[ "transformers", "pytorch", "safetensors", "bert", "fill-mask", "Twitter", "Multilingual", "en", "ja", "pt", "es", "ko", "ar", "tr", "th", "fr", "id", "ru", "de", "fa", "it", "zh", "pl", "hi", "ur", "nl", "el", "ms", "ca", "sr", "sv", "uk", "he", "fi"...
fill-mask
Twitter
null
null
Twitter/twhin-bert-base
25
2,609
transformers
2022-10-18T18:34:23
--- language: - en - ja - pt - es - ko - ar - tr - th - fr - id - ru - de - fa - it - zh - pl - hi - ur - nl - el - ms - ca - sr - sv - uk - he - fi - cs - ta - ne - vi - hu - eo - bn - mr - ml - hr - no - sw - sl - te - az - da - ro - gl - gu - ps - mk - kn - bg - lv - eu - pa - et - mn - sq - si - sd - la - is - jv - lt - ku - am - bs - hy - or - sk - uz - cy - my - su - br - as - af - be - fy - kk - ga - lo - ka - km - sa - mg - so - ug - ky - gd - yi tags: - Twitter - Multilingual license: "apache-2.0" mask_token: "<mask>" --- # TwHIN-BERT: A Socially-Enriched Pre-trained Language Model for Multilingual Tweet Representations [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-green.svg?style=flat-square)](http://makeapullrequest.com) [![arXiv](https://img.shields.io/badge/arXiv-2209.07562-b31b1b.svg)](https://arxiv.org/abs/2209.07562) This repo contains models, code and pointers to datasets from our paper: [TwHIN-BERT: A Socially-Enriched Pre-trained Language Model for Multilingual Tweet Representations](https://arxiv.org/abs/2209.07562). [[PDF]](https://arxiv.org/pdf/2209.07562.pdf) [[HuggingFace Models]](https://huggingface.co/Twitter) ### Overview TwHIN-BERT is a new multi-lingual Tweet language model that is trained on 7 billion Tweets from over 100 distinct languages. TwHIN-BERT differs from prior pre-trained language models as it is trained with not only text-based self-supervision (e.g., MLM), but also with a social objective based on the rich social engagements within a Twitter Heterogeneous Information Network (TwHIN). TwHIN-BERT can be used as a drop-in replacement for BERT in a variety of NLP and recommendation tasks. It not only outperforms similar models on semantic understanding tasks (such as text classification), but also on **social recommendation** tasks such as predicting user-to-Tweet engagement. ## 1. Pretrained Models We initially release two pretrained TwHIN-BERT models (base and large) that are compatible with the [HuggingFace BERT models](https://github.com/huggingface/transformers). | Model | Size | Download Link (🤗 HuggingFace) | | ------------- | ------------- | --------- | | TwHIN-BERT-base | 280M parameters | [Twitter/TwHIN-BERT-base](https://huggingface.co/Twitter/twhin-bert-base) | | TwHIN-BERT-large | 550M parameters | [Twitter/TwHIN-BERT-large](https://huggingface.co/Twitter/twhin-bert-large) | To use these models in 🤗 Transformers: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('Twitter/twhin-bert-base') model = AutoModel.from_pretrained('Twitter/twhin-bert-base') inputs = tokenizer("I'm using TwHIN-BERT! #TwHIN-BERT #NLP", return_tensors="pt") outputs = model(**inputs) ``` <!-- ## 2. Set up environment and data ### Environment TBD ## 3. Fine-tune TwHIN-BERT TBD --> ## Citation If you use TwHIN-BERT or our datasets in your work, please cite the following: ```bib @article{zhang2022twhin, title={TwHIN-BERT: A Socially-Enriched Pre-trained Language Model for Multilingual Tweet Representations}, author={Zhang, Xinyang and Malkov, Yury and Florez, Omar and Park, Serim and McWilliams, Brian and Han, Jiawei and El-Kishky, Ahmed}, journal={arXiv preprint arXiv:2209.07562}, year={2022} } ```
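As a concrete illustration of the "drop-in replacement for BERT" claim, the sketch below attaches a classification head for fine-tuning. The number of labels and the example Tweet are assumptions for illustration, not from the paper.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical sketch: TwHIN-BERT as the encoder for a 3-class Tweet classifier.
# The classification head is untrained until fine-tuned on labeled data.
tokenizer = AutoTokenizer.from_pretrained("Twitter/twhin-bert-base")
model = AutoModelForSequenceClassification.from_pretrained("Twitter/twhin-bert-base", num_labels=3)

inputs = tokenizer("Just watched the launch live #space", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 3])
```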
3,440
[ [ -0.016021728515625, -0.046905517578125, 0.0109710693359375, 0.0406494140625, -0.01715087890625, 0.01256561279296875, -0.051849365234375, -0.049530029296875, 0.03692626953125, 0.004852294921875, -0.042938232421875, -0.036651611328125, -0.054351806640625, -0.0...
gustavorayo/ryo-takemasa-v1
2023-10-22T11:48:39.000Z
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
gustavorayo
null
null
gustavorayo/ryo-takemasa-v1
0
2,609
diffusers
2023-10-22T11:44:36
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### ryo-takemasa-v1 Dreambooth model trained by gustavorayo with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
508
[ [ -0.023284912109375, -0.047637939453125, 0.049896240234375, 0.0243988037109375, -0.0275421142578125, 0.0208740234375, 0.0246734619140625, -0.033966064453125, 0.058929443359375, 0.00835418701171875, -0.02880859375, -0.023040771484375, -0.044769287109375, -0.01...
Unbabel/gec-t5_small
2021-09-27T11:27:48.000Z
[ "transformers", "pytorch", "t5", "text2text-generation", "grammatical error correction", "text2text", "en", "dataset:clang-8", "dataset:conll-14", "dataset:conll-13", "arxiv:2106.03830", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation...
text2text-generation
Unbabel
null
null
Unbabel/gec-t5_small
15
2,607
transformers
2022-03-02T23:29:05
--- language: - en tags: - grammatical error correction - text2text - t5 license: apache-2.0 datasets: - clang-8 - conll-14 - conll-13 metrics: - f0.5 --- This model is an implementation of the paper [A Simple Recipe for Multilingual Grammatical Error Correction](https://arxiv.org/pdf/2106.03830.pdf) from Google, where they report state-of-the-art scores on the task of Grammatical Error Correction (GEC). We implement the T5-small version, with the F_0.5 score reported in the paper (60.70). To effectively use the "Hosted inference API", write "gec: [YOUR SENTENCE HERE]". To use the model, look at the following snippet: ```python from transformers import T5ForConditionalGeneration, T5Tokenizer model = T5ForConditionalGeneration.from_pretrained("Unbabel/gec-t5_small") tokenizer = T5Tokenizer.from_pretrained('t5-small') sentence = "I like to swimming" tokenized_sentence = tokenizer('gec: ' + sentence, max_length=128, truncation=True, padding='max_length', return_tensors='pt') corrected_sentence = tokenizer.decode( model.generate( input_ids = tokenized_sentence.input_ids, attention_mask = tokenized_sentence.attention_mask, max_length=128, num_beams=5, early_stopping=True, )[0], skip_special_tokens=True, clean_up_tokenization_spaces=True ) print(corrected_sentence) # -> I like swimming. ```
1,387
[ [ -0.01171875, -0.046142578125, 0.0447998046875, 0.0212554931640625, -0.01288604736328125, -0.0158233642578125, -0.02838134765625, -0.0228271484375, 0.005992889404296875, 0.004550933837890625, -0.061614990234375, -0.06427001953125, -0.04547119140625, 0.0299224...
snunlp/KR-FinBert-SC
2022-04-28T05:07:18.000Z
[ "transformers", "pytorch", "bert", "text-classification", "ko", "endpoints_compatible", "has_space", "region:us" ]
text-classification
snunlp
null
null
snunlp/KR-FinBert-SC
12
2,606
transformers
2022-03-02T23:29:05
--- language: - ko --- # KR-FinBert & KR-FinBert-SC Much progress has been made in the NLP (Natural Language Processing) field, with numerous studies showing that domain adaptation using a small-scale corpus and fine-tuning with labeled data is effective for overall performance improvement. We propose KR-FinBert for the financial domain, further pre-trained on a financial corpus and fine-tuned for sentiment analysis. As many studies have shown, the performance improvement achieved through domain adaptation before conducting the downstream task was also clear in this experiment. ![KR-FinBert](https://huggingface.co/snunlp/KR-FinBert/resolve/main/images/KR-FinBert.png) ## Data The training data for this model is expanded from that of **[KR-BERT-MEDIUM](https://huggingface.co/snunlp/KR-Medium)**: texts from Korean Wikipedia, general news articles, legal texts crawled from the National Law Information Center and the [Korean Comments dataset](https://www.kaggle.com/junbumlee/kcbert-pretraining-corpus-korean-news-comments). For the transfer learning, **corporate-related economic news articles from 72 media sources** such as the Financial Times, The Korean Economy Daily, etc. and **analyst reports from 16 securities companies** such as Kiwoom Securities, Samsung Securities, etc. are added. The dataset includes 440,067 news titles with their content and 11,237 analyst reports. **The total data size is about 13.22GB.** For MLM training, we split the data line by line; **the total number of lines is 6,379,315.** KR-FinBert is trained for 5.5M steps with a maximum sequence length of 512, a training batch size of 32, and a learning rate of 5e-5, taking 67.48 hours on an NVIDIA TITAN XP. ## Downstream tasks ### Sentiment Classification model Downstream task performance with 50,000 labeled examples. |Model|Accuracy| |-|-| |KR-FinBert|0.963| |KR-BERT-MEDIUM|0.958| |KcBert-large|0.955| |KcBert-base|0.953| |KoBert|0.817| ### Inference sample |Positive|Negative| |-|-| |현대바이오, '폴리탁셀' 코로나19 치료 가능성에 19% 급등 | 영화관株 '코로나 빙하기' 언제 끝나나…"CJ CGV 올 4000억 손실 날수도" | |이수화학, 3분기 영업익 176억…전년比 80%↑ | C쇼크에 멈춘 흑자비행…대한항공 1분기 영업적자 566억 | |"GKL, 7년 만에 두 자릿수 매출성장 예상" | '1000억대 횡령·배임' 최신원 회장 구속… SK네트웍스 "경영 공백 방지 최선" | |위지윅스튜디오, 콘텐츠 활약에 사상 첫 매출 1000억원 돌파 | 부품 공급 차질에…기아차 광주공장 전면 가동 중단 | |삼성전자, 2년 만에 인도 스마트폰 시장 점유율 1위 '왕좌 탈환' | 현대제철, 지난해 영업익 3,313억원···전년比 67.7% 감소 | ### Citation ``` @misc{kr-FinBert-SC, author = {Kim, Eunhee and Hyopil Shin}, title = {KR-FinBert: Fine-tuning KR-FinBert for Sentiment Analysis}, year = {2022}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://huggingface.co/snunlp/KR-FinBert-SC}} } ```
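The card does not include an inference snippet, so here is a minimal sketch using the Transformers text-classification pipeline. The label names in the output come from the model's config and are not documented above, so treat them as something to verify.

```python
from transformers import pipeline

# Minimal sketch: classify a headline taken from the positive column of the table above.
classifier = pipeline("text-classification", model="snunlp/KR-FinBert-SC")
print(classifier("이수화학, 3분기 영업익 176억…전년比 80%↑"))
# e.g. [{'label': ..., 'score': ...}] — the label string depends on the model config
```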
2,669
[ [ -0.039215087890625, -0.041351318359375, 0.01323699951171875, 0.03118896484375, -0.027435302734375, 0.004852294921875, -0.021331787109375, -0.0284271240234375, 0.0195465087890625, 0.03253173828125, -0.03570556640625, -0.052276611328125, -0.054656982421875, -0...
google/vit-hybrid-base-bit-384
2023-09-11T20:45:52.000Z
[ "transformers", "pytorch", "safetensors", "vit-hybrid", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:2010.11929", "arxiv:2006.03677", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
google
null
null
google/vit-hybrid-base-bit-384
4
2,603
transformers
2022-12-06T17:38:55
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k --- # Vision Transformer (base-sized model) - Hybrid The hybrid Vision Transformer (ViT) model was proposed in [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. It's the first paper that successfully trains a Transformer encoder on ImageNet, attaining very good results compared to familiar convolutional architectures. ViT hybrid is a slight variant of the [plain Vision Transformer](vit), by leveraging a convolutional backbone (specifically, [BiT](bit)) whose features are used as initial "tokens" for the Transformer. Disclaimer: The team releasing ViT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description *While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.* ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import ViTHybridImageProcessor, ViTHybridForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = ViTHybridImageProcessor.from_pretrained('google/vit-hybrid-base-bit-384') model = ViTHybridForImageClassification.from_pretrained('google/vit-hybrid-base-bit-384') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) >>> tabby, tabby cat ``` For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/vit.html#). ## Training data The ViT-Hybrid model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes. 
## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py). Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5). ### Pretraining The model was trained on TPUv3 hardware (8 cores). All model variants are trained with a batch size of 4096 and learning rate warmup of 10k steps. For ImageNet, the authors found it beneficial to additionally apply gradient clipping at global norm 1. Training resolution is 224. ## Evaluation results For evaluation results on several image classification benchmarks, we refer to tables 2 and 5 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```bibtex @misc{wu2020visual, title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision}, author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda}, year={2020}, eprint={2006.03677}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
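As a rough illustration of the preprocessing described above, here is a torchvision-based sketch. The authoritative pipeline is the linked `input_pipeline.py`; this approximation uses the 224 training resolution and the stated channel statistics.

```python
from torchvision import transforms

# Approximate the described preprocessing: resize to a fixed resolution and
# normalize each RGB channel with mean 0.5 and standard deviation 0.5.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),  # scales pixel values to [0, 1]
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])
```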
5,212
[ [ -0.048675537109375, -0.02032470703125, -0.0104217529296875, -0.005214691162109375, -0.0253753662109375, -0.0104827880859375, -0.018218994140625, -0.05340576171875, 0.00981903076171875, 0.0277252197265625, -0.01873779296875, -0.01947021484375, -0.050262451171875,...
keremberke/yolov8n-pokemon-classification
2023-02-22T13:01:59.000Z
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "image-classification", "pytorch", "awesome-yolov8-models", "dataset:keremberke/pokemon-classification", "model-index", "region:us" ]
image-classification
keremberke
null
null
keremberke/yolov8n-pokemon-classification
1
2,603
ultralytics
2023-01-28T04:03:32
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - image-classification - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.21 inference: false datasets: - keremberke/pokemon-classification model-index: - name: keremberke/yolov8n-pokemon-classification results: - task: type: image-classification dataset: type: keremberke/pokemon-classification name: pokemon-classification split: validation metrics: - type: accuracy value: 0.02322 # min: 0.0 - max: 1.0 name: top1 accuracy - type: accuracy value: 0.09016 # min: 0.0 - max: 1.0 name: top5 accuracy --- <div align="center"> <img width="640" alt="keremberke/yolov8n-pokemon-classification" src="https://huggingface.co/keremberke/yolov8n-pokemon-classification/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['Abra', 'Aerodactyl', 'Alakazam', 'Alolan Sandslash', 'Arbok', 'Arcanine', 'Articuno', 'Beedrill', 'Bellsprout', 'Blastoise', 'Bulbasaur', 'Butterfree', 'Caterpie', 'Chansey', 'Charizard', 'Charmander', 'Charmeleon', 'Clefable', 'Clefairy', 'Cloyster', 'Cubone', 'Dewgong', 'Diglett', 'Ditto', 'Dodrio', 'Doduo', 'Dragonair', 'Dragonite', 'Dratini', 'Drowzee', 'Dugtrio', 'Eevee', 'Ekans', 'Electabuzz', 'Electrode', 'Exeggcute', 'Exeggutor', 'Farfetchd', 'Fearow', 'Flareon', 'Gastly', 'Gengar', 'Geodude', 'Gloom', 'Golbat', 'Goldeen', 'Golduck', 'Golem', 'Graveler', 'Grimer', 'Growlithe', 'Gyarados', 'Haunter', 'Hitmonchan', 'Hitmonlee', 'Horsea', 'Hypno', 'Ivysaur', 'Jigglypuff', 'Jolteon', 'Jynx', 'Kabuto', 'Kabutops', 'Kadabra', 'Kakuna', 'Kangaskhan', 'Kingler', 'Koffing', 'Krabby', 'Lapras', 'Lickitung', 'Machamp', 'Machoke', 'Machop', 'Magikarp', 'Magmar', 'Magnemite', 'Magneton', 'Mankey', 'Marowak', 'Meowth', 'Metapod', 'Mew', 'Mewtwo', 'Moltres', 'MrMime', 'Muk', 'Nidoking', 'Nidoqueen', 'Nidorina', 'Nidorino', 'Ninetales', 'Oddish', 'Omanyte', 'Omastar', 'Onix', 'Paras', 'Parasect', 'Persian', 'Pidgeot', 'Pidgeotto', 'Pidgey', 'Pikachu', 'Pinsir', 'Poliwag', 'Poliwhirl', 'Poliwrath', 'Wigglytuff', 'Zapdos', 'Zubat'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.23 ultralytics==8.0.21 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, postprocess_classify_output # load model model = YOLO('keremberke/yolov8n-pokemon-classification') # set model parameters model.overrides['conf'] = 0.25 # model confidence threshold # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].probs) # [0.1, 0.2, 0.3, 0.4] processed_result = postprocess_classify_output(model, result=results[0]) print(processed_result) # {"cat": 0.4, "dog": 0.6} ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
3,001
[ [ -0.040069580078125, -0.0136260986328125, 0.0178375244140625, -0.0062103271484375, -0.00957489013671875, 0.0120086669921875, 0.01226043701171875, -0.02166748046875, 0.0413818359375, 0.017486572265625, -0.0275115966796875, -0.03778076171875, -0.04766845703125, ...
keremberke/yolov8s-nlf-head-detection
2023-02-22T13:04:29.000Z
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "object-detection", "pytorch", "awesome-yolov8-models", "dataset:keremberke/nfl-object-detection", "model-index", "region:us" ]
object-detection
keremberke
null
null
keremberke/yolov8s-nlf-head-detection
1
2,602
ultralytics
2023-01-29T19:46:30
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - object-detection - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.23 inference: false datasets: - keremberke/nfl-object-detection model-index: - name: keremberke/yolov8s-nlf-head-detection results: - task: type: object-detection dataset: type: keremberke/nfl-object-detection name: nfl-object-detection split: validation metrics: - type: precision # since mAP@0.5 is not available on hf.co/metrics value: 0.27882 # min: 0.0 - max: 1.0 name: mAP@0.5(box) --- <div align="center"> <img width="640" alt="keremberke/yolov8s-nlf-head-detection" src="https://huggingface.co/keremberke/yolov8s-nlf-head-detection/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['Helmet', 'Helmet-Blurred', 'Helmet-Difficult', 'Helmet-Partial', 'Helmet-Sideline'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.24 ultralytics==8.0.23 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, render_result # load model model = YOLO('keremberke/yolov8s-nlf-head-detection') # set model parameters model.overrides['conf'] = 0.25 # NMS confidence threshold model.overrides['iou'] = 0.45 # NMS IoU threshold model.overrides['agnostic_nms'] = False # NMS class-agnostic model.overrides['max_det'] = 1000 # maximum number of detections per image # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].boxes) render = render_result(model=model, image=image, result=results[0]) render.show() ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
1,875
[ [ -0.04095458984375, -0.02801513671875, 0.036346435546875, -0.01213836669921875, -0.029510498046875, -0.01116943359375, 0.006137847900390625, -0.041534423828125, 0.02899169921875, 0.0184326171875, -0.058624267578125, -0.052764892578125, -0.03460693359375, 0.00...
facebook/timesformer-hr-finetuned-k600
2022-12-12T12:53:13.000Z
[ "transformers", "pytorch", "timesformer", "video-classification", "vision", "arxiv:2102.05095", "license:cc-by-nc-4.0", "endpoints_compatible", "has_space", "region:us" ]
video-classification
facebook
null
null
facebook/timesformer-hr-finetuned-k600
2
2,601
transformers
2022-10-07T22:51:20
--- license: "cc-by-nc-4.0" tags: - vision - video-classification --- # TimeSformer (base-sized model, fine-tuned on Kinetics-600) TimeSformer model pre-trained on [Kinetics-600](https://www.deepmind.com/open-source/kinetics). It was introduced in the paper [TimeSformer: Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Tong et al. and first released in [this repository](https://github.com/facebookresearch/TimeSformer). Disclaimer: The team releasing TimeSformer did not write a model card for this model so this model card has been written by [fcakyon](https://github.com/fcakyon). ## Intended uses & limitations You can use the raw model for video classification into one of the 600 possible Kinetics-600 labels. ### How to use Here is how to use this model to classify a video: ```python from transformers import AutoImageProcessor, TimesformerForVideoClassification import numpy as np import torch video = list(np.random.randn(16, 3, 448, 448)) processor = AutoImageProcessor.from_pretrained("facebook/timesformer-hr-finetuned-k600") model = TimesformerForVideoClassification.from_pretrained("facebook/timesformer-hr-finetuned-k600") inputs = processor(images=video, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/timesformer.html#). ### BibTeX entry and citation info ```bibtex @inproceedings{bertasius2021space, title={Is Space-Time Attention All You Need for Video Understanding?}, author={Bertasius, Gedas and Wang, Heng and Torresani, Lorenzo}, booktitle={International Conference on Machine Learning}, pages={813--824}, year={2021}, organization={PMLR} } ```
1,930
[ [ -0.0169830322265625, -0.042510986328125, 0.024932861328125, 0.006549835205078125, -0.0112762451171875, 0.005046844482421875, 0.000507354736328125, -0.004428863525390625, -0.0005860328674316406, -0.00879669189453125, -0.05621337890625, -0.026885986328125, -0.0601...
livingbox/incremental-test-03
2023-10-30T20:42:29.000Z
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us", "has_space" ]
text-to-image
livingbox
null
null
livingbox/incremental-test-03
0
2,600
diffusers
2023-10-30T20:37:18
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### Incremental-test-03 Dreambooth model trained by livingbox with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
510
[ [ -0.03106689453125, -0.08160400390625, 0.032623291015625, 0.05224609375, -0.01334381103515625, 0.03759765625, 0.028076171875, -0.0264129638671875, 0.038421630859375, 0.0045928955078125, -0.032257080078125, -0.0146636962890625, -0.0237274169921875, -0.00361251...
microsoft/resnet-34
2023-06-26T19:49:23.000Z
[ "transformers", "pytorch", "tf", "safetensors", "resnet", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:1512.03385", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
microsoft
null
null
microsoft/resnet-34
4
2,598
transformers
2022-03-16T15:41:51
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k --- # ResNet-34 v1.5 ResNet model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by He et al. Disclaimer: The team releasing ResNet did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ResNet (Residual Network) is a convolutional neural network that democratized the concepts of residual learning and skip connections. This makes it possible to train much deeper models. This is ResNet v1.5, which differs from the original model: in the bottleneck blocks which require downsampling, v1 has stride = 2 in the first 1x1 convolution, whereas v1.5 has stride = 2 in the 3x3 convolution. This difference makes ResNet50 v1.5 slightly more accurate (~0.5% top1) than v1, but comes with a small performance drawback (~5% imgs/sec) according to [Nvidia](https://catalog.ngc.nvidia.com/orgs/nvidia/resources/resnet_50_v1_5_for_pytorch). ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/resnet_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=resnet) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import AutoFeatureExtractor, ResNetForImageClassification import torch from datasets import load_dataset dataset = load_dataset("huggingface/cats-image") image = dataset["test"]["image"][0] feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/resnet-34") model = ResNetForImageClassification.from_pretrained("microsoft/resnet-34") inputs = feature_extractor(image, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits # model predicts one of the 1000 ImageNet classes predicted_label = logits.argmax(-1).item() print(model.config.id2label[predicted_label]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/resnet). ### BibTeX entry and citation info ```bibtex @inproceedings{he2016deep, title={Deep residual learning for image recognition}, author={He, Kaiming and Zhang, Xiangyu and Ren, Shaoqing and Sun, Jian}, booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition}, pages={770--778}, year={2016} } ```
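To make the v1 vs. v1.5 difference described above concrete, here is a hypothetical PyTorch sketch of a downsampling bottleneck block with the stride placed in either the 1x1 or the 3x3 convolution. It is illustrative only: ResNet-34 itself is built from basic blocks, and this is not the Hugging Face implementation.

```python
import torch.nn as nn

def bottleneck(in_ch, mid_ch, out_ch, v1_5=True):
    # In a downsampling bottleneck, v1 puts stride 2 in the first 1x1 convolution,
    # while v1.5 moves it to the 3x3 convolution.
    s1, s3 = (1, 2) if v1_5 else (2, 1)
    return nn.Sequential(
        nn.Conv2d(in_ch, mid_ch, kernel_size=1, stride=s1, bias=False),
        nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
        nn.Conv2d(mid_ch, mid_ch, kernel_size=3, stride=s3, padding=1, bias=False),
        nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
        nn.Conv2d(mid_ch, out_ch, kernel_size=1, bias=False),
        nn.BatchNorm2d(out_ch),
    )
```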
2,662
[ [ -0.0469970703125, -0.01236724853515625, -0.0157623291015625, -0.005706787109375, -0.0228271484375, -0.0119476318359375, -0.0033416748046875, -0.054962158203125, 0.022735595703125, 0.03265380859375, -0.045745849609375, -0.02008056640625, -0.044219970703125, 0...
CrucibleAI/ControlNetMediaPipeFace
2023-05-19T19:32:02.000Z
[ "diffusers", "controlnet", "laion", "face", "mediapipe", "image-to-image", "en", "dataset:LAION-Face", "dataset:LAION", "arxiv:2302.05543", "arxiv:2112.10752", "arxiv:2210.08402", "license:openrail", "has_space", "diffusers:ControlNetModel", "region:us" ]
image-to-image
CrucibleAI
null
null
CrucibleAI/ControlNetMediaPipeFace
479
2,598
diffusers
2023-03-30T18:28:07
--- language: - en thumbnail: '' tags: - controlnet - laion - face - mediapipe - image-to-image license: openrail base_model: stabilityai/stable-diffusion-2-1-base datasets: - LAION-Face - LAION pipeline_tag: image-to-image --- # ControlNet LAION Face Dataset ## Table of Contents: - Overview: Samples, Contents, and Construction - Usage: Downloading, Training, and Inference - License - Credits and Thanks # Overview: This dataset is designed to train a ControlNet with human facial expressions. It includes keypoints for pupils to allow gaze direction. Training has been tested on Stable Diffusion v2.1 base (512) and Stable Diffusion v1.5. ## Samples: Cherry-picked from ControlNet + Stable Diffusion v2.1 Base |Input|Face Detection|Output| |:---:|:---:|:---:| |<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/happy_source.jpg">|<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/happy_annotation.png">|<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/happy_result.png">| |<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/neutral_source.jpg">|<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/neutral_annotation.png">|<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/neutral_result.png">| |<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/sad_source.jpg">|<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/sad_annotation.png">|<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/sad_result.png">| |<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/screaming_source.jpg">|<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/screaming_annotation.png">|<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/screaming_result.png">| |<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/sideways_source.jpg">|<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/sideways_annotation.png">|<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/sideways_result.png">| |<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/surprised_source.jpg">|<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/surprised_annotation.png">|<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/surprised_result.png">| Images with multiple faces are also supported: <img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/family_source.jpg"> <img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/family_annotation.png"> <img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/family_result.png"> 
## Dataset Contents: - train_laion_face.py - Entrypoint for ControlNet training. - laion_face_dataset.py - Code for performing dataset iteration. Cropping and resizing happens here. - tool_download_face_targets.py - A tool to read metadata.json and populate the target folder. - tool_generate_face_poses.py - The original file used to generate the source images. Included for reproducibility, but not required for training. - training/laion-face-processed/prompt.jsonl - Read by laion_face_dataset. Includes prompts for the images. - training/laion-face-processed/metadata.json - Excerpts from LAION for the relevant data. Also used for downloading the target dataset. - training/laion-face-processed/source/xxxxxxxxx.jpg - Images with detections performed. Generated from the target images. - training/laion-face-processed/target/xxxxxxxxx.jpg - Selected images from LAION Face. ## Dataset Construction: Source images were generated by pulling slice 00000 from LAION Face and passing them through MediaPipe's face detector with special configuration parameters. The colors and line thicknesses used for MediaPipe are as follows: ``` f_thick = 2 f_rad = 1 right_iris_draw = DrawingSpec(color=(10, 200, 250), thickness=f_thick, circle_radius=f_rad) right_eye_draw = DrawingSpec(color=(10, 200, 180), thickness=f_thick, circle_radius=f_rad) right_eyebrow_draw = DrawingSpec(color=(10, 220, 180), thickness=f_thick, circle_radius=f_rad) left_iris_draw = DrawingSpec(color=(250, 200, 10), thickness=f_thick, circle_radius=f_rad) left_eye_draw = DrawingSpec(color=(180, 200, 10), thickness=f_thick, circle_radius=f_rad) left_eyebrow_draw = DrawingSpec(color=(180, 220, 10), thickness=f_thick, circle_radius=f_rad) mouth_draw = DrawingSpec(color=(10, 180, 10), thickness=f_thick, circle_radius=f_rad) head_draw = DrawingSpec(color=(10, 200, 10), thickness=f_thick, circle_radius=f_rad) iris_landmark_spec = {468: right_iris_draw, 473: left_iris_draw} ``` We have implemented a method named `draw_pupils` which modifies some functionality from MediaPipe. It exists as a stopgap until some pending changes are merged. # Usage: The containing ZIP file should be decompressed into the root of the ControlNet directory. The `train_laion_face.py`, `laion_face_dataset.py`, and other `.py` files should sit adjacent to `tutorial_train.py` and `tutorial_train_sd21.py`. We are assuming a checkout of the ControlNet repo at 0acb7e5, but there is no direct dependency on the repository. ## Downloading: For copyright reasons, we cannot include the original target files. We have provided a script (tool_download_face_targets.py) which will read from training/laion-face-processed/metadata.json and populate the target folder. This file has no requirements, but will use tqdm if it is installed. ## Training: When the targets folder is fully populated, training can be run on a machine with at least 24 gigabytes of VRAM. Our model was trained for 200 hours (four epochs) on an A6000. ```bash python tool_add_control.py ./models/v1-5-pruned-emaonly.ckpt ./models/controlnet_sd15_laion_face.ckpt python ./train_laion_face_sd15.py ``` ## Inference: We have provided `gradio_face2image.py`. Update the following two lines to point them to your trained model. ``` model = create_model('./models/cldm_v21.yaml').cpu() # If you fine-tune on SD2.1 base, this does not need to change. 
model.load_state_dict(load_state_dict('./models/control_sd21_openpose.pth', location='cuda')) ``` The model has some limitations: while it is empirically better at tracking gaze and mouth poses than previous attempts, it may still ignore controls. Adding details to the prompt like, "looking right" can abate bad behavior. ## 🧨 Diffusers It is recommended to use the checkpoint with [Stable Diffusion 2.1 - Base](stabilityai/stable-diffusion-2-1-base) as the checkpoint has been trained on it. Experimentally, the checkpoint can be used with other diffusion models such as dreamboothed stable diffusion. To use with Stable Diffusion 1.5, insert `subfolder="diffusion_sd15"` into the from_pretrained arguments. A v1.5 half-precision variant is provided but untested. 1. Install `diffusers` and related packages: ``` $ pip install diffusers transformers accelerate ``` 2. Run code: ```py from PIL import Image import numpy as np import torch from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler from diffusers.utils import load_image image = load_image( "https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/family_annotation.png" ) # Stable Diffusion 2.1-base: controlnet = ControlNetModel.from_pretrained("CrucibleAI/ControlNetMediaPipeFace", torch_dtype=torch.float16, variant="fp16") pipe = StableDiffusionControlNetPipeline.from_pretrained( "stabilityai/stable-diffusion-2-1-base", controlnet=controlnet, safety_checker=None, torch_dtype=torch.float16 ) # OR # Stable Diffusion 1.5: controlnet = ControlNetModel.from_pretrained("CrucibleAI/ControlNetMediaPipeFace", subfolder="diffusion_sd15") pipe = StableDiffusionControlNetPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", controlnet=controlnet, safety_checker=None) pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) # Remove if you do not have xformers installed # see https://huggingface.co/docs/diffusers/v0.13.0/en/optimization/xformers#installing-xformers # for installation instructions pipe.enable_xformers_memory_efficient_attention() pipe.enable_model_cpu_offload() image = pipe("a happy family at a dentist advertisement", image=image, num_inference_steps=30).images[0] image.save('./images.png') ``` # License: ### Source Images: (/training/laion-face-processed/source/) This work is marked with CC0 1.0. To view a copy of this license, visit http://creativecommons.org/publicdomain/zero/1.0 ### Trained Models: Our trained ControlNet checkpoints are released under CreativeML Open RAIL-M. ### Source Code: lllyasviel/ControlNet is licensed under the Apache License 2.0 Our modifications are released under the same license. # Credits and Thanks: Greatest thanks to Zhang et al. for ControlNet, Rombach et al. (StabilityAI) for Stable Diffusion, and Schuhmann et al. for LAION. Sample images for this document were obtained from Unsplash and are CC0. 
``` @misc{zhang2023adding, title={Adding Conditional Control to Text-to-Image Diffusion Models}, author={Lvmin Zhang and Maneesh Agrawala}, year={2023}, eprint={2302.05543}, archivePrefix={arXiv}, primaryClass={cs.CV} } @misc{rombach2021highresolution, title={High-Resolution Image Synthesis with Latent Diffusion Models}, author={Robin Rombach and Andreas Blattmann and Dominik Lorenz and Patrick Esser and Björn Ommer}, year={2021}, eprint={2112.10752}, archivePrefix={arXiv}, primaryClass={cs.CV} } @misc{schuhmann2022laion5b, title={LAION-5B: An open large-scale dataset for training next generation image-text models}, author={Christoph Schuhmann and Romain Beaumont and Richard Vencu and Cade Gordon and Ross Wightman and Mehdi Cherti and Theo Coombes and Aarush Katta and Clayton Mullis and Mitchell Wortsman and Patrick Schramowski and Srivatsa Kundurthy and Katherine Crowson and Ludwig Schmidt and Robert Kaczmarczyk and Jenia Jitsev}, year={2022}, eprint={2210.08402}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` This project was made possible by Crucible AI.
11,101
[ [ -0.032470703125, -0.022430419921875, -0.005970001220703125, 0.0157470703125, -0.01043701171875, -0.01995849609375, 0.00005692243576049805, -0.021728515625, 0.040130615234375, 0.042877197265625, -0.050689697265625, -0.03741455078125, -0.04443359375, -0.012115...
digiplay/VoidnoiseCore_R0829
2023-08-20T17:55:33.000Z
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
digiplay
null
null
digiplay/VoidnoiseCore_R0829
2
2,597
diffusers
2023-08-20T17:07:38
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info : https://civitai.com/models/131565?modelVersionId=144629 ![00005-483248321 (1).jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/wzAP-zQg5fkjhRHu7IqQM.jpeg) ![logo.png](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/3ewuOvyQUkkeRCYMC2Jcc.png)
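The card above only links to the Civitai model page. Since the repository is in diffusers format (the tags list `StableDiffusionPipeline`), a minimal usage sketch is given below; the prompt, fp16 dtype, and CUDA device are illustrative assumptions.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/VoidnoiseCore_R0829", torch_dtype=torch.float16
).to("cuda")

# illustrative prompt
image = pipe("a misty forest at dawn, cinematic lighting, highly detailed", num_inference_steps=30).images[0]
image.save("voidnoise_sample.png")
```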
447
[ [ -0.041748046875, -0.007411956787109375, 0.0166778564453125, 0.0209503173828125, -0.03631591796875, 0.0007767677307128906, 0.0273284912109375, -0.019287109375, 0.051025390625, 0.02545166015625, -0.0545654296875, -0.0156402587890625, -0.02130126953125, -0.0116...
lucasresck/bert-base-cased-ag-news
2021-11-09T02:11:29.000Z
[ "transformers", "pytorch", "bert", "text-classification", "classification", "en", "dataset:ag_news", "license:mit", "endpoints_compatible", "region:us" ]
text-classification
lucasresck
null
null
lucasresck/bert-base-cased-ag-news
2
2,594
transformers
2022-03-02T23:29:05
--- language: - en license: mit tags: - bert - classification datasets: - ag_news metrics: - accuracy - f1 - recall - precision widget: - text: "Is it soccer or football?" example_title: "Sports" - text: "A new version of Ubuntu was released." example_title: "Sci/Tech" --- # bert-base-cased-ag-news BERT model fine-tuned on the AG News classification dataset using a linear layer on top of the [CLS] token output, with 0.945 test accuracy. ### How to use Here is how to use this model to classify a given text: ```python from transformers import AutoTokenizer, BertForSequenceClassification tokenizer = AutoTokenizer.from_pretrained('lucasresck/bert-base-cased-ag-news') model = BertForSequenceClassification.from_pretrained('lucasresck/bert-base-cased-ag-news') text = "Is it soccer or football?" encoded_input = tokenizer(text, return_tensors='pt', truncation=True, max_length=512) output = model(**encoded_input) ``` ### Limitations and bias Bias was not assessed for this model, but, considering that pre-trained BERT is known to carry bias, this model is also expected to carry it. BERT's authors say: "This bias will also affect all fine-tuned versions of this model." ## Evaluation results ``` precision recall f1-score support 0 0.9539 0.9584 0.9562 1900 1 0.9884 0.9879 0.9882 1900 2 0.9251 0.9095 0.9172 1900 3 0.9127 0.9242 0.9184 1900 accuracy 0.9450 7600 macro avg 0.9450 0.9450 0.9450 7600 weighted avg 0.9450 0.9450 0.9450 7600 ```
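The usage snippet above stops at the raw model output. The following optional continuation, not part of the original card, is a minimal sketch of turning the logits into a predicted AG News class; it reuses `output` from that snippet and assumes the classifier head follows the `ag_news` label order (World, Sports, Business, Sci/Tech), so check `model.config.id2label` before relying on the names.

```python
import torch

# continues from the "How to use" snippet above
probs = torch.softmax(output.logits, dim=-1)
predicted = probs.argmax(dim=-1).item()

# assumed ag_news label order; prefer model.config.id2label if it is populated
labels = ["World", "Sports", "Business", "Sci/Tech"]
print(labels[predicted], f"(p={probs[0, predicted].item():.3f})")
```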
1,643
[ [ -0.02197265625, -0.05120849609375, 0.0091094970703125, 0.00734710693359375, -0.0198211669921875, -0.0086517333984375, -0.00531005859375, -0.0224761962890625, 0.0211181640625, 0.0059967041015625, -0.035247802734375, -0.0496826171875, -0.0643310546875, -0.0163...
timm/edgenext_base.usi_in1k
2023-04-23T22:42:59.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2206.10589", "arxiv:2204.03475", "license:mit", "region:us" ]
image-classification
timm
null
null
timm/edgenext_base.usi_in1k
0
2,594
timm
2023-04-23T22:42:43
--- tags: - image-classification - timm library_name: timm license: mit datasets: - imagenet-1k --- # Model card for edgenext_base.usi_in1k An EdgeNeXt image classification model. Trained on ImageNet-1k by paper authors using distillation (`USI` as per `Solving ImageNet`). ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 18.5 - GMACs: 3.8 - Activations (M): 15.6 - Image size: train = 256 x 256, test = 320 x 320 - **Papers:** - EdgeNeXt: Efficiently Amalgamated CNN-Transformer Architecture for Mobile Vision Applications: https://arxiv.org/abs/2206.10589 - Solving ImageNet: a Unified Scheme for Training any Backbone to Top Results: https://arxiv.org/abs/2204.03475 - **Dataset:** ImageNet-1k - **Original:** https://github.com/mmaaz60/EdgeNeXt ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('edgenext_base.usi_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'edgenext_base.usi_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 80, 64, 64]) # torch.Size([1, 160, 32, 32]) # torch.Size([1, 288, 16, 16]) # torch.Size([1, 584, 8, 8]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'edgenext_base.usi_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 584, 8, 8) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Citation ```bibtex @inproceedings{Maaz2022EdgeNeXt, title={EdgeNeXt: Efficiently Amalgamated CNN-Transformer Architecture for Mobile Vision Applications}, author={Muhammad Maaz and Abdelrahman Shaker and Hisham Cholakkal and Salman Khan and Syed Waqas Zamir and Rao 
Muhammad Anwer and Fahad Shahbaz Khan}, booktitle={International Workshop on Computational Aspects of Deep Learning at 17th European Conference on Computer Vision (CADL2022)}, year={2022}, organization={Springer} } ``` ```bibtex @misc{https://doi.org/10.48550/arxiv.2204.03475, doi = {10.48550/ARXIV.2204.03475}, url = {https://arxiv.org/abs/2204.03475}, author = {Ridnik, Tal and Lawen, Hussam and Ben-Baruch, Emanuel and Noy, Asaf}, keywords = {Computer Vision and Pattern Recognition (cs.CV), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Solving ImageNet: a Unified Scheme for Training any Backbone to Top Results}, publisher = {arXiv}, year = {2022}, } ```
4,428
[ [ -0.044525146484375, -0.02862548828125, 0.0008707046508789062, 0.001972198486328125, -0.0249481201171875, -0.0233306884765625, -0.0121307373046875, -0.032623291015625, 0.00714111328125, 0.0275726318359375, -0.039825439453125, -0.0528564453125, -0.04815673828125, ...
GroNLP/hateBERT
2023-06-02T14:04:39.000Z
[ "transformers", "pytorch", "safetensors", "bert", "fill-mask", "HateBERT", "text classification", "abusive language", "hate speech", "offensive language", "en", "autotrain_compatible", "endpoints_compatible", "region:us", "has_space" ]
fill-mask
GroNLP
null
null
GroNLP/hateBERT
22
2,592
transformers
2022-03-02T23:29:04
--- language: en tags: - HateBERT - text classification - abusive language - hate speech - offensive language --- # [Tommaso Caselli](https://www.semanticscholar.org/author/Tommaso-Caselli/1864635) • [Valerio Basile](https://www.semanticscholar.org/author/Valerio-Basile/3101511) • [Jelena Mitrovic](https://www.semanticscholar.org/author/Jelena-Mitrovic/145157863) • [Michael Granitzer](https://www.semanticscholar.org/author/M.-Granitzer/2389675) ## Model description HateBERT is an English pre-trained BERT model obtained by further training the English BERT base uncased model with more than 1 million posts from banned Reddit communities. The model has been developed as a collaboration between the University of Groningen, the University of Turin, and the University of Passau. For details, check out the paper presented at [WOAH 2021](https://aclanthology.org/2021.woah-1.3/). The code and the fine-tuned models are available on [OSF](https://osf.io/tbd58/?view_only=cb79b3228d4248ddb875eb1803525ad8). ### BibTeX entry and citation info ```bibtex @inproceedings{caselli-etal-2021-hatebert, title = "{H}ate{BERT}: Retraining {BERT} for Abusive Language Detection in {E}nglish", author = "Caselli, Tommaso and Basile, Valerio and Mitrovi{\'c}, Jelena and Granitzer, Michael", booktitle = "Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.woah-1.3", doi = "10.18653/v1/2021.woah-1.3", pages = "17--25", abstract = "We introduce HateBERT, a re-trained BERT model for abusive language detection in English. The model was trained on RAL-E, a large-scale dataset of Reddit comments in English from communities banned for being offensive, abusive, or hateful that we have curated and made available to the public. We present the results of a detailed comparison between a general pre-trained language model and the retrained version on three English datasets for offensive, abusive language and hate speech detection tasks. In all datasets, HateBERT outperforms the corresponding general BERT model. We also discuss a battery of experiments comparing the portability of the fine-tuned models across the datasets, suggesting that portability is affected by compatibility of the annotated phenomena.", } ```
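The card above does not include a usage snippet. Since this repository hosts the further-pretrained masked-language model itself (the fine-tuned abuse-detection classifiers are released separately on OSF), a minimal fill-mask sketch with the `transformers` pipeline is shown below; the example sentence is illustrative only.

```python
from transformers import pipeline

# HateBERT is a further-pretrained BERT base (uncased) with a masked-language-modeling head
fill_mask = pipeline("fill-mask", model="GroNLP/hateBERT")

# illustrative input; [MASK] is BERT's mask token
for prediction in fill_mask("That comment was completely [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```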
2,469
[ [ -0.0318603515625, -0.06512451171875, 0.01493072509765625, 0.0091400146484375, -0.017913818359375, -0.026519775390625, -0.0231781005859375, -0.054534912109375, 0.015838623046875, 0.024505615234375, -0.0173797607421875, -0.03875732421875, -0.07025146484375, -0...
nvidia/segformer-b5-finetuned-cityscapes-1024-1024
2022-08-09T11:29:37.000Z
[ "transformers", "pytorch", "tf", "segformer", "vision", "image-segmentation", "dataset:cityscapes", "arxiv:2105.15203", "license:other", "endpoints_compatible", "has_space", "region:us" ]
image-segmentation
nvidia
null
null
nvidia/segformer-b5-finetuned-cityscapes-1024-1024
15
2,592
transformers
2022-03-02T23:29:05
--- license: other tags: - vision - image-segmentation datasets: - cityscapes widget: - src: https://cdn-media.huggingface.co/Inference-API/Sample-results-on-the-Cityscapes-dataset-The-above-images-show-how-our-method-can-handle.png example_title: Road --- # SegFormer (b5-sized) model fine-tuned on CityScapes SegFormer model fine-tuned on CityScapes at resolution 1024x1024. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer). Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset. ## Intended uses & limitations You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to perform semantic segmentation on an image from the COCO 2017 dataset: ```python from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation from PIL import Image import requests feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b5-finetuned-cityscapes-1024-1024") model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b5-finetuned-cityscapes-1024-1024") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4) ``` For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#). ### License The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2105-15203, author = {Enze Xie and Wenhai Wang and Zhiding Yu and Anima Anandkumar and Jose M. Alvarez and Ping Luo}, title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers}, journal = {CoRR}, volume = {abs/2105.15203}, year = {2021}, url = {https://arxiv.org/abs/2105.15203}, eprinttype = {arXiv}, eprint = {2105.15203}, timestamp = {Wed, 02 Jun 2021 11:46:42 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
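The snippet above ends with logits at 1/4 of the input resolution. As an optional follow-up that is not part of the original card, a minimal sketch for recovering a full-resolution, per-pixel Cityscapes label map is given below; it reuses `logits` and `image` from that snippet.

```python
import torch

# upsample the (batch, num_labels, height/4, width/4) logits back to the input size
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
# per-pixel class indices, shape (height, width)
segmentation = upsampled.argmax(dim=1)[0]
```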
3,136
[ [ -0.06695556640625, -0.05218505859375, 0.0169830322265625, 0.019775390625, -0.0213470458984375, -0.02520751953125, -0.0002799034118652344, -0.0506591796875, 0.0205535888671875, 0.043182373046875, -0.06256103515625, -0.04620361328125, -0.051239013671875, 0.012...
facebook/timesformer-hr-finetuned-k400
2022-12-12T12:52:40.000Z
[ "transformers", "pytorch", "timesformer", "video-classification", "vision", "arxiv:2102.05095", "license:cc-by-nc-4.0", "endpoints_compatible", "has_space", "region:us" ]
video-classification
facebook
null
null
facebook/timesformer-hr-finetuned-k400
1
2,591
transformers
2022-10-07T22:11:12
--- license: "cc-by-nc-4.0" tags: - vision - video-classification --- # TimeSformer (high-resolution variant, fine-tuned on Kinetics-400) TimeSformer model pre-trained on [Kinetics-400](https://www.deepmind.com/open-source/kinetics). It was introduced in the paper [TimeSformer: Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Bertasius et al. and first released in [this repository](https://github.com/facebookresearch/TimeSformer). Disclaimer: The team releasing TimeSformer did not write a model card for this model so this model card has been written by [fcakyon](https://github.com/fcakyon). ## Intended uses & limitations You can use the raw model for video classification into one of the 400 possible Kinetics-400 labels. ### How to use Here is how to use this model to classify a video: ```python from transformers import AutoImageProcessor, TimesformerForVideoClassification import numpy as np import torch video = list(np.random.randn(16, 3, 448, 448)) processor = AutoImageProcessor.from_pretrained("facebook/timesformer-hr-finetuned-k400") model = TimesformerForVideoClassification.from_pretrained("facebook/timesformer-hr-finetuned-k400") inputs = processor(images=video, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/timesformer.html#). ### BibTeX entry and citation info ```bibtex @inproceedings{bertasius2021space, title={Is Space-Time Attention All You Need for Video Understanding?}, author={Bertasius, Gedas and Wang, Heng and Torresani, Lorenzo}, booktitle={International Conference on Machine Learning}, pages={813--824}, year={2021}, organization={PMLR} } ```
1,937
[ [ -0.022003173828125, -0.0401611328125, 0.0244903564453125, 0.00955963134765625, -0.0099639892578125, 0.00940704345703125, -0.0005893707275390625, -0.0079803466796875, -0.0003333091735839844, -0.007175445556640625, -0.05712890625, -0.0289306640625, -0.060791015625...
keremberke/yolov8s-shoe-classification
2023-02-22T13:05:11.000Z
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "image-classification", "pytorch", "awesome-yolov8-models", "dataset:keremberke/shoe-classification", "model-index", "region:us" ]
image-classification
keremberke
null
null
keremberke/yolov8s-shoe-classification
0
2,591
ultralytics
2023-01-30T06:33:06
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - image-classification - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.23 inference: false datasets: - keremberke/shoe-classification model-index: - name: keremberke/yolov8s-shoe-classification results: - task: type: image-classification dataset: type: keremberke/shoe-classification name: shoe-classification split: validation metrics: - type: accuracy value: 0.68675 # min: 0.0 - max: 1.0 name: top1 accuracy - type: accuracy value: 1 # min: 0.0 - max: 1.0 name: top5 accuracy --- <div align="center"> <img width="640" alt="keremberke/yolov8s-shoe-classification" src="https://huggingface.co/keremberke/yolov8s-shoe-classification/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['adidas', 'converse', 'nike'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.24 ultralytics==8.0.23 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, postprocess_classify_output # load model model = YOLO('keremberke/yolov8s-shoe-classification') # set model parameters model.overrides['conf'] = 0.25 # model confidence threshold # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].probs) # [0.1, 0.2, 0.3, 0.4] processed_result = postprocess_classify_output(model, result=results[0]) print(processed_result) # {"cat": 0.4, "dog": 0.6} ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
1,761
[ [ -0.031829833984375, -0.0131378173828125, 0.031494140625, -0.010345458984375, -0.037200927734375, -0.01036834716796875, -0.0001709461212158203, -0.04296875, 0.00888824462890625, 0.0071868896484375, -0.035064697265625, -0.04620361328125, -0.039886474609375, -0...
vilsonrodrigues/falcon-7b-sharded
2023-07-13T12:48:29.000Z
[ "transformers", "safetensors", "falcon", "text-generation", "custom_code", "en", "dataset:tiiuae/falcon-refinedweb", "arxiv:2205.14135", "arxiv:1911.02150", "arxiv:2101.00027", "arxiv:2005.14165", "arxiv:2104.09864", "arxiv:2306.01116", "license:apache-2.0", "text-generation-inference", ...
text-generation
vilsonrodrigues
null
null
vilsonrodrigues/falcon-7b-sharded
4
2,590
transformers
2023-06-16T00:24:45
--- datasets: - tiiuae/falcon-refinedweb language: - en inference: false license: apache-2.0 --- # Resharded Resharded version of https://huggingface.co/tiiuae/falcon-7b for low-RAM environments (e.g. Colab, Kaggle) in safetensors # 🚀 Falcon-7B **Falcon-7B is a 7B parameters causal decoder-only model built by [TII](https://www.tii.ae) and trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. It is made available under the Apache 2.0 license.** *Paper coming soon* 😊. 🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blogpost from HF](https://huggingface.co/blog/falcon)! ## Why use Falcon-7B? * **It outperforms comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1) etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). * **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)). * **It is made available under a permissive Apache 2.0 license allowing for commercial use**, without any royalties or restrictions. ⚠️ Falcon is now available as a core model in the `transformers` library! To use the in-library version, please install the latest version of `transformers` with `pip install git+https://github.com/huggingface/transformers.git`, then simply remove the `trust_remote_code=True` argument from `from_pretrained()`. ⚠️ **This is a raw, pretrained model, which should be further finetuned for most use cases.** If you are looking for a version better suited to taking generic instructions in a chat format, we recommend taking a look at [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct). 🔥 **Looking for an even more powerful model?** [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) is Falcon-7B's big brother! ```python from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch model = "tiiuae/falcon-7b" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, tokenizer=tokenizer, torch_dtype=torch.bfloat16, device_map="auto", ) sequences = pipeline( "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:", max_length=200, do_sample=True, top_k=10, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, ) for seq in sequences: print(f"Result: {seq['generated_text']}") ``` 💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!** For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blogpost](https://huggingface.co/blog/falcon). You will need **at least 16GB of memory** to swiftly run inference with Falcon-7B.
# Model Card for Falcon-7B ## Model Details ### Model Description - **Developed by:** [https://www.tii.ae](https://www.tii.ae); - **Model type:** Causal decoder-only; - **Language(s) (NLP):** English and French; - **License:** Apache 2.0. ### Model Source - **Paper:** *coming soon*. ## Uses ### Direct Use Research on large language models; as a foundation for further specialization and finetuning for specific usecases (e.g., summarization, text generation, chatbot, etc.) ### Out-of-Scope Use Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful. ## Bias, Risks, and Limitations Falcon-7B is trained on English and French data only, and will not generalize appropriately to other languages. Furthermore, as it is trained on a large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online. ### Recommendations We recommend users of Falcon-7B to consider finetuning it for the specific set of tasks of interest, and for guardrails and appropriate precautions to be taken for any production use. ## How to Get Started with the Model ```python from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch model = "tiiuae/falcon-7b" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, tokenizer=tokenizer, torch_dtype=torch.bfloat16, device_map="auto", ) sequences = pipeline( "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:", max_length=200, do_sample=True, top_k=10, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, ) for seq in sequences: print(f"Result: {seq['generated_text']}") ``` ## Training Details ### Training Data Falcon-7B was trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a high-quality filtered and deduplicated web dataset which we enhanced with curated corpora. Significant components from our curated copora were inspired by The Pile ([Gao et al., 2020](https://arxiv.org/abs/2101.00027)). | **Data source** | **Fraction** | **Tokens** | **Sources** | |--------------------|--------------|------------|-----------------------------------| | [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 79% | 1,185B | massive web crawl | | Books | 7% | 110B | | | Conversations | 6% | 85B | Reddit, StackOverflow, HackerNews | | Code | 3% | 45B | | | RefinedWeb-French | 3% | 45B | massive web crawl | | Technical | 2% | 30B | arXiv, PubMed, USPTO, etc. | The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer. ### Training Procedure Falcon-7B was trained on 384 A100 40GB GPUs, using a 2D parallelism strategy (PP=2, DP=192) combined with ZeRO. #### Training Hyperparameters | **Hyperparameter** | **Value** | **Comment** | |--------------------|------------|-------------------------------------------| | Precision | `bfloat16` | | | Optimizer | AdamW | | | Learning rate | 6e-4 | 4B tokens warm-up, cosine decay to 1.2e-5 | | Weight decay | 1e-1 | | | Z-loss | 1e-4 | | | Batch size | 2304 | 30B tokens ramp-up | #### Speeds, Sizes, Times Training happened in early March 2023 and took about two weeks. ## Evaluation *Paper coming soon*. 
See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results. ## Technical Specifications ### Model Architecture and Objective Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token). The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences: * **Positionnal embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864)); * **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)); * **Decoder-block:** parallel attention/MLP with a single layer norm. | **Hyperparameter** | **Value** | **Comment** | |--------------------|-----------|----------------------------------------| | Layers | 32 | | | `d_model` | 4544 | Increased to compensate for multiquery | | `head_dim` | 64 | Reduced to optimise for FlashAttention | | Vocabulary | 65024 | | | Sequence length | 2048 | | ### Compute Infrastructure #### Hardware Falcon-7B was trained on AWS SageMaker, on 384 A100 40GB GPUs in P4d instances. #### Software Falcon-7B was trained a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.) ## Citation *Paper coming soon* 😊. In the meanwhile, you can use the following information to cite: ``` @article{falcon40b, title={{Falcon-40B}: an open large language model with state-of-the-art performance}, author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme}, year={2023} } ``` To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116). ``` @article{refinedweb, title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only}, author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay}, journal={arXiv preprint arXiv:2306.01116}, eprint={2306.01116}, eprinttype = {arXiv}, url={https://arxiv.org/abs/2306.01116}, year={2023} } ``` ## License Falcon-7B is made available under the Apache 2.0 license. ## Contact falconllm@tii.ae
10,574
[ [ -0.04443359375, -0.061920166015625, 0.0035533905029296875, 0.020477294921875, -0.01091766357421875, -0.002376556396484375, -0.0116729736328125, -0.037811279296875, 0.020294189453125, 0.0267181396484375, -0.03582763671875, -0.034912109375, -0.058319091796875, ...
Fantasy-Studio/Paint-by-Example
2022-12-07T10:44:13.000Z
[ "diffusers", "stable-diffusion", "arxiv:2211.13227", "license:creativeml-openrail-m", "has_space", "diffusers:PaintByExamplePipeline", "region:us" ]
null
Fantasy-Studio
null
null
Fantasy-Studio/Paint-by-Example
32
2,587
diffusers
2022-11-27T16:51:40
--- license: creativeml-openrail-m tags: - stable-diffusion inference: false --- # Paint-By-Example ## Overview [Paint by Example: Exemplar-based Image Editing with Diffusion Models](https://arxiv.org/abs/2211.13227) by Binxin Yang, Shuyang Gu, Bo Zhang, Ting Zhang, Xuejin Chen, Xiaoyan Sun, Dong Chen, Fang Wen The abstract of the paper is the following: *Language-guided image editing has achieved great success recently. In this paper, for the first time, we investigate exemplar-guided image editing for more precise control. We achieve this goal by leveraging self-supervised training to disentangle and re-organize the source image and the exemplar. However, the naive approach will cause obvious fusing artifacts. We carefully analyze it and propose an information bottleneck and strong augmentations to avoid the trivial solution of directly copying and pasting the exemplar image. Meanwhile, to ensure the controllability of the editing process, we design an arbitrary shape mask for the exemplar image and leverage the classifier-free guidance to increase the similarity to the exemplar image. The whole framework involves a single forward of the diffusion model without any iterative optimization. We demonstrate that our method achieves an impressive performance and enables controllable editing on in-the-wild images with high fidelity.* The original codebase can be found [here](https://github.com/Fantasy-Studio/Paint-by-Example). ## Available Pipelines: | Pipeline | Tasks | Colab |---|---|:---:| | [pipeline_paint_by_example.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/paint_by_example/pipeline_paint_by_example.py) | *Image-Guided Image Painting* | - | ## Tips - [Fantasy-Studio/Paint-by-Example](https://huggingface.co/Fantasy-Studio/Paint-by-Example) has been warm-started from the [CompVis/stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4) and with the objective to inpaint partly masked images conditioned on example / reference images - To quickly demo *PaintByExample*, please have a look at [this demo](https://huggingface.co/spaces/Fantasy-Studio/Paint-by-Example). - You can run the following code snippet as an example: ```python # !pip install diffusers transformers import PIL import requests import torch from io import BytesIO from diffusers import DiffusionPipeline def download_image(url): response = requests.get(url) return PIL.Image.open(BytesIO(response.content)).convert("RGB") img_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/image/example_1.png" mask_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/mask/example_1.png" example_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/reference/example_1.jpg" init_image = download_image(img_url).resize((512, 512)) mask_image = download_image(mask_url).resize((512, 512)) example_image = download_image(example_url).resize((512, 512)) pipe = DiffusionPipeline.from_pretrained( "Fantasy-Studio/Paint-by-Example", torch_dtype=torch.float16, ) pipe = pipe.to("cuda") image = pipe(image=init_image, mask_image=mask_image, example_image=example_image).images[0] image ```
3,265
[ [ -0.037811279296875, -0.051666259765625, 0.026092529296875, 0.025787353515625, -0.01485443115234375, 0.0113525390625, -0.0022411346435546875, -0.03009033203125, 0.0221405029296875, 0.036773681640625, -0.050140380859375, -0.0223541259765625, -0.05029296875, -0...
nvidia/segformer-b4-finetuned-ade-512-512
2022-08-06T10:25:42.000Z
[ "transformers", "pytorch", "tf", "segformer", "vision", "image-segmentation", "dataset:scene_parse_150", "arxiv:2105.15203", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
nvidia
null
null
nvidia/segformer-b4-finetuned-ade-512-512
1
2,585
transformers
2022-03-02T23:29:05
--- license: other tags: - vision - image-segmentation datasets: - scene_parse_150 widget: - src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg example_title: House - src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg example_title: Castle --- # SegFormer (b4-sized) model fine-tuned on ADE20k SegFormer model fine-tuned on ADE20k at resolution 512x512. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer). Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset. ## Intended uses & limitations You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to perform semantic segmentation on an image from the COCO 2017 dataset: ```python from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation from PIL import Image import requests feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b4-finetuned-ade-512-512") model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b4-finetuned-ade-512-512") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4) ``` For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#). ### License The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2105-15203, author = {Enze Xie and Wenhai Wang and Zhiding Yu and Anima Anandkumar and Jose M. Alvarez and Ping Luo}, title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers}, journal = {CoRR}, volume = {abs/2105.15203}, year = {2021}, url = {https://arxiv.org/abs/2105.15203}, eprinttype = {arXiv}, eprint = {2105.15203}, timestamp = {Wed, 02 Jun 2021 11:46:42 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
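As with the other SegFormer checkpoints, the snippet above stops at logits at 1/4 of the input resolution. An optional sketch, not from the original card, for recovering a full-resolution ADE20K label map follows; it reuses `logits` and `image` from that snippet.

```python
import torch

# upsample the low-resolution logits back to the input size and take the per-pixel argmax
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation = upsampled.argmax(dim=1)[0]  # (height, width) ADE20K class indices
```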
3,209
[ [ -0.06707763671875, -0.053741455078125, 0.01385498046875, 0.0155792236328125, -0.0234222412109375, -0.0262451171875, 0.0001932382583618164, -0.051116943359375, 0.0223846435546875, 0.043121337890625, -0.06561279296875, -0.043487548828125, -0.056610107421875, 0...
Helsinki-NLP/opus-mt-lv-en
2023-08-16T12:00:49.000Z
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lv", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
Helsinki-NLP
null
null
Helsinki-NLP/opus-mt-lv-en
0
2,584
transformers
2022-03-02T23:29:04
--- tags: - translation license: apache-2.0 --- ### opus-mt-lv-en * source languages: lv * target languages: en * OPUS readme: [lv-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lv-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/lv-en/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-en/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-en/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2017-enlv.lv.en | 29.9 | 0.587 | | newstest2017-enlv.lv.en | 22.1 | 0.526 | | Tatoeba.lv.en | 53.3 | 0.707 |
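The card above reports benchmarks but includes no usage snippet. A minimal sketch with the Marian classes in `transformers` is given below; the Latvian input sentence is illustrative only.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-lv-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# illustrative Latvian input ("Riga is the capital of Latvia.")
batch = tokenizer(["Rīga ir Latvijas galvaspilsēta."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```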
907
[ [ -0.020965576171875, -0.0287933349609375, 0.022857666015625, 0.0241241455078125, -0.028900146484375, -0.025665283203125, -0.0291900634765625, -0.0041351318359375, 0.00321197509765625, 0.035675048828125, -0.056427001953125, -0.041046142578125, -0.039093017578125, ...
jonatasgrosman/wav2vec2-xls-r-1b-portuguese
2022-12-14T02:02:02.000Z
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "pt", "robust-speech-event", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:...
automatic-speech-recognition
jonatasgrosman
null
null
jonatasgrosman/wav2vec2-xls-r-1b-portuguese
9
2,581
transformers
2022-03-02T23:29:05
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - hf-asr-leaderboard - mozilla-foundation/common_voice_8_0 - pt - robust-speech-event datasets: - mozilla-foundation/common_voice_8_0 model-index: - name: XLS-R Wav2Vec2 Portuguese by Jonatas Grosman results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 8 type: mozilla-foundation/common_voice_8_0 args: pt metrics: - name: Test WER type: wer value: 8.7 - name: Test CER type: cer value: 2.55 - name: Test WER (+LM) type: wer value: 6.04 - name: Test CER (+LM) type: cer value: 1.98 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: pt metrics: - name: Dev WER type: wer value: 24.23 - name: Dev CER type: cer value: 11.3 - name: Dev WER (+LM) type: wer value: 19.41 - name: Dev CER (+LM) type: cer value: 10.19 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Test Data type: speech-recognition-community-v2/eval_data args: pt metrics: - name: Test WER type: wer value: 18.8 --- # Fine-tuned XLS-R 1B model for speech recognition in Portuguese Fine-tuned [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on Portuguese using the train and validation splits of [Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0), [CORAA](https://github.com/nilc-nlp/CORAA), [Multilingual TEDx](http://www.openslr.org/100), and [Multilingual LibriSpeech](https://www.openslr.org/94/). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool, and thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :) ## Usage Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library: ```python from huggingsound import SpeechRecognitionModel model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-xls-r-1b-portuguese") audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"] transcriptions = model.transcribe(audio_paths) ``` Writing your own inference script: ```python import torch import librosa from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor LANG_ID = "pt" MODEL_ID = "jonatasgrosman/wav2vec2-xls-r-1b-portuguese" SAMPLES = 10 test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]") processor = Wav2Vec2Processor.from_pretrained(MODEL_ID) model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000) batch["speech"] = speech_array batch["sentence"] = batch["sentence"].upper() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) predicted_sentences = processor.batch_decode(predicted_ids) ``` ## Evaluation Commands 1. 
To evaluate on `mozilla-foundation/common_voice_8_0` with split `test` ```bash python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-portuguese --dataset mozilla-foundation/common_voice_8_0 --config pt --split test ``` 2. To evaluate on `speech-recognition-community-v2/dev_data` ```bash python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-portuguese --dataset speech-recognition-community-v2/dev_data --config pt --split validation --chunk_length_s 5.0 --stride_length_s 1.0 ``` ## Citation If you want to cite this model you can use this: ```bibtex @misc{grosman2021xlsr-1b-portuguese, title={Fine-tuned {XLS-R} 1{B} model for speech recognition in {P}ortuguese}, author={Grosman, Jonatas}, howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-xls-r-1b-portuguese}}, year={2022} } ```
4,543
[ [ -0.0264892578125, -0.052398681640625, 0.01181793212890625, 0.0219573974609375, -0.01519012451171875, -0.01727294921875, -0.0311737060546875, -0.043487548828125, 0.00815582275390625, 0.024200439453125, -0.0345458984375, -0.043731689453125, -0.048797607421875, ...
ramsrigouthamg/t5-large-paraphraser-diverse-high-quality
2021-09-21T05:21:49.000Z
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text2text-generation
ramsrigouthamg
null
null
ramsrigouthamg/t5-large-paraphraser-diverse-high-quality
24
2,577
transformers
2022-03-02T23:29:05
A blog post with more details, as well as an easy-to-use Google Colab notebook, is available here: https://towardsdatascience.com/high-quality-sentence-paraphraser-using-transformers-in-nlp-c33f4482856f ``` !pip install transformers==4.10.2 !pip install sentencepiece==0.1.96 ``` ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM model = AutoModelForSeq2SeqLM.from_pretrained("ramsrigouthamg/t5-large-paraphraser-diverse-high-quality") tokenizer = AutoTokenizer.from_pretrained("ramsrigouthamg/t5-large-paraphraser-diverse-high-quality") import torch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") print("device ", device) model = model.to(device) # Beam Search context = "Once, a group of frogs were roaming around the forest in search of water." text = "paraphrase: " + context + " </s>" encoding = tokenizer.encode_plus(text, max_length=128, padding=True, return_tensors="pt") input_ids, attention_mask = encoding["input_ids"].to(device), encoding["attention_mask"].to(device) model.eval() beam_outputs = model.generate( input_ids=input_ids, attention_mask=attention_mask, max_length=128, early_stopping=True, num_beams=15, num_return_sequences=3 ) print("\n\n") print("Original: ", context) for beam_output in beam_outputs: sent = tokenizer.decode(beam_output, skip_special_tokens=True, clean_up_tokenization_spaces=True) print(sent) ``` **Output from the above code** ``` Original: Once, a group of frogs were roaming around the forest in search of water. paraphrasedoutput: A herd of frogs were wandering around the woods in search of water. paraphrasedoutput: A herd of frogs was wandering around the woods in search of water. paraphrasedoutput: A herd of frogs were wandering around the forest in search of water at one time. ```
1,787
[ [ -0.016387939453125, -0.050079345703125, 0.029449462890625, 0.0411376953125, -0.033660888671875, 0.005031585693359375, 0.00809478759765625, -0.0020847320556640625, 0.00875091552734375, 0.0313720703125, -0.025054931640625, -0.0208892822265625, -0.051605224609375, ...
IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-v0.1
2023-05-25T09:27:55.000Z
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "zh", "Chinese", "arxiv:2112.10752", "arxiv:2209.02970", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
IDEA-CCNL
null
null
IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-v0.1
412
2,577
diffusers
2022-10-31T12:23:38
--- language: zh license: creativeml-openrail-m tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - zh - Chinese inference: true widget: - text: "孤帆远影碧空尽,惟见长江天际流,油画" example_title: 孤帆远影碧空尽,惟见长江天际流,油画 - text: "日出在印象的港口来回, 唯美, 插画" example_title: 日出在印象的港口来回, 唯美, 插画 - text: "科幻, 外星文明, 建筑, 机械感, 4k壁纸" example_title: 科幻, 外星文明, 建筑, 机械感, 4k壁纸 - text: "东临碣石, 以观沧海, 波涛汹涌, 插画" example_title: 东临碣石, 以观沧海, 波涛汹涌, 插画 - text: "飞流直下三千尺, 疑是银河落九天, 瀑布, 插画" example_title: 飞流直下三千尺, 疑是银河落九天, 瀑布, 插画 - text: "女孩背影, 日落, 唯美插画" example_title: 女孩背影, 日落, 唯美插画 extra_gated_prompt: |- One more step before getting this model. This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content 2. IDEA-CCNL claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license here: https://huggingface.co/spaces/CompVis/stable-diffusion-license By clicking on "Access repository" below, you accept that your *contact information* (email address and username) can be shared with the model authors as well. extra_gated_fields: I have read the License and agree with its terms: checkbox --- # Taiyi-Stable-Diffusion-1B-Chinese-v0.1 - Main Page:[Fengshenbang](https://fengshenbang-lm.com/) - Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM) ## 简介 Brief Introduction 首个开源的中文Stable Diffusion模型,基于0.2亿筛选过的中文图文对训练。 The first open source Chinese Stable diffusion, which was trained on 20M filtered Chinese image-text pairs. 
## 在线体验 Gradio Web UI 可以在[Taiyi-Stable-Diffusion-Chinese](https://huggingface.co/spaces/IDEA-CCNL/Taiyi-Stable-Diffusion-Chinese)体验我们的模型。 We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run Taiyi-Stable-Diffusion-1B-Chinese-v0.1: [Taiyi-Stable-Diffusion-Chinese](https://huggingface.co/spaces/IDEA-CCNL/Taiyi-Stable-Diffusion-Chinese) ## 简介 Brief Introduction 首个开源的中英双语Stable Diffusion模型,基于0.2亿筛选过的中文图文对训练。 ## 模型分类 Model Taxonomy | 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra | | :----: | :----: | :----: | :----: | :----: | :----: | | 特殊 Special | 多模态 Multimodal | 太乙 Taiyi | Stable Diffusion | 1B | Chinese | ## 模型信息 Model Information 我们将[Noah-Wukong](https://wukong-dataset.github.io/wukong-dataset/)数据集(100M)和[Zero](https://zero.so.com/)数据集(23M)用作预训练的数据集,先用[IDEA-CCNL/Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese](https://huggingface.co/IDEA-CCNL/Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese)对这两个数据集的图文对相似性进行打分,取CLIP Score大于0.2的图文对作为我们的训练集。 我们使用[IDEA-CCNL/Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese](https://huggingface.co/IDEA-CCNL/Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese)作为初始化的text encoder,冻住[stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4)([论文](https://arxiv.org/abs/2112.10752))模型的其他部分,只训练text encoder,以便保留原始模型的生成能力且实现中文概念的对齐。该模型目前在0.2亿图文对上训练了一个epoch。 我们在 32 x A100 训练了大约100小时。该版本只是一个初步的版本,我们将持续优化并开源后续模型,欢迎交流。 We use [Noah-Wukong](https://wukong-dataset.github.io/wukong-dataset/)(100M) 和 [Zero](https://zero.so.com/)(23M) as our dataset, and take the image and text pairs with CLIP Score (based on [IDEA-CCNL/Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese](https://huggingface.co/IDEA-CCNL/Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese)) greater than 0.2 as our Training set. We use [IDEA-CCNL/Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese](https://huggingface.co/IDEA-CCNL/Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese) as our init text encoder. To keep the powerful generative capability of stable diffusion and align Chinese concepts with the images, We only train the text encoder and freeze other part of the [stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4)([paper](https://arxiv.org/abs/2112.10752)) model. It takes 100 hours to train this model based on 32 x A100. This model is a preliminary version and we will update this model continuously and open sourse. Welcome to exchange! 
### Result Basic Prompt | 铁马冰河入梦来,3D绘画。 | 飞流直下三千尺,油画。 | 女孩背影,日落,唯美插画。 | | ---- | ---- | ---- | | ![](result_examples/tiema.png) | ![](result_examples/feiliu.png) | ![](result_examples/nvhai.jpg) | Advanced Prompt | 铁马冰河入梦来,概念画,科幻,玄幻,3D | 中国海边城市,科幻,未来感,唯美,插画。 | 那人却在灯火阑珊处,色彩艳丽,古风,资深插画师作品,桌面高清壁纸。 | | ---- | ---- | ---- | | ![](result_examples/tiema2.jpg) | ![](result_examples/chengshi.jpg) | ![](result_examples/naren.jpg) | ## 使用 Usage ### 全精度 Full precision ```py from diffusers import StableDiffusionPipeline pipe = StableDiffusionPipeline.from_pretrained("IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-v0.1").to("cuda") prompt = '飞流直下三千尺,油画' image = pipe(prompt, guidance_scale=7.5).images[0] image.save("飞流.png") ``` ### 半精度 Half precision FP16 (CUDA) 添加 `torch_dtype=torch.float16` 和 `device_map="auto"` 可以快速加载 FP16 的权重,以加快推理速度。 更多信息见 [the optimization docs](https://huggingface.co/docs/diffusers/main/en/optimization/fp16#half-precision-weights)。 ```py # !pip install git+https://github.com/huggingface/accelerate import torch from diffusers import StableDiffusionPipeline torch.backends.cudnn.benchmark = True pipe = StableDiffusionPipeline.from_pretrained("IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-v0.1", torch_dtype=torch.float16) pipe.to('cuda') prompt = '飞流直下三千尺,油画' image = pipe(prompt, guidance_scale=7.5).images[0] image.save("飞流.png") ``` ### 使用手册 Handbook for Taiyi https://github.com/IDEA-CCNL/Fengshenbang-LM/blob/main/fengshen/examples/stable_diffusion_chinese/taiyi_handbook.md ### 怎样微调 How to finetune https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen/examples/finetune_taiyi_stable_diffusion ### webui配置 Configure webui https://github.com/IDEA-CCNL/stable-diffusion-webui/blob/master/README.md ### DreamBooth https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen/examples/stable_diffusion_dreambooth ## 引用 Citation 如果您在您的工作中使用了我们的模型,可以引用我们的[总论文](https://arxiv.org/abs/2209.02970): If you are using the resource for your work, please cite the our [paper](https://arxiv.org/abs/2209.02970): ```text @article{fengshenbang, author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen}, title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence}, journal = {CoRR}, volume = {abs/2209.02970}, year = {2022} } ``` 也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/): You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/): ```text @misc{Fengshenbang-LM, title={Fengshenbang-LM}, author={IDEA-CCNL}, year={2021}, howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}}, } ```
7,489
[ [ -0.030517578125, -0.0634765625, 0.0262298583984375, 0.02850341796875, -0.035797119140625, -0.0233001708984375, -0.0262298583984375, -0.02239990234375, 0.03033447265625, 0.01245880126953125, -0.025238037109375, -0.05230712890625, -0.039215087890625, -0.004703...
Yntec/Darkside
2023-09-29T07:52:19.000Z
[ "diffusers", "Anime", "Horror", "Pixar", "DucHaiten", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
Yntec
null
null
Yntec/Darkside
1
2,577
diffusers
2023-09-29T06:54:29
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - Anime - Horror - Pixar - DucHaiten - stable-diffusion - stable-diffusion-diffusers - diffusers - text-to-image --- # DucHaiten Darkside fp16 no-ema version of this model: https://civitai.com/models/5426?modelVersionId=6311 Samples and prompt: ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/IJYod8CehiODd6XqdoJFg.png) ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/Obn05NKzFtkJq4kTcqBos.png) Cartoon Pretty CUTE Girl, ilya kuvshinov detailed, DETAILED CHIBI EYES, gorgeous detailed hair, high school, Magazine ad, iconic, 1949, sharp focus. visible brushstrokes By KlaysMoji and artgerm and Clay Mann and and simon cowell and leyendecker. By Dave Rapoza. Pretty CUTE girl.
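No usage snippet is included above. Since the repository is an fp16, no-EMA checkpoint in diffusers format, a minimal sketch is given below; it reuses an abridged form of the card's own sample prompt, and the CUDA device is an assumption.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Yntec/Darkside", torch_dtype=torch.float16).to("cuda")

# abridged version of the sample prompt shown above
prompt = "Cartoon Pretty CUTE Girl, DETAILED CHIBI EYES, gorgeous detailed hair, sharp focus"
image = pipe(prompt).images[0]
image.save("darkside_sample.png")
```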
857
[ [ -0.04083251953125, -0.07135009765625, 0.0190887451171875, 0.011627197265625, -0.0261688232421875, -0.00380706787109375, 0.0163421630859375, -0.032501220703125, 0.0841064453125, 0.044189453125, -0.051910400390625, -0.053955078125, -0.03253173828125, 0.0082702...
TheBloke/CodeLlama-13B-Instruct-GPTQ
2023-09-27T12:46:10.000Z
[ "transformers", "safetensors", "llama", "text-generation", "llama-2", "custom_code", "code", "arxiv:2308.12950", "license:llama2", "text-generation-inference", "region:us" ]
text-generation
TheBloke
null
null
TheBloke/CodeLlama-13B-Instruct-GPTQ
29
2,569
transformers
2023-08-25T00:51:10
--- language: - code license: llama2 tags: - llama-2 model_name: CodeLlama 13B Instruct base_model: codellama/CodeLlama-13b-Instruct-hf inference: false model_creator: Meta model_type: llama pipeline_tag: text-generation prompt_template: '[INST] Write code to solve the following coding problem that obeys the constraints and passes the example test cases. Please wrap your code answer using ```: {prompt} [/INST] ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # CodeLlama 13B Instruct - GPTQ - Model creator: [Meta](https://huggingface.co/meta-llama) - Original model: [CodeLlama 13B Instruct](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) <!-- description start --> ## Description This repo contains GPTQ model files for [Meta's CodeLlama 13B Instruct](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GGUF) * [Meta's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: CodeLlama ``` [INST] Write code to solve the following coding problem that obeys the constraints and passes the example test cases. Please wrap your code answer using ```: {prompt} [/INST] ``` <!-- prompt-template end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa. 
<details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GPTQ/tree/main) | 4 | 128 | No | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 7.26 GB | Yes | 4-bit, without Act Order and group size 128g. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 7.51 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. | | [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. 
| | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download from branches - In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/CodeLlama-13B-Instruct-GPTQ:main` - With Git, you can clone a branch with: ``` git clone --single-branch --branch main https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GPTQ ``` - In Python Transformers code, the branch is the `revision` parameter; see below. <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui). Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/CodeLlama-13B-Instruct-GPTQ`. - To download from a specific branch, enter for example `TheBloke/CodeLlama-13B-Instruct-GPTQ:main` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `CodeLlama-13B-Instruct-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-python start --> ## How to use this GPTQ model from Python code ### Install the necessary packages Requires: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install transformers>=4.32.0 optimum>=1.12.0 pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7 ``` If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ pip3 install . ``` ### For CodeLlama models only: you must use Transformers 4.33.0 or later. 
If 4.33.0 is not yet released when you read this, you will need to install Transformers from source: ```shell pip3 uninstall -y transformers pip3 install git+https://github.com/huggingface/transformers.git ``` ### You can then use the following code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/CodeLlama-13B-Instruct-GPTQ" # To use a different branch, change revision # For example: revision="main" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=True, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" prompt_template=f'''[INST] Write code to solve the following coding problem that obeys the constraints and passes the example test cases. Please wrap your code answer using ```: {prompt} [/INST] ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI). [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility. [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models. <!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Meta's CodeLlama 13B Instruct # **Code Llama** Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the 13 instruct-tuned version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom. 
| | Base Model | Python | Instruct | | --- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- | | 7B | [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) | | 13B | [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf) | [codellama/CodeLlama-13b-Python-hf](https://huggingface.co/codellama/CodeLlama-13b-Python-hf) | [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) | | 34B | [codellama/CodeLlama-34b-hf](https://huggingface.co/codellama/CodeLlama-34b-hf) | [codellama/CodeLlama-34b-Python-hf](https://huggingface.co/codellama/CodeLlama-34b-Python-hf) | [codellama/CodeLlama-34b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf) | ## Model Use To use this model, please make sure to install transformers from `main` until the next version is released: ```bash pip install git+https://github.com/huggingface/transformers.git@main accelerate ``` Model capabilities: - [x] Code completion. - [x] Infilling. - [x] Instructions / chat. - [ ] Python specialist. ## Model Details *Note: Use of this model is governed by the Meta license. Meta developed and publicly released the Code Llama family of large language models (LLMs). **Model Developers** Meta **Variations** Code Llama comes in three model sizes, and three variants: * Code Llama: base models designed for general code synthesis and understanding * Code Llama - Python: designed specifically for Python * Code Llama - Instruct: for instruction following and safer deployment All variants are available in sizes of 7B, 13B and 34B parameters. **This repository contains the Instruct version of the 13B parameters model.** **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture. **Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) **Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950). ## Intended Use **Intended Use Cases** Code Llama and its variants is intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications. 
**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants. ## Hardware and Software **Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta’s Research Super Cluster. **Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program. ## Training Data All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details). ## Evaluation Results See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper. ## Ethical Considerations and Limitations Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-user-guide](https://ai.meta.com/llama/responsible-user-guide).
21,174
[ [ -0.0347900390625, -0.06085205078125, 0.01155853271484375, 0.0092926025390625, -0.024627685546875, -0.00972747802734375, 0.0025196075439453125, -0.035369873046875, 0.01201629638671875, 0.0273590087890625, -0.044158935546875, -0.042572021484375, -0.024398803710937...
timm/resnest50d.in1k
2023-04-23T23:35:49.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2004.08955", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/resnest50d.in1k
0
2,568
timm
2023-04-23T23:35:20
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for resnest50d.in1k A ResNeSt (ResNet based architecture with Split Attention) image classification model. Trained on ImageNet-1k by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 27.5 - GMACs: 5.4 - Activations (M): 14.4 - Image size: 224 x 224 - **Papers:** - ResNeSt: Split-Attention Networks: https://arxiv.org/abs/2004.08955 - **Dataset:** ImageNet-1k - **Original:** https://github.com/zhanghang1989/ResNeSt ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('resnest50d.in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'resnest50d.in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 64, 112, 112]) # torch.Size([1, 256, 56, 56]) # torch.Size([1, 512, 28, 28]) # torch.Size([1, 1024, 14, 14]) # torch.Size([1, 2048, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'resnest50d.in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 2048, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @article{zhang2020resnest, title={ResNeSt: Split-Attention Networks}, author={Zhang, Hang and Wu, Chongruo and Zhang, Zhongyue and Zhu, Yi and Zhang, Zhi and Lin, Haibin and Sun, Yue and He, Tong and Muller, Jonas and Manmatha, R. 
and Li, Mu and Smola, Alexander}, journal={arXiv preprint arXiv:2004.08955}, year={2020} } ```
3,733
[ [ -0.042510986328125, -0.03460693359375, 0.0073699951171875, 0.0110321044921875, -0.02508544921875, -0.02374267578125, -0.0193939208984375, -0.0210418701171875, 0.0279541015625, 0.036163330078125, -0.05023193359375, -0.048126220703125, -0.04962158203125, -0.00...
allenai/t5-small-squad2-question-generation
2023-01-24T16:27:47.000Z
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "en", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text2text-generation
allenai
null
null
allenai/t5-small-squad2-question-generation
36
2,565
transformers
2022-03-02T23:29:05
--- language: en --- A simple question-generation model built on the SQuAD 2.0 dataset. Example use: ```python from transformers import T5Config, T5ForConditionalGeneration, T5Tokenizer model_name = "allenai/t5-small-squad2-question-generation" tokenizer = T5Tokenizer.from_pretrained(model_name) model = T5ForConditionalGeneration.from_pretrained(model_name) def run_model(input_string, **generator_args): input_ids = tokenizer.encode(input_string, return_tensors="pt") res = model.generate(input_ids, **generator_args) output = tokenizer.batch_decode(res, skip_special_tokens=True) print(output) return output run_model("shrouds herself in white and walks penitentially disguised as brotherly love through factories and parliaments; offers help, but desires power;") run_model("He thanked all fellow bloggers and organizations that showed support.") run_model("Races are held between April and December at the Veliefendi Hippodrome near Bakerky, 15 km (9 miles) west of Istanbul.") ``` which should result in the following: ``` ['What is the name of the man who is a brotherly love?'] ['What did He thank all fellow bloggers and organizations that showed support?'] ['Where is the Veliefendi Hippodrome located?'] ```
1,251
[ [ -0.031951904296875, -0.043182373046875, 0.033233642578125, 0.0005245208740234375, -0.034454345703125, -0.01251983642578125, -0.0010862350463867188, -0.0248260498046875, -0.00128936767578125, 0.0206756591796875, -0.0911865234375, -0.0245513916015625, -0.025115966...
kakaobrain/kogpt
2022-09-26T02:17:11.000Z
[ "KakaoBrain", "KoGPT", "GPT", "GPT3", "ko", "arxiv:2104.09864", "arxiv:2109.04650", "license:cc-by-nc-4.0", "has_space", "region:us" ]
null
kakaobrain
null
null
kakaobrain/kogpt
92
2,564
null
2022-03-02T23:29:05
--- language: ko tags: - KakaoBrain - KoGPT - GPT - GPT3 license: cc-by-nc-4.0 --- # KoGPT KakaoBrain's Pre-Trained Language Models. * KoGPT (Korean Generative Pre-trained Transformer) * [https://github.com/kakaobrain/kogpt](https://github.com/kakaobrain/kogpt) * [https://huggingface.co/kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) ## Model Descriptions ### KoGPT6B-ryan1.5b * [\[huggingface\]\[kakaobrain/kogpt\]\[KoGPT6B-ryan1.5b\]](https://huggingface.co/kakaobrain/kogpt/tree/KoGPT6B-ryan1.5b) * [\[huggingface\]\[kakaobrain/kogpt\]\[KoGPT6B-ryan1.5b-float16\]](https://huggingface.co/kakaobrain/kogpt/tree/KoGPT6B-ryan1.5b-float16) | Hyperparameter | Value | |:---------------------|--------------:| | \\(n_{parameters}\\) | 6,166,502,400 | | \\(n_{layers}\\) | 28 | | \\(d_{model}\\) | 4,096 | | \\(d_{ff}\\) | 16,384 | | \\(n_{heads}\\) | 16 | | \\(d_{head}\\) | 256 | | \\(n_{ctx}\\) | 2,048 | | \\(n_{vocab}\\) | 64,512 | | Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) | | RoPE Dimensions | 64 | ## Hardware requirements ### KoGPT6B-ryan1.5b #### GPU The following is the recommended minimum GPU hardware guidance for a handful of example KoGPT. * `32GB GPU RAM` in the required minimum memory size ### KoGPT6B-ryan1.5b-float16 #### GPU The following is the recommended minimum GPU hardware guidance for a handful of example KoGPT. * half-precision requires NVIDIA GPUS based on Volta, Turing or Ampere * `16GB GPU RAM` in the required minimum memory size ## Usage ### prompt ```bash python -m kogpt --help usage: KoGPT inference [-h] [--model MODEL] [--revision {KoGPT6B-ryan1.5b}] [--device {cpu,cuda}] [-d] KakaoBrain Korean(hangul) Generative Pre-Training Model optional arguments: -h, --help show this help message and exit --model MODEL huggingface repo (default:kakaobrain/kogpt) --revision {KoGPT6B-ryan1.5b} --device {cpu,cuda} (default:cuda) -d, --debug ``` ```bash python -m kogpt prompt> 인간처럼 생각하고, 행동하는 '지능'을 통해 인류가 이제까지 풀지 못했던 temperature(0.8)> max_length(128)> 64 인간처럼 생각하고, 행동하는 '지능'을 통해 인류가 이제까지 풀지 못했던 문제의 해답을 찾을 수 있을 것이다. 과학기술이 고도로 발달한 21세기를 살아갈 우리 아이들에게 가장 필요한 것은 사고력 훈련이다. 사고력 훈련을 통해, 세상 prompt> ... ``` ### python ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained( 'kakaobrain/kogpt', revision='KoGPT6B-ryan1.5b-float16', # or float32 version: revision=KoGPT6B-ryan1.5b bos_token='[BOS]', eos_token='[EOS]', unk_token='[UNK]', pad_token='[PAD]', mask_token='[MASK]' ) model = AutoModelForCausalLM.from_pretrained( 'kakaobrain/kogpt', revision='KoGPT6B-ryan1.5b-float16', # or float32 version: revision=KoGPT6B-ryan1.5b pad_token_id=tokenizer.eos_token_id, torch_dtype='auto', low_cpu_mem_usage=True ).to(device='cuda', non_blocking=True) _ = model.eval() prompt = '인간처럼 생각하고, 행동하는 \'지능\'을 통해 인류가 이제까지 풀지 못했던' with torch.no_grad(): tokens = tokenizer.encode(prompt, return_tensors='pt').to(device='cuda', non_blocking=True) gen_tokens = model.generate(tokens, do_sample=True, temperature=0.8, max_length=64) generated = tokenizer.batch_decode(gen_tokens)[0] print(generated) # print: 인간처럼 생각하고, 행동하는 '지능'을 통해 인류가 이제까지 풀지 못했던 문제의 해답을 찾을 수 있을 것이다. 과학기술이 고도로 발달한 21세기를 살아갈 우리 아이들에게 가장 필요한 것은 사고력 훈련이다. 사고력 훈련을 통해, 세상 ``` ## Experiments ### In-context Few-Shots | Models | #params | NSMC (Acc.) 
| YNAT (F1) | KLUE-STS (F1) | |:--------------|--------:|------------:|----------:|--------------:| | HyperCLOVA[1] | 1.3B | 83.9 | 58.7 | 60.9 | | HyperCLOVA[1] | 6.9B | 83.8 | 67.5 | 59.3 | | HyperCLOVA[1] | 13.0B | 87.9 | 67.9 | 60.0 | | HyperCLOVA[1] | 39.0B | 88.0 | 71.4 | 61.6 | | HyperCLOVA[1] | 82.0B | **88.2** | 72.7 | **65.1** | | **Ours** | 6.0B | 87.8 | **78.0** | 64.3 | ### Finetuning / P-Tuning We have been reported to have issues(https://github.com/kakaobrain/kogpt/issues/17) with our downstream evaluation. The previously published performance evaluation table was deleted because it was difficult to see it as a fair comparison because the comparison target algorithm was different and the performance measurement method could not be confirmed. You can refer to the above issue link for the existing performance evaluation table and troubleshooting results. ## Limitations KakaoBrain `KoGPT` was trained on `ryan dataset`, a dataset known to contain profanity, lewd, political changed, and other harsh language. Therefore, `KoGPT` can generate socially unacceptable texts. As with all language models, It is difficult to predict in advance how `KoGPT` will response to particular prompts and offensive content without warning. Primarily Korean: `KoGPT` is primarily trained on Korean texts, and is best for classifying, searching, summarizing or generating such texts. `KoGPT` by default perform worse on inputs that are different from the data distribution it is trained on, including non-Korean as well as specific dialects of Korean that are not well represented in the training data. [comment]: <> (If abnormal or socially unacceptable text is generated during testing, please send a "prompt" and the "generated text" to [kogpt-report@kakaobrain.com]&#40;mailto:kogpt-report@kakaobrain.com&#41;. ) 카카오브레인 `KoGPT`는 욕설, 음란, 정치적 내용 및 기타 거친 언어에 대한 처리를 하지 않은 `ryan dataset`으로 학습하였습니다. 따라서 `KoGPT`는 사회적으로 용인되지 않은 텍스트를 생성할 수 있습니다. 다른 언어 모델과 마찬가지로 특정 프롬프트와 공격적인 콘텐츠에 어떠한 결과를 생성할지 사전에 파악하기 어렵습니다. `KoGPT`는 주로 한국어 텍스트로 학습을 하였으며 이러한 텍스트를 분류, 검색, 요약 또는 생성하는데 가장 적합합니다. 기본적으로 `KoGPT`는 학습 데이터에 잘 나타나지 않는 방언뿐만아니라 한국어가 아닌 경우와 같이 학습 데이터에서 발견하기 어려운 입력에서 좋지 않은 성능을 보입니다. [comment]: <> (테스트중에 발생한 비정상적인 혹은 사회적으로 용인되지 않는 텍스트가 생성된 경우 [kogpt-report@kakaobrain.com]&#40;mailto:kogpt-report@kakaobrain.com&#41;로 "prompt"와 "생성된 문장"을 함께 보내주시기 바랍니다.) ## Citation If you apply this library or model to any project and research, please cite our code: ``` @misc{kakaobrain2021kogpt, title = {KoGPT: KakaoBrain Korean(hangul) Generative Pre-trained Transformer}, author = {Ildoo Kim and Gunsoo Han and Jiyeon Ham and Woonhyuk Baek}, year = {2021}, howpublished = {\url{https://github.com/kakaobrain/kogpt}}, } ``` ## Contact This is released as an open source in the hope that it will be helpful to many research institutes and startups for research purposes. We look forward to contacting us from various places who wish to cooperate with us. [contact@kakaobrain.com](mailto:contact@kakaobrain.com) ## License The `source code` of KakaoBrain `KoGPT` are licensed under [Apache 2.0](LICENSE.apache-2.0) License. The `pretrained wieghts` of KakaoBrain `KoGPT` are licensed under [CC-BY-NC-ND 4.0 License](https://creativecommons.org/licenses/by-nc-nd/4.0/) License. 카카오브레인 `KoGPT`의 `소스코드(source code)`는 [Apache 2.0](LICENSE.apache-2.0) 라이선스 하에 공개되어 있습니다. 카카오브레인 `KoGPT`의 `사전학습된 가중치(pretrained weights)`는 [CC-BY-NC-ND 4.0 라이선스](https://creativecommons.org/licenses/by-nc-nd/4.0/) 라이선스 하에 공개되어 있습니다. 모델 및 코드, 사전학습된 가중치를 사용할 경우 라이선스 내용을 준수해 주십시오. 
라이선스 전문은 [Apache 2.0](LICENSE.apache-2.0), [LICENSE.cc-by-nc-nd-4.0](LICENSE.cc-by-nc-nd-4.0) 파일에서 확인하실 수 있습니다. ## References [1] [HyperCLOVA](https://arxiv.org/abs/2109.04650): Kim, Boseop, et al. "What changes can large-scale language models bring? intensive study on hyperclova: Billions-scale korean generative pretrained transformers." arXiv preprint arXiv:2109.04650 (2021).
7,743
[ [ -0.041412353515625, -0.0496826171875, 0.0232696533203125, 0.0199432373046875, -0.039703369140625, -0.009674072265625, -0.016143798828125, -0.0161590576171875, 0.006168365478515625, 0.0231475830078125, -0.034210205078125, -0.037872314453125, -0.05023193359375, ...
google/bert_uncased_L-8_H-512_A-8
2021-05-19T17:35:53.000Z
[ "transformers", "pytorch", "jax", "bert", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
google
null
null
google/bert_uncased_L-8_H-512_A-8
3
2,562
transformers
2022-03-02T23:29:05
--- thumbnail: https://huggingface.co/front/thumbnails/google.png license: apache-2.0 --- BERT Miniatures === This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking). We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity. You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below: | |H=128|H=256|H=512|H=768| |---|:---:|:---:|:---:|:---:| | **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]| | **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]| | **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]| | **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]| | **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]| | **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]| Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model. 
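As an editor-added illustrative sketch of the fine-tuning path described above (it assumes the `transformers` library and uses the 8/512 BERT-Medium checkpoint from this repository), the miniatures load through the same classes as the original BERT models:

```python
# Editor-added sketch (not part of the original card). Assumes the `transformers` library.
# The miniatures are loaded and fine-tuned exactly like the original BERT checkpoints.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "google/bert_uncased_L-8_H-512_A-8"  # 8/512, i.e. BERT-Medium in the table above
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

inputs = tokenizer("This movie was surprisingly good.", return_tensors="pt")
logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 2]); the classification head is untrained until fine-tuning
```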
Here are the corresponding GLUE scores on the test set: |Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX| |---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| |BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0| |BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1| |BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6| |BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5| For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs: - batch sizes: 8, 16, 32, 64, 128 - learning rates: 3e-4, 1e-4, 5e-5, 3e-5 If you use these models, please cite the following paper: ``` @article{turc2019, title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models}, author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina}, journal={arXiv preprint arXiv:1908.08962v2 }, year={2019} } ``` [2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2 [2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4 [2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8 [2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12 [4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2 [4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4 [4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8 [4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12 [6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2 [6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4 [6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8 [6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12 [8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2 [8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4 [8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8 [8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12 [10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2 [10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4 [10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8 [10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12 [12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2 [12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4 [12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8 [12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
4,617
[ [ -0.053558349609375, -0.03546142578125, 0.0239410400390625, 0.0131683349609375, -0.0237274169921875, -0.016937255859375, -0.02398681640625, -0.031219482421875, 0.04376220703125, -0.006107330322265625, -0.06103515625, -0.030670166015625, -0.05206298828125, -0....
google/pegasus-multi_news
2023-01-24T16:42:34.000Z
[ "transformers", "pytorch", "pegasus", "text2text-generation", "summarization", "en", "arxiv:1912.08777", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
summarization
google
null
null
google/pegasus-multi_news
10
2,561
transformers
2022-03-02T23:29:05
--- language: en tags: - summarization --- ### Pegasus Models See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html) Original TF 1 code [here](https://github.com/google-research/pegasus) Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019 Maintained by: [@sshleifer](https://twitter.com/sam_shleifer) Task: Summarization The following is copied from the authors' README. # Mixed & Stochastic Checkpoints We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table. | dataset | C4 | HugeNews | Mixed & Stochastic| | ---- | ---- | ---- | ----| | xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64| | cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30| | newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18| | multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95| | gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76| | wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *| | reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94| | big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *| | arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67| | pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25| | aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51| | billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59| The "Mixed & Stochastic" model has the following changes: - trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples). - trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity). - the model uniformly sample a gap sentence ratio between 15% and 45%. - importance sentences are sampled using a 20% uniform noise to importance scores. - the sentencepiece tokenizer is updated to be able to encode newline character. (*) the numbers of wikihow and big_patent datasets are not comparable because of change in tokenization and data: - wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information. - we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS. The "Mixed & Stochastic" model has the following changes (from pegasus-large in the paper): trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples). trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity). the model uniformly sample a gap sentence ratio between 15% and 45%. importance sentences are sampled using a 20% uniform noise to importance scores. the sentencepiece tokenizer is updated to be able to encode newline character. Citation ``` @misc{zhang2019pegasus, title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization}, author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu}, year={2019}, eprint={1912.08777}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
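The card does not include a usage snippet; below is an editor-added minimal summarization sketch, assuming the `transformers` library with SentencePiece installed. Joining source articles with the `'|||||'` separator mirrors the Multi-News data convention and is an assumption worth verifying against the dataset.

```python
# Editor-added sketch (not part of the original card). Assumes `transformers`
# and `sentencepiece` are installed; the '|||||' separator between source
# articles follows the Multi-News convention and should be verified.
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "google/pegasus-multi_news"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

articles = "Text of the first news article ... ||||| Text of the second news article ..."
batch = tokenizer(articles, truncation=True, padding="longest", return_tensors="pt")
summary_ids = model.generate(**batch, num_beams=5, max_length=256)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```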
3,332
[ [ -0.0284423828125, -0.05816650390625, 0.0289306640625, 0.02069091796875, -0.0264892578125, -0.0251007080078125, -0.01071929931640625, -0.033721923828125, 0.0394287109375, 0.0221405029296875, -0.058349609375, -0.045867919921875, -0.054779052734375, -0.00140571...
anjali0610/my-dog
2023-10-24T17:20:01.000Z
[ "diffusers", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us", "has_space" ]
text-to-image
anjali0610
null
null
anjali0610/my-dog
0
2,561
diffusers
2023-10-24T17:14:45
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Dog Dreambooth model trained by anjali0610 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: CCEW-122 Sample pictures of this concept: ![0](https://huggingface.co/anjali0610/my-dog/resolve/main/sample_images/arb_(8).jpg)
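As an editor-added illustration only (the card itself ends with the sample image), a DreamBooth checkpoint like this one is typically loaded with the `diffusers` `StableDiffusionPipeline` listed in the repository tags; the prompt below is a guess, since the card does not state the exact instance token for the trained concept.

```python
# Editor-added sketch (not part of the original card). Assumes the `diffusers` library
# and a CUDA GPU; the prompt is hypothetical because the instance token is not documented.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "anjali0610/my-dog",
    torch_dtype=torch.float16,  # assumption: fp16 on GPU; drop for CPU/float32
).to("cuda")

image = pipe("a photo of my-dog playing in a park", num_inference_steps=30).images[0]
image.save("my_dog_sample.png")
```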
383
[ [ -0.054443359375, -0.0164947509765625, 0.0277557373046875, 0.0009636878967285156, -0.0088348388671875, 0.03485107421875, 0.03790283203125, -0.034942626953125, 0.039215087890625, 0.0275726318359375, -0.049591064453125, -0.0263824462890625, -0.021514892578125, ...
M-CLIP/M-BERT-Base-ViT-B
2021-05-18T21:34:39.000Z
[ "transformers", "pytorch", "tf", "jax", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
feature-extraction
M-CLIP
null
null
M-CLIP/M-BERT-Base-ViT-B
10
2,560
transformers
2022-03-02T23:29:04
<br /> <p align="center"> <h1 align="center">M-BERT Base ViT-B</h1> <p align="center"> <a href="https://github.com/FreddeFrallan/Multilingual-CLIP/tree/main/Model%20Cards/M-BERT%20Base%20ViT-B">Github Model Card</a> </p> </p> ## Usage To use this model along with the original CLIP vision encoder you need to download the code and additional linear weights from the [Multilingual-CLIP Github](https://github.com/FreddeFrallan/Multilingual-CLIP). Once this is done, you can load and use the model with the following code: ```python from src import multilingual_clip model = multilingual_clip.load_model('M-BERT-Base-ViT') embeddings = model(['Älgen är skogens konung!', 'Wie leben Eisbären in der Antarktis?', 'Вы знали, что все белые медведи левши?']) print(embeddings.shape) # Yields: torch.Size([3, 640]) ``` <!-- ABOUT THE PROJECT --> ## About A [BERT-base-multilingual](https://huggingface.co/bert-base-multilingual-cased) model tuned, for [69 languages](https://github.com/FreddeFrallan/Multilingual-CLIP/blob/main/Model%20Cards/M-BERT%20Base%2069/Fine-Tune-Languages.md), to match the embedding space of the CLIP text encoder that accompanies the ViT-B/32 vision encoder. <br> A full list of the 100 languages used during pre-training can be found [here](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages), and a list of the 69 languages used during fine-tuning can be found in [SupportedLanguages.md](https://github.com/FreddeFrallan/Multilingual-CLIP/blob/main/Model%20Cards/M-BERT%20Base%2069/Fine-Tune-Languages.md). Training data pairs were generated by sampling 40k sentences for each language from the combined descriptions of [GCC](https://ai.google.com/research/ConceptualCaptions/) + [MSCOCO](https://cocodataset.org/#home) + [VizWiz](https://vizwiz.org/tasks-and-datasets/image-captioning/), and translating them into the corresponding language. All translation was done using the [AWS translate service](https://aws.amazon.com/translate/); the quality of these translations has not yet been analyzed, but one can assume it varies between the 69 languages.
2,162
[ [ -0.031982421875, -0.032073974609375, 0.014923095703125, 0.0158843994140625, -0.041046142578125, 0.0075836181640625, -0.038604736328125, -0.0254364013671875, 0.03546142578125, 0.0178375244140625, -0.04901123046875, -0.046844482421875, -0.045379638671875, 0.00...
TencentARC/t2i-adapter-depth-midas-sdxl-1.0
2023-09-07T19:11:24.000Z
[ "diffusers", "art", "t2i-adapter", "image-to-image", "stable-diffusion-xl-diffusers", "stable-diffusion-xl", "arxiv:2302.08453", "license:apache-2.0", "has_space", "diffusers:T2IAdapter", "region:us" ]
image-to-image
TencentARC
null
null
TencentARC/t2i-adapter-depth-midas-sdxl-1.0
8
2,558
diffusers
2023-09-03T14:46:44
--- license: apache-2.0 base_model: stabilityai/stable-diffusion-xl-base-1.0 tags: - art - t2i-adapter - image-to-image - stable-diffusion-xl-diffusers - stable-diffusion-xl --- # T2I-Adapter-SDXL - Depth-MiDaS T2I Adapter is a network providing additional conditioning to stable diffusion. Each t2i checkpoint takes a different type of conditioning as input and is used with a specific base stable diffusion checkpoint. This checkpoint provides conditioning on depth for the StableDiffusionXL checkpoint. This was a collaboration between **Tencent ARC** and [**Hugging Face**](https://huggingface.co/). ## Model Details - **Developed by:** T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** Apache 2.0 - **Resources for more information:** [GitHub Repository](https://github.com/TencentARC/T2I-Adapter), [Paper](https://arxiv.org/abs/2302.08453). - **Model complexity:** | | SD-V1.4/1.5 | SD-XL | T2I-Adapter | T2I-Adapter-SDXL | | --- | --- |--- |--- |--- | | Parameters | 860M | 2.6B |77 M | 77/79 M | | - **Cite as:** @misc{ title={T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models}, author={Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie}, year={2023}, eprint={2302.08453}, archivePrefix={arXiv}, primaryClass={cs.CV} } ### Checkpoints | Model Name | Control Image Overview| Control Image Example | Generated Image Example | |---|---|---|---| |[TencentARC/t2i-adapter-canny-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-canny-sdxl-1.0)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_canny.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_canny.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_canny.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_canny.png"/></a>| |[TencentARC/t2i-adapter-sketch-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-sketch-sdxl-1.0)<br/> *Trained with [PidiNet](https://github.com/zhuoinoulu/pidinet) edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_sketch.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_sketch.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_sketch.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_sketch.png"/></a>| |[TencentARC/t2i-adapter-lineart-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-lineart-sdxl-1.0)<br/> *Trained with lineart edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_lin.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_lin.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_lin.png"><img width="64" 
src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_lin.png"/></a>| |[TencentARC/t2i-adapter-depth-midas-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-depth-midas-sdxl-1.0)<br/> *Trained with Midas depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_mid.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_mid.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_mid.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_mid.png"/></a>| |[TencentARC/t2i-adapter-depth-zoe-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-depth-zoe-sdxl-1.0)<br/> *Trained with Zoe depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_zeo.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_zeo.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_zeo.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_zeo.png"/></a>| |[TencentARC/t2i-adapter-openpose-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-openpose-sdxl-1.0)<br/> *Trained with OpenPose bone image* | A [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/openpose.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/openpose.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/res_pose.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/res_pose.png"/></a>| ## Example To get started, first install the required dependencies: ```bash pip install -U git+https://github.com/huggingface/diffusers.git pip install -U controlnet_aux==0.0.7 # for conditioning models and detectors pip install transformers accelerate safetensors ``` 1. Images are first downloaded into the appropriate *control image* format. 2. The *control image* and *prompt* are passed to the [`StableDiffusionXLAdapterPipeline`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/t2i_adapter/pipeline_stable_diffusion_xl_adapter.py#L125). Let's have a look at a simple example using the [Canny Adapter](https://huggingface.co/TencentARC/t2i-adapter-lineart-sdxl-1.0). 
- Dependency ```py from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, EulerAncestralDiscreteScheduler, AutoencoderKL from diffusers.utils import load_image, make_image_grid from controlnet_aux.midas import MidasDetector import torch # load adapter adapter = T2IAdapter.from_pretrained( "TencentARC/t2i-adapter-depth-midas-sdxl-1.0", torch_dtype=torch.float16, varient="fp16" ).to("cuda") # load euler_a scheduler model_id = 'stabilityai/stable-diffusion-xl-base-1.0' euler_a = EulerAncestralDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler") vae=AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16) pipe = StableDiffusionXLAdapterPipeline.from_pretrained( model_id, vae=vae, adapter=adapter, scheduler=euler_a, torch_dtype=torch.float16, variant="fp16", ).to("cuda") pipe.enable_xformers_memory_efficient_attention() midas_depth = MidasDetector.from_pretrained( "valhalla/t2iadapter-aux-models", filename="dpt_large_384.pt", model_type="dpt_large" ).to("cuda") ``` - Condition Image ```py url = "https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_mid.jpg" image = load_image(url) image = midas_depth( image, detect_resolution=512, image_resolution=1024 ) ``` <a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_mid.png"><img width="480" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_mid.png"/></a> - Generation ```py prompt = "A photo of a room, 4k photo, highly detailed" negative_prompt = "anime, cartoon, graphic, text, painting, crayon, graphite, abstract, glitch, deformed, mutated, ugly, disfigured" gen_images = pipe( prompt=prompt, negative_prompt=negative_prompt, image=image, num_inference_steps=30, adapter_conditioning_scale=1, guidance_scale=7.5, ).images[0] gen_images.save('out_mid.png') ``` <a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_mid.png"><img width="480" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_mid.png"/></a> ### Training Our training script was built on top of the official training script that we provide [here](https://github.com/huggingface/diffusers/blob/main/examples/t2i_adapter/README_sdxl.md). The model is trained on 3M high-resolution image-text pairs from LAION-Aesthetics V2 with - Training steps: 35000 - Batch size: Data parallel with a single gpu batch size of `16` for a total batch size of `256`. - Learning rate: Constant learning rate of `1e-5`. - Mixed precision: fp16
8,957
[ [ -0.0477294921875, -0.0283203125, 0.026641845703125, 0.032501220703125, -0.0338134765625, -0.0196075439453125, 0.01085662841796875, -0.03363037109375, 0.042877197265625, -0.000621795654296875, -0.055511474609375, -0.036224365234375, -0.048675537109375, -0.008...
stablediffusionapi/NightVision_XL
2023-10-06T19:25:07.000Z
[ "diffusers", "stablediffusionapi.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
stablediffusionapi
null
null
stablediffusionapi/NightVision_XL
2
2,558
diffusers
2023-10-06T19:23:18
--- license: creativeml-openrail-m tags: - stablediffusionapi.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true --- # NightVision XL API Inference ![generated from stablediffusionapi.com](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/83a41e5b-63ae-46a7-9202-c3b8170598a6/width=1280/00332-2023-09-11-1329484590.jpeg) ## Get API Key Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed. Replace Key in below code, change **model_id** to "NightVision_XL" Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs) Try model for free: [Generate Images](https://stablediffusionapi.com/models/NightVision_XL) Model link: [View model](https://stablediffusionapi.com/models/NightVision_XL) Credits: [View credits](https://civitai.com/?query=NightVision%20XL) View all models: [View Models](https://stablediffusionapi.com/models) import requests import json url = "https://stablediffusionapi.com/api/v4/dreambooth" payload = json.dumps({ "key": "your_api_key", "model_id": "NightVision_XL", "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "embeddings_model_id", "lora": "lora_model_id", "webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(response.text) > Use this coupon code to get 25% off **DMGG0RBN**
2,519
[ [ -0.028717041015625, -0.055511474609375, 0.040435791015625, 0.017333984375, -0.03887939453125, 0.006900787353515625, 0.024993896484375, -0.0390625, 0.035797119140625, 0.0535888671875, -0.05621337890625, -0.06732177734375, -0.026947021484375, 0.003202438354492...
albert-xlarge-v2
2021-01-13T15:34:57.000Z
[ "transformers", "pytorch", "tf", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
fill-mask
null
null
null
albert-xlarge-v2
3
2,549
transformers
2022-03-02T23:29:04
--- language: en license: apache-2.0 datasets: - bookcorpus - wikipedia --- # ALBERT XLarge v2 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, like all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model, so this model card has been written by the Hugging Face team. ## Model description ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ALBERT model as inputs. ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers, as it has to iterate through the same number of (repeating) layers. This is the second version of the xlarge model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks. This model has the following configuration: - 24 repeating layers - 128 embedding dimension - 2048 hidden dimension - 16 attention heads - 58M parameters ## Intended uses & limitations You can use the raw model for either masked language modeling or sentence order prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2. 
### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='albert-xlarge-v2') >>> unmasker("Hello I'm a [MASK] model.") [ { "sequence":"[CLS] hello i'm a modeling model.[SEP]", "score":0.05816134437918663, "token":12807, "token_str":"▁modeling" }, { "sequence":"[CLS] hello i'm a modelling model.[SEP]", "score":0.03748830780386925, "token":23089, "token_str":"▁modelling" }, { "sequence":"[CLS] hello i'm a model model.[SEP]", "score":0.033725276589393616, "token":1061, "token_str":"▁model" }, { "sequence":"[CLS] hello i'm a runway model.[SEP]", "score":0.017313428223133087, "token":8014, "token_str":"▁runway" }, { "sequence":"[CLS] hello i'm a lingerie model.[SEP]", "score":0.014405295252799988, "token":29104, "token_str":"▁lingerie" } ] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AlbertTokenizer, AlbertModel tokenizer = AlbertTokenizer.from_pretrained('albert-xlarge-v2') model = AlbertModel.from_pretrained("albert-xlarge-v2") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import AlbertTokenizer, TFAlbertModel tokenizer = AlbertTokenizer.from_pretrained('albert-xlarge-v2') model = TFAlbertModel.from_pretrained("albert-xlarge-v2") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='albert-xlarge-v2') >>> unmasker("The man worked as a [MASK].") [ { "sequence":"[CLS] the man worked as a chauffeur.[SEP]", "score":0.029577180743217468, "token":28744, "token_str":"▁chauffeur" }, { "sequence":"[CLS] the man worked as a janitor.[SEP]", "score":0.028865724802017212, "token":29477, "token_str":"▁janitor" }, { "sequence":"[CLS] the man worked as a shoemaker.[SEP]", "score":0.02581118606030941, "token":29024, "token_str":"▁shoemaker" }, { "sequence":"[CLS] the man worked as a blacksmith.[SEP]", "score":0.01849772222340107, "token":21238, "token_str":"▁blacksmith" }, { "sequence":"[CLS] the man worked as a lawyer.[SEP]", "score":0.01820771023631096, "token":3672, "token_str":"▁lawyer" } ] >>> unmasker("The woman worked as a [MASK].") [ { "sequence":"[CLS] the woman worked as a receptionist.[SEP]", "score":0.04604868218302727, "token":25331, "token_str":"▁receptionist" }, { "sequence":"[CLS] the woman worked as a janitor.[SEP]", "score":0.028220869600772858, "token":29477, "token_str":"▁janitor" }, { "sequence":"[CLS] the woman worked as a paramedic.[SEP]", "score":0.0261906236410141, "token":23386, "token_str":"▁paramedic" }, { "sequence":"[CLS] the woman worked as a chauffeur.[SEP]", "score":0.024797942489385605, "token":28744, "token_str":"▁chauffeur" }, { "sequence":"[CLS] the woman worked as a waitress.[SEP]", "score":0.024124596267938614, "token":13678, "token_str":"▁waitress" } ] ``` This bias will also affect all fine-tuned versions of this model. 
## Training data The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` ### Training The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ## Evaluation results When fine-tuned on downstream tasks, the ALBERT models achieve the following results: | | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE | |----------------|----------|----------|----------|----------|----------|----------| |V2 | |ALBERT-base |82.3 |90.2/83.2 |82.1/79.3 |84.6 |92.9 |66.8 | |ALBERT-large |85.7 |91.8/85.2 |84.9/81.8 |86.5 |94.9 |75.2 | |ALBERT-xlarge |87.9 |92.9/86.4 |87.9/84.1 |87.9 |95.4 |80.7 | |ALBERT-xxlarge |90.9 |94.6/89.1 |89.8/86.9 |90.6 |96.8 |86.8 | |V1 | |ALBERT-base |80.1 |89.3/82.3 | 80.0/77.1|81.6 |90.3 | 64.0 | |ALBERT-large |82.4 |90.6/83.9 | 82.3/79.4|83.5 |91.7 | 68.5 | |ALBERT-xlarge |85.5 |92.5/86.1 | 86.1/83.1|86.4 |92.4 | 74.8 | |ALBERT-xxlarge |91.0 |94.8/89.3 | 90.2/87.4|90.8 |96.9 | 86.5 | ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1909-11942, author = {Zhenzhong Lan and Mingda Chen and Sebastian Goodman and Kevin Gimpel and Piyush Sharma and Radu Soricut}, title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language Representations}, journal = {CoRR}, volume = {abs/1909.11942}, year = {2019}, url = {http://arxiv.org/abs/1909.11942}, archivePrefix = {arXiv}, eprint = {1909.11942}, timestamp = {Fri, 27 Sep 2019 13:04:21 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
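To make the masking procedure above concrete, here is a small illustrative sketch of the 15% selection and the 80/10/10 replacement rule applied to a batch of token IDs. This is not the original pretraining code (ALBERT was pretrained with Google's own codebase); the function name, the PyTorch rendering, and the special-token handling are assumptions for illustration only.

```python
import torch

def mask_tokens(input_ids, mask_token_id, vocab_size, special_token_ids, mlm_probability=0.15):
    """Illustrative MLM masking: select 15% of the (non-special) tokens; of those,
    80% become [MASK], 10% become a random token, and 10% are left unchanged."""
    input_ids = input_ids.clone()
    labels = input_ids.clone()

    # select 15% of the tokens, never selecting special tokens like [CLS]/[SEP]
    probability_matrix = torch.full(input_ids.shape, mlm_probability)
    special_mask = torch.isin(input_ids, torch.tensor(special_token_ids))
    probability_matrix.masked_fill_(special_mask, 0.0)
    masked_indices = torch.bernoulli(probability_matrix).bool()
    labels[~masked_indices] = -100  # loss is only computed on the selected tokens

    # 80% of the selected tokens -> [MASK]
    replace_mask = torch.bernoulli(torch.full(input_ids.shape, 0.8)).bool() & masked_indices
    input_ids[replace_mask] = mask_token_id

    # 10% -> random token (half of the remaining 20%)
    random_mask = torch.bernoulli(torch.full(input_ids.shape, 0.5)).bool() & masked_indices & ~replace_mask
    input_ids[random_mask] = torch.randint(vocab_size, input_ids.shape)[random_mask]

    # the remaining 10% stay as the original tokens
    return input_ids, labels
```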
9,766
[ [ -0.00820159912109375, -0.0379638671875, 0.0192413330078125, 0.021392822265625, -0.030242919921875, 0.0034198760986328125, 0.01157379150390625, -0.01529693603515625, 0.0251007080078125, 0.04730224609375, -0.0428466796875, -0.032928466796875, -0.06304931640625, ...
tner/roberta-large-tweetner7-all
2022-09-27T15:29:57.000Z
[ "transformers", "pytorch", "roberta", "token-classification", "dataset:tner/tweetner7", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
tner
null
null
tner/roberta-large-tweetner7-all
0
2,543
transformers
2022-07-02T19:08:51
--- datasets: - tner/tweetner7 metrics: - f1 - precision - recall model-index: - name: tner/roberta-large-tweetner7-all results: - task: name: Token Classification type: token-classification dataset: name: tner/tweetner7 type: tner/tweetner7 args: tner/tweetner7 metrics: - name: F1 (test_2021) type: f1 value: 0.6574551220340903 - name: Precision (test_2021) type: precision value: 0.644212629008989 - name: Recall (test_2021) type: recall value: 0.6712534690101758 - name: Macro F1 (test_2021) type: f1_macro value: 0.6124665667529737 - name: Macro Precision (test_2021) type: precision_macro value: 0.6005167968535563 - name: Macro Recall (test_2021) type: recall_macro value: 0.625251837701222 - name: Entity Span F1 (test_2021) type: f1_entity_span value: 0.7881979839166384 - name: Entity Span Precision (test_2020) type: precision_entity_span value: 0.7722783264898457 - name: Entity Span Recall (test_2021) type: recall_entity_span value: 0.804787787672025 - name: F1 (test_2020) type: f1 value: 0.6628787878787878 - name: Precision (test_2020) type: precision value: 0.6924816280384398 - name: Recall (test_2020) type: recall value: 0.6357031655422937 - name: Macro F1 (test_2020) type: f1_macro value: 0.6297223287745568 - name: Macro Precision (test_2020) type: precision_macro value: 0.6618492079232416 - name: Macro Recall (test_2020) type: recall_macro value: 0.601311568050436 - name: Entity Span F1 (test_2020) type: f1_entity_span value: 0.7642760487144791 - name: Entity Span Precision (test_2020) type: precision_entity_span value: 0.7986425339366516 - name: Entity Span Recall (test_2020) type: recall_entity_span value: 0.7327451997924235 pipeline_tag: token-classification widget: - text: "Get the all-analog Classic Vinyl Edition of `Takin' Off` Album from {@herbiehancock@} via {@bluenoterecords@} link below: {{URL}}" example_title: "NER Example 1" --- # tner/roberta-large-tweetner7-all This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the [tner/tweetner7](https://huggingface.co/datasets/tner/tweetner7) dataset (`train_all` split). Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository for more detail). It achieves the following results on the test set of 2021: - F1 (micro): 0.6574551220340903 - Precision (micro): 0.644212629008989 - Recall (micro): 0.6712534690101758 - F1 (macro): 0.6124665667529737 - Precision (macro): 0.6005167968535563 - Recall (macro): 0.625251837701222 The per-entity breakdown of the F1 score on the test set are below: - corporation: 0.5392156862745098 - creative_work: 0.4760582928521859 - event: 0.4673321234119782 - group: 0.6139798488664987 - location: 0.6707399864222675 - person: 0.8293212669683258 - product: 0.6906187624750498 For F1 scores, the confidence interval is obtained by bootstrap as below: - F1 (micro): - 90%: [0.6484148010152769, 0.6672289519134409] - 95%: [0.6470100684797441, 0.6689850350992637] - F1 (macro): - 90%: [0.6484148010152769, 0.6672289519134409] - 95%: [0.6470100684797441, 0.6689850350992637] Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/roberta-large-tweetner7-all/raw/main/eval/metric.json) and [metric file of entity span](https://huggingface.co/tner/roberta-large-tweetner7-all/raw/main/eval/metric_span.json). ### Usage This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip. 
```shell pip install tner ``` [TweetNER7](https://huggingface.co/datasets/tner/tweetner7) pre-processed tweets where the account name and URLs are converted into special formats (see the dataset page for more detail), so we process tweets accordingly and then run the model prediction as below. ```python import re from urlextract import URLExtract from tner import TransformersNER extractor = URLExtract() def format_tweet(tweet): # mask web urls urls = extractor.find_urls(tweet) for url in urls: tweet = tweet.replace(url, "{{URL}}") # format twitter account tweet = re.sub(r"\b(\s*)(@[\S]+)\b", r'\1{\2@}', tweet) return tweet text = "Get the all-analog Classic Vinyl Edition of `Takin' Off` Album from @herbiehancock via @bluenoterecords link below: http://bluenote.lnk.to/AlbumOfTheWeek" text_format = format_tweet(text) model = TransformersNER("tner/roberta-large-tweetner7-all") model.predict([text_format]) ``` It can be used via transformers library but it is not recommended as CRF layer is not supported at the moment. ### Training hyperparameters The following hyperparameters were used during training: - dataset: ['tner/tweetner7'] - dataset_split: train_all - dataset_name: None - local_dataset: None - model: roberta-large - crf: True - max_length: 128 - epoch: 30 - batch_size: 32 - lr: 1e-05 - random_seed: 0 - gradient_accumulation_steps: 1 - weight_decay: 1e-07 - lr_warmup_step_ratio: 0.15 - max_grad_norm: 1 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/roberta-large-tweetner7-all/raw/main/trainer_config.json). ### Reference If you use the model, please cite T-NER paper and TweetNER7 paper. - T-NER ``` @inproceedings{ushio-camacho-collados-2021-ner, title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition", author = "Ushio, Asahi and Camacho-Collados, Jose", booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations", month = apr, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.eacl-demos.7", doi = "10.18653/v1/2021.eacl-demos.7", pages = "53--62", abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross- lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine- tuned on a combined dataset. 
To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.", } ``` - TweetNER7 ``` @inproceedings{ushio-etal-2022-tweet, title = "{N}amed {E}ntity {R}ecognition in {T}witter: {A} {D}ataset and {A}nalysis on {S}hort-{T}erm {T}emporal {S}hifts", author = "Ushio, Asahi and Neves, Leonardo and Silva, Vitor and Barbieri, Francesco. and Camacho-Collados, Jose", booktitle = "The 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing", month = nov, year = "2022", address = "Online", publisher = "Association for Computational Linguistics", } ```
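The card above reports bootstrap confidence intervals for the micro and macro F1 scores. As a rough illustration of how such intervals can be obtained, here is a minimal sketch that resamples (gold, prediction) pairs with replacement and takes percentiles of the resulting scores. The function name is illustrative and the scoring function is left abstract; this is not taken from the T-NER codebase.

```python
import numpy as np

def bootstrap_f1_interval(golds, preds, score_fn, n_resamples=1000, levels=(0.90, 0.95), seed=0):
    """Percentile-bootstrap confidence intervals for an F1-style metric.

    golds/preds are per-sentence label sequences; score_fn(golds, preds) returns a float.
    """
    rng = np.random.default_rng(seed)
    n = len(golds)
    scores = []
    for _ in range(n_resamples):
        idx = rng.integers(0, n, size=n)  # resample sentence indices with replacement
        scores.append(score_fn([golds[i] for i in idx], [preds[i] for i in idx]))
    scores = np.asarray(scores)
    return {
        level: (float(np.quantile(scores, (1 - level) / 2)),
                float(np.quantile(scores, 1 - (1 - level) / 2)))
        for level in levels
    }
```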
8,064
[ [ -0.031768798828125, -0.054229736328125, 0.0213775634765625, 0.01326751708984375, -0.01128387451171875, 0.0011262893676757812, -0.0430908203125, -0.03564453125, 0.033935546875, 0.0238494873046875, -0.043701171875, -0.04901123046875, -0.054595947265625, 0.0130...
CATIE-AQ/QAmembert
2023-10-18T08:30:06.000Z
[ "transformers", "pytorch", "safetensors", "camembert", "question-answering", "fr", "dataset:etalab-ia/piaf", "dataset:fquad", "dataset:lincoln/newsquadfr", "dataset:pragnakalp/squad_v2_french_translated", "dataset:CATIE-AQ/frenchQA", "arxiv:1910.09700", "doi:10.57967/hf/0821", "license:cc-...
question-answering
CATIE-AQ
null
null
CATIE-AQ/QAmembert
12
2,542
transformers
2023-01-10T16:33:26
--- language: fr datasets: - etalab-ia/piaf - fquad - lincoln/newsquadfr - pragnakalp/squad_v2_french_translated - CATIE-AQ/frenchQA widget: - text: Combien de personnes utilisent le français tous les jours ? context: >- Le français est une langue indo-européenne de la famille des langues romanes dont les locuteurs sont appelés francophones. Elle est parfois surnommée la langue de Molière. Le français est parlé, en 2023, sur tous les continents par environ 321 millions de personnes : 235 millions l'emploient quotidiennement et 90 millions en sont des locuteurs natifs. En 2018, 80 millions d'élèves et étudiants s'instruisent en français dans le monde. Selon l'Organisation internationale de la francophonie (OIF), il pourrait y avoir 700 millions de francophones sur Terre en 2050. license: cc-by-4.0 metrics: - f1 - exact_match library_name: transformers pipeline_tag: question-answering co2_eq_emissions: 100 --- # QAmembert ## Model Description We present **QAmemBERT**, which is a [CamemBERT base](https://huggingface.co/camembert-base) fine-tuned for the Question-Answering task for the French language on four French Q&A datasets composed of contexts and questions with their answers inside the context (= SQuAD 1.0 format) but also contexts and questions with their answers not inside the context (= SQuAD 2.0 format). All these datasets were concatenated into a single dataset that we called [frenchQA](https://huggingface.co/datasets/CATIE-AQ/frenchQA). This represents a total of over **221,348 context/question/answer triplets used to finetune this model and 6,376 to test it**. Our methodology is described in a blog post available in [English](https://blog.vaniila.ai/en/Question_answering/) or [French](https://blog.vaniila.ai/QA/). ## Datasets | Dataset | Format | Train split | Dev split | Test split | | ----------- | ----------- | ----------- | ----------- | ----------- | | [piaf](https://www.data.gouv.fr/en/datasets/piaf-le-dataset-francophone-de-questions-reponses/)| SQuAD 1.0 | 9 224 Q & A | X | X | | piaf_v2| SQuAD 2.0 | 9 224 Q & A | X | X | | [fquad](https://fquad.illuin.tech/)| SQuAD 1.0 | 20 731 Q & A | 3 188 Q & A (not used in training because it serves as a test dataset) | 2 189 Q & A (not used in our work because not freely available)| | fquad_v2 | SQuAD 2.0 | 20 731 Q & A | 3 188 Q & A (not used in training because it serves as a test dataset) | X | | [lincoln/newsquadfr](https://huggingface.co/datasets/lincoln/newsquadfr) | SQuAD 1.0 | 1 650 Q & A | 455 Q & A (not used in our work) | X | | lincoln/newsquadfr_v2 | SQuAD 2.0 | 1 650 Q & A | 455 Q & A (not used in our work) | X | | [pragnakalp/squad_v2_french_translated](https://huggingface.co/datasets/pragnakalp/squad_v2_french_translated)| SQuAD 2.0 | 79 069 Q & A | X | X | | pragnakalp/squad_v2_french_translated_v2| SQuAD 2.0 | 79 069 Q & A | X | X | All these datasets were concatenated into a single dataset that we called [frenchQA](https://huggingface.co/datasets/CATIE-AQ/frenchQA). ## Evaluation results The evaluation was carried out using the [**evaluate**](https://pypi.org/project/evaluate/) python package. ### FQuaD 1.0 (validation) The metric used is SQuAD 1.0. 
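As a rough illustration of how such SQuAD-style scores can be computed with the `evaluate` package, a minimal sketch is shown below; it is not the authors' evaluation script, and the toy prediction/reference pair simply reuses the example answer shown later in this card. The per-dataset results follow in the tables below.

```python
import evaluate

squad_metric = evaluate.load("squad")  # SQuAD 1.0-style exact_match / F1

# toy example; a real evaluation iterates over the whole validation split
predictions = [{"id": "q1", "prediction_text": "235 millions"}]
references = [{"id": "q1", "answers": {"text": ["235 millions"], "answer_start": [269]}}]

print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```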
| Model | Exact_match | F1-score | | ----------- | ----------- | ----------- | | [etalab-ia/camembert-base-squadFR-fquad-piaf](https://huggingface.co/etalab-ia/camembert-base-squadFR-fquad-piaf) | 53.60 | 78.09 | | QAmembert (previous version) | 54.26 | 77.87 | | QAmembert (**this version**) | 53.98 | 78.00 | | [QAmembert-large](https://huggingface.co/CATIE-AQ/QAmembert-large) | **55.95** | **81.05** | ### qwant/squad_fr (validation) The metric used is SQuAD 1.0. | Model | Exact_match | F1-score | | ----------- | ----------- | ----------- | | [etalab-ia/camembert-base-squadFR-fquad-piaf](https://huggingface.co/etalab-ia/camembert-base-squadFR-fquad-piaf) | 60.17 | 78.27 | | QAmembert (previous version) | 60.40 | 77.27 | | QAmembert (**this version**) | 60.95 | 77.30 | | [QAmembert-large](https://huggingface.co/CATIE-AQ/QAmembert-large) | **65.58** | **81.74** | ### frenchQA This dataset includes question with no answers in the context. The metric used is SQuAD 2.0. | Model | Exact_match | F1-score | Answer_f1 | NoAnswer_f1 | | ----------- | ----------- | ----------- | ----------- | ----------- | | [etalab-ia/camembert-base-squadFR-fquad-piaf](https://huggingface.co/etalab-ia/camembert-base-squadFR-fquad-piaf) | n/a | n/a | n/a | n/a | | QAmembert (previous version) | 60.28 | 71.29 | 75.92 | 66.65 | QAmembert (**this version**) | **77.14** | 86.88 | 75.66 | 98.11 | [QAmembert-large](https://huggingface.co/CATIE-AQ/QAmembert-large) | **77.14** | **88.74** | **78.83** | **98.65** ## Usage ### Example with answer in the context ```python from transformers import pipeline qa = pipeline('question-answering', model='CATIE-AQ/QAmembert', tokenizer='CATIE-AQ/QAmembert') result = qa({ 'question': "Combien de personnes utilisent le français tous les jours ?", 'context': "Le français est une langue indo-européenne de la famille des langues romanes dont les locuteurs sont appelés francophones. Elle est parfois surnommée la langue de Molière. Le français est parlé, en 2023, sur tous les continents par environ 321 millions de personnes : 235 millions l'emploient quotidiennement et 90 millions en sont des locuteurs natifs. En 2018, 80 millions d'élèves et étudiants s'instruisent en français dans le monde. Selon l'Organisation internationale de la francophonie (OIF), il pourrait y avoir 700 millions de francophones sur Terre en 2050." }) if result['score'] < 0.01: print("La réponse n'est pas dans le contexte fourni.") else : print(result['answer']) ``` ```python 235 millions ``` ```python # details result {'score': 0.9945194721221924, 'start': 269, 'end': 281, 'answer': '235 millions'} ``` ### Example with answer not in the context ```python from transformers import pipeline qa = pipeline('question-answering', model='CATIE-AQ/QAmembert', tokenizer='CATIE-AQ/QAmembert') result = qa({ 'question': "Quel est le meilleur vin du monde ?", 'context': "La tour Eiffel est une tour de fer puddlé de 330 m de hauteur (avec antennes) située à Paris, à l’extrémité nord-ouest du parc du Champ-de-Mars en bordure de la Seine dans le 7e arrondissement. Son adresse officielle est 5, avenue Anatole-France. Construite en deux ans par Gustave Eiffel et ses collaborateurs pour l'Exposition universelle de Paris de 1889, célébrant le centenaire de la Révolution française, et initialement nommée « tour de 300 mètres », elle est devenue le symbole de la capitale française et un site touristique de premier plan : il s’agit du quatrième site culturel français payant le plus visité en 2016, avec 5,9 millions de visiteurs. 
Depuis son ouverture au public, elle a accueilli plus de 300 millions de visiteurs." }) if result['score'] < 0.01: print("La réponse n'est pas dans le contexte fourni.") else : print(result['answer']) ``` ```python La réponse n'est pas dans le contexte fourni. ``` ```python # details result {'score': 3.619904940035945e-13, 'start': 734, 'end': 744, 'answer': 'visiteurs.'} ``` ### Try it through Space A Space has been created to test the model. It is available [here](https://huggingface.co/spaces/CATIE-AQ/Qamembert). ## Environmental Impact *Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.* - **Hardware Type:** A100 PCIe 40/80GB - **Hours used:** 5h and 36 min - **Cloud Provider:** Private Infrastructure - **Carbon Efficiency (kg/kWh):** 0.076kg (estimated from [electricitymaps](https://app.electricitymaps.com/zone/FR) ; we take the average carbon intensity in France for the month of March 2023, as we are unable to use the data for the day of training, which are not available.) - **Carbon Emitted** *(Power consumption x Time x Carbon produced based on location of power grid)*: 0.1 kg eq. CO2 ## Citations ### QAmemBERT ``` @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, author = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, title = { QAmembert (Revision 9685bc3) }, year = 2023, url = { https://huggingface.co/CATIE-AQ/QAmembert }, doi = { 10.57967/hf/0821 }, publisher = { Hugging Face } } ``` ### PIAF ``` @inproceedings{KeraronLBAMSSS20, author = {Rachel Keraron and Guillaume Lancrenon and Mathilde Bras and Fr{\'{e}}d{\'{e}}ric Allary and Gilles Moyse and Thomas Scialom and Edmundo{-}Pavel Soriano{-}Morales and Jacopo Staiano}, title = {Project {PIAF:} Building a Native French Question-Answering Dataset}, booktitle = {{LREC}}, pages = {5481--5490}, publisher = {European Language Resources Association}, year = {2020} } ``` ### FQuAD ``` @article{dHoffschmidt2020FQuADFQ, title={FQuAD: French Question Answering Dataset}, author={Martin d'Hoffschmidt and Maxime Vidal and Wacim Belblidia and Tom Brendl'e and Quentin Heinrich}, journal={ArXiv}, year={2020}, volume={abs/2002.06071} } ``` ### lincoln/newsquadfr ``` Hugging Face repository : https://huggingface.co/datasets/lincoln/newsquadfr ``` ### pragnakalp/squad_v2_french_translated ``` Hugging Face repository : https://huggingface.co/datasets/pragnakalp/squad_v2_french_translated ``` ### CamemBERT ``` @inproceedings{martin2020camembert, title={CamemBERT: a Tasty French Language Model}, author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t}, booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics}, year={2020} } ``` ## License [cc-by-4.0](https://creativecommons.org/licenses/by/4.0/deed.en)
10,538
[ [ -0.0399169921875, -0.041259765625, 0.021697998046875, 0.02264404296875, 0.001575469970703125, -0.0015611648559570312, -0.0012388229370117188, -0.016326904296875, 0.0233001708984375, 0.024444580078125, -0.061370849609375, -0.035858154296875, -0.0237884521484375, ...
Yntec/a-ZovyaRPGV3VAE
2023-08-03T16:21:10.000Z
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "Zovya", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
Yntec
null
null
Yntec/a-ZovyaRPGV3VAE
0
2,541
diffusers
2023-08-03T16:04:42
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - stable-diffusion - stable-diffusion-diffusers - diffusers - text-to-image - Zovya --- # A-Zovya RPG Artist Tools V3 VAE Original page: https://civitai.com/models/8124?modelVersionId=87886
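The card only links to the original page; as a convenience, here is a minimal, unofficial usage sketch assuming the repository loads with the standard diffusers `StableDiffusionPipeline` API (as the model tags suggest). The prompt and output filename are placeholders.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/a-ZovyaRPGV3VAE", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "portrait of an RPG adventurer, detailed armor, dramatic lighting",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("rpg_portrait.png")
```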
287
[ [ -0.01457977294921875, 0.0032806396484375, 0.043853759765625, 0.03240966796875, -0.036285400390625, -0.0156402587890625, 0.044189453125, -0.0031108856201171875, 0.03369140625, 0.066650390625, -0.09161376953125, -0.046142578125, -0.00788116455078125, -0.020721...
infgrad/stella-base-zh
2023-10-19T06:59:19.000Z
[ "transformers", "pytorch", "bert", "feature-extraction", "mteb", "arxiv:1612.00796", "model-index", "endpoints_compatible", "region:us" ]
feature-extraction
infgrad
null
null
infgrad/stella-base-zh
9
2,537
transformers
2023-09-09T15:15:44
--- tags: - mteb model-index: - name: stella-base-zh results: - task: type: STS dataset: type: C-MTEB/AFQMC name: MTEB AFQMC config: default split: validation revision: None metrics: - type: cos_sim_pearson value: 49.34825050234731 - type: cos_sim_spearman value: 51.74726338428475 - type: euclidean_pearson value: 50.14955499038012 - type: euclidean_spearman value: 51.74730359287025 - type: manhattan_pearson value: 50.016703594410615 - type: manhattan_spearman value: 51.63936364317057 - task: type: STS dataset: type: C-MTEB/ATEC name: MTEB ATEC config: default split: test revision: None metrics: - type: cos_sim_pearson value: 52.26876163587667 - type: cos_sim_spearman value: 52.818410137444374 - type: euclidean_pearson value: 55.24925286208574 - type: euclidean_spearman value: 52.818404507964686 - type: manhattan_pearson value: 55.21236977375391 - type: manhattan_spearman value: 52.80289117015117 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (zh) config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 40.245999999999995 - type: f1 value: 38.55443674287747 - task: type: STS dataset: type: C-MTEB/BQ name: MTEB BQ config: default split: test revision: None metrics: - type: cos_sim_pearson value: 61.553652835163255 - type: cos_sim_spearman value: 63.29065064027392 - type: euclidean_pearson value: 62.000329557485 - type: euclidean_spearman value: 63.290650638944825 - type: manhattan_pearson value: 62.02786936153664 - type: manhattan_spearman value: 63.32720383880146 - task: type: Clustering dataset: type: C-MTEB/CLSClusteringP2P name: MTEB CLSClusteringP2P config: default split: test revision: None metrics: - type: v_measure value: 39.71224230526474 - task: type: Clustering dataset: type: C-MTEB/CLSClusteringS2S name: MTEB CLSClusteringS2S config: default split: test revision: None metrics: - type: v_measure value: 36.55705201882987 - task: type: Reranking dataset: type: C-MTEB/CMedQAv1-reranking name: MTEB CMedQAv1 config: default split: test revision: None metrics: - type: map value: 85.69418720521168 - type: mrr value: 87.97444444444446 - task: type: Reranking dataset: type: C-MTEB/CMedQAv2-reranking name: MTEB CMedQAv2 config: default split: test revision: None metrics: - type: map value: 86.46348358482606 - type: mrr value: 88.81428571428572 - task: type: Retrieval dataset: type: C-MTEB/CmedqaRetrieval name: MTEB CmedqaRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 23.721 - type: map_at_10 value: 35.428 - type: map_at_100 value: 37.438 - type: map_at_1000 value: 37.557 - type: map_at_3 value: 31.589 - type: map_at_5 value: 33.647 - type: mrr_at_1 value: 36.709 - type: mrr_at_10 value: 44.590999999999994 - type: mrr_at_100 value: 45.684999999999995 - type: mrr_at_1000 value: 45.732 - type: mrr_at_3 value: 42.331 - type: mrr_at_5 value: 43.532 - type: ndcg_at_1 value: 36.709 - type: ndcg_at_10 value: 41.858000000000004 - type: ndcg_at_100 value: 49.775999999999996 - type: ndcg_at_1000 value: 51.844 - type: ndcg_at_3 value: 37.067 - type: ndcg_at_5 value: 38.875 - type: precision_at_1 value: 36.709 - type: precision_at_10 value: 9.411999999999999 - type: precision_at_100 value: 1.5709999999999997 - type: precision_at_1000 value: 0.183 - type: precision_at_3 value: 21.154999999999998 - type: precision_at_5 value: 15.184000000000001 - type: recall_at_1 value: 23.721 - type: recall_at_10 value: 51.714000000000006 - type: recall_at_100 value: 
84.60600000000001 - type: recall_at_1000 value: 98.414 - type: recall_at_3 value: 37.091 - type: recall_at_5 value: 42.978 - task: type: PairClassification dataset: type: C-MTEB/CMNLI name: MTEB Cmnli config: default split: validation revision: None metrics: - type: cos_sim_accuracy value: 73.61395069152135 - type: cos_sim_ap value: 81.65459344597652 - type: cos_sim_f1 value: 75.66718995290425 - type: cos_sim_precision value: 68.4918529746116 - type: cos_sim_recall value: 84.5218611176058 - type: dot_accuracy value: 73.61395069152135 - type: dot_ap value: 81.64596407363373 - type: dot_f1 value: 75.66718995290425 - type: dot_precision value: 68.4918529746116 - type: dot_recall value: 84.5218611176058 - type: euclidean_accuracy value: 73.61395069152135 - type: euclidean_ap value: 81.6546013070452 - type: euclidean_f1 value: 75.66718995290425 - type: euclidean_precision value: 68.4918529746116 - type: euclidean_recall value: 84.5218611176058 - type: manhattan_accuracy value: 73.51773902585688 - type: manhattan_ap value: 81.57345451483191 - type: manhattan_f1 value: 75.7393958530681 - type: manhattan_precision value: 68.87442572741195 - type: manhattan_recall value: 84.12438625204582 - type: max_accuracy value: 73.61395069152135 - type: max_ap value: 81.6546013070452 - type: max_f1 value: 75.7393958530681 - task: type: Retrieval dataset: type: C-MTEB/CovidRetrieval name: MTEB CovidRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 73.551 - type: map_at_10 value: 81.513 - type: map_at_100 value: 81.734 - type: map_at_1000 value: 81.73700000000001 - type: map_at_3 value: 80.27300000000001 - type: map_at_5 value: 81.017 - type: mrr_at_1 value: 73.762 - type: mrr_at_10 value: 81.479 - type: mrr_at_100 value: 81.699 - type: mrr_at_1000 value: 81.702 - type: mrr_at_3 value: 80.33 - type: mrr_at_5 value: 80.999 - type: ndcg_at_1 value: 73.867 - type: ndcg_at_10 value: 84.711 - type: ndcg_at_100 value: 85.714 - type: ndcg_at_1000 value: 85.803 - type: ndcg_at_3 value: 82.244 - type: ndcg_at_5 value: 83.514 - type: precision_at_1 value: 73.867 - type: precision_at_10 value: 9.557 - type: precision_at_100 value: 1.001 - type: precision_at_1000 value: 0.101 - type: precision_at_3 value: 29.505 - type: precision_at_5 value: 18.377 - type: recall_at_1 value: 73.551 - type: recall_at_10 value: 94.521 - type: recall_at_100 value: 99.05199999999999 - type: recall_at_1000 value: 99.789 - type: recall_at_3 value: 87.777 - type: recall_at_5 value: 90.83200000000001 - task: type: Retrieval dataset: type: C-MTEB/DuRetrieval name: MTEB DuRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 26.230999999999998 - type: map_at_10 value: 80.635 - type: map_at_100 value: 83.393 - type: map_at_1000 value: 83.431 - type: map_at_3 value: 55.717000000000006 - type: map_at_5 value: 70.387 - type: mrr_at_1 value: 90.75 - type: mrr_at_10 value: 93.569 - type: mrr_at_100 value: 93.648 - type: mrr_at_1000 value: 93.65 - type: mrr_at_3 value: 93.27499999999999 - type: mrr_at_5 value: 93.482 - type: ndcg_at_1 value: 90.75 - type: ndcg_at_10 value: 87.801 - type: ndcg_at_100 value: 90.44 - type: ndcg_at_1000 value: 90.776 - type: ndcg_at_3 value: 86.556 - type: ndcg_at_5 value: 85.468 - type: precision_at_1 value: 90.75 - type: precision_at_10 value: 42.08 - type: precision_at_100 value: 4.816 - type: precision_at_1000 value: 0.49 - type: precision_at_3 value: 77.60000000000001 - type: precision_at_5 value: 65.49000000000001 - type: recall_at_1 value: 
26.230999999999998 - type: recall_at_10 value: 89.00200000000001 - type: recall_at_100 value: 97.866 - type: recall_at_1000 value: 99.569 - type: recall_at_3 value: 57.778 - type: recall_at_5 value: 74.895 - task: type: Retrieval dataset: type: C-MTEB/EcomRetrieval name: MTEB EcomRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 47.599999999999994 - type: map_at_10 value: 57.296 - type: map_at_100 value: 58.011 - type: map_at_1000 value: 58.028 - type: map_at_3 value: 54.300000000000004 - type: map_at_5 value: 56.21000000000001 - type: mrr_at_1 value: 47.599999999999994 - type: mrr_at_10 value: 57.296 - type: mrr_at_100 value: 58.011 - type: mrr_at_1000 value: 58.028 - type: mrr_at_3 value: 54.300000000000004 - type: mrr_at_5 value: 56.21000000000001 - type: ndcg_at_1 value: 47.599999999999994 - type: ndcg_at_10 value: 62.458000000000006 - type: ndcg_at_100 value: 65.589 - type: ndcg_at_1000 value: 66.059 - type: ndcg_at_3 value: 56.364000000000004 - type: ndcg_at_5 value: 59.815 - type: precision_at_1 value: 47.599999999999994 - type: precision_at_10 value: 7.89 - type: precision_at_100 value: 0.928 - type: precision_at_1000 value: 0.097 - type: precision_at_3 value: 20.767 - type: precision_at_5 value: 14.14 - type: recall_at_1 value: 47.599999999999994 - type: recall_at_10 value: 78.9 - type: recall_at_100 value: 92.80000000000001 - type: recall_at_1000 value: 96.6 - type: recall_at_3 value: 62.3 - type: recall_at_5 value: 70.7 - task: type: Classification dataset: type: C-MTEB/IFlyTek-classification name: MTEB IFlyTek config: default split: validation revision: None metrics: - type: accuracy value: 47.46440938822624 - type: f1 value: 34.587004997852524 - task: type: Classification dataset: type: C-MTEB/JDReview-classification name: MTEB JDReview config: default split: test revision: None metrics: - type: accuracy value: 84.9906191369606 - type: ap value: 52.31309789960497 - type: f1 value: 79.55556102310072 - task: type: STS dataset: type: C-MTEB/LCQMC name: MTEB LCQMC config: default split: test revision: None metrics: - type: cos_sim_pearson value: 69.80872804636063 - type: cos_sim_spearman value: 75.83290476813391 - type: euclidean_pearson value: 74.09865882324753 - type: euclidean_spearman value: 75.83290698376118 - type: manhattan_pearson value: 74.0616102379577 - type: manhattan_spearman value: 75.81278969865738 - task: type: Retrieval dataset: type: C-MTEB/MMarcoRetrieval name: MTEB MMarcoRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 65.029 - type: map_at_10 value: 74.39 - type: map_at_100 value: 74.734 - type: map_at_1000 value: 74.74300000000001 - type: map_at_3 value: 72.52 - type: map_at_5 value: 73.724 - type: mrr_at_1 value: 67.192 - type: mrr_at_10 value: 74.95100000000001 - type: mrr_at_100 value: 75.25500000000001 - type: mrr_at_1000 value: 75.263 - type: mrr_at_3 value: 73.307 - type: mrr_at_5 value: 74.355 - type: ndcg_at_1 value: 67.192 - type: ndcg_at_10 value: 78.22200000000001 - type: ndcg_at_100 value: 79.76299999999999 - type: ndcg_at_1000 value: 80.018 - type: ndcg_at_3 value: 74.656 - type: ndcg_at_5 value: 76.697 - type: precision_at_1 value: 67.192 - type: precision_at_10 value: 9.513 - type: precision_at_100 value: 1.027 - type: precision_at_1000 value: 0.105 - type: precision_at_3 value: 28.204 - type: precision_at_5 value: 18.009 - type: recall_at_1 value: 65.029 - type: recall_at_10 value: 89.462 - type: recall_at_100 value: 96.418 - type: recall_at_1000 value: 98.409 - type: 
recall_at_3 value: 80.029 - type: recall_at_5 value: 84.882 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (zh-CN) config: zh-CN split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.56489576328177 - type: f1 value: 63.37174551232159 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (zh-CN) config: zh-CN split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.4862138533961 - type: f1 value: 71.171374964826 - task: type: Retrieval dataset: type: C-MTEB/MedicalRetrieval name: MTEB MedicalRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 48.6 - type: map_at_10 value: 54.92700000000001 - type: map_at_100 value: 55.528 - type: map_at_1000 value: 55.584 - type: map_at_3 value: 53.55 - type: map_at_5 value: 54.379999999999995 - type: mrr_at_1 value: 48.8 - type: mrr_at_10 value: 55.028999999999996 - type: mrr_at_100 value: 55.629 - type: mrr_at_1000 value: 55.684999999999995 - type: mrr_at_3 value: 53.65 - type: mrr_at_5 value: 54.48 - type: ndcg_at_1 value: 48.6 - type: ndcg_at_10 value: 57.965999999999994 - type: ndcg_at_100 value: 61.043000000000006 - type: ndcg_at_1000 value: 62.624 - type: ndcg_at_3 value: 55.132000000000005 - type: ndcg_at_5 value: 56.621 - type: precision_at_1 value: 48.6 - type: precision_at_10 value: 6.75 - type: precision_at_100 value: 0.823 - type: precision_at_1000 value: 0.095 - type: precision_at_3 value: 19.900000000000002 - type: precision_at_5 value: 12.659999999999998 - type: recall_at_1 value: 48.6 - type: recall_at_10 value: 67.5 - type: recall_at_100 value: 82.3 - type: recall_at_1000 value: 94.89999999999999 - type: recall_at_3 value: 59.699999999999996 - type: recall_at_5 value: 63.3 - task: type: Reranking dataset: type: C-MTEB/Mmarco-reranking name: MTEB MMarcoReranking config: default split: dev revision: None metrics: - type: map value: 29.196130696027474 - type: mrr value: 28.43730158730159 - task: type: Classification dataset: type: C-MTEB/MultilingualSentiment-classification name: MTEB MultilingualSentiment config: default split: validation revision: None metrics: - type: accuracy value: 72.48333333333333 - type: f1 value: 72.00258522357558 - task: type: PairClassification dataset: type: C-MTEB/OCNLI name: MTEB Ocnli config: default split: validation revision: None metrics: - type: cos_sim_accuracy value: 65.13264753654575 - type: cos_sim_ap value: 70.52831936800807 - type: cos_sim_f1 value: 71.35353535353535 - type: cos_sim_precision value: 57.787958115183244 - type: cos_sim_recall value: 93.24181626187962 - type: dot_accuracy value: 65.13264753654575 - type: dot_ap value: 70.52828597418102 - type: dot_f1 value: 71.35353535353535 - type: dot_precision value: 57.787958115183244 - type: dot_recall value: 93.24181626187962 - type: euclidean_accuracy value: 65.13264753654575 - type: euclidean_ap value: 70.52828597418102 - type: euclidean_f1 value: 71.35353535353535 - type: euclidean_precision value: 57.787958115183244 - type: euclidean_recall value: 93.24181626187962 - type: manhattan_accuracy value: 64.8077964266378 - type: manhattan_ap value: 70.39954487476643 - type: manhattan_f1 value: 71.2270200940573 - type: manhattan_precision value: 59.84195402298851 - type: manhattan_recall value: 87.96198521647307 - type: max_accuracy value: 65.13264753654575 - type: max_ap value: 70.52831936800807 - type: max_f1 
value: 71.35353535353535 - task: type: Classification dataset: type: C-MTEB/OnlineShopping-classification name: MTEB OnlineShopping config: default split: test revision: None metrics: - type: accuracy value: 90.34 - type: ap value: 87.79622626876444 - type: f1 value: 90.32357430051181 - task: type: STS dataset: type: C-MTEB/PAWSX name: MTEB PAWSX config: default split: test revision: None metrics: - type: cos_sim_pearson value: 27.9175458105215 - type: cos_sim_spearman value: 32.024302491613014 - type: euclidean_pearson value: 33.01780461609846 - type: euclidean_spearman value: 32.024301939183374 - type: manhattan_pearson value: 32.94874897942371 - type: manhattan_spearman value: 31.902283210178012 - task: type: STS dataset: type: C-MTEB/QBQTC name: MTEB QBQTC config: default split: test revision: None metrics: - type: cos_sim_pearson value: 36.288219964332754 - type: cos_sim_spearman value: 36.46838652731507 - type: euclidean_pearson value: 35.11414028811812 - type: euclidean_spearman value: 36.468386523814104 - type: manhattan_pearson value: 35.20922826624027 - type: manhattan_spearman value: 36.55349180906185 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (zh) config: zh split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 66.18186265837434 - type: cos_sim_spearman value: 67.52365178443915 - type: euclidean_pearson value: 65.46342439169497 - type: euclidean_spearman value: 67.52365178443915 - type: manhattan_pearson value: 67.3476263677961 - type: manhattan_spearman value: 69.09476240936812 - task: type: STS dataset: type: C-MTEB/STSB name: MTEB STSB config: default split: test revision: None metrics: - type: cos_sim_pearson value: 72.53864906415339 - type: cos_sim_spearman value: 72.63037820118355 - type: euclidean_pearson value: 72.42255276991672 - type: euclidean_spearman value: 72.63037820118355 - type: manhattan_pearson value: 72.36324244766192 - type: manhattan_spearman value: 72.58609772740323 - task: type: Reranking dataset: type: C-MTEB/T2Reranking name: MTEB T2Reranking config: default split: dev revision: None metrics: - type: map value: 66.45708148192449 - type: mrr value: 76.08372693469173 - task: type: Retrieval dataset: type: C-MTEB/T2Retrieval name: MTEB T2Retrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 26.436999999999998 - type: map_at_10 value: 74.516 - type: map_at_100 value: 78.29899999999999 - type: map_at_1000 value: 78.372 - type: map_at_3 value: 52.217 - type: map_at_5 value: 64.24 - type: mrr_at_1 value: 88.23 - type: mrr_at_10 value: 91.06400000000001 - type: mrr_at_100 value: 91.18 - type: mrr_at_1000 value: 91.184 - type: mrr_at_3 value: 90.582 - type: mrr_at_5 value: 90.88300000000001 - type: ndcg_at_1 value: 88.23 - type: ndcg_at_10 value: 82.511 - type: ndcg_at_100 value: 86.531 - type: ndcg_at_1000 value: 87.244 - type: ndcg_at_3 value: 83.987 - type: ndcg_at_5 value: 82.46900000000001 - type: precision_at_1 value: 88.23 - type: precision_at_10 value: 41.245 - type: precision_at_100 value: 4.987 - type: precision_at_1000 value: 0.515 - type: precision_at_3 value: 73.675 - type: precision_at_5 value: 61.71 - type: recall_at_1 value: 26.436999999999998 - type: recall_at_10 value: 81.547 - type: recall_at_100 value: 94.548 - type: recall_at_1000 value: 98.197 - type: recall_at_3 value: 54.056000000000004 - type: recall_at_5 value: 67.93 - task: type: Classification dataset: type: C-MTEB/TNews-classification name: MTEB TNews config: default 
split: validation revision: None metrics: - type: accuracy value: 50.784 - type: f1 value: 48.89471168071432 - task: type: Clustering dataset: type: C-MTEB/ThuNewsClusteringP2P name: MTEB ThuNewsClusteringP2P config: default split: test revision: None metrics: - type: v_measure value: 63.19039347990962 - task: type: Clustering dataset: type: C-MTEB/ThuNewsClusteringS2S name: MTEB ThuNewsClusteringS2S config: default split: test revision: None metrics: - type: v_measure value: 55.357378578603225 - task: type: Retrieval dataset: type: C-MTEB/VideoRetrieval name: MTEB VideoRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 58.8 - type: map_at_10 value: 68.623 - type: map_at_100 value: 69.074 - type: map_at_1000 value: 69.085 - type: map_at_3 value: 66.767 - type: map_at_5 value: 67.972 - type: mrr_at_1 value: 58.699999999999996 - type: mrr_at_10 value: 68.573 - type: mrr_at_100 value: 69.024 - type: mrr_at_1000 value: 69.035 - type: mrr_at_3 value: 66.717 - type: mrr_at_5 value: 67.92200000000001 - type: ndcg_at_1 value: 58.8 - type: ndcg_at_10 value: 73.038 - type: ndcg_at_100 value: 75.16199999999999 - type: ndcg_at_1000 value: 75.422 - type: ndcg_at_3 value: 69.297 - type: ndcg_at_5 value: 71.475 - type: precision_at_1 value: 58.8 - type: precision_at_10 value: 8.67 - type: precision_at_100 value: 0.9650000000000001 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 25.533 - type: precision_at_5 value: 16.38 - type: recall_at_1 value: 58.8 - type: recall_at_10 value: 86.7 - type: recall_at_100 value: 96.5 - type: recall_at_1000 value: 98.5 - type: recall_at_3 value: 76.6 - type: recall_at_5 value: 81.89999999999999 - task: type: Classification dataset: type: C-MTEB/waimai-classification name: MTEB Waimai config: default split: test revision: None metrics: - type: accuracy value: 86.61999999999999 - type: ap value: 69.93149123197975 - type: f1 value: 84.99670691559903 --- ## stella model **新闻 | News** **[2023-10-19]** 开源stella-base-en-v2 使用简单,**不需要任何前缀文本**。 Release stella-base-en-v2. This model **does not need any prefix text**.\ **[2023-10-12]** 开源stella-base-zh-v2和stella-large-zh-v2, 效果更好且使用简单,**不需要任何前缀文本**。 Release stella-base-zh-v2 and stella-large-zh-v2. The 2 models have better performance and **do not need any prefix text**.\ **[2023-09-11]** 开源stella-base-zh和stella-large-zh stella是一个通用的文本编码模型,主要有以下模型: | Model Name | Model Size (GB) | Dimension | Sequence Length | Language | Need instruction for retrieval? | |:------------------:|:---------------:|:---------:|:---------------:|:--------:|:-------------------------------:| | stella-base-en-v2 | 0.2 | 768 | 512 | English | No | | stella-large-zh-v2 | 0.65 | 1024 | 1024 | Chinese | No | | stella-base-zh-v2 | 0.2 | 768 | 1024 | Chinese | No | | stella-large-zh | 0.65 | 1024 | 1024 | Chinese | Yes | | stella-base-zh | 0.2 | 768 | 1024 | Chinese | Yes | 完整的训练思路和训练过程已记录在[博客1](https://zhuanlan.zhihu.com/p/655322183)和[博客2](https://zhuanlan.zhihu.com/p/662209559),欢迎阅读讨论。 **训练数据:** 1. 开源数据(wudao_base_200GB[1]、m3e[2]和simclue[3]),着重挑选了长度大于512的文本 2. 在通用语料库上使用LLM构造一批(question, paragraph)和(sentence, paragraph)数据 **训练方法:** 1. 对比学习损失函数 2. 带有难负例的对比学习损失函数(分别基于bm25和vector构造了难负例) 3. EWC(Elastic Weights Consolidation)[4] 4. cosent loss[5] 5. 
每一种类型的数据一个迭代器,分别计算loss进行更新 stella-v2在stella模型的基础上,使用了更多的训练数据,同时知识蒸馏等方法去除了前置的instruction( 比如piccolo的`查询:`, `结果:`, e5的`query:`和`passage:`)。 **初始权重:**\ stella-base-zh和stella-large-zh分别以piccolo-base-zh[6]和piccolo-large-zh作为基础模型,512-1024的position embedding使用层次分解位置编码[7]进行初始化。\ 感谢商汤科技研究院开源的[piccolo系列模型](https://huggingface.co/sensenova)。 stella is a general-purpose text encoder, which mainly includes the following models: | Model Name | Model Size (GB) | Dimension | Sequence Length | Language | Need instruction for retrieval? | |:------------------:|:---------------:|:---------:|:---------------:|:--------:|:-------------------------------:| | stella-base-en-v2 | 0.2 | 768 | 512 | English | No | | stella-large-zh-v2 | 0.65 | 1024 | 1024 | Chinese | No | | stella-base-zh-v2 | 0.2 | 768 | 1024 | Chinese | No | | stella-large-zh | 0.65 | 1024 | 1024 | Chinese | Yes | | stella-base-zh | 0.2 | 768 | 1024 | Chinese | Yes | The training data mainly includes: 1. Open-source training data (wudao_base_200GB, m3e, and simclue), with a focus on selecting texts with lengths greater than 512. 2. A batch of (question, paragraph) and (sentence, paragraph) data constructed on a general corpus using LLM. The loss functions mainly include: 1. Contrastive learning loss function 2. Contrastive learning loss function with hard negative examples (based on bm25 and vector hard negatives) 3. EWC (Elastic Weights Consolidation) 4. cosent loss Model weight initialization:\ stella-base-zh and stella-large-zh use piccolo-base-zh and piccolo-large-zh as the base models, respectively, and the 512-1024 position embedding uses the initialization strategy of hierarchical decomposed position encoding. Training strategy:\ One iterator for each type of data, separately calculating the loss. Based on stella models, stella-v2 use more training data and remove instruction by Knowledge Distillation. 
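The loss functions listed above start from a plain contrastive objective over (query, passage) pairs, optionally with hard negatives. For illustration only, here is a minimal in-batch InfoNCE sketch in PyTorch; the function name and the temperature value are assumptions, not the actual stella training code.

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(query_vecs, passage_vecs, temperature=0.05):
    """In-batch contrastive (InfoNCE) loss: each query's positive is the passage
    at the same index, and all other passages in the batch act as negatives."""
    q = F.normalize(query_vecs, dim=-1)
    p = F.normalize(passage_vecs, dim=-1)
    logits = q @ p.T / temperature                      # (batch, batch) similarity matrix
    labels = torch.arange(q.size(0), device=q.device)   # positive is the diagonal
    return F.cross_entropy(logits, labels)
```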
## Metric #### C-MTEB leaderboard (Chinese) | Model Name | Model Size (GB) | Dimension | Sequence Length | Average (35) | Classification (9) | Clustering (4) | Pair Classification (2) | Reranking (4) | Retrieval (8) | STS (8) | |:------------------:|:---------------:|:---------:|:---------------:|:------------:|:------------------:|:--------------:|:-----------------------:|:-------------:|:-------------:|:-------:| | stella-large-zh-v2 | 0.65 | 1024 | 1024 | 65.13 | 69.05 | 49.16 | 82.68 | 66.41 | 70.14 | 58.66 | | stella-base-zh-v2 | 0.2 | 768 | 1024 | 64.36 | 68.29 | 49.4 | 79.95 | 66.1 | 70.08 | 56.92 | | stella-large-zh | 0.65 | 1024 | 1024 | 64.54 | 67.62 | 48.65 | 78.72 | 65.98 | 71.02 | 58.3 | | stella-base-zh | 0.2 | 768 | 1024 | 64.16 | 67.77 | 48.7 | 76.09 | 66.95 | 71.07 | 56.54 | #### MTEB leaderboard (English) | Model Name | Model Size (GB) | Dimension | Sequence Length | Average (56) | Classification (12) | Clustering (11) | Pair Classification (3) | Reranking (4) | Retrieval (15) | STS (10) | Summarization (1) | |:-----------------:|:---------------:|:---------:|:---------------:|:------------:|:-------------------:|:---------------:|:-----------------------:|:-------------:|:--------------:|:--------:|:------------------:| | stella-base-en-v2 | 0.2 | 768 | 512 | 62.61 | 75.28 | 44.9 | 86.45 | 58.77 | 50.1 | 83.02 | 32.52 | #### Reproduce our results **C-MTEB:** ```python import torch import numpy as np from typing import List from mteb import MTEB from sentence_transformers import SentenceTransformer class FastTextEncoder(): def __init__(self, model_name): self.model = SentenceTransformer(model_name).cuda().half().eval() self.model.max_seq_length = 512 def encode( self, input_texts: List[str], *args, **kwargs ): new_sens = list(set(input_texts)) new_sens.sort(key=lambda x: len(x), reverse=True) vecs = self.model.encode( new_sens, normalize_embeddings=True, convert_to_numpy=True, batch_size=256 ).astype(np.float32) sen2arrid = {sen: idx for idx, sen in enumerate(new_sens)} vecs = vecs[[sen2arrid[sen] for sen in input_texts]] torch.cuda.empty_cache() return vecs if __name__ == '__main__': model_name = "infgrad/stella-base-zh-v2" output_folder = "zh_mteb_results/stella-base-zh-v2" task_names = [t.description["name"] for t in MTEB(task_langs=['zh', 'zh-CN']).tasks] model = FastTextEncoder(model_name) for task in task_names: MTEB(tasks=[task], task_langs=['zh', 'zh-CN']).run(model, output_folder=output_folder) ``` **MTEB:** You can use official script to reproduce our result. [scripts/run_mteb_english.py](https://github.com/embeddings-benchmark/mteb/blob/main/scripts/run_mteb_english.py) #### Evaluation for long text 经过实际观察发现,C-MTEB的评测数据长度基本都是小于512的, 更致命的是那些长度大于512的文本,其重点都在前半部分 这里以CMRC2018的数据为例说明这个问题: ``` question: 《无双大蛇z》是谁旗下ω-force开发的动作游戏? passage:《无双大蛇z》是光荣旗下ω-force开发的动作游戏,于2009年3月12日登陆索尼playstation3,并于2009年11月27日推...... 
```

The passage is over 800 characters long, well above 512, but for this question the first 40 characters are already enough for retrieval; the extra content is noise for the model and actually hurts performance.\
In short, the existing datasets have two problems:\
1) texts longer than 512 are too rare;\
2) even when a text exceeds 512, only the first 512 characters matter for retrieval;\
so **a model's long-text encoding ability cannot be evaluated accurately.**

To address this, we collected relevant open-source data, filtered it with rules, and assembled six long-text test sets:

- CMRC2018, general encyclopedia
- CAIL, legal reading comprehension
- DRCD, Traditional-Chinese encyclopedia, converted to Simplified Chinese
- Military, military Q&A
- Squad, English reading comprehension, translated into Chinese
- Multifieldqa_zh, Tsinghua's long-text comprehension benchmark for LLMs [9]

The processing rule keeps samples whose answer appears after position 512 in the text; shorter test items are down-sampled so that long and short texts are mixed at a ratio of roughly 1:2, so the model has to understand both short and long texts.

Except for the Military dataset, the other five test sets can be downloaded here: https://drive.google.com/file/d/1WC6EWaCbVgz-vPMDFH4TwAMkLyh5WNcN/view?usp=sharing

The evaluation metric is Recall@5 (a minimal evaluation sketch is given at the end of this card); the results are as follows:

| Dataset         | piccolo-base-zh | piccolo-large-zh | bge-base-zh | bge-large-zh | stella-base-zh | stella-large-zh |
|:---------------:|:---------------:|:----------------:|:-----------:|:------------:|:--------------:|:---------------:|
| CMRC2018        | 94.34           | 93.82            | 91.56       | 93.12        | 96.08          | 95.56           |
| CAIL            | 28.04           | 33.64            | 31.22       | 33.94        | 34.62          | 37.18           |
| DRCD            | 78.25           | 77.9             | 78.34       | 80.26        | 86.14          | 84.58           |
| Military        | 76.61           | 73.06            | 75.65       | 75.81        | 83.71          | 80.48           |
| Squad           | 91.21           | 86.61            | 87.87       | 90.38        | 93.31          | 91.21           |
| Multifieldqa_zh | 81.41           | 83.92            | 83.92       | 83.42        | 79.9           | 80.4            |
| **Average**     | 74.98           | 74.83            | 74.76       | 76.15        | **78.96**      | **78.24**       |

**Note:** Because long-text evaluation data is scarce, the train splits were also used when constructing these test sets. If you evaluate your own model, check its training data to avoid leakage.

## Usage

#### stella Chinese models

stella-base-zh and stella-large-zh: these models are trained on top of piccolo, so **the usage is identical to piccolo**: for retrieval and reranking, prepend `查询: ` to the query and `结果: ` to the passage; for short-to-short text matching, no prefix is needed.

stella-base-zh-v2 and stella-large-zh-v2: these models are easy to use and **need no prefix text in any scenario**.

All stella Chinese models use mean pooling to produce the text embedding.

Usage with the sentence-transformers library:

```python
from sentence_transformers import SentenceTransformer

sentences = ["数据1", "数据2"]
model = SentenceTransformer('infgrad/stella-base-zh-v2')
print(model.max_seq_length)
embeddings_1 = model.encode(sentences, normalize_embeddings=True)
embeddings_2 = model.encode(sentences, normalize_embeddings=True)
# Cosine similarity matrix (embeddings are already L2-normalized).
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```

Usage with the transformers library directly:

```python
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.preprocessing import normalize

model = AutoModel.from_pretrained('infgrad/stella-base-zh-v2')
tokenizer = AutoTokenizer.from_pretrained('infgrad/stella-base-zh-v2')
sentences = ["数据1", "数据ABCDEFGH"]
batch_data = tokenizer(
    batch_text_or_text_pairs=sentences,
    padding="longest",
    return_tensors="pt",
    max_length=1024,
    truncation=True,
)
attention_mask = batch_data["attention_mask"]
with torch.no_grad():
    model_output = model(**batch_data)
# Mean pooling over non-padding tokens.
last_hidden = model_output.last_hidden_state.masked_fill(~attention_mask[..., None].bool(), 0.0)
vectors = last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
vectors = normalize(vectors, norm="l2", axis=1)
print(vectors.shape)  # 2,768
```

#### stella models for English

**Using Sentence-Transformers:**

```python
from sentence_transformers import SentenceTransformer

sentences = ["one car come", "one car go"]
model = SentenceTransformer('infgrad/stella-base-en-v2')
print(model.max_seq_length)
embeddings_1 = model.encode(sentences, normalize_embeddings=True)
embeddings_2 = model.encode(sentences, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```

**Using HuggingFace Transformers:**

```python
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.preprocessing import normalize

model = AutoModel.from_pretrained('infgrad/stella-base-en-v2')
tokenizer = AutoTokenizer.from_pretrained('infgrad/stella-base-en-v2')
sentences = ["one car come", "one car go"]
batch_data = tokenizer(
    batch_text_or_text_pairs=sentences,
    padding="longest",
    return_tensors="pt",
    max_length=512,
    truncation=True,
)
attention_mask = batch_data["attention_mask"]
with torch.no_grad():
    model_output = model(**batch_data)
# Mean pooling over non-padding tokens.
last_hidden = model_output.last_hidden_state.masked_fill(~attention_mask[..., None].bool(), 0.0)
vectors = last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
vectors = normalize(vectors, norm="l2", axis=1)
print(vectors.shape)  # 2,768
```

## Training Detail

**Hardware:** a single A100-80GB GPU

**Environment:** torch 1.13.*; transformers Trainer + DeepSpeed + gradient checkpointing

**Learning rate:** 1e-6

**batch_size:** 1024 for the base models and 768 for the large models, each with an extra 20% hard negatives

**Data size:** about 1 million samples for the first-version models, of which roughly 200K were constructed with an LLM (13B parameters). The v2 models were trained on about 20 million samples.

## ToDoList

**Evaluation stability:** during evaluation, the Clustering tasks can differ from the official results by roughly ±0.0x because the clustering code does not set a random seed; the gap is negligible and does not affect the conclusions.

**Higher-quality long-text training and test data:** most of the training data was constructed with a 13B model, so it inevitably contains noise; the test data was mostly derived from MRC datasets, so the questions are all factoid-style and do not match the real-world distribution.

**OOD performance:** although many embedding models have appeared recently, on less common domains none of them, stella, OpenAI, and Cohere included, beat BM25.

## Reference

1. https://www.scidb.cn/en/detail?dataSetId=c6a3fe684227415a9db8e21bac4a15ab
2. https://github.com/wangyuxinwhy/uniem
3. https://github.com/CLUEbenchmark/SimCLUE
4. https://arxiv.org/abs/1612.00796
5. https://kexue.fm/archives/8847
6. https://huggingface.co/sensenova/piccolo-base-zh
7. https://kexue.fm/archives/7947
8. https://github.com/FlagOpen/FlagEmbedding
9. https://github.com/THUDM/LongBench
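As referenced in the long-text evaluation section above, the Recall@5 numbers can be reproduced roughly along the following lines. This is a minimal sketch under assumptions, not the exact evaluation script: it assumes the test sets are loaded into parallel `queries`/`passages` lists with a `gold` passage index per query, and that retrieval is plain dense search with normalized embeddings.

```python
from typing import List

import numpy as np
from sentence_transformers import SentenceTransformer


def recall_at_k(model_name: str, queries: List[str], passages: List[str],
                gold: List[int], k: int = 5) -> float:
    """Fraction of queries whose gold passage is ranked within the top-k."""
    model = SentenceTransformer(model_name)
    model.max_seq_length = 1024  # long-text setting
    q_vecs = model.encode(queries, normalize_embeddings=True, convert_to_numpy=True)
    p_vecs = model.encode(passages, normalize_embeddings=True, convert_to_numpy=True)
    scores = q_vecs @ p_vecs.T                 # cosine similarity (embeddings are normalized)
    topk = np.argsort(-scores, axis=1)[:, :k]  # indices of the k best passages per query
    return float(np.mean([gold[i] in topk[i] for i in range(len(queries))]))


# Hypothetical usage: queries[i] is a question from, e.g., CMRC2018, passages is the
# candidate pool, and gold[i] is the index of the passage containing the answer.
# print(recall_at_k("infgrad/stella-base-zh-v2", queries, passages, gold, k=5))
```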
37,851
[ [ -0.0256805419921875, -0.0545654296875, 0.023712158203125, 0.035675048828125, -0.0227508544921875, -0.0205841064453125, -0.0137939453125, -0.026885986328125, 0.02508544921875, 0.0178070068359375, -0.0455322265625, -0.060150146484375, -0.047576904296875, 0.016...
kyujinpy/KoT-platypus2-7B
2023-10-19T13:28:38.000Z
[ "transformers", "pytorch", "llama", "text-generation", "ko", "dataset:kyujinpy/KoCoT_2000", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
kyujinpy
null
null
kyujinpy/KoT-platypus2-7B
6
2,537
transformers
2023-09-29T15:19:22
---
language:
- ko
datasets:
- kyujinpy/KoCoT_2000
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---

**This model was developed by the LLM research consortium of (주)미디어그룹사람과숲 and (주)마커.**

**The license is `cc-by-nc-sa-4.0`.**

# **KoT-platypus2**
![img](./KoT-platypus2.png)
**CoT + KO-platypus2 = KoT-platypus2**

## Model Details

**Model Developers** Kyujin Han (kyujinpy)

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture**
KoT-platypus2-7B is an auto-regressive language model based on the LLaMA2 transformer architecture.

**Repo Link**
Github KoT-platypus: [KoT-platypus2](https://github.com/KyujinHan/KoT-platypus)

**Base Model**
[KO-Platypus2-7B-ex](https://huggingface.co/kyujinpy/KO-Platypus2-7B-ex)
More detail repo (GitHub): [CoT-llama2](https://github.com/Marker-Inc-Korea/CoT-llama2)
More detail repo (GitHub): [KO-Platypus2](https://github.com/Marker-Inc-Korea/KO-Platypus)

**Training Dataset**
I use [KoCoT_2000](https://huggingface.co/datasets/kyujinpy/KoCoT_2000), which was translated from [kaist-CoT](https://huggingface.co/datasets/kaist-ai/CoT-Collection) using DeepL.
Training was done on a single A100 40GB GPU on Colab.

**Training Hyperparameters**
| Hyperparameters | Value |
| --- | --- |
| batch_size | `64` |
| micro_batch_size | `1` |
| Epochs | `15` |
| learning_rate | `1e-5` |
| cutoff_len | `4096` |
| lr_scheduler | `linear` |
| base_model | `kyujinpy/KO-Platypus2-7B-ex` |

# **Model Benchmark**

## LM Eval Harness - Korean (polyglot branch)
- Used EleutherAI's [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot)

> Question Answering (QA)
### COPA (F1)
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| [Polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 0.7196 | 0.7193 | 0.7204 | 0.7206 |
| [Polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 0.7595 | 0.7608 | 0.7638 | 0.7788 |
| [Polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 0.7745 | 0.7676 | 0.7775 | 0.7887 |
| [Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 0.7937 | 0.8108 | 0.8037 | 0.8369 |
| [Llama-2-Ko-7b 20B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.7388 | 0.7626 | 0.7808 | 0.7979 |
| [Llama-2-Ko-7b 40B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.7436 | 0.7927 | 0.8037 | 0.8259 |
| [KO-platypus2-7B-EX](https://huggingface.co/kyujinpy/KO-Platypus2-7B-ex) | 0.7509 | 0.7899 | 0.8029 | 0.8290 |
| **KoT-platypus2-7B(ours)** | 0.7517 | 0.7868 | 0.8009 | 0.8239 |

> Natural Language Inference (NLI)
### HellaSwag (F1)
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| [Polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 0.5247 | 0.5260 | 0.5278 | 0.5427 |
| [Polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 0.5707 | 0.5830 | 0.5670 | 0.5787 |
| [Polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 0.5976 | 0.5998 | 0.5979 | 0.6208 |
| [Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 0.5954 | 0.6306 | 0.6098 | 0.6118 |
| [Llama-2-Ko-7b 20B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.4518 | 0.4668 | 0.4726 | 0.4828 |
| [Llama-2-Ko-7b 40B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.4562 | 0.4657 | 0.4698 | 0.4774 |
| [KO-platypus2-7B-EX](https://huggingface.co/kyujinpy/KO-Platypus2-7B-ex) | 0.4571 | 0.4461 | 0.4371 | 0.4525 |
| **KoT-platypus2-7B(ours)** | 0.4432 | 0.4382 | 0.4550 | 0.4534 |
> Question Answering (QA)
### BoolQ (F1)
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| [Polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 0.3552 | 0.4751 | 0.4109 | 0.4038 |
| [Polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 0.4320 | 0.5263 | 0.4930 | 0.4038 |
| [Polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 0.4356 | 0.5698 | 0.5187 | 0.5236 |
| [Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 0.4818 | 0.6041 | 0.6289 | 0.6448 |
| [Llama-2-Ko-7b 20B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.3607 | 0.6797 | 0.6801 | 0.6622 |
| [Llama-2-Ko-7b 40B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.5786 | 0.6977 | 0.7084 | 0.7144 |
| [KO-platypus2-7B-EX](https://huggingface.co/kyujinpy/KO-Platypus2-7B-ex) | 0.6028 | 0.6979 | 0.7016 | 0.6988 |
| **KoT-platypus2-7B(ours)** | 0.6142 | 0.6757 | 0.6839 | 0.6878 |

> Classification
### SentiNeg (F1)
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| [Polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 0.6790 | 0.6257 | 0.5514 | 0.7851 |
| [Polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 0.4858 | 0.7950 | 0.7320 | 0.7851 |
| [Polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 0.3394 | 0.8841 | 0.8808 | 0.9521 |
| [Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 0.9117 | 0.9015 | 0.9345 | 0.9723 |
| [Llama-2-Ko-7b 20B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.4855 | 0.8295 | 0.8711 | 0.8513 |
| [Llama-2-Ko-7b 40B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.4594 | 0.7611 | 0.7276 | 0.9370 |
| [KO-platypus2-7B-EX](https://huggingface.co/kyujinpy/KO-Platypus2-7B-ex) | 0.5821 | 0.7653 | 0.7991 | 0.8643 |
| **KoT-platypus2-7B(ours)** | 0.6127 | 0.7199 | 0.7531 | 0.8381 |

# Implementation Code
```python
### KoT-platypus2
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "kyujinpy/KoT-platypus2-7B"
# Hyphens are not valid in Python identifiers, so the variables use underscores.
cot_llama = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
cot_llama_tokenizer = AutoTokenizer.from_pretrained(repo)
```

> Readme format: [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)

---
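For completeness, a short generation sketch continuing the Implementation Code above. The prompt string and generation settings are placeholders only, since the card does not specify the prompt template used during training; check the KoT-platypus repository for the template actually used.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "kyujinpy/KoT-platypus2-7B"
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo)

prompt = "한국의 수도는 어디인가요?"  # "What is the capital of Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```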
6,000
[ [ -0.04791259765625, -0.0477294921875, 0.0201873779296875, 0.03509521484375, -0.04736328125, 0.014251708984375, -0.01003265380859375, -0.038909912109375, 0.060302734375, 0.005153656005859375, -0.0250091552734375, -0.044677734375, -0.054351806640625, 0.01882934...