Dataset schema (column name, type, and min/max observed over the rows):

| Column | Type | Min | Max |
|---|---|---|---|
| modelId | string (length) | 4 | 111 |
| lastModified | string (length) | 24 | 24 |
| tags | list | | |
| pipeline_tag | string (length) | 5 | 30 |
| author | string (length) | 2 | 34 |
| config | null | | |
| securityStatus | null | | |
| id | string (length) | 4 | 111 |
| likes | int64 | 0 | 9.53k |
| downloads | int64 | 2 | 73.6M |
| library_name | string (length) | 2 | 84 |
| created | timestamp[us] | | |
| card | string (length) | 101 | 901k |
| card_len | int64 | 101 | 901k |
| embeddings | list | | |
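The rows below follow this schema. A minimal sketch of loading such a dump with the `datasets` library; the repo id is a hypothetical placeholder, not the actual source of this dump:

```python
from datasets import load_dataset

# "your-org/model-cards-dump" is a hypothetical placeholder; substitute the
# actual dataset repository this dump came from.
ds = load_dataset("your-org/model-cards-dump", split="train")

print(ds.features)  # should match the schema table above
row = ds[0]
print(row["modelId"], row["likes"], row["downloads"], row["card_len"])
```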
nickwong64/bert-base-uncased-poems-sentiment
2023-02-08T08:14:08.000Z
[ "transformers", "pytorch", "bert", "text-classification", "sentiment-analysis", "poem-sentiment-detection", "poem-sentiment", "en", "dataset:poem_sentment", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
nickwong64
null
null
nickwong64/bert-base-uncased-poems-sentiment
2
1,091
transformers
2023-02-08T06:29:42
--- language: - en thumbnail: https://cdn.theatlantic.com/thumbor/lx3Vy9ojT2A0mHVooAUARLRpUEc=/0x215:3500x2184/976x549/media/img/mt/2018/03/RTR4F51Y/original.jpg tags: - text-classification - sentiment-analysis - poem-sentiment-detection - poem-sentiment license: apache-2.0 datasets: - poem_sentiment metrics: - Accuracy, F1 score widget: - text: "No man is an island, Entire of itself, Every man is a piece of the continent, A part of the main." example_title: "No man is an island" - text: "Ten years, dead and living dim and draw apart. I don’t try to remember, But forgetting is hard." example_title: "Dreaming of my departed wife" - text: "My mind to me a kingdom is; Such present joys therein I find, That it excels all other bliss." example_title: "My mind to me a kingdom is" --- ## nickwong64/bert-base-uncased-poems-sentiment BERT is a bidirectional Transformer encoder architecture trained with a masked language modeling (MLM) objective. [bert-base-uncased](https://huggingface.co/bert-base-uncased) was fine-tuned on the [poem_sentiment](https://huggingface.co/datasets/poem_sentiment) dataset using the Hugging Face Trainer with the training parameters below. ``` learning rate 2e-5, batch size 8, num_train_epochs=8, ``` ## Model Performance | Epoch | Training Loss | Validation Loss | Accuracy | F1 | | --- | --- | --- | --- | --- | | 8 | 0.468200 | 0.458632 | 0.904762 | 0.899756 | ## How to Use the Model ```python from transformers import pipeline nlp = pipeline(task='text-classification', model='nickwong64/bert-base-uncased-poems-sentiment') p1 = "No man is an island, Entire of itself, Every man is a piece of the continent, A part of the main." p2 = "Ten years, dead and living dim and draw apart. I don’t try to remember, But forgetting is hard." p3 = "My mind to me a kingdom is; Such present joys therein I find, That it excels all other bliss." print(nlp(p1)) print(nlp(p2)) print(nlp(p3)) """ output: [{'label': 'no_impact', 'score': 0.9982421398162842}] [{'label': 'negative', 'score': 0.9856176972389221}] [{'label': 'positive', 'score': 0.9931322932243347}] """ ``` ## Dataset [poem_sentiment](https://huggingface.co/datasets/poem_sentiment) ## Labels ``` {0: 'negative', 1: 'positive', 2: 'no_impact', 3: 'mixed'} ``` ## Evaluation ``` {'test_loss': 0.4359096586704254, 'test_accuracy': 0.9142857142857143, 'test_f1': 0.9120554830816401, 'test_runtime': 0.5689, 'test_samples_per_second': 184.582, 'test_steps_per_second': 24.611} ```
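For context, a minimal sketch of the fine-tuning recipe this card describes (Hugging Face Trainer, lr 2e-5, batch size 8, 8 epochs). The `verse_text`/`label` column names and the remaining `TrainingArguments` defaults are assumptions, not the author's exact setup:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("poem_sentiment")  # columns assumed: verse_text, label
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
dataset = dataset.map(lambda b: tokenizer(b["verse_text"], truncation=True,
                                          padding="max_length"), batched=True)

# 4 labels, matching the card: negative, positive, no_impact, mixed
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased",
                                                           num_labels=4)
args = TrainingArguments(output_dir="bert-poems-sentiment",
                         learning_rate=2e-5,
                         per_device_train_batch_size=8,
                         num_train_epochs=8)
Trainer(model=model, args=args,
        train_dataset=dataset["train"],
        eval_dataset=dataset["validation"]).train()
```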
2,502
[ [ -0.029541015625, -0.0309600830078125, -0.0052947998046875, 0.036590576171875, -0.031341552734375, -0.01025390625, -0.0333251953125, -0.003498077392578125, 0.0335693359375, 0.02960205078125, -0.049224853515625, -0.0435791015625, -0.051544189453125, -0.0003998...
gisohi6975/nsfw-waifu-diffusion
2023-05-15T19:14:39.000Z
[ "diffusers", "stable-diffusion", "text-to-image", "en", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
gisohi6975
null
null
gisohi6975/nsfw-waifu-diffusion
15
1,091
diffusers
2023-05-15T19:09:21
--- language: - en tags: - stable-diffusion - text-to-image license: creativeml-openrail-m inference: true --- # waifu-diffusion v1.4 - Diffusion for Weebs waifu-diffusion is a latent text-to-image diffusion model that has been conditioned on high-quality anime images through fine-tuning. ![image](https://user-images.githubusercontent.com/26317155/210155933-db3a5f1a-1ec3-4777-915c-6deff2841ce9.png) <sub>masterpiece, best quality, 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, watercolor, night, turtleneck</sub> [Original Weights](https://huggingface.co/hakurei/waifu-diffusion-v1-4) # Gradio & Colab We also support a [Gradio](https://github.com/gradio-app/gradio) Web UI and Colab with Diffusers to run Waifu Diffusion: [![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/hakurei/waifu-diffusion-demo) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1_8wPN7dJO746QXsFnB09Uq2VGgSRFuYE#scrollTo=1HaCauSq546O) ## Model Description [See here for a full model overview.](https://gist.github.com/harubaru/f727cedacae336d1f7877c4bbe2196e1) ## License This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce or share illegal or harmful outputs or content 2. The authors claim no rights over the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) ## Downstream Uses This model can be used for entertainment purposes and as a generative art assistant. ## Example Code ```python import torch from torch import autocast from diffusers import StableDiffusionPipeline pipe = StableDiffusionPipeline.from_pretrained( 'hakurei/waifu-diffusion', torch_dtype=torch.float32 ).to('cuda') prompt = "1girl, aqua eyes, baseball cap, blonde hair, closed mouth, earrings, green background, hat, hoop earrings, jewelry, looking at viewer, shirt, short hair, simple background, solo, upper body, yellow shirt" with autocast("cuda"): image = pipe(prompt, guidance_scale=6).images[0] # the pipeline output exposes generated images via .images image.save("test.png") ``` ## Team Members and Acknowledgements This project would not have been possible without the incredible work by Stability AI and Novel AI. - [Haru](https://github.com/harubaru) - [Salt](https://github.com/sALTaccount/) - [Sta @ Bit192](https://twitter.com/naclbbr) In order to reach us, you can join our [Discord server](https://discord.gg/touhouai). [![Discord Server](https://discordapp.com/api/guilds/930499730843250783/widget.png?style=banner2)](https://discord.gg/touhouai)
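As a follow-on to the card's example, a hedged sketch of the common half-precision variant (not from the card itself): loading in fp16 roughly halves VRAM use at some cost in numerical precision.

```python
import torch
from diffusers import StableDiffusionPipeline

# fp16 variant of the card's example; same model id, lower VRAM footprint.
pipe = StableDiffusionPipeline.from_pretrained(
    'hakurei/waifu-diffusion',
    torch_dtype=torch.float16,
).to('cuda')

image = pipe("1girl, aqua eyes, baseball cap, blonde hair, looking at viewer",
             guidance_scale=6).images[0]
image.save("test_fp16.png")
```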
3,404
[ [ -0.0399169921875, -0.06671142578125, 0.0350341796875, 0.042694091796875, -0.0185394287109375, -0.015899658203125, 0.017242431640625, -0.0217742919921875, 0.0164947509765625, 0.0294647216796875, -0.0391845703125, -0.036468505859375, -0.057037353515625, -0.015...
alon-albalak/xlm-roberta-base-xquad
2021-11-05T20:24:39.000Z
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "multilingual", "dataset:xquad", "autotrain_compatible", "endpoints_compatible", "region:us" ]
question-answering
alon-albalak
null
null
alon-albalak/xlm-roberta-base-xquad
2
1,089
transformers
2022-03-02T23:29:05
--- tags: - multilingual datasets: - xquad --- # xlm-roberta-base for multilingual QA # Overview **Language Model**: xlm-roberta-base \ **Downstream task**: Extractive QA \ **Training data**: [XQuAD](https://github.com/deepmind/xquad)\ **Testing Data**: [XQuAD](https://github.com/deepmind/xquad) # Hyperparameters ```python batch_size = 40 n_epochs = 10 max_seq_len = 384 doc_stride = 128 learning_rate = 3e-5 ``` # Performance Evaluated on a held-out test set from XQuAD ```python "exact_match": 79.44756554307116, "f1": 89.79318021513376, "test_samples": 2307 ``` # Usage ## In Transformers ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name = "alon-albalak/xlm-roberta-base-xquad" # a) Get predictions nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) QA_input = { 'question': 'Why is model conversion important?', 'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.' } res = nlp(QA_input) # b) Load model & tokenizer model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ``` ## In FARM ```python from farm.modeling.adaptive_model import AdaptiveModel from farm.modeling.tokenization import Tokenizer from farm.infer import QAInferencer model_name = "alon-albalak/xlm-roberta-base-xquad" # a) Get predictions nlp = QAInferencer.load(model_name) QA_input = [{"questions": ["Why is model conversion important?"], "text": "The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks."}] res = nlp.inference_from_dicts(dicts=QA_input, rest_api_schema=True) # b) Load model & tokenizer model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering") tokenizer = Tokenizer.load(model_name) ``` ## In Haystack ```python from haystack.nodes import FARMReader, TransformersReader # import path for Haystack >= 1.0 reader = FARMReader(model_name_or_path="alon-albalak/xlm-roberta-base-xquad") # or reader = TransformersReader(model="alon-albalak/xlm-roberta-base-xquad", tokenizer="alon-albalak/xlm-roberta-base-xquad") ``` Usage instructions for FARM and Haystack were adapted from https://huggingface.co/deepset/xlm-roberta-large-squad2
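Since the model is trained on the multilingual XQuAD set, non-English queries should work too. A small illustrative sketch; the Spanish QA pair is my translation of the card's own example, not from the source:

```python
from transformers import pipeline

nlp = pipeline("question-answering", model="alon-albalak/xlm-roberta-base-xquad")
res = nlp(
    question="¿Por qué es importante la conversión de modelos?",
    context=("La opción de convertir modelos entre FARM y transformers da "
             "libertad al usuario y permite cambiar fácilmente entre frameworks."),
)
print(res["answer"], res["score"])  # extracted span plus confidence score
```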
2,324
[ [ -0.025543212890625, -0.046478271484375, 0.0309906005859375, 0.01447296142578125, -0.0021648406982421875, -0.00616455078125, -0.0161285400390625, -0.0264739990234375, -0.003936767578125, 0.041412353515625, -0.0537109375, -0.045562744140625, -0.034820556640625, ...
timm/convnext_xlarge.fb_in22k
2023-03-31T22:43:27.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-22k", "arxiv:2201.03545", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/convnext_xlarge.fb_in22k
0
1,089
timm
2022-12-13T07:15:58
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-22k --- # Model card for convnext_xlarge.fb_in22k A ConvNeXt image classification model. Pretrained on ImageNet-22k by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 392.9 - GMACs: 61.0 - Activations (M): 57.5 - Image size: 224 x 224 - **Papers:** - A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545 - **Original:** https://github.com/facebookresearch/ConvNeXt - **Dataset:** ImageNet-22k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('convnext_xlarge.fb_in22k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'convnext_xlarge.fb_in22k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 256, 56, 56]) # torch.Size([1, 512, 28, 28]) # torch.Size([1, 1024, 14, 14]) # torch.Size([1, 2048, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'convnext_xlarge.fb_in22k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 2048, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.
| model | top1 | top5 | img_size | param_count | gmacs | macts | samples_per_sec | batch_size |
|---|---|---|---|---|---|---|---|---|
| [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) | 88.848 | 98.742 | 512 | 660.29 | 600.81 | 413.07 | 28.58 | 48 |
| [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) | 88.668 | 98.738 | 384 | 660.29 | 337.96 | 232.35 | 50.56 | 64 |
| [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) | 88.612 | 98.704 | 256 | 846.47 | 198.09 | 124.45 | 122.45 | 256 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) | 88.312 | 98.578 | 384 | 200.13 | 101.11 | 126.74 | 196.84 | 256 |
| [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) | 88.196 | 98.532 | 384 | 197.96 | 101.1 | 126.74 | 128.94 | 128 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) | 87.968 | 98.47 | 320 | 200.13 | 70.21 | 88.02 | 283.42 | 256 |
| [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) | 87.75 | 98.556 | 384 | 350.2 | 179.2 | 168.99 | 124.85 | 192 |
| [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) | 87.646 | 98.422 | 384 | 88.72 | 45.21 | 84.49 | 209.51 | 256 |
| [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) | 87.476 | 98.382 | 384 | 197.77 | 101.1 | 126.74 | 194.66 | 256 |
| [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) | 87.344 | 98.218 | 256 | 200.13 | 44.94 | 56.33 | 438.08 | 256 |
| [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) | 87.26 | 98.248 | 224 | 197.96 | 34.4 | 43.13 | 376.84 | 256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) | 87.138 | 98.212 | 384 | 88.59 | 45.21 | 84.49 | 365.47 | 256 |
| [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) | 87.002 | 98.208 | 224 | 350.2 | 60.98 | 57.5 | 368.01 | 256 |
| [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) | 86.796 | 98.264 | 384 | 88.59 | 45.21 | 84.49 | 366.54 | 256 |
| [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) | 86.74 | 98.022 | 224 | 88.72 | 15.38 | 28.75 | 624.23 | 256 |
| [convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) | 86.636 | 98.028 | 224 | 197.77 | 34.4 | 43.13 | 581.43 | 256 |
| [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) | 86.504 | 97.97 | 384 | 88.59 | 45.21 | 84.49 | 368.14 | 256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) | 86.344 | 97.97 | 256 | 88.59 | 20.09 | 37.55 | 816.14 | 256 |
| [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) | 86.256 | 97.75 | 224 | 660.29 | 115.0 | 79.07 | 154.72 | 256 |
| [convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) | 86.182 | 97.92 | 384 | 50.22 | 25.58 | 63.37 | 516.19 | 256 |
| [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) | 86.154 | 97.68 | 256 | 88.59 | 20.09 | 37.55 | 819.86 | 256 |
| [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) | 85.822 | 97.866 | 224 | 88.59 | 15.38 | 28.75 | 1037.66 | 256 |
| [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) | 85.778 | 97.886 | 384 | 50.22 | 25.58 | 63.37 | 518.95 | 256 |
| [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) | 85.742 | 97.584 | 224 | 197.96 | 34.4 | 43.13 | 375.23 | 256 |
| [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) | 85.174 | 97.506 | 224 | 50.22 | 8.71 | 21.56 | 1474.31 | 256 |
| [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) | 85.118 | 97.608 | 384 | 28.59 | 13.14 | 39.48 | 856.76 | 256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) | 85.112 | 97.63 | 384 | 28.64 | 13.14 | 39.48 | 491.32 | 256 |
| [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) | 84.874 | 97.09 | 224 | 88.72 | 15.38 | 28.75 | 625.33 | 256 |
| [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) | 84.562 | 97.394 | 224 | 50.22 | 8.71 | 21.56 | 1478.29 | 256 |
| [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) | 84.282 | 96.892 | 224 | 197.77 | 34.4 | 43.13 | 584.28 | 256 |
| [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) | 84.186 | 97.124 | 224 | 28.59 | 4.47 | 13.44 | 2433.7 | 256 |
| [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) | 84.084 | 97.14 | 384 | 28.59 | 13.14 | 39.48 | 862.95 | 256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) | 83.894 | 96.964 | 224 | 28.64 | 4.47 | 13.44 | 1452.72 | 256 |
| [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) | 83.82 | 96.746 | 224 | 88.59 | 15.38 | 28.75 | 1054.0 | 256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) | 83.37 | 96.742 | 384 | 15.62 | 7.22 | 24.61 | 801.72 | 256 |
| [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) | 83.142 | 96.434 | 224 | 50.22 | 8.71 | 21.56 | 1464.0 | 256 |
| [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) | 82.92 | 96.284 | 224 | 28.64 | 4.47 | 13.44 | 1425.62 | 256 |
| [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) | 82.898 | 96.616 | 224 | 28.59 | 4.47 | 13.44 | 2480.88 | 256 |
| [convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) | 82.282 | 96.344 | 224 | 15.59 | 2.46 | 8.37 | 3926.52 | 256 |
| [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) | 82.216 | 95.852 | 224 | 28.59 | 4.47 | 13.44 | 2529.75 | 256 |
| [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) | 82.066 | 95.854 | 224 | 28.59 | 4.47 | 13.44 | 2346.26 | 256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) | 82.03 | 96.166 | 224 | 15.62 | 2.46 | 8.37 | 2300.18 | 256 |
| [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) | 81.83 | 95.738 | 224 | 15.62 | 2.46 | 8.37 | 2321.48 | 256 |
| [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) | 80.866 | 95.246 | 224 | 15.65 | 2.65 | 9.38 | 3523.85 | 256 |
| [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) | 80.768 | 95.334 | 224 | 15.59 | 2.46 | 8.37 | 3915.58 | 256 |
| [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) | 80.304 | 95.072 | 224 | 9.07 | 1.37 | 6.1 | 3274.57 | 256 |
| [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) | 79.526 | 94.558 | 224 | 9.05 | 1.37 | 6.1 | 5686.88 | 256 |
| [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) | 79.522 | 94.692 | 224 | 9.06 | 1.43 | 6.5 | 5422.46 | 256 |
| [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) | 78.488 | 93.98 | 224 | 5.23 | 0.79 | 4.57 | 4264.2 | 256 |
| [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) | 77.86 | 93.83 | 224 | 5.23 | 0.82 | 4.87 | 6910.6 | 256 |
| [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) | 77.454 | 93.68 | 224 | 5.22 | 0.79 | 4.57 | 7189.92 | 256 |
| [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) | 76.664 | 93.044 | 224 | 3.71 | 0.55 | 3.81 | 4728.91 | 256 |
| [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) | 75.88 | 92.846 | 224 | 3.7 | 0.58 | 4.11 | 7963.16 | 256 |
| [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) | 75.664 | 92.9 | 224 | 3.7 | 0.55 | 3.81 | 8439.22 | 256 |

## Citation
```bibtex
@article{liu2022convnet,
  author  = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
  title   = {A ConvNet for the 2020s},
  journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year    = {2022},
}
```
```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
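To build a shortlist for a comparison like the one above, timm can enumerate the available ConvNeXt variants directly; a minimal sketch:

```python
import timm

# Enumerate every ConvNeXt variant that ships pretrained weights in timm.
for name in timm.list_models('convnext*', pretrained=True):
    print(name)
```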
15,610
[ [ -0.06671142578125, -0.03253173828125, -0.00327301025390625, 0.037811279296875, -0.0311737060546875, -0.01432037353515625, -0.01308441162109375, -0.03533935546875, 0.064697265625, 0.0174713134765625, -0.04351806640625, -0.04144287109375, -0.0509033203125, -0....
timm/beitv2_large_patch16_224.in1k_ft_in22k_in1k
2023-05-08T23:46:10.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-22k", "arxiv:2208.06366", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/beitv2_large_patch16_224.in1k_ft_in22k_in1k
2
1,089
timm
2022-12-23T02:35:38
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k - imagenet-22k --- # Model card for beitv2_large_patch16_224.in1k_ft_in22k_in1k A BEiT-v2 image classification model. Trained on ImageNet-1k with self-supervised masked image modelling (MIM) using a VQ-KD encoder as a visual tokenizer (via OpenAI CLIP B/16 teacher). Fine-tuned on ImageNet-22k and then ImageNet-1k. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 304.4 - GMACs: 61.6 - Activations (M): 63.5 - Image size: 224 x 224 - **Papers:** - BEiT v2: Masked Image Modeling with Vector-Quantized Visual Tokenizers: https://arxiv.org/abs/2208.06366 - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 - **Dataset:** ImageNet-1k - **Pretrain Dataset:** - ImageNet-1k - ImageNet-22k - **Original:** https://github.com/microsoft/unilm/tree/master/beit2 ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('beitv2_large_patch16_224.in1k_ft_in22k_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'beitv2_large_patch16_224.in1k_ft_in22k_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 197, 1024) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation ```bibtex @article{peng2022beit, title={Beit v2: Masked image modeling with vector-quantized visual tokenizers}, author={Peng, Zhiliang and Dong, Li and Bao, Hangbo and Ye, Qixiang and Wei, Furu}, journal={arXiv preprint arXiv:2208.06366}, year={2022} } ``` ```bibtex @article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
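As a follow-on to the embeddings example, the unpooled (1, 197, 1024) output can be split into the class token and the patch-token grid. A sketch under the card's stated shapes (224px input, 16px patches, so a 14x14 grid); the variable names are mine:

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('beitv2_large_patch16_224.in1k_ft_in22k_in1k',
                          pretrained=True).eval()
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

feats = model.forward_features(transforms(img).unsqueeze(0))  # (1, 197, 1024)
cls_token = feats[:, 0]                              # (1, 1024) global summary
patch_grid = feats[:, 1:].reshape(1, 14, 14, 1024)   # 224 / 16 = 14 patches per side
```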
3,882
[ [ -0.032257080078125, -0.0279541015625, -0.00261688232421875, 0.00853729248046875, -0.040191650390625, -0.01558685302734375, -0.007167816162109375, -0.03759765625, 0.0144500732421875, 0.0302581787109375, -0.030242919921875, -0.054229736328125, -0.054718017578125, ...
Sof22/image-caption-large-copy
2023-10-03T03:32:25.000Z
[ "transformers", "pytorch", "tf", "blip", "text2text-generation", "image-captioning", "image-to-text", "arxiv:2201.12086", "license:bsd-3-clause", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-to-text
Sof22
null
null
Sof22/image-caption-large-copy
6
1,089
transformers
2023-09-19T06:43:27
--- pipeline_tag: image-to-text tags: - image-captioning language: - en license: bsd-3-clause --- This is the Salesforce BLIP large image-captioning model with small adjustments to the back-end parameters for testing; note in particular that the reply length is increased. # BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation Model card for image captioning pretrained on the COCO dataset - base architecture (with ViT large backbone). | ![BLIP.gif](https://cdn-uploads.huggingface.co/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12c.gif) | |:--:| | <b> Pull figure from BLIP official repo | Image source: https://github.com/salesforce/BLIP </b>| ## TL;DR Authors from the [paper](https://arxiv.org/abs/2201.12086) write in the abstract: *Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to video-language tasks in a zero-shot manner.
Code, models, and datasets are released.* ## Usage You can use this model for conditional and unconditional image captioning ### Using the PyTorch model #### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # unconditional image captioning inputs = processor(raw_image, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large").to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python import torch import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large", torch_dtype=torch.float16).to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # >>> a photography of a woman and her dog # unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # >>> a woman sitting on the beach with her dog ``` </details> ## BibTex and citation info ``` @misc{https://doi.org/10.48550/arxiv.2201.12086, doi = {10.48550/ARXIV.2201.12086}, url = {https://arxiv.org/abs/2201.12086}, author = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven}, keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information
sciences}, title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
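The card's intro says the reply length was increased; with the standard `generate` API, that kind of adjustment looks roughly like the sketch below. The `max_new_tokens`/`num_beams` values are illustrative, not the actual back-end settings this copy changed:

```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

inputs = processor(raw_image, return_tensors="pt")
# Illustrative knobs for longer, beam-searched captions.
out = model.generate(**inputs, max_new_tokens=60, num_beams=4)
print(processor.decode(out[0], skip_special_tokens=True))
```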
5,693
[ [ -0.01546478271484375, -0.036163330078125, -0.002933502197265625, 0.03887939453125, -0.039520263671875, -0.00237274169921875, -0.031951904296875, -0.046417236328125, -0.004489898681640625, 0.0256805419921875, -0.0294189453125, -0.0266876220703125, -0.041625976562...
TheBloke/Llama-2-13B-chat-GGUF
2023-09-27T12:47:12.000Z
[ "transformers", "llama", "facebook", "meta", "pytorch", "llama-2", "text-generation", "en", "arxiv:2307.09288", "license:llama2", "has_space", "text-generation-inference", "region:us" ]
text-generation
TheBloke
null
null
TheBloke/Llama-2-13B-chat-GGUF
87
1,088
transformers
2023-09-04T17:20:15
--- language: - en license: llama2 tags: - facebook - meta - pytorch - llama - llama-2 model_name: Llama 2 13B Chat base_model: meta-llama/Llama-2-13b-chat-hf inference: false model_creator: Meta Llama 2 model_type: llama pipeline_tag: text-generation prompt_template: '[INST] <<SYS>> You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don''t know the answer to a question, please don''t share false information. <</SYS>> {prompt}[/INST] ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Llama 2 13B Chat - GGUF - Model creator: [Meta Llama 2](https://huggingface.co/meta-llama) - Original model: [Llama 2 13B Chat](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) <!-- description start --> ## Description This repo contains GGUF format model files for [Meta's Llama 2 13B-chat](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata and is designed to be extensible. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama-2-13B-chat-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-13B-chat-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF) * [Meta Llama 2's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/meta-llama/Llama-2-13B-chat-hf) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Llama-2-Chat ``` [INST] <<SYS>> You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> {prompt}[/INST] ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how.
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [llama-2-13b-chat.Q2_K.gguf](https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF/blob/main/llama-2-13b-chat.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [llama-2-13b-chat.Q3_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF/blob/main/llama-2-13b-chat.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [llama-2-13b-chat.Q3_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF/blob/main/llama-2-13b-chat.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [llama-2-13b-chat.Q3_K_L.gguf](https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF/blob/main/llama-2-13b-chat.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [llama-2-13b-chat.Q4_0.gguf](https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF/blob/main/llama-2-13b-chat.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [llama-2-13b-chat.Q4_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF/blob/main/llama-2-13b-chat.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [llama-2-13b-chat.Q4_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF/blob/main/llama-2-13b-chat.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [llama-2-13b-chat.Q5_0.gguf](https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF/blob/main/llama-2-13b-chat.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [llama-2-13b-chat.Q5_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF/blob/main/llama-2-13b-chat.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [llama-2-13b-chat.Q5_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF/blob/main/llama-2-13b-chat.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [llama-2-13b-chat.Q6_K.gguf](https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF/blob/main/llama-2-13b-chat.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [llama-2-13b-chat.Q8_0.gguf](https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF/blob/main/llama-2-13b-chat.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Llama-2-13B-chat-GGUF and below it, a specific filename to download, such as: llama-2-13b-chat.Q4_K_M.gguf. Then click Download.
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install 'huggingface-hub>=0.17.1' ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Llama-2-13B-chat-GGUF llama-2-13b-chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Llama-2-13B-chat-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Llama-2-13B-chat-GGUF llama-2-13b-chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m llama-2-13b-chat.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\n{prompt}[/INST]" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - e.g. 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers #### First install the package ```bash # Base ctransformers with no GPU acceleration pip install 'ctransformers>=0.2.24' # Or with CUDA GPU acceleration pip install 'ctransformers[cuda]>=0.2.24' # Or with ROCm GPU acceleration CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers ``` #### Simple example code to load one of these GGUF models ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Llama-2-13B-chat-GGUF", model_file="llama-2-13b-chat.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python or ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J.
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Meta's Llama 2 13B-chat # **Llama 2** Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom. ## Model Details *Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.* Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. **Model Developers** Meta **Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations. **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. ||Training Data|Params|Content Length|GQA|Tokens|LR| |---|---|---|---|---|---|---| |Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>| *Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability. **Model Dates** Llama 2 was trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) **Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288) ## Intended Use **Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespace and line breaks in between (we recommend calling `strip()` on inputs to avoid double spaces). See our reference code on GitHub for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212). **Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program. ||Time (GPU hours)|Power Consumption (W)|Carbon Emitted (tCO<sub>2</sub>eq)| |---|---|---|---| |Llama 2 7B|184320|400|31.22| |Llama 2 13B|368640|400|62.44| |Llama 2 70B|1720320|400|291.42| |Total|3311616||539.00| **CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023. ## Evaluation Results In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval| |---|---|---|---|---|---|---|---|---|---| |Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9| |Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9| |Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7| |Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6| |Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3| |Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1| |Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**| **Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1. |||TruthfulQA|Toxigen| |---|---|---|---| |Llama 1|7B|27.42|23.00| |Llama 1|13B|41.74|23.08| |Llama 1|33B|44.19|22.57| |Llama 1|65B|48.71|21.77| |Llama 2|7B|33.29|**21.25**| |Llama 2|13B|41.86|26.10| |Llama 2|70B|**50.18**|24.60| **Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better). |||TruthfulQA|Toxigen| |---|---|---|---| |Llama-2-Chat|7B|57.04|**0.00**| |Llama-2-Chat|13B|62.18|**0.00**| |Llama-2-Chat|70B|**64.14**|0.01| **Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above. ## Ethical Considerations and Limitations Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. 
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide) ## Reporting Issues Please report any software “bug,” or other problems with the models through one of the following means: - Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) - Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Llama Model Index |Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf| |---|---|---|---|---| |7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)| |13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)| |70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)| <!-- original-model-card end -->
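As a practical companion to the prompt-formatting note in the card's Intended Use section above, here is a minimal sketch of the `[INST]`/`<<SYS>>` chat format with 🤗 Transformers. It is illustrative only: it assumes the HF-format chat checkpoint `meta-llama/Llama-2-13b-chat-hf` (a gated repo) rather than the quantized files in this repo, and Meta's `chat_completion` reference linked above remains the authoritative template.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Gated HF-format checkpoint used here only for illustration.
model_id = "meta-llama/Llama-2-13b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

system = "You are a helpful, respectful and honest assistant."
user = "Explain grouped-query attention in one sentence."

# Llama-2 chat format: BOS is added by the tokenizer; [INST]/<<SYS>> wrap the turn.
prompt = f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user.strip()} [/INST]"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```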
27,511
[ [ -0.045074462890625, -0.06494140625, 0.020660400390625, 0.039520263671875, -0.040069580078125, -0.00009107589721679688, 0.004344940185546875, -0.055877685546875, 0.038726806640625, 0.00888824462890625, -0.054473876953125, -0.037017822265625, -0.0430908203125, ...
narendrag11/picture11
2023-09-04T19:29:32.000Z
[ "diffusers", "text-to-image", "autotrain", "region:us" ]
text-to-image
narendrag11
null
null
narendrag11/picture11
1
1,088
diffusers
2023-09-04T17:26:16
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: photo of narendra for linkedIn professional profile picture tags: - text-to-image - diffusers - autotrain inference: true --- # DreamBooth trained by AutoTrain Text encoder was not trained.
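The card provides no usage snippet; below is a hedged sketch. It assumes the repo hosts DreamBooth LoRA weights on top of the SDXL base named in the metadata, which is the usual AutoTrain output; if the repo is a full pipeline instead, load it directly with `from_pretrained`.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# Assumption: the repo contains LoRA weights from AutoTrain DreamBooth.
pipe.load_lora_weights("narendrag11/picture11")

image = pipe("photo of narendra for linkedIn professional profile picture").images[0]
image.save("profile.png")
```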
272
[ [ 0.0048675537109375, -0.0118255615234375, 0.0156097412109375, 0.00897216796875, -0.036285400390625, 0.06683349609375, 0.0129547119140625, -0.01349639892578125, 0.035552978515625, -0.00022220611572265625, -0.03582763671875, -0.0029754638671875, -0.05975341796875, ...
dumitrescustefan/bert-base-romanian-cased-v1
2022-09-17T18:18:13.000Z
[ "transformers", "pytorch", "jax", "bert", "fill-mask", "ro", "license:mit", "endpoints_compatible", "region:us" ]
fill-mask
dumitrescustefan
null
null
dumitrescustefan/bert-base-romanian-cased-v1
9
1,087
transformers
2022-03-02T23:29:05
--- language: ro tags: - bert - fill-mask license: mit --- # bert-base-romanian-cased-v1 The BERT **base**, **cased** model for Romanian, trained on a 15GB corpus, version ![v1.0](https://img.shields.io/badge/v1.0-21%20Apr%202020-ff6666) ### How to use ```python from transformers import AutoTokenizer, AutoModel import torch # load tokenizer and model tokenizer = AutoTokenizer.from_pretrained("dumitrescustefan/bert-base-romanian-cased-v1") model = AutoModel.from_pretrained("dumitrescustefan/bert-base-romanian-cased-v1") # tokenize a sentence and run through the model input_ids = torch.tensor(tokenizer.encode("Acesta este un test.", add_special_tokens=True)).unsqueeze(0) # Batch size 1 outputs = model(input_ids) # get encoding last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple ``` Remember to always sanitize your text! Replace the cedilla letters ``ş`` and ``ţ`` with their comma-below counterparts ``ș`` and ``ț``: ``` text = text.replace("ţ", "ț").replace("ş", "ș").replace("Ţ", "Ț").replace("Ş", "Ș") ``` because the model was **NOT** trained on cedilla ``s`` and ``t``s. If you don't, you will have decreased performance due to ``<UNK>``s and an increased number of tokens per word. ### Evaluation Evaluation is performed on Universal Dependencies [Romanian RRT](https://universaldependencies.org/treebanks/ro_rrt/index.html) UPOS, XPOS and LAS, and on a NER task based on [RONEC](https://github.com/dumitrescustefan/ronec). Details, as well as more in-depth tests not shown here, are given in the dedicated [evaluation page](https://github.com/dumitrescustefan/Romanian-Transformers/tree/master/evaluation/README.md). The baseline is the [Multilingual BERT](https://github.com/google-research/bert/blob/master/multilingual.md) model ``bert-base-multilingual-(un)cased``, as at the time of writing it was the only available BERT model that works on Romanian. | Model | UPOS | XPOS | NER | LAS | |--------------------------------|:-----:|:------:|:-----:|:-----:| | bert-base-multilingual-cased | 97.87 | 96.16 | 84.13 | 88.04 | | bert-base-romanian-cased-v1 | **98.00** | **96.46** | **85.88** | **89.69** | ### Corpus The model is trained on the following corpora (stats in the table below are after cleaning): | Corpus | Lines(M) | Words(M) | Chars(B) | Size(GB) | |-----------|:--------:|:--------:|:--------:|:--------:| | OPUS | 55.05 | 635.04 | 4.045 | 3.8 | | OSCAR | 33.56 | 1725.82 | 11.411 | 11 | | Wikipedia | 1.54 | 60.47 | 0.411 | 0.4 | | **Total** | **90.15** | **2421.33** | **15.867** | **15.2** | ### Citation If you use this model in a research paper, I'd kindly ask you to cite the following paper: ``` Stefan Dumitrescu, Andrei-Marius Avram, and Sampo Pyysalo. 2020. The birth of Romanian BERT. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4324–4328, Online. Association for Computational Linguistics.
``` or, in bibtex: ``` @inproceedings{dumitrescu-etal-2020-birth, title = "The birth of {R}omanian {BERT}", author = "Dumitrescu, Stefan and Avram, Andrei-Marius and Pyysalo, Sampo", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.findings-emnlp.387", doi = "10.18653/v1/2020.findings-emnlp.387", pages = "4324--4328", } ``` #### Acknowledgements - We'd like to thank [Sampo Pyysalo](https://github.com/spyysalo) from TurkuNLP for helping us out with the compute needed to pretrain the v1.0 BERT models. He's awesome!
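Since this repo is tagged for the fill-mask task, a minimal masked-prediction sketch follows (apply the cedilla sanitization described above to any real input first):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="dumitrescustefan/bert-base-romanian-cased-v1")
# BERT models use the [MASK] placeholder for the token to predict.
print(fill_mask("Acesta este un [MASK]."))
```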
3,786
[ [ -0.0210113525390625, -0.05010986328125, 0.02386474609375, 0.025848388671875, -0.0206756591796875, -0.02203369140625, -0.033477783203125, -0.032806396484375, 0.0283966064453125, 0.01142120361328125, -0.033447265625, -0.046630859375, -0.050811767578125, -0.003...
someone13574/Mistral-7B-v0.1-sharded
2023-09-28T21:40:07.000Z
[ "transformers", "pytorch", "mistral", "text-generation", "license:apache-2.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
someone13574
null
null
someone13574/Mistral-7B-v0.1-sharded
13
1,087
transformers
2023-09-28T21:00:58
--- license: apache-2.0 --- Just a version of https://huggingface.co/mistralai/Mistral-7B-v0.1 which is sharded to reduce the RAM required when loading.
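A minimal loading sketch showing why sharding helps: Transformers loads sharded checkpoints shard by shard, so peak CPU RAM stays near the size of one shard rather than the full model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "someone13574/Mistral-7B-v0.1-sharded"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# low_cpu_mem_usage streams weights into the model instead of first
# materializing a full randomly initialized copy in memory.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, low_cpu_mem_usage=True
)
```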
156
[ [ -0.05078125, -0.044158935546875, -0.00936126708984375, 0.053558349609375, -0.0220794677734375, -0.026885986328125, 0.0257415771484375, -0.03790283203125, 0.043426513671875, 0.07354736328125, -0.056182861328125, -0.004596710205078125, -0.0030498504638671875, ...
VietAI/vit5-large-vietnews-summarization
2022-09-07T02:28:54.000Z
[ "transformers", "pytorch", "tf", "t5", "text2text-generation", "summarization", "vi", "dataset:cc100", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
summarization
VietAI
null
null
VietAI/vit5-large-vietnews-summarization
6
1,086
transformers
2022-05-12T10:09:43
--- language: vi datasets: - cc100 tags: - summarization license: mit widget: - text: "vietnews: VietAI là tổ chức phi lợi nhuận với sứ mệnh ươm mầm tài năng về trí tuệ nhân tạo và xây dựng một cộng đồng các chuyên gia trong lĩnh vực trí tuệ nhân tạo đẳng cấp quốc tế tại Việt Nam." --- # ViT5-large Finetuned on `vietnews` Abstractive Summarization State-of-the-art pretrained Transformer-based encoder-decoder model for Vietnamese. [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/vit5-pretrained-text-to-text-transformer-for/abstractive-text-summarization-on-vietnews)](https://paperswithcode.com/sota/abstractive-text-summarization-on-vietnews?p=vit5-pretrained-text-to-text-transformer-for) ## How to use For more details, do check out [our Github repo](https://github.com/vietai/ViT5) and [eval script](https://github.com/vietai/ViT5/blob/main/eval/Eval_vietnews_sum.ipynb). ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("VietAI/vit5-large-vietnews-summarization") model = AutoModelForSeq2SeqLM.from_pretrained("VietAI/vit5-large-vietnews-summarization") model.cuda() sentence = "VietAI là tổ chức phi lợi nhuận với sứ mệnh ươm mầm tài năng về trí tuệ nhân tạo và xây dựng một cộng đồng các chuyên gia trong lĩnh vực trí tuệ nhân tạo đẳng cấp quốc tế tại Việt Nam." text = "vietnews: " + sentence + " </s>" encoding = tokenizer(text, return_tensors="pt") input_ids, attention_masks = encoding["input_ids"].to("cuda"), encoding["attention_mask"].to("cuda") outputs = model.generate( input_ids=input_ids, attention_mask=attention_masks, max_length=256, early_stopping=True ) for output in outputs: line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True) print(line) ``` ## Citation ``` @inproceedings{phan-etal-2022-vit5, title = "{V}i{T}5: Pretrained Text-to-Text Transformer for {V}ietnamese Language Generation", author = "Phan, Long and Tran, Hieu and Nguyen, Hieu and Trinh, Trieu H.", booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop", year = "2022", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.naacl-srw.18", pages = "136--142", } ```
2,431
[ [ -0.02423095703125, -0.04217529296875, 0.020904541015625, 0.0265960693359375, -0.03814697265625, -0.003265380859375, -0.0321044921875, -0.0191650390625, 0.00843048095703125, 0.022857666015625, -0.0260467529296875, -0.041351318359375, -0.048431396484375, 0.035...
WizardLM/WizardCoder-3B-V1.0
2023-08-31T08:09:05.000Z
[ "transformers", "pytorch", "gpt_bigcode", "text-generation", "code", "arxiv:2304.12244", "arxiv:2306.08568", "arxiv:2308.09583", "arxiv:2303.08774", "license:bigcode-openrail-m", "model-index", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
WizardLM
null
null
WizardLM/WizardCoder-3B-V1.0
19
1,086
transformers
2023-08-27T05:27:37
--- license: bigcode-openrail-m metrics: - code_eval library_name: transformers tags: - code model-index: - name: WizardCoder-3B-V1.0 results: - task: type: text-generation dataset: type: openai_humaneval name: HumanEval metrics: - name: pass@1 type: pass@1 value: 0.348 verified: false --- <p align="center"> 🤗 <a href="https://huggingface.co/WizardLM" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/nlpxucan/WizardLM" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br> </p> <p align="center"> 👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a> </p> ## News - 🔥🔥🔥[2023/08/26] We released **WizardCoder-Python-34B-V1.0** , which achieves the **73.2 pass@1** and surpasses **GPT4 (2023/03/15)**, **ChatGPT-3.5**, and **Claude2** on the [HumanEval Benchmarks](https://github.com/openai/human-eval). - [2023/06/16] We released **WizardCoder-15B-V1.0** , which achieves the **57.3 pass@1** and surpasses **Claude-Plus (+6.8)**, **Bard (+15.3)** and **InstructCodeT5+ (+22.3)** on the [HumanEval Benchmarks](https://github.com/openai/human-eval). ❗Note: There are two HumanEval results of GPT4 and ChatGPT-3.5. The 67.0 and 48.1 are reported by the official GPT4 Report (2023/03/15) of [OpenAI](https://arxiv.org/abs/2303.08774). The 82.0 and 72.5 are tested by ourselves with the latest API (2023/08/26). | Model | Checkpoint | Paper | HumanEval | MBPP | Demo | License | | ----- |------| ---- |------|-------| ----- | ----- | | WizardCoder-Python-34B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 73.2 | 61.2 | [Demo](http://47.103.63.15:50085/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> | | WizardCoder-15B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-15B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 59.8 |50.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> | | WizardCoder-Python-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 64.0 | 55.6 | -- | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> | | WizardCoder-Python-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 55.5 | 51.6 | [Demo](http://47.103.63.15:50088/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> | | WizardCoder-3B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-3B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 34.8 |37.4 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> | | WizardCoder-1B-V1.0 | 🤗 <a 
href="https://huggingface.co/WizardLM/WizardCoder-1B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 23.8 |28.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> | - Our **WizardMath-70B-V1.0** model slightly outperforms some closed-source LLMs on the GSM8K, including **ChatGPT 3.5**, **Claude Instant 1** and **PaLM 2 540B**. - Our **WizardMath-70B-V1.0** model achieves **81.6 pass@1** on the [GSM8k Benchmarks](https://github.com/openai/grade-school-math), which is **24.8** points higher than the SOTA open-source LLM, and achieves **22.7 pass@1** on the [MATH Benchmarks](https://github.com/hendrycks/math), which is **9.2** points higher than the SOTA open-source LLM. <font size=4> | Model | Checkpoint | Paper | GSM8k | MATH |Online Demo| License| | ----- |------| ---- |------|-------| ----- | ----- | | WizardMath-70B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-70B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **81.6** | **22.7** |[Demo](http://47.103.63.15:50083/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> | | WizardMath-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **63.9** | **14.0** |[Demo](http://47.103.63.15:50082/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> | | WizardMath-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **54.9** | **10.7** | [Demo ](http://47.103.63.15:50080/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a>| </font> - [08/09/2023] We released **WizardLM-70B-V1.0** model. Here is [Full Model Weight](https://huggingface.co/WizardLM/WizardLM-70B-V1.0). 
<font size=4> | <sup>Model</sup> | <sup>Checkpoint</sup> | <sup>Paper</sup> |<sup>MT-Bench</sup> | <sup>AlpacaEval</sup> | <sup>GSM8k</sup> | <sup>HumanEval</sup> | <sup>License</sup>| | ----- |------| ---- |------|-------| ----- | ----- | ----- | | <sup>**WizardLM-70B-V1.0**</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-70B-V1.0" target="_blank">HF Link</a> </sup>|<sup>📃**Coming Soon**</sup>| <sup>**7.78**</sup> | <sup>**92.91%**</sup> |<sup>**77.6%**</sup> | <sup> **50.6**</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> | | <sup>WizardLM-13B-V1.2</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.2" target="_blank">HF Link</a> </sup>| | <sup>7.06</sup> | <sup>89.17%</sup> |<sup>55.3%</sup> | <sup>36.6 </sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> | | <sup>WizardLM-13B-V1.1</sup> |<sup> 🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.1" target="_blank">HF Link</a> </sup> | | <sup>6.76</sup> |<sup>86.32%</sup> | | <sup>25.0 </sup>| <sup>Non-commercial</sup>| | <sup>WizardLM-30B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-30B-V1.0" target="_blank">HF Link</a></sup> | | <sup>7.01</sup> | | | <sup>37.8 </sup>| <sup>Non-commercial</sup> | | <sup>WizardLM-13B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.0" target="_blank">HF Link</a> </sup> | | <sup>6.35</sup> | <sup>75.31%</sup> | | <sup> 24.0 </sup> | <sup>Non-commercial</sup>| | <sup>WizardLM-7B-V1.0 </sup>| <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-7B-V1.0" target="_blank">HF Link</a> </sup> |<sup> 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> </sup>| | | |<sup>19.1 </sup>|<sup> Non-commercial</sup>| </font> ## Comparing WizardCoder-Python-34B-V1.0 with Other LLMs. 🔥 The following figure shows that our **WizardCoder-Python-34B-V1.0 attains the second position in this benchmark**, surpassing GPT4 (2023/03/15, 73.2 vs. 67.0), ChatGPT-3.5 (73.2 vs. 72.5) and Claude2 (73.2 vs. 71.2). <p align="center" width="100%"> <a ><img src="https://raw.githubusercontent.com/nlpxucan/WizardLM/main/WizardCoder/imgs/compare_sota.png" alt="WizardCoder" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> ## Prompt Format ``` "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:" ``` ## Inference Demo Script We provide the inference demo code [here](https://github.com/nlpxucan/WizardLM/tree/main/demo). Note: This script supports `WizardLM/WizardCoder-Python-34B/13B/7B-V1.0`. If you want to inference with `WizardLM/WizardCoder-15B/3B/1B-V1.0`, please change the `stop_tokens = ['</s>']` to `stop_tokens = ['<|endoftext|>']` in the script. ## Citation Please cite the repo if you use the data, method or code in this repo. ``` @misc{luo2023wizardcoder, title={WizardCoder: Empowering Code Large Language Models with Evol-Instruct}, author={Ziyang Luo and Can Xu and Pu Zhao and Qingfeng Sun and Xiubo Geng and Wenxiang Hu and Chongyang Tao and Jing Ma and Qingwei Lin and Daxin Jiang}, year={2023}, } ```
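A minimal generation sketch using the prompt format above with plain Transformers; the demo script linked above is the reference implementation, and per the note there the 3B/1B models stop on `<|endoftext|>`:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "WizardLM/WizardCoder-3B-V1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

instruction = "Write a Python function that reverses a linked list."
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Stop on <|endoftext|>, as the card recommends for the 15B/3B/1B models.
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    eos_token_id=tokenizer.convert_tokens_to_ids("<|endoftext|>"),
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```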
9,326
[ [ -0.04779052734375, -0.033416748046875, -0.006988525390625, 0.0266265869140625, 0.0033397674560546875, -0.01239776611328125, 0.002605438232421875, -0.036102294921875, 0.0158538818359375, 0.0211944580078125, -0.048797607421875, -0.047027587890625, -0.0386657714843...
vishanki/my-pet-dog
2023-10-09T10:15:58.000Z
[ "diffusers", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
vishanki
null
null
vishanki/my-pet-dog
0
1,086
diffusers
2023-10-09T10:02:58
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-Dog Dreambooth model trained by vishanki following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: GoX19932gAS Sample pictures of this concept:
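The repo is tagged `diffusers:StableDiffusionPipeline`, so it should load directly. A hedged sketch follows; the exact instance token for the concept is not stated in the card, so the prompt below is only illustrative.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "vishanki/my-pet-dog", torch_dtype=torch.float16
).to("cuda")
# Illustrative prompt; substitute the instance token used during training.
image = pipe("a photo of my pet dog sitting on a beach").images[0]
image.save("my-pet-dog.png")
```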
293
[ [ -0.04620361328125, -0.0159912109375, 0.03253173828125, -0.0011272430419921875, -0.0188140869140625, 0.023529052734375, 0.023284912109375, -0.0404052734375, 0.038421630859375, 0.03216552734375, -0.045196533203125, -0.01519775390625, -0.005218505859375, 0.0060...
FasterDecoding/medusa-vicuna-7b-v1.3
2023-09-11T15:52:53.000Z
[ "transformers", "endpoints_compatible", "region:us" ]
null
FasterDecoding
null
null
FasterDecoding/medusa-vicuna-7b-v1.3
11
1,085
transformers
2023-09-10T02:43:41
<div align="center"><img src="https://github.com/FasterDecoding/Medusa/blob/main/assets/logo.png?raw=true" alt="Medusa" width="100" align="center"></div> <div align="center"><h1>&nbsp;Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads</h1></div> <p align="center"> | <a href="https://sites.google.com/view/medusa-llm"><b>Blog</b></a> | <a href="https://github.com/FasterDecoding/Medusa"><b>Codebase</b></a> | </p> --- ## Installation ### Method 1: With pip ```bash pip install medusa-llm ``` ### Method 2: From source ```bash git clone https://github.com/FasterDecoding/Medusa.git cd Medusa pip install -e . ``` ### Model Weights | Size | Chat Command | Hugging Face Repo | | ---- | --------------------------------------------- | --------------------------------------------------------------------- | | 7B | `python -m medusa.inference.cli --model FasterDecoding/medusa-vicuna-7b-v1.3` | [FasterDecoding/medusa-vicuna-7b-v1.3](https://huggingface.co/FasterDecoding/medusa-vicuna-7b-v1.3) | | 13B | `python -m medusa.inference.cli --model FasterDecoding/medusa-vicuna-13b-v1.3` | [FasterDecoding/medusa-vicuna-13b-v1.3](https://huggingface.co/FasterDecoding/medusa-vicuna-13b-v1.3) | | 33B | `python -m medusa.inference.cli --model FasterDecoding/medusa-vicuna-33b-v1.3` | [FasterDecoding/medusa-vicuna-33b-v1.3](https://huggingface.co/FasterDecoding/medusa-vicuna-33b-v1.3) | ### Inference We currently support inference in the single-GPU, batch-size-1 setting, which is the most common setup for local model hosting. We are actively working to extend Medusa's capabilities by integrating it into other inference frameworks; please don't hesitate to reach out if you are interested in contributing to this effort. You can use the following command for launching a CLI interface: ```bash python -m medusa.inference.cli --model [path of medusa model] ``` You can also pass `--load-in-8bit` or `--load-in-4bit` to load the base model in quantized format.
2,090
[ [ -0.043975830078125, -0.035675048828125, 0.049530029296875, 0.012176513671875, -0.039947509765625, -0.022705078125, 0.0008950233459472656, -0.0226287841796875, 0.00449371337890625, 0.038299560546875, -0.050384521484375, -0.038177490234375, -0.045623779296875, ...
krevas/LDCC-Instruct-Llama-2-ko-13B-v4
2023-10-19T00:53:16.000Z
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
krevas
null
null
krevas/LDCC-Instruct-Llama-2-ko-13B-v4
0
1,085
transformers
2023-10-13T03:30:13
--- license: cc-by-nc-4.0 --- # LDCC-Instruct-Llama-2-ko-13B <img src="./assets/icon.png" alt="image" width="50%" height="auto"> ## Model Details * **Developed by**: [Lotte Data Communication](https://www.ldcc.co.kr) ## Hardware and Software * **Hardware**: We utilized an A100x8 * 1 for training our model * **Training Factors**: We fine-tuned this model using a combination of the [DeepSpeed library](https://github.com/microsoft/DeepSpeed) and the [HuggingFace Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) / [HuggingFace Accelerate](https://huggingface.co/docs/accelerate/index) ## Prompt Template ``` ### Prompt: {instruction} ### Answer: {output} ``` # **Llama 2** Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom. ## Model Details *Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.* Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. **Model Developers** Meta **Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations. **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. ||Training Data|Params|Content Length|GQA|Tokens|LR| |---|---|---|---|---|---|---| |Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>| *Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models - 70B -- use Grouped-Query Attention (GQA) for improved inference scalability. **Model Dates** Llama 2 was trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) ## Intended Use **Intended Use Cases** Llama 2 is intended for commercial and research use in English. 
Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212). **Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta's sustainability program. ||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)| |---|---|---|---| |Llama 2 7B|184320|400|31.22| |Llama 2 13B|368640|400|62.44| |Llama 2 70B|1720320|400|291.42| |Total|3311616||539.00| **CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023. ## Evaluation Results In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library. |Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval| |---|---|---|---|---|---|---|---|---|---| |Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9| |Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9| |Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7| |Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6| |Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3| |Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1| |Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**| **Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks.
*World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1. |||TruthfulQA|Toxigen| |---|---|---|---| |Llama 1|7B|27.42|23.00| |Llama 1|13B|41.74|23.08| |Llama 1|33B|44.19|22.57| |Llama 1|65B|48.71|21.77| |Llama 2|7B|33.29|**21.25**| |Llama 2|13B|41.86|26.10| |Llama 2|70B|**50.18**|24.60| **Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better). |||TruthfulQA|Toxigen| |---|---|---|---| |Llama-2-Chat|7B|57.04|**0.00**| |Llama-2-Chat|13B|62.18|**0.00**| |Llama-2-Chat|70B|**64.14**|0.01| **Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above. ## Ethical Considerations and Limitations Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide) ## Reporting Issues Please report any software “bug,” or other problems with the models through one of the following means: - Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) - Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Llama Model Index |Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf| |---|---|---|---|---| |7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)| |13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)| |70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
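A minimal generation sketch using the Prompt Template given near the top of this card; the exact whitespace around the `### Prompt:` / `### Answer:` markers is an assumption.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "krevas/LDCC-Instruct-Llama-2-ko-13B-v4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

instruction = "한국의 수도는 어디인가요?"  # "What is the capital of Korea?"
# Template from the card; newline placement is assumed.
prompt = f"### Prompt:\n{instruction}\n\n### Answer:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```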
10,040
[ [ -0.017364501953125, -0.052093505859375, 0.0281524658203125, 0.01480865478515625, -0.026519775390625, 0.0197601318359375, -0.00495147705078125, -0.054962158203125, 0.00579071044921875, 0.0166473388671875, -0.0574951171875, -0.042877197265625, -0.052001953125, ...
mrm8488/t5-base-finetuned-sarcasm-twitter
2023-03-17T22:41:30.000Z
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "en", "arxiv:1910.10683", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
mrm8488
null
null
mrm8488/t5-base-finetuned-sarcasm-twitter
8
1,084
transformers
2022-03-02T23:29:05
--- language: en widget: - text: "As everybody knows Trump is by far the best USA president... XD" --- # T5-base fine-tuned for Sarcasm Detection 🙄 [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) base fine-tuned on the [Twitter Sarcasm Dataset](https://github.com/EducationalTestingService/sarcasm) for the **Sequence classification (as text generation)** downstream task. ## Details of T5 The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://i.imgur.com/jVFMMWR.png) ## Details of the downstream task (Sequence Classification as Text generation) - Dataset 📚 [Twitter Sarcasm Dataset](https://github.com/EducationalTestingService/sarcasm) For Twitter, training and testing datasets are provided for the sarcasm detection task in jsonlines format. Each line contains a JSON object with the following fields: - ***label*** : `SARCASM` or `NOT_SARCASM` - **NOT** in test data - ***id***: String identifier for sample. This id will be required when making submissions. - **ONLY** in test data - ***response*** : the sarcastic response, i.e., a sarcastic Tweet - ***context*** : the conversation context of the ***response*** - Note, the context is an ordered list of dialogue, i.e., if the context contains three elements, `c1`, `c2`, `c3`, in that order, then `c2` is a reply to `c1` and `c3` is a reply to `c2`. Further, if the sarcastic response is `r`, then `r` is a reply to `c3`. For instance, for the following training example: `"label": "SARCASM", "response": "Did Kelly just call someone else messy? Baaaahaaahahahaha", "context": ["X is looking a First Lady should. #classact", "didn't think it was tailored enough it looked messy"]` The response tweet, "Did Kelly..." is a reply to its immediate context "didn't think it was tailored..." which is a reply to "X is looking...". Your goal is to predict the label of the "response" while also using the context (i.e., the immediate or the full context). ***Dataset size statistics***: | | Train | Val | Test | |---------|-------|------|------| | Twitter | 4050 | 450 | 500 | The dataset was preprocessed to convert it to a **text-to-text** format (classification as a generation task).
## Model fine-tuning 🏋️‍ The training script is a slightly modified version of [this Colab Notebook](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) created by [Suraj Patil](https://github.com/patil-suraj), so all credits to him! ## Test set metrics 🧾 | | precision | recall | f1-score | support | |--------------|-----------|--------|----------|---------| | derison | 0.84 | 0.80 | 0.82 | 246 | | normal | 0.82 | 0.85 | 0.83 | 254 | | accuracy | | | 0.83 | 500 | | macro avg | 0.83 | 0.83 | 0.83 | 500 | | weighted avg | 0.83 | 0.83 | 0.83 | 500 | ## Model in Action 🚀 ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-sarcasm-twitter") model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-sarcasm-twitter") def eval_conversation(text): input_ids = tokenizer.encode(text + '</s>', return_tensors='pt') output = model.generate(input_ids=input_ids, max_length=3) dec = [tokenizer.decode(ids) for ids in output] label = dec[0] return label # To match the training data, replace user mentions in tweets with the @USER token and URLs with the URL token. twit1 = "Trump just suspended the visa program that allowed me to move to the US to start @USER!" + " Unfortunately, I won’t be able to vote in a few months but if you can, please vote him out, " + "he's destroying what made America great in so many different ways!" twit2 = "@USER @USER @USER We have far more cases than any other country, " + "so leaving remote workers in would be disastrous. Makes Trump sense." twit3 = "My worry is that i wouldn’t be surprised if half the country actually agrees with this move..." me = "Trump doing so??? It must be a mistake... XDDD" conversation = twit1 + twit2 eval_conversation(conversation) # Output: 'derison' conversation = twit1 + twit3 eval_conversation(conversation) # Output: 'normal' conversation = twit1 + me eval_conversation(conversation) # Output: 'derison' # 'normal' is returned when sarcasm is not detected and 'derison' when it is ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
5,941
[ [ -0.0159759521484375, -0.053131103515625, 0.007778167724609375, 0.0265045166015625, -0.01326751708984375, 0.0128021240234375, -0.0240325927734375, -0.023681640625, 0.004405975341796875, 0.0115203857421875, -0.051666259765625, -0.04754638671875, -0.0467529296875, ...
Helsinki-NLP/opus-mt-mk-en
2023-08-16T12:01:06.000Z
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "mk", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
Helsinki-NLP
null
null
Helsinki-NLP/opus-mt-mk-en
1
1,082
transformers
2022-03-02T23:29:04
--- tags: - translation license: apache-2.0 --- ### opus-mt-mk-en * source languages: mk * target languages: en * OPUS readme: [mk-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mk-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/mk-en/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mk-en/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mk-en/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.mk.en | 59.8 | 0.720 |
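A minimal usage sketch with the Transformers translation pipeline:

```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-mk-en")
# Macedonian -> English; "Добро утро!" means "Good morning!"
print(translator("Добро утро!")[0]["translation_text"])
```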
818
[ [ -0.0192413330078125, -0.0288543701171875, 0.0225830078125, 0.0295257568359375, -0.036468505859375, -0.022796630859375, -0.032257080078125, -0.002117156982421875, 0.0016803741455078125, 0.034210205078125, -0.051177978515625, -0.04443359375, -0.0462646484375, ...
castorini/wiki-all-8-4-multi-dpr2-query-encoder
2023-01-05T19:26:00.000Z
[ "transformers", "jax", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
feature-extraction
castorini
null
null
castorini/wiki-all-8-4-multi-dpr2-query-encoder
0
1,082
transformers
2023-01-04T02:35:12
Dense passage retriever (DPR) is a dense retrieval method described in the following paper: > Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih. [Dense Passage Retrieval for Open-Domain Question Answering](https://www.aclweb.org/anthology/2020.emnlp-main.550/). _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_, pages 6769-6781, 2020. We have trained our own DPR models with our Wikipedia corpus variants using the [Tevatron](https://github.com/texttron/tevatron) library. Our own efforts are described in the paper entitled: > Pre-Processing Matters! Improved Wikipedia Corpora for Open-Domain Question Answering. This is the query encoder portion of a 2nd iteration DPR model for the wiki-all-8-4 corpus variant trained on the amalgamation of the NQ, TriviaQA, WQ, and CuratedTREC datasets.
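A hedged query-encoding sketch: only Flax weights are tagged on this repo, so the snippet converts them for PyTorch on the fly (`from_flax=True` requires `flax` to be installed); taking the `[CLS]` vector follows standard DPR practice.

```python
from transformers import AutoTokenizer, AutoModel

model_id = "castorini/wiki-all-8-4-multi-dpr2-query-encoder"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Convert the Flax checkpoint to PyTorch at load time.
model = AutoModel.from_pretrained(model_id, from_flax=True)

inputs = tokenizer("who invented the telephone?", return_tensors="pt")
outputs = model(**inputs)
query_embedding = outputs.last_hidden_state[:, 0, :]  # [CLS] pooling, as in DPR
```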
904
[ [ -0.040924072265625, -0.072998046875, 0.029754638671875, -0.0045318603515625, -0.0274658203125, 0.0025539398193359375, -0.01125335693359375, -0.012847900390625, -0.00023055076599121094, 0.034454345703125, -0.02587890625, -0.01910400390625, -0.03204345703125, ...
google/vit-large-patch16-384
2022-01-28T10:22:26.000Z
[ "transformers", "pytorch", "tf", "jax", "vit", "image-classification", "vision", "dataset:imagenet", "dataset:imagenet-21k", "arxiv:2010.11929", "arxiv:2006.03677", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
google
null
null
google/vit-large-patch16-384
7
1,080
transformers
2022-03-02T23:29:05
--- license: apache-2.0 tags: - image-classification - vision datasets: - imagenet - imagenet-21k --- # Vision Transformer (large-sized model) Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 384x384. It was introduced in the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Dosovitskiy et al. and first released in [this repository](https://github.com/google-research/vision_transformer). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman, who already converted the weights from JAX to PyTorch. Credits go to him. Disclaimer: The team releasing ViT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Next, the model was fine-tuned on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, at a higher resolution of 384x384. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import ViTFeatureExtractor, ViTForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-large-patch16-384') model = ViTForImageClassification.from_pretrained('google/vit-large-patch16-384') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon, and the API of ViTFeatureExtractor might change. 
## Training data The ViT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py). Images are resized/rescaled to the same resolution (224x224 during pre-training, 384x384 during fine-tuning) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5). ### Pretraining The model was trained on TPUv3 hardware (8 cores). All model variants are trained with a batch size of 4096 and learning rate warmup of 10k steps. For ImageNet, the authors found it beneficial to additionally apply gradient clipping at global norm 1. Pre-training resolution is 224. ## Evaluation results For evaluation results on several image classification benchmarks, we refer to tables 2 and 5 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```bibtex @misc{wu2020visual, title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision}, author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda}, year={2020}, eprint={2006.03677}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
5,476
[ [ -0.04840087890625, -0.0155487060546875, 0.0004432201385498047, -0.006359100341796875, -0.0288238525390625, -0.01215362548828125, -0.006732940673828125, -0.0474853515625, 0.011688232421875, 0.03472900390625, -0.0209503173828125, -0.01885986328125, -0.056579589843...
digiplay/nk15_diffusers
2023-10-24T05:44:54.000Z
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
digiplay
null
null
digiplay/nk15_diffusers
2
1,080
diffusers
2023-06-06T17:15:35
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: https://civitai.com/models/84604/nk15 ![下載 - 2023-06-07T060203.781.png](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/nFT03hdo4fsFvqBjGyewC.png) If loading this model in diffusers raises AutoencoderKL errors, don't worry: just use the code below and you can still generate images :) ``` from diffusers import DiffusionPipeline from diffusers.models import AutoencoderKL modelid = "digiplay/nk15_diffusers" vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse") pipe = DiffusionPipeline.from_pretrained(modelid, vae=vae) ```
682
[ [ -0.024169921875, -0.01122283935546875, 0.00824737548828125, 0.047760009765625, -0.036376953125, -0.0222930908203125, 0.029083251953125, 0.0078277587890625, 0.006320953369140625, 0.032318115234375, -0.033782958984375, -0.0265960693359375, -0.043914794921875, ...
ostris/photorealistic-slider-sdxl-lora
2023-08-22T20:12:48.000Z
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "license:other", "region:us", "has_space" ]
text-to-image
ostris
null
null
ostris/photorealistic-slider-sdxl-lora
9
1,080
diffusers
2023-08-22T20:12:47
--- license: other tags: - text-to-image - stable-diffusion - lora - diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: widget: - text: --- # Photorealistic Slider SDXL - LoRA ![Image 0](1739210.jpeg) ## SDXL ONLY - weight: **0 to 5.0** - positive: **more realistic** - negative: **less realistic, cartoon, painting, etc.** I noticed that the more bizarre your prompt gets, the more SDXL wants to turn it into a cartoon. This slider gives you the ability to adjust the level of realism in a photo. All images were generated without the refiner. I refuse. If you like my work, I am not asking for coffee, but a kind review is always appreciated. ## Image examples for the model: ![Image 1](1739267.jpeg) ![Image 2](1739266.jpeg) ![Image 3](1739235.jpeg) ![Image 4](1739247.jpeg)
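A hedged sketch of applying the slider on top of the SDXL base model; here the LoRA scale plays the role of the 0 to 5.0 realism weight described above, and resolving the weights filename in the repo is left to diffusers.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("ostris/photorealistic-slider-sdxl-lora")

# The LoRA scale acts as the realism weight (0 to 5.0 per the notes above).
image = pipe(
    "portrait photo of a woman reading in a cafe",
    cross_attention_kwargs={"scale": 3.0},
).images[0]
image.save("realistic.png")
```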
960
[ [ -0.0307464599609375, -0.04998779296875, 0.03948974609375, 0.0152435302734375, -0.044281005859375, 0.00106048583984375, 0.025360107421875, -0.0245208740234375, 0.033477783203125, 0.046173095703125, -0.033782958984375, -0.0343017578125, -0.034881591796875, 0.0...
rahular/varta-t5
2023-09-17T22:49:45.000Z
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "as", "bh", "bn", "en", "gu", "hi", "kn", "ml", "mr", "ne", "or", "pa", "ta", "te", "ur", "dataset:rahular/varta", "arxiv:2305.05858", "arxiv:1912.08777", "license:apache-2.0", "autotrain_compatib...
text2text-generation
rahular
null
null
rahular/varta-t5
0
1,079
transformers
2023-01-12T01:24:39
--- license: apache-2.0 datasets: - rahular/varta language: - as - bh - bn - en - gu - hi - kn - ml - mr - ne - or - pa - ta - te - ur --- # Varta-T5 ## Model Description Varta-T5 is a model pre-trained on the `full` training set of [Varta](https://huggingface.co/datasets/rahular/varta) in 14 Indic languages (Assamese, Bhojpuri, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Nepali, Oriya, Punjabi, Tamil, Telugu, and Urdu) and English, using span corruption and gap-sentence generation as objectives. [Varta](https://huggingface.co/datasets/rahular/varta) is a large-scale news corpus for Indic languages, including 41.8 million news articles in 14 different Indic languages (and English), which come from a variety of high-quality sources. The dataset and the model are introduced in [this paper](https://arxiv.org/abs/2305.05858). The code is released in [this repository](https://github.com/rahular/varta). ## Uses You can use this model for causal language modeling, but it's mostly intended to be fine-tuned on a downstream task. Note that the text-to-text framework allows us to use the same model on any NLP task, including text generation tasks (e.g., machine translation, document summarization, question answering), and classification tasks (e.g., sentiment analysis). ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> This work is mainly dedicated to the curation of a new multilingual dataset for Indic languages, many of which are low-resource languages. During data collection, we face several limitations that can potentially result in ethical concerns. Some of the important ones are mentioned below: <br> - Our dataset contains only those articles written by DailyHunt's partner publishers. This has the potential to result in a bias towards a particular narrative or ideology that can affect the representativeness and diversity of the dataset. - Another limitation is the languages represented in Varta. Out of 22 languages with official status in India, our dataset has only 13. There are 122 major languages spoken by at least 10,000 people and 159 other languages which are extremely low-resourced. None of these languages are represented in our dataset. - We do not perform any kind of debiasing on Varta. This means that societal and cultural biases may exist in the dataset, which can adversely affect the fairness and inclusivity of the models trained on it. ## How to Get Started with the Model You can use this model directly for span in-filling. ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("rahular/varta-t5") model = AutoModelForSeq2SeqLM.from_pretrained("rahular/varta-t5") ``` ## Training Details ### Training Data Varta contains 41.8 million high-quality news articles in 14 Indic languages and English. With 34.5 million non-English article-headline pairs, it is the largest document-level dataset of its kind. ### Pretraining - We use span corruption and gap-sentence generation as the pretraining objectives. - Both objectives are sampled uniformly during pretraining. - Span corruption is similar to masked language modeling except that instead of masking random tokens, we mask spans of tokens with an average length of 3. - In gap-sentence prediction, whole sentences are masked instead of spans. We follow [the original work](https://arxiv.org/abs/1912.08777), and select sentences based on their `importance'. 
- Rouge-1 F1-score between the sentence and the document is used as a proxy for importance. - We use 0.15 and 0.2 as the masking ratios for span corruption and gap-sentence generation, respectively. Since data sizes across languages in Varta vary from 1.5K (Bhojpuri) to 14.4M articles (Hindi), we use standard temperature-based sampling to upsample data when necessary. - We pretrain Varta-T5 using the T5 1.1 base architecture with 12 encoder and decoder layers. - We train with maximum sequence lengths of 512 and 256 for the encoder and decoder respectively. - We use 12 attention heads with an embedding dimension of 768 and a feed-forward width of 2048. - We use a 128K sentencepiece vocabulary. - In total, the model has 395M parameters. - The model is trained with Adafactor optimizer with a warm-up of 10K steps. - We use an initial learning rate of 1e-3 and use square root decay till we reach 2M steps. - We use an effective batch size of 256 and train the model on TPU v3-8 chips. - The model takes 11 days to train. <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> ### Evaluation Results Please see [the paper](https://arxiv.org/pdf/2305.05858.pdf). ## Citation ``` @misc{aralikatte2023varta, title={V\=arta: A Large-Scale Headline-Generation Dataset for Indic Languages}, author={Rahul Aralikatte and Ziling Cheng and Sumanth Doddapaneni and Jackie Chi Kit Cheung}, year={2023}, eprint={2305.05858}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
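As a quick end-to-end check of the pretrained checkpoint, the loading snippet above can be extended to span in-filling. This is only a sketch: it assumes the tokenizer follows the usual T5 sentinel-token convention (`<extra_id_0>`), and the prompt text is purely illustrative.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("rahular/varta-t5")
model = AutoModelForSeq2SeqLM.from_pretrained("rahular/varta-t5")

# Mask a span with a sentinel token and let the model propose a fill-in.
# The sentinel name follows the standard T5 convention and is an assumption here.
text = "The festival of <extra_id_0> is celebrated with lamps and sweets."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```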
5,201
[ [ -0.0115509033203125, -0.040130615234375, 0.005527496337890625, 0.03765869140625, -0.026397705078125, 0.01117706298828125, -0.0226287841796875, -0.04803466796875, 0.0031375885009765625, 0.0335693359375, -0.039642333984375, -0.030029296875, -0.0343017578125, 0...
aipicasso/cool-japan-diffusion-2-1-2
2023-03-09T11:58:46.000Z
[ "diffusers", "stable-diffusion", "text-to-image", "arxiv:2112.10752", "arxiv:2212.03860", "license:other", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
aipicasso
null
null
aipicasso/cool-japan-diffusion-2-1-2
13
1,078
diffusers
2023-03-08T08:25:37
--- license: other tags: - stable-diffusion - text-to-image inference: false --- # Cool Japan Diffusion 2.1.2 Model Card ![eyecatch](eyecatch.jpg) English version is [here](README_en.md). # Introduction Cool Japan Diffusion is a model fine-tuned from Stable Diffusion that specializes in expressing Cool Japan content such as anime, manga, and games. It has no particular connection to the Japanese Cabinet Office's Cool Japan Strategy. # Usage If you just want to try it out, please use this [Space](https://huggingface.co/spaces/aipicasso/cool-japan-diffusion-latest-demo). Detailed instructions for handling this model are given in [this manual](https://alfredplpl.hatenablog.com/entry/2023/01/11/182146). The model can be downloaded from [here](v2-1-2.safetensors). # About the License The license is simply the original CreativeML Open RAIL++-M License with a ban on commercial use (with some exceptions) added. This ban was added out of concern that the model could adversely affect the creative industry. If that concern is dispelled, the next version will return to the original license and allow commercial use. Incidentally, a Japanese translation of the original license is available [here](https://qiita.com/robitan/items/887d9f3153963114823d). If you work at a for-profit company, please consult someone in your legal department. If you are using the model as a hobby, following common sense should be enough. As stated in the license, any modified version of this model must also inherit this license.
# Law and Ethics This model was created in Japan, so Japanese law applies. We maintain that training this model is legal under Article 30-4 of the Copyright Act. We also maintain that distributing this model does not constitute a principal offense or aiding and abetting under the Copyright Act or Article 175 of the Penal Code. For details, please see attorney Kakinuma's [opinion](https://twitter.com/tka0120/status/1601483633436393473?s=20&t=yvM9EX0Em-_7lh8NJln3IQ). However, as stated in the license, please handle the outputs of this model in accordance with the relevant laws and regulations. That said, the author does feel that distributing this model is ethically questionable, because permission was not obtained from the authors of the copyrighted works used for training. Legally, such permission is not required for training, and, as with search engines, there is no legal problem. Please therefore understand that this release also serves the purpose of examining the ethical, and not only the legal, aspects. The standard model card follows below.
## Model Details - **Developed by:** Robin Rombach, Patrick Esser, Alfred Increment - **Model type:** Diffusion-based text-to-image generation model - **Language:** Japanese - **License:** CreativeML Open RAIL++-M-NC License - **Model description:** This model can generate appropriate images in response to prompts. The algorithms are the [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) and [OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip). - **Notes:** - **References:** @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} }
## Example Usage It is used in the same way as Stable Diffusion v2. There are many ways to run it, but we describe two: - Web UI - Diffusers ### Web UI **From this version on, we recommend installing xformers.** Please set it up following [this manual](https://alfredplpl.hatenablog.com/entry/2023/01/11/182146). ### Diffusers Use the [🤗's Diffusers library](https://github.com/huggingface/diffusers). First, run the following script to install the libraries. ```bash pip install --upgrade git+https://github.com/huggingface/diffusers.git transformers accelerate scipy ``` Then run the following script to generate an image. ```python from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler import torch model_id = "aipicasso/cool-japan-diffusion-2-1-2" scheduler = EulerAncestralDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler") pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float32) pipe = pipe.to("cuda") prompt = "anime, masterpiece, a portrait of a girl, good pupil, 4k, detailed" negative_prompt="deformed, blurry, bad anatomy, bad pupil, disfigured, poorly drawn face, mutation, mutated, extra limb, ugly, poorly drawn hands, bad hands, fused fingers, messy drawing, broken legs censor, low quality, mutated hands and fingers, long body, mutation, poorly drawn, bad eyes, ui, error, missing fingers, fused fingers, one hand with more than 5 fingers, one hand with less than 5 fingers, one hand with more than 5 digit, one hand with less than 5 digit, extra digit, fewer digits, fused digit, missing digit, bad digit, liquid digit, long body, uncoordinated body, unnatural body, lowres, jpeg artifacts, 3d, cg, text, japanese kanji" images = pipe(prompt,negative_prompt=negative_prompt, num_inference_steps=20).images images[0].save("girl.png") ``` **Notes**: - Using [xformers](https://github.com/facebookresearch/xformers) is said to make generation faster. - If you are running on a GPU with little memory, use `pipe.enable_attention_slicing()`.
#### Intended Uses - News reporting on image-generation AI - Possible not only for public broadcasters but also for commercial companies - We judged that the "right to know" about image-synthesis AI does not adversely affect the creative industry, and we respect freedom of the press. - Introducing Cool Japan - Explaining to people from other countries what Cool Japan is - International students often come to Japan because they are drawn to Cool Japan, and Alfred Increment feels they are very often disappointed to find that Cool Japan is considered "not cool" within Japan itself. Please take more pride in the culture of your own country that people abroad admire. - Research and development - Using the model on Discord - Prompt engineering - Fine-tuning (also known as additional training), such as DreamBooth - Merging with other models - Studying how well the Latent Diffusion Model suits Cool Japan content - Measuring the performance of this model with metrics such as FID - Checking, with checksums or hash functions, that this model is independent of models other than Stable Diffusion - Education - Graduation projects by art-school and vocational-school students - University students' theses and coursework - Teachers explaining the current state of image-generation AI - Self-expression - Expressing your own feelings and thoughts on social media - Uses listed in the Hugging Face Community - Please ask questions in Japanese or English #### Unintended Uses - Presenting things as fact - Use in monetized content such as YouTube videos - Providing the model directly as a commercial service - Doing things that trouble teachers - Anything else that harms the creative industry # Prohibited and Malicious Uses - Do not publish digital forgeries ([Digital Forgery](https://arxiv.org/abs/2212.03860)); this may violate the Copyright Act - Do not run Image-to-Image on other people's work without permission; this may violate the Copyright Act - Do not distribute obscene material; this may violate Article 175 of the Penal Code - Do not ignore so-called industry etiquette - Do not present things that are not based on fact as if they were fact; this may constitute forcible obstruction of business - Fake news
## Limitations and Bias ### Limitations - Not yet well understood ### Bias The model carries the same biases as Stable Diffusion. Please be careful. ## Training **Training Data** Stable Diffusion was fine-tuned mainly with the following data and models. - VAE - Data compliant with Japanese domestic law, excluding Danbooru and the Danbooru dataset: 650,000 items (an effectively unlimited number of images created through data augmentation) - U-Net - Data compliant with Japanese domestic law, excluding Danbooru and the Danbooru dataset: 2 million pairs - Merged models: 3 **Training Process** The VAE and U-Net of Stable Diffusion were fine-tuned. - **Hardware:** A6000 - **Optimizer:** AdamW - **Gradient Accumulations**: 1 - **Batch size:** 1 ## Evaluation Results ## Environmental Impact Almost none. - **Hardware type:** A6000 - **Hours used:** 200 - **Cloud provider:** None - **Training location:** Japan - **Carbon emitted:** Not much ## References @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } *This model card was written by Alfred Increment based on [Stable Diffusion v2](https://huggingface.co/stabilityai/stable-diffusion-2/raw/main/README.md).
6,525
[ [ -0.048797607421875, -0.069091796875, 0.030731201171875, 0.0138702392578125, -0.0301513671875, -0.0086822509765625, 0.0028362274169921875, -0.022735595703125, 0.0282135009765625, 0.009307861328125, -0.032623291015625, -0.04315185546875, -0.0458984375, -0.0072...
wavymulder/collage-diffusion
2023-04-17T21:15:53.000Z
[ "diffusers", "stable-diffusion", "text-to-image", "safetensors", "en", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
wavymulder
null
null
wavymulder/collage-diffusion
18
1,078
diffusers
2023-04-11T16:47:45
--- language: - en thumbnail: "https://huggingface.co/wavymulder/collage-diffusion/resolve/main/pics/thumb.jpg" license: creativeml-openrail-m tags: - stable-diffusion - text-to-image - safetensors - diffusers inference: true --- **Collage Diffusion** ![Header](https://huggingface.co/wavymulder/collage-diffusion/resolve/main/pics/page1.jpg) [*CKPT DOWNLOAD LINK*](https://huggingface.co/wavymulder/collage-diffusion/resolve/main/collage-1.0.ckpt) - - - [*SAFETENSORS DOWNLOAD LINK*](https://huggingface.co/wavymulder/collage-diffusion/resolve/main/collage-1.0.safetensors) This is a dreambooth model trained on collage images. Use the activation token **collage style** in your prompt (I recommend at the start) I have the most fun with this model when I use simple prompts and let the model go crazy. If you want a model that strictly adheres to your prompt, this isn't that. Trained from 1.5 with VAE. Please see [this document where I share the parameters (prompt, sampler, seed, etc.) used for all example images.](https://huggingface.co/wavymulder/collage-diffusion/resolve/main/parameters_for_samples.txt) ![Page2](https://huggingface.co/wavymulder/collage-diffusion/resolve/main/pics/page2.jpg)
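Since the repository ships diffusers weights, a minimal 🤗 Diffusers sketch is given below. It assumes the weights load with the standard `StableDiffusionPipeline`, and the prompt is only an example of putting the **collage style** activation token first.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "wavymulder/collage-diffusion", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Put the activation token at the start of the prompt, as recommended above.
prompt = "collage style, a seaside town at dusk, cut paper textures, bold shapes"
image = pipe(prompt).images[0]
image.save("collage.png")
```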
1,217
[ [ -0.0295257568359375, -0.06982421875, 0.042694091796875, 0.0217132568359375, -0.0357666015625, 0.0130462646484375, 0.0159454345703125, -0.021392822265625, 0.033843994140625, 0.035491943359375, -0.03094482421875, -0.05291748046875, -0.05316162109375, -0.016555...
team-lucid/trocr-small-korean
2023-07-01T08:41:35.000Z
[ "transformers", "pytorch", "jax", "safetensors", "vision-encoder-decoder", "trocr", "image-to-text", "ko", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-to-text
team-lucid
null
null
team-lucid/trocr-small-korean
5
1,077
transformers
2023-06-30T16:00:43
--- license: apache-2.0 language: - ko pipeline_tag: image-to-text tags: - trocr - vision-encoder-decoder --- # trocr-small-korean ## Model Details TrOCR is an encoder-decoder model consisting of an image Transformer encoder and a text Transformer decoder. The image encoder was initialized from DeiT weights, and the text decoder was initialized from RoBERTa weights that we trained ourselves. Training was run on Cloud TPUs provided through Google's TPU Research Cloud (TRC). ## How to Get Started with the Model ```python import torch from transformers import VisionEncoderDecoderModel model = VisionEncoderDecoderModel.from_pretrained("team-lucid/trocr-small-korean") pixel_values = torch.rand(1, 3, 384, 384) generated_ids = model.generate(pixel_values) ``` ## Training Details ### Training Data The model was trained on 6M images synthesized with [synthtiger](https://github.com/clovaai/synthtiger). ### Training Hyperparameters | Hyperparameter | Small | |:--------------------|--------:| | Warmup Steps | 4,000 | | Learning Rates | 1e-4 | | Batch Size | 512 | | Weight Decay | 0.01 | | Max Steps | 500,000 | | Learning Rate Decay | 0.1 | | \\(Adam\beta_1\\) | 0.9 | | \\(Adam\beta_2\\) | 0.98 |
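For real images you also need an image processor and a tokenizer to build `pixel_values` and to decode the generated ids. The sketch below assumes `TrOCRProcessor`-compatible processor and tokenizer files are available in this repository; if they are not, load the processor matching the DeiT encoder and the Korean RoBERTa tokenizer used for the decoder instead.

```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

# Assumption: this repo ships processor/tokenizer files usable with TrOCRProcessor.
processor = TrOCRProcessor.from_pretrained("team-lucid/trocr-small-korean")
model = VisionEncoderDecoderModel.from_pretrained("team-lucid/trocr-small-korean")

image = Image.open("line.png").convert("RGB")  # a cropped text-line image (placeholder path)
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```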
1,142
[ [ -0.022308349609375, -0.032745361328125, 0.00685882568359375, 0.0027618408203125, -0.04718017578125, 0.00513458251953125, -0.0032749176025390625, -0.0060577392578125, 0.015869140625, 0.0211944580078125, -0.030242919921875, -0.053802490234375, -0.052154541015625, ...
timm/convnext_xxlarge.clip_laion2b_soup_ft_in12k
2023-11-03T21:32:57.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-12k", "dataset:laion-2b", "arxiv:2210.08402", "arxiv:2103.00020", "arxiv:2201.03545", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/convnext_xxlarge.clip_laion2b_soup_ft_in12k
1
1,077
timm
2023-11-03T21:29:34
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-12k - laion-2b --- # Model card for convnext_xxlarge.clip_laion2b_soup_ft_in12k A ConvNeXt image classification model. CLIP image tower weights pretrained in [OpenCLIP](https://github.com/mlfoundations/open_clip) on LAION and fine-tuned on ImageNet-12k by Ross Wightman. Please see related OpenCLIP model cards for more details on pretrain: * https://huggingface.co/laion/CLIP-convnext_xxlarge-laion2B-s34B-b82K-augreg-soup * https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg * https://huggingface.co/laion/CLIP-convnext_base_w-laion2B-s13B-b82K-augreg * https://huggingface.co/laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 879.7 - GMACs: 198.1 - Activations (M): 124.5 - Image size: 256 x 256 - **Papers:** - OpenCLIP: https://github.com/mlfoundations/open_clip - LAION-5B: An open large-scale dataset for training next generation image-text models: https://arxiv.org/abs/2210.08402 - Learning Transferable Visual Models From Natural Language Supervision: https://arxiv.org/abs/2103.00020 - A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545 - **Original:** https://github.com/mlfoundations/open_clip - **Pretrain Dataset:** LAION-2B - **Dataset:** ImageNet-12k ## Model Usage ### Image Classification ```python from urllib.request import urlopen import torch from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('convnext_xxlarge.clip_laion2b_soup_ft_in12k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'convnext_xxlarge.clip_laion2b_soup_ft_in12k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 384, 64, 64]) # torch.Size([1, 768, 32, 32]) # torch.Size([1, 1536, 16, 16]) # torch.Size([1, 3072, 8, 8]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'convnext_xxlarge.clip_laion2b_soup_ft_in12k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms =
timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 3072, 8, 8) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP. | model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size| |------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------| | [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 | | [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 | | [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 | | [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 | | [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 | | [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 | | [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 | | [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 | | [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 | | [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 | | [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 | | [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 | | [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 | | [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 | | [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 | | 
[convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 | | [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 | | [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 | | [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 | | [convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 | | [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 | | [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 | | [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 | | [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 | | [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 | | [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 | | [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 | | [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 | | [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 | | [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 | | [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 | | [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 | | [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 | | [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 | | [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 | | [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 | | [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 | | [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 | | 
[convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 | | [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 | | [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 | | [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 | | [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 | | [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 | | [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 | | [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 | | [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 | | [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 | | [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 | | [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 | | [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 | | [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 | | [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 | | [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 | ## Citation ```bibtex @software{ilharco_gabriel_2021_5143773, author = {Ilharco, Gabriel and Wortsman, Mitchell and Wightman, Ross and Gordon, Cade and Carlini, Nicholas and Taori, Rohan and Dave, Achal and Shankar, Vaishaal and Namkoong, Hongseok and Miller, John and Hajishirzi, Hannaneh and Farhadi, Ali and Schmidt, Ludwig}, title = {OpenCLIP}, month = jul, year = 2021, note = {If you use this software, please cite it as below.}, publisher = {Zenodo}, version = {0.1}, doi = {10.5281/zenodo.5143773}, url = {https://doi.org/10.5281/zenodo.5143773} } ``` ```bibtex @inproceedings{schuhmann2022laionb, title={{LAION}-5B: An open large-scale dataset for training next generation image-text models}, author={Christoph Schuhmann and Romain Beaumont and Richard Vencu and Cade W Gordon and Ross Wightman and Mehdi Cherti and Theo Coombes and Aarush Katta and Clayton Mullis and Mitchell Wortsman and Patrick Schramowski and Srivatsa R Kundurthy and Katherine Crowson and Ludwig Schmidt and Robert Kaczmarczyk and Jenia Jitsev}, booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2022}, url={https://openreview.net/forum?id=M3Y74vmsMcY} } ``` ```bibtex @inproceedings{Radford2021LearningTV, title={Learning Transferable Visual Models From
Natural Language Supervision}, author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever}, booktitle={ICML}, year={2021} } ``` ```bibtex @article{liu2022convnet, author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, year = {2022}, } ```
18,303
[ [ -0.05780029296875, -0.034820556640625, -0.0016794204711914062, 0.036529541015625, -0.0301666259765625, -0.0179443359375, -0.01468658447265625, -0.033203125, 0.056976318359375, 0.0170440673828125, -0.0411376953125, -0.043914794921875, -0.05438232421875, -0.00...
danbrown/RealisticVision-v2-0
2023-04-30T17:23:35.000Z
[ "diffusers", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
danbrown
null
null
danbrown/RealisticVision-v2-0
1
1,076
diffusers
2023-04-30T16:37:03
Not official! These are diffusers weights for https://civitai.com/models/4201/realistic-vision-v20, based on Stable Diffusion v1.5.
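A minimal loading sketch with 🤗 Diffusers is given below; it assumes the weights load with the standard `StableDiffusionPipeline`, and the prompt is only an illustration.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "danbrown/RealisticVision-v2-0", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

image = pipe("a photo of a lighthouse on a cliff, golden hour").images[0]
image.save("realistic_vision.png")
```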
128
[ [ -0.024505615234375, -0.0188446044921875, 0.0292205810546875, 0.04132080078125, -0.01432037353515625, -0.0028896331787109375, 0.027557373046875, -0.023834228515625, 0.039398193359375, 0.020965576171875, -0.04656982421875, -0.00351715087890625, -0.0296173095703125...
facebook/hubert-xlarge-ll60k
2021-10-20T10:20:44.000Z
[ "transformers", "pytorch", "tf", "hubert", "feature-extraction", "speech", "en", "dataset:libri-light", "arxiv:2106.07447", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
feature-extraction
facebook
null
null
facebook/hubert-xlarge-ll60k
4
1,075
transformers
2022-03-02T23:29:05
--- language: en datasets: - libri-light tags: - speech license: apache-2.0 --- # Hubert-Extra-Large [Facebook's Hubert](https://ai.facebook.com/blog/hubert-self-supervised-representation-learning-for-speech-recognition-generation-and-compression) The extra large model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... The model was pretrained on [Libri-Light](https://github.com/facebookresearch/libri-light). [Paper](https://arxiv.org/abs/2106.07447) Authors: Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed **Abstract** Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets. The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/hubert . # Usage See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `HubertForCTC`.
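As a rough sketch of feature extraction with this checkpoint before any fine-tuning, the snippet below assumes a 16 kHz mono waveform and that a feature-extractor config is available for this repository; the random array only stands in for real audio.

```python
import torch
from transformers import AutoFeatureExtractor, HubertModel

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/hubert-xlarge-ll60k")
model = HubertModel.from_pretrained("facebook/hubert-xlarge-ll60k")

waveform = torch.randn(16000).numpy()  # stand-in for 1 second of 16 kHz mono audio
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
print(hidden_states.shape)
```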
2,510
[ [ -0.0360107421875, -0.03558349609375, 0.035675048828125, 0.01242828369140625, -0.01554107666015625, -0.00995635986328125, -0.03009033203125, -0.0401611328125, 0.02484130859375, 0.02117919921875, -0.046661376953125, -0.0220184326171875, -0.0298309326171875, -0...
laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft
2023-04-18T19:28:23.000Z
[ "open_clip", "tensorboard", "zero-shot-image-classification", "clip", "arxiv:2201.03545", "arxiv:2210.08402", "arxiv:1910.04867", "license:mit", "has_space", "region:us" ]
zero-shot-image-classification
laion
null
null
laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft
2
1,075
open_clip
2023-02-11T01:37:59
--- tags: - zero-shot-image-classification - clip license: mit library_name: open_clip pipeline_tag: zero-shot-image-classification --- # Model card for CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Training Details](#training-details) 4. [Evaluation](#evaluation) 5. [Acknowledgements](#acknowledgements) 6. [Citation](#citation) # Model Details ## Model Description A series of CLIP [ConvNeXt-Large](https://arxiv.org/abs/2201.03545) (w/ extra text depth, vision MLP head) models trained on the LAION-2B (english) subset of [LAION-5B](https://arxiv.org/abs/2210.08402) using [OpenCLIP](https://github.com/mlfoundations/open_clip). The models utilize: * the [timm](https://github.com/rwightman/pytorch-image-models) ConvNeXt-Large model (`convnext_large`) as the image tower * a MLP (`fc - gelu - drop - fc`) head in vision tower instead of the single projection of other CLIP models * a text tower with same width but 4 layers more depth than ViT-L / RN50x16 models (depth 16, embed dim 768). This 320x320 resolution model is a fine-tune of [CLIP-convnext_large_d.laion2B-s26B-b102K-augreg](https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg) at a higher resolution. It was fine-tune from the final checkpoint of the original 256x256 training run w/ an additional ~2.5B samples and a lower learning rate. At 320x320, the ConvNext-Large-D is significantly more efficient than the L/14 model at 336x336 that OpenAI fine-tuned. L/14-336 model is 2.5x more GMAC, 2.8x more activations, and 1.22x more parameters. | Model | Dataset | Resolution | AugReg | Top-1 ImageNet Zero-Shot (%) | | ----- | ------- | ---------- | ------------ | --------- | | [convnext_large_d.laion2b_s26b_b102k-augreg](https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg) | LAION-2B | 256x256 | RRC (0.33, 1.0), RE (0.35), SD (0.1), D(0.1) | 75.9 | | [convnext_large_d_320.laion2b_s29b_b131k-ft](https://huggingface.co/laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft) | LAION-2B | 320x320 | RRC (0.5, 1.0), RE (0.4), SD (0.1), D(0.0) | 76.6 | | [convnext_large_d_320.laion2b_s29b_b131k-ft-soup](https://huggingface.co/laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup) | LAION-2B | 320x320 | RRC (0.5, 1.0), RE (0.4), SD (0.1), D(0.0) | 76.9 | RRC = Random Resize Crop (crop pcts), RE = Random Erasing (prob), SD = Stochastic Depth (prob) -- image tower only, D = Dropout (prob) -- image tower head only LAION-A = LAION Aesthetic, an ~900M sample subset of LAION-2B with pHash dedupe and asthetic score filtering. Model training done by Ross Wightman on the [stability.ai](https://stability.ai/) cluster. # Uses As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such model. The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the LAION-5B blog (https://laion.ai/blog/laion-5b/) and upcoming paper include additional discussion as it relates specifically to the training dataset. ## Direct Use Zero-shot image classification, image and text retrieval, among others. 
## Downstream Use Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others. ## Out-of-Scope Use As per the OpenAI models, **Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful. Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use. Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases. Further the above notice, the LAION-5B dataset used in training of these models has additional considerations, see below. # Training Details ## Training Data This model was trained with LAION-2B -- A 2 billion sample English subset of LAION-5B (https://laion.ai/blog/laion-5b/). **IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from publically available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated. Keep in mind that the uncurated nature of the dataset means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a “safe” subset by filtering out samples based on the safety tags (using a customized trained NSFW classifier that we built). While this strongly reduces the chance for encountering potentially harmful content when viewing, we cannot entirely exclude the possibility for harmful content being still present in safe mode, so that the warning holds also there. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of benefits that come along with training large-scale models as well as pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. Providing our dataset openly, we however do not recommend using it for creating ready-to-go industrial products, as the basic research about general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress. ## Training Procedure All 320x320 model fine-tunes were trained with a global batch size of 131072 for 10-16 checkpoint intervals of 203.7M samples for a total of ~2-3B samples seen over fine-tune. For 320x320 models, a slurm script w/ srun below was used on 64 8-GPU (A100 40GB) nodes (Stability). 
``` /opt/slurm/sbin/srun --cpu_bind=v --accel-bind=gn python -m training.main \ --save-frequency 1 \ --name "convnext_large_320" \ --pretrained "/runs/convnext_large_256/epoch_128.pt" \ --resume 'latest' \ --train-data="pipe:aws s3 cp s3://mybucket/path/laion{00000..xxxxx}.tar -" \ --train-num-samples 203666042 \ --dataset-type webdataset \ --precision amp_bfloat16 \ --beta2 0.98 \ --warmup 2000 \ --batch-size=256 \ --epochs=12 \ --dataset-resampled \ --aug-cfg use_timm=True scale='(0.5, 1.0)' re_prob=0.4 \ --clip-grad-norm 5.0 \ --lr 5e-5 \ --workers=6 \ --model "convnext_large_d_320" \ --seed 0 \ --ddp-static-graph \ --local-loss \ --gather-with-grad \ --grad-checkpointing ``` # Evaluation Evaluation done with code in the [LAION CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark). ## Testing Data, Factors & Metrics ### Testing Data The testing is performed with VTAB+ (A combination of VTAB (https://arxiv.org/abs/1910.04867) w/ additional robustness datasets) for classification and COCO and Flickr for retrieval. ## Results The models achieve between 75.9 and 76.9 top-1 zero-shot accuracy on ImageNet-1k. Zero-shot curve of the original from-scratch 256x256 training: ![](convnext_large_zero_shot.png) An initial round of benchmarks has been performed on a wider range of datasets, viewable at https://github.com/LAION-AI/CLIP_benchmark/blob/main/benchmark/results.ipynb # Acknowledgements Acknowledging [stability.ai](https://stability.ai/) for compute used to train this model. # Citation **BibTeX:** LAION-5B ```bibtex @inproceedings{schuhmann2022laionb, title={{LAION}-5B: An open large-scale dataset for training next generation image-text models}, author={Christoph Schuhmann and Romain Beaumont and Richard Vencu and Cade W Gordon and Ross Wightman and Mehdi Cherti and Theo Coombes and Aarush Katta and Clayton Mullis and Mitchell Wortsman and Patrick Schramowski and Srivatsa R Kundurthy and Katherine Crowson and Ludwig Schmidt and Robert Kaczmarczyk and Jenia Jitsev}, booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2022}, url={https://openreview.net/forum?id=M3Y74vmsMcY} } ``` OpenCLIP software ```bibtex @software{ilharco_gabriel_2021_5143773, author = {Ilharco, Gabriel and Wortsman, Mitchell and Wightman, Ross and Gordon, Cade and Carlini, Nicholas and Taori, Rohan and Dave, Achal and Shankar, Vaishaal and Namkoong, Hongseok and Miller, John and Hajishirzi, Hannaneh and Farhadi, Ali and Schmidt, Ludwig}, title = {OpenCLIP}, month = jul, year = 2021, note = {If you use this software, please cite it as below.}, publisher = {Zenodo}, version = {0.1}, doi = {10.5281/zenodo.5143773}, url = {https://doi.org/10.5281/zenodo.5143773} } ``` OpenAI CLIP paper ```bibtex @inproceedings{Radford2021LearningTV, title={Learning Transferable Visual Models From Natural Language Supervision}, author={Alec Radford and Jong Wook Kim and Chris Hallacy and A.
Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever}, booktitle={ICML}, year={2021} } ``` ```bibtex @Article{liu2022convnet, author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, year = {2022}, } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/rwightman/pytorch-image-models}} } ```
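For completeness, a minimal zero-shot classification sketch with OpenCLIP is included here; the image path and class names are placeholders, and loading via the `hf-hub:` prefix assumes an `open_clip` version that supports it.

```python
import torch
from PIL import Image
import open_clip

model_name = "hf-hub:laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft"
model, _, preprocess = open_clip.create_model_and_transforms(model_name)
tokenizer = open_clip.get_tokenizer(model_name)

image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # placeholder image path
text = tokenizer(["a diagram", "a dog", "a cat"])            # illustrative class prompts

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print(probs)
```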
11,377
[ [ -0.0347900390625, -0.035491943359375, 0.01416778564453125, 0.0010547637939453125, -0.0302276611328125, -0.02777099609375, -0.01380157470703125, -0.0433349609375, 0.0169830322265625, 0.0305328369140625, -0.03851318359375, -0.03375244140625, -0.045135498046875, ...
JamesFlare/pastel-mix
2023-02-25T05:25:10.000Z
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "en", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
JamesFlare
null
null
JamesFlare/pastel-mix
37
1,075
diffusers
2023-02-25T05:24:44
--- language: - en license: creativeml-openrail-m thumbnail: >- https://huggingface.co/andite/pastel-mix/resolve/main/example-images/01194-%20.png tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true duplicated_from: andite/pastel-mix --- Update Logs: [1/27/22] I uploaded the model in CivitAI! -> https://civitai.com/models/5414/pastel-mix-stylized-anime-model I'd appreciate the ratings, thank you! [2/2/22] Uploaded a lora version. <center><h1><b>Pastel Mix</b></h1></center> <p align="center">Welcome to Pastel Mix - a stylized latent diffusion model. This model is intended to produce high-quality, highly detailed anime style with just a few prompts.</p> <p align="center">This model is made with the thought of imitating pastel-like art and the potential of mixing LORAs into a model altogether to create a fantastic mix. Recipe for this mix could be found below. Like other anime-style Stable Diffusion models, it also supports danbooru tags to generate images. </p> <p align="center">e.g. <b>masterpiece, best quality, upper body, 1girl, looking at viewer, red hair, medium hair, purple eyes, demon horns, black coat, indoors, dimly lit</b></p> <p align="center"><img src="https://huggingface.co/andite/Pastel-Mix/resolve/main/example-images/grid-0020.png"> <img src="https://huggingface.co/andite/Pastel-Mix/resolve/main/example-images/grid-0018.png"></p> ------- ## How to download with Git ``` git lfs install git clone https://huggingface.co/andite/pastel-mix ``` ## 🧨 Diffusers This model can be used just like any other Stable Diffusion model. For more information, please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion). You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX](). 
```python from diffusers import StableDiffusionPipeline import torch model_id = "andite/pastel-mix" pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe = pipe.to("cuda") prompt = "hatsune_miku" image = pipe(prompt).images[0] image.save("./hatsune_miku.png") ``` # Gradio We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run pastel-mix: [![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/akhaliq/pastel-mix) ## Examples ![](https://huggingface.co/andite/pastel-mix/resolve/main/example-images/grid-0035.png) ``` masterpiece, best quality, ultra-detailed, illustration, portrait, 1girl Negative prompt: lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts)) Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Size: 448x640, Model hash: 7edc8e08, Model: pastelmix-fp32, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires resize: 960x1280, Hires steps: 20, Hires upscaler: Latent ``` ![](https://huggingface.co/andite/pastel-mix/resolve/main/example-images/grid-reimu.png) ``` masterpiece, best quality, ultra-detailed, illustration, portrait, hakurei reimu, 1girl, throne room, dimly lit Negative prompt: lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts)) Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Size: 448x640, Model hash: 7edc8e08, Model: pastelmix-fp32, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires resize: 960x1280, Hires steps: 20, Hires upscaler: Latent ``` ![](https://huggingface.co/andite/pastel-mix/resolve/main/example-images/grid-0043.png) ``` masterpiece, best quality, ultra-detailed, illustration, 1girl, witch hat, purple eyes, blonde hair, wielding a purple staff blasting purple energy, purple beam, purple effects, dragons, chaos Negative prompt: lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts)) Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Size: 448x640, Model hash: 7edc8e08, Model: pastelmix-fp32, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires resize: 960x1280, Hires steps: 20, Hires upscaler: Latent ``` ![](https://huggingface.co/andite/pastel-mix/resolve/main/example-images/01166-%20.png) ``` masterpiece, best quality, ultra-detailed, illustration, close-up, straight on, 1girl, black hair, yellow 
eyes, red roses, chains Negative prompt: lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts)) Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 2203084815, Size: 640x448, Model hash: 7edc8e08, Model: pastelmix-fp32, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires resize: 1280x960, Hires steps: 20, Hires upscaler: Latent ``` ![](https://huggingface.co/andite/pastel-mix/resolve/main/example-images/01177-%20.png) ``` masterpiece, best quality, ultra-detailed, illustration, close-up, straight on, face focus, 1girl, white hair, golden eyes, long hair, halo, angel wings, serene expression, looking at viewer Negative prompt: lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts)) Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 240742293, Size: 640x448, Model hash: 7edc8e08, Model: pastelmix-fp32, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires resize: 1280x960, Hires steps: 20, Hires upscaler: Latent ``` ## So what the hell is the 'better-vae' version? I merged the pastel-waifu-diffusion.vae.pt inside the model so you don't have to set up the vae anymore. ![](https://huggingface.co/andite/pastel-mix/resolve/main/example-images/xyz_grid-0004-%20.png) life so much ez now since you don't have to download the vae and set it up right? ## What is pastelmix-lora.safetensors? It's a lora version which is made from extracting the loras from pastel-mix using a script that is similar to add-difference method. https://github.com/bmaltais/kohya_ss/blob/master/train_network_README.md ## Guide For the settings or parameters, I recommend using these settings. ![](https://huggingface.co/andite/pastel-mix/resolve/main/example-images/settings.png) ``` Sampler: DPM++ 2M Karras Steps: 20 CFG Scale: 7 Hires. Fix: On Upscaler: Latent (MUST!) Hires Steps: 20 Denoising Strength: 0. ``` I prefer using 0.6 since it's the sweet spot of this model. If you can find a better setting for this model, then good for you lol. Latent upscaler is the best setting for me since it retains or enhances the pastel style. Other upscalers like Lanczos or Anime6B tends to smoothen them out, removing the pastel-like brushwork. Please use the **VAE** that I uploaded in this repository. It is from the [Waifu Diffusion](https://huggingface.co/hakurei/waifu-diffusion-v1-4/tree/main/vae) team. Credits to [haru](https://huggingface.co/hakurei) for letting me rename and upload it. ## Tip (Optional) Putting mksks style in the beginning of the prompt can further influence the pastel-like style and make the output better. It is optional though, so it's up to you. You don't really need it. 
![](https://huggingface.co/andite/pastel-mix/resolve/main/example-images/xy_grid-0016-%20.png) ``` mksks style, masterpiece, best quality, upper body, 1girl, looking at viewer, red hair, medium hair, purple eyes, demon horns, black coat, indoors, dimly lit Negative prompt: lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts)) Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 580841049, Size: 448x640, Model hash: 7edc8e08, Model: pastelmix-fp32, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires resize: 960x1280, Hires steps: 20, Hires upscaler: Latent ``` ## Recipe Merging the models. | Model: A | Model: B | Weight | Base alpha | Merge Name | | --- | --- | --- | --- | --- | | [dpepmkmp](https://huggingface.co/closertodeath/dpepmkmp) | [Tea](https://huggingface.co/andite/desserts) | 1,0.9,0.7,0.5,0.3,0.1,1,1,1,1,1,1,0,1,1,1,1,1,1,0.1,0.3,0.5,0.7,0.9,1 | 0 | dpeptea | | dpeptea | [basil-mix](https://huggingface.co/nuigurumi/basil_mix) | 1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 | 0 | dpeptea-basil | Merging the loras into the model. | Model | Lora | Weight | Merge Name | | --- | --- | --- | --- | | [dpeptea-basil](https://huggingface.co/closertodeath/dpepteahands3) | [Magic LORA](https://cdn.discordapp.com/attachments/1065289257243115540/1066346221876301845/MagicLORA.pt) | 0.3 | dpeptea-1 | | dpeptea-1 | [Jordan_3](https://huggingface.co/SatyamSSJ10/ConceptArt) | 1 | dpeptea-2 | | dpeptea-2 | [sttabi_v1.4-04](https://huggingface.co/dolphinz/stlora) | 0.5 | dpeptea-3 | | dpeptea-3 | [xlimo768](https://huggingface.co/closertodeath/ctdlora) | 0.6 | dpeptea-4 | | dpeptea-4 | [dpep 2 768](https://huggingface.co/closertodeath/ctdlora)| 0.35 | Pastel-Mix | ## License This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content. 2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) ------- ## Big Thanks to The 東方Project AI community for their wonderful LORAs. - [Closertodeath](https://huggingface.co/closertodeath) for dpepmkmp model, and the loras: xlimo768, dpep 2 768 - [dolphinz/sometimes#9353](https://huggingface.co/dolphinz) for tabi artstyle Lora. - [SatyamSSJ10](https://huggingface.co/SatyamSSJ10/ConceptArt) for Jordan_3 Lora. - randomaccessmemories#4004 for Magic Lora
12,198
[ [ -0.055206298828125, -0.054046630859375, 0.01102447509765625, 0.0294189453125, -0.017242431640625, 0.01116180419921875, -0.0027751922607421875, -0.042205810546875, 0.06201171875, 0.0255889892578125, -0.0435791015625, -0.05035400390625, -0.052154541015625, 0.0...
showlab/show-1-sr1
2023-10-12T03:53:02.000Z
[ "diffusers", "text-to-video", "arxiv:2309.15818", "license:cc-by-nc-4.0", "diffusers:TextToVideoIFSuperResolutionPipeline_Cond", "region:us", "has_space" ]
text-to-video
showlab
null
null
showlab/show-1-sr1
2
1,075
diffusers
2023-10-10T17:08:20
--- license: cc-by-nc-4.0 tags: - text-to-video --- # show-1-sr1 Pixel-based VDMs can generate motion accurately aligned with the textual prompt but typically demand expensive computational costs in terms of time and GPU memory, especially when generating high-resolution videos. Latent-based VDMs are more resource-efficient because they work in a reduced-dimension latent space. But it is challenging for such small latent space (e.g., 64×40 for 256×160 videos) to cover rich yet necessary visual semantic details as described by the textual prompt. To marry the strength and alleviate the weakness of pixel-based and latent-based VDMs, we introduce **Show-1**, an efficient text-to-video model that generates videos of not only decent video-text alignment but also high visual quality. ![](https://showlab.github.io/Show-1/assets/images/method.png) ## Model Details This is the super-resolution model of Show-1 that upscales videos from a 64x40 resolution to 256x160. The model is finetuned from [DeepFloyd/IF-II-M-v1.0](https://huggingface.co/DeepFloyd/IF-II-M-v1.0) on the [WebVid-10M](https://maxbain.com/webvid-dataset/) dataset. - **Developed by:** [Show Lab, National University of Singapore](https://sites.google.com/view/showlab/home?authuser=0) - **Model type:** pixel- and latent-based cascaded text-to-video diffusion model - **Cascade stage:** super-resolution (64x40->256x160) - **Finetuned from model:** [DeepFloyd/IF-II-M-v1.0](https://huggingface.co/DeepFloyd/IF-II-M-v1.0) - **License:** Creative Commons Attribution Non Commercial 4.0 - **Resources for more information:** [GitHub](https://github.com/showlab/Show-1), [Website](https://showlab.github.io/Show-1/), [arXiv](https://arxiv.org/abs/2309.15818) ## Usage Clone the GitHub repository and install the requirements: ```bash git clone https://github.com/showlab/Show-1.git pip install -r requirements.txt ``` Run the following command to generate a video from a text prompt. By default, this will automatically download all the model weights from huggingface. ```bash python run_inference.py ``` You can also download the weights manually and change the `pretrained_model_path` in `run_inference.py` to run the inference. ```bash git lfs install # base git clone https://huggingface.co/showlab/show-1-base # interp git clone https://huggingface.co/showlab/show-1-interpolation # sr1 git clone https://huggingface.co/showlab/show-1-sr1 # sr2 git clone https://huggingface.co/showlab/show-1-sr2 ``` ## Citation If you make use of our work, please cite our paper. ```bibtex @misc{zhang2023show1, title={Show-1: Marrying Pixel and Latent Diffusion Models for Text-to-Video Generation}, author={David Junhao Zhang and Jay Zhangjie Wu and Jia-Wei Liu and Rui Zhao and Lingmin Ran and Yuchao Gu and Difei Gao and Mike Zheng Shou}, year={2023}, eprint={2309.15818}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ## Model Card Contact This model card is maintained by [David Junhao Zhang](https://junhaozhang98.github.io/) and [Jay Zhangjie Wu](https://jayzjwu.github.io/). For any questions, please feel free to contact us or open an issue in the repository.
3,182
[ [ -0.032501220703125, -0.06768798828125, 0.033782958984375, 0.01036834716796875, -0.0267181396484375, -0.025146484375, -0.0010366439819335938, 0.005306243896484375, 0.003269195556640625, 0.0178375244140625, -0.060791015625, -0.0364990234375, -0.060760498046875, ...
indigo-ai/BERTino
2021-09-22T08:51:24.000Z
[ "transformers", "pytorch", "tf", "distilbert", "fill-mask", "DISTILbert", "Italian", "it", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
indigo-ai
null
null
indigo-ai/BERTino
13
1,072
transformers
2022-03-02T23:29:05
--- language: it tags: - DISTILbert - Italian license: mit widget: - text: Vado al [MASK] a fare la spesa - text: Vado al parco a guardare le [MASK] - text: Il cielo è [MASK] di stelle. --- # BERTino: an Italian DistilBERT model This repository hosts BERTino, an Italian DistilBERT model pre-trained by [indigo.ai](https://indigo.ai/en/) on a large general-domain Italian corpus. BERTino is task-agnostic and can be fine-tuned for any downstream task. ### Corpus The pre-training corpus that we used is the union of the [Paisa](https://www.corpusitaliano.it/) and [ItWaC](https://corpora.dipintra.it/public/run.cgi/corp_info?corpname=itwac_full) corpora. The final corpus contains 14 million sentences, for a total of 12 GB of text. ### Downstream Results To validate the pre-training that we conducted, we evaluated BERTino on the [Italian ParTUT](https://universaldependencies.org/treebanks/it_partut/index.html), [Italian ISDT](https://universaldependencies.org/treebanks/it_isdt/index.html), [Italian WikiNER](https://figshare.com/articles/Learning_multilingual_named_entity_recognition_from_Wikipedia/5462500) and multi-class sentence classification tasks. For comparison, we report results obtained by the [teacher model](https://huggingface.co/dbmdz/bert-base-italian-xxl-uncased) fine-tuned on the same tasks for the same number of epochs. **Italian ISDT:** | Model | F1 score | Fine-tuning time | Evaluation time | |--------------|----------|------------------|-----------------| | BERTino | 0.9801 | 9m, 4s | 3s | | Teacher | 0.983 | 16m, 28s | 5s | **Italian ParTUT:** | Model | F1 score | Fine-tuning time | Evaluation time | |--------------|----------|------------------|-----------------| | BERTino | 0.9268 | 1m, 18s | 1s | | Teacher | 0.9688 | 2m, 18s | 1s | **Italian WikiNER:** | Model | F1 score | Fine-tuning time | Evaluation time | |--------------|----------|------------------|-----------------| | BERTino | 0.9038 | 35m, 35s | 3m, 1s | | Teacher | 0.9178 | 67m, 8s | 5m, 16s | **Multi-class sentence classification:** | Model | F1 score | Fine-tuning time | Evaluation time | |--------------|----------|------------------|-----------------| | BERTino | 0.7788 | 4m, 40s | 6s | | Teacher | 0.7986 | 8m, 52s | 9s |
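### Usage A minimal fill-mask sketch with the Hugging Face Transformers `pipeline` API (the example sentence is one of the widget texts above; `top_k=5` is an illustrative choice):
```python
from transformers import pipeline

# Load BERTino as a fill-mask pipeline (downloads the checkpoint on first use)
fill_mask = pipeline("fill-mask", model="indigo-ai/BERTino")

# Predict the masked word in one of the widget sentences
for prediction in fill_mask("Vado al [MASK] a fare la spesa", top_k=5):
    print(prediction["token_str"], round(prediction["score"], 4))
```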
2,516
[ [ -0.041290283203125, -0.03924560546875, 0.0302581787109375, 0.0172119140625, -0.007503509521484375, -0.00531768798828125, -0.035247802734375, -0.032440185546875, -0.00028133392333984375, 0.0007958412170410156, -0.037689208984375, -0.041748046875, -0.057373046875,...
speechbrain/sepformer-wsj02mix
2022-07-29T14:23:42.000Z
[ "speechbrain", "Source Separation", "Speech Separation", "Audio Source Separation", "WSJ02Mix", "SepFormer", "Transformer", "audio-to-audio", "audio-source-separation", "en", "dataset:WSJ0-2Mix", "arxiv:2010.13154", "arxiv:2106.04624", "license:apache-2.0", "has_space", "region:us" ]
audio-to-audio
speechbrain
null
null
speechbrain/sepformer-wsj02mix
35
1,072
speechbrain
2022-03-02T23:29:05
--- language: "en" thumbnail: tags: - Source Separation - Speech Separation - Audio Source Separation - WSJ02Mix - SepFormer - Transformer - audio-to-audio - audio-source-separation - speechbrain license: "apache-2.0" datasets: - WSJ0-2Mix metrics: - SI-SNRi - SDRi --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # SepFormer trained on WSJ0-2Mix This repository provides all the necessary tools to perform audio source separation with a [SepFormer](https://arxiv.org/abs/2010.13154v2) model, implemented with SpeechBrain, and pretrained on the WSJ0-2Mix dataset. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The model performance is 22.4 dB SI-SNRi on the test set of the WSJ0-2Mix dataset. | Release | Test-Set SI-SNRi | Test-Set SDRi | |:-------------:|:--------------:|:--------------:| | 09-03-21 | 22.4dB | 22.6dB | You can listen to example results obtained on the test set of WSJ0-2/3Mix [here](https://sourceseparationresearch.com/static/sepformer_example_results/sepformer_results.html). ## Install SpeechBrain First of all, please install SpeechBrain with the following command: ``` pip install speechbrain ``` Please note that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Perform source separation on your own audio file ```python from speechbrain.pretrained import SepformerSeparation as separator import torchaudio model = separator.from_hparams(source="speechbrain/sepformer-wsj02mix", savedir='pretrained_models/sepformer-wsj02mix') # for custom file, change path est_sources = model.separate_file(path='speechbrain/sepformer-wsj02mix/test_mixture.wav') torchaudio.save("source1hat.wav", est_sources[:, :, 0].detach().cpu(), 8000) torchaudio.save("source2hat.wav", est_sources[:, :, 1].detach().cpu(), 8000) ``` The system expects input recordings sampled at 8kHz (single channel). If your signal has a different sample rate, resample it (e.g., using torchaudio or sox) before using the interface. ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method (a minimal example is sketched after the Limitations section below). ### Training The model was trained with SpeechBrain (fc2eabb7). To train it from scratch, follow these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ``` cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ``` cd recipes/WSJ0Mix/separation python train.py hparams/sepformer.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc.) [here](https://drive.google.com/drive/folders/1cON-eqtKv_NYnJhaE9VjLT_e2ybn-O7u?usp=sharing). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
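### Example: inference on GPU A minimal sketch of the same separation run on a CUDA device, following the `run_opts` note above (it assumes a GPU with enough memory is available):
```python
from speechbrain.pretrained import SepformerSeparation as separator
import torchaudio

# Same interface as above, but the model and computation are placed on the GPU
model = separator.from_hparams(
    source="speechbrain/sepformer-wsj02mix",
    savedir="pretrained_models/sepformer-wsj02mix",
    run_opts={"device": "cuda"},
)
est_sources = model.separate_file(path="speechbrain/sepformer-wsj02mix/test_mixture.wav")
torchaudio.save("source1hat.wav", est_sources[:, :, 0].detach().cpu(), 8000)
torchaudio.save("source2hat.wav", est_sources[:, :, 1].detach().cpu(), 8000)
```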
#### Referencing SpeechBrain ```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ``` #### Referencing SepFormer ```bibtex @inproceedings{subakan2021attention, title={Attention is All You Need in Speech Separation}, author={Cem Subakan and Mirco Ravanelli and Samuele Cornell and Mirko Bronzi and Jianyuan Zhong}, year={2021}, booktitle={ICASSP 2021} } ``` # **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/
4,152
[ [ -0.044586181640625, -0.045806884765625, 0.0009255409240722656, 0.00421905517578125, -0.029998779296875, -0.00463104248046875, -0.03289794921875, -0.03466796875, 0.0203704833984375, 0.0152435302734375, -0.04571533203125, -0.030059814453125, -0.051910400390625, ...
timm/resnet26d.bt_in1k
2023-04-05T18:05:05.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "arxiv:1512.03385", "arxiv:1812.01187", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/resnet26d.bt_in1k
0
1,072
timm
2023-04-05T18:04:52
--- tags: - image-classification - timm library_tag: timm license: apache-2.0 --- # Model card for resnet26d.bt_in1k A ResNet-D image classification model. This model features: * ReLU activations * 3-layer stem of 3x3 convolutions with pooling * 2x2 average pool + 1x1 convolution shortcut downsample Trained on ImageNet-1k in `timm` using recipe template described below. Recipe details: * Bag-of-Tricks recipe. * SGD (w/ Nesterov) optimizer * Cosine LR schedule with warmup ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 16.0 - GMACs: 2.6 - Activations (M): 8.2 - Image size: train = 224 x 224, test = 288 x 288 - **Papers:** - Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385 - Bag of Tricks for Image Classification with Convolutional Neural Networks: https://arxiv.org/abs/1812.01187 - **Original:** https://github.com/huggingface/pytorch-image-models ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('resnet26d.bt_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'resnet26d.bt_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 64, 112, 112]) # torch.Size([1, 256, 56, 56]) # torch.Size([1, 512, 28, 28]) # torch.Size([1, 1024, 14, 14]) # torch.Size([1, 2048, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'resnet26d.bt_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 2048, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model 
results](https://github.com/huggingface/pytorch-image-models/tree/main/results). |model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec| |------------------------------------------|--------|-----|-----|-----------|-----|-----|-------| |[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 | |[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 | |[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 | |[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 | |[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 | |[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 | |[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 | |[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 | |[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 | |[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 | |[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 | |[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 | |[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 | |[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 | |[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 | |[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 | |[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 | |[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 | |[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 | |[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 | |[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 | |[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 | |[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 
|83.68|96.61|93.6 |16.7 |32.0 |986 | |[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 | |[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 | |[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 | |[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 | |[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 | |[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 | |[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 | |[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 | |[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 | |[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 | |[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 | |[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 | |[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 | |[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 | |[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 | |[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 | |[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 | |[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 | |[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 | |[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 | |[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 | |[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 | |[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 | |[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 | |[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 | |[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 | |[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 | |[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 | |[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 | 
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 | |[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 | |[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 | |[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 | |[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 | |[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 | |[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 | |[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 | |[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 | |[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 | |[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 | |[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 | |[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 | |[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 | |[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 | |[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 | |[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 | |[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 | |[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 | |[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 | |[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 | |[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 | |[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 | |[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 | |[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 | |[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 | |[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 | |[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 | |[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 | 
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 | |[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 | |[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 | |[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 | |[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 | |[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 | |[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 | |[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 | |[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 | |[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 | |[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 | |[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 | |[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 | |[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 | |[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 | |[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 | |[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 | |[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 | |[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 | |[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 | |[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 | |[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 | |[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 | |[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 | |[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 | |[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 | |[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 | |[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 | |[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 | |[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 
|10.6 |2349 | |[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 | |[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 | |[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 | |[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 | |[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 | |[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 | |[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 | |[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 | |[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 | |[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 | |[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 | |[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 | |[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 | |[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 | |[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 | |[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 | |[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 | |[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 | |[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 | |[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 | |[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 | |[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 | |[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 | |[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 | |[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 | |[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 | |[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 | |[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 | |[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 | |[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 | 
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 | |[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 | |[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 | |[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 | |[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 | |[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 | |[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 | |[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 | |[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 | |[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 | |[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 | |[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 | |[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 | |[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 | |[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 | |[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 | |[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 | |[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 | |[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 | |[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 | |[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 | |[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 | |[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 | |[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 | |[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 | |[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 | |[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 | |[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 | |[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 | |[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 | |[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 | 
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 | |[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 | |[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 | |[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 | |[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 | |[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 | |[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 | |[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 | |[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 | |[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 | |[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 | |[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 | |[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 | |[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 | |[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 | |[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 | |[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 | |[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 | |[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 | |[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 | |[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 | |[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 | |[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 | |[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 | |[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 | |[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 | |[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 | |[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 | |[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 | |[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 | 
|[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 | |[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 | |[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 | |[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 | |[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 | |[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 | |[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 | |[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 | |[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 | |[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 | |[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 | |[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 | |[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 | |[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 | |[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 | |[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 | |[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 | |[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 | |[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 | |[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 | |[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 | |[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 | |[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 | |[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 | |[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 | |[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 | |[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 | |[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 | |[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 | |[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 | |[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 | |[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 
|75.16|92.18|21.8 |3.7 |3.7 |5994 | |[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 | |[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 | |[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 | |[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 | |[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 | |[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 | |[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 | |[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 | |[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 | |[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 | |[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 | |[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 | |[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 | |[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 | |[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 | |[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 | |[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 | |[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 | |[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 | |[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 | |[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 | ## Citation ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @article{He2015, author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun}, title = {Deep Residual Learning for Image Recognition}, journal = {arXiv preprint arXiv:1512.03385}, year = {2015} } ``` ```bibtex @article{He2018BagOT, title={Bag of Tricks for Image Classification with Convolutional Neural Networks}, author={Tong He and Zhi Zhang and Hang Zhang and Zhongyue Zhang and Junyuan Xie and Mu Li}, journal={2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, year={2018}, pages={558-567} } ```
38,514
[ [ -0.06561279296875, -0.016448974609375, 0.0023250579833984375, 0.0283966064453125, -0.032806396484375, -0.007843017578125, -0.00983428955078125, -0.0286407470703125, 0.0860595703125, 0.020599365234375, -0.050079345703125, -0.04095458984375, -0.0460205078125, ...
iknow-lab/AULM-12.8b-v0
2023-08-18T05:29:08.000Z
[ "transformers", "pytorch", "gpt_neox", "text-generation", "ko", "dataset:beomi/KoAlpaca-v1.1a", "dataset:dbdu/ShareGPT-74k-ko", "dataset:heegyu/korquad-chat-v1", "dataset:HAERAE-HUB/KoInstruct-QA", "dataset:changpt/ko-lima-vicuna", "dataset:nlpai-lab/kullm-v2", "endpoints_compatible", "text-...
text-generation
iknow-lab
null
null
iknow-lab/AULM-12.8b-v0
1
1,072
transformers
2023-08-18T04:23:08
--- datasets: - beomi/KoAlpaca-v1.1a - dbdu/ShareGPT-74k-ko - heegyu/korquad-chat-v1 - HAERAE-HUB/KoInstruct-QA - changpt/ko-lima-vicuna - nlpai-lab/kullm-v2 language: - ko --- # iknow-lab/AULM-12.8b-v0 - [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b)를 여러 한국어 instruction 데이터셋으로 학습한 모델 ## 생성예시 프롬프트 ``` 당신은 아주대학교의 한국어 챗봇 아우름입니다. 사용자에게 도움이 되고 유익한 내용을 제공해야합니다. 답변은 길고 자세해야 하며, 반드시 친절한 설명을 포함해야합니다. ### 사용자: 아주대학교에 대해 알아? ### 챗봇: ``` 생성된 답변: ``` 아주대학교는 대한민국의 대학교로, 1973년부터 위치하고 있으며, 경기도 수원시 영통구에 소재하고 있습니다. 대학교의 규모는 매우 크고, 다양한 전공과 학문 분야를 제공하며, 높은 수준의 교육과 연구 활동을 수행하고 있습니다. 교육 기관으로서 아주대의 목표는 다양한 국제적 기준을 갖춘 전문적이고 우수한 인재를 배출하는 것입니다. 아주대학교는 학생들의 역량을 키우기 위해 다양한 프로그램을 제공하고 있습니다. 예를 들어, 인문학과 이공계 학과의 학생들은 인문학과 자연의 과학적 이해 기반을 높이는 과정을 통해 지식을 확장할 수 있으며, 공학·IT 분야 학생들은 기초 과학 분야의 역량을 함양할 수 있습니다. 또한, 아주대의 학점 체제는 다양한 과목과 평가 스타일을 가지고 있어, 학생들은 자신만의 공부 방향을 설정하고 학점 시스템에 적응할 수 있도록 지원하고 있습니다. 아주대학교에는 국내 최고의 대학교로서의 자격을 입증하기 위해 다양한 학술 연구 센터들이 존재합니다. 그 중에서도 학술적 연구 센터들은 아주대학교의 핵심 기관으로, 학생들이 높은 수준의 학술 활동을 경험할 수 있도록 도와줍니다. 또한, 아주대학교에는 외국어 센터 및 외국어 교육 연구를 진행하는 외국어 교육센터가 있어서, 학생들이 외국어 능력을 강화하고 해외취업 가능성을 높일 수 있도록 지원하고 있습니다. 아주대학교는 다양한 사회적 공익사업을 진행하며, 학생들과 사회와의 상호작용이 매우 중요하다고 인식하고 있습니다. 이러한 노력들 중 하나는 아주대학교 학생들을 대상으로 한 장학금 제도를 운영하고 있는 것입니다. 장학금은 경제적인 이유로 학업에 어려움을 겪는 학생들을 지원하기 위해 제공 되고 있으며, 이러한 노력은 아주대학교에서도 지속되고 있습니다. 또한, 아주대학교는 지역 사회와의 상호작용에도 많은 관심을 두고 있습니다. 이를 위해 아주대학교의 캠퍼스 내에서 지역 내 여러 기관들과 함께하는 교육 협력 프로그램을 운영하고 있습니다. 이러한 협력 프로그램은 지역 내의 학생들이 다양한 교육과 문화 체험을 통해 글로벌한 인재로 성장할 수 있도록 지원하고 있습니다. 아주대학교는 학생 중심 교육을 추구하고 있습니다. 이를 위해 아주대학교는 학생들의 의견을 최대한 수용하려고 노력하며, 교수들은 학생들이 창의성과 문제해결 능력을 발전시킬 수 있도록 다양한 강의와 실습 활동을 제공하고 있습니다. 이렇게 학생들을 중심으로 하는 노력과 지원은 아주대학교의 발전에 큰 영향을 미치고 있습니다. ``` ## 사용한 데이터셋 | Dataset | # instance | 타입 | | --- | --- | --- | | [KoAlpaca v1.1](https://raw.githubusercontent.com/Beomi/KoAlpaca/main/KoAlpaca_v1.1.jsonl) | 50K | 싱글턴 | | [dbdu/ShareGPT-74k-ko 의 part2_ko_uncleaned](https://huggingface.co/datasets/dbdu/ShareGPT-74k-ko/resolve/main/part2_ko_uncleaned.json) | 36K | 멀티턴 | | [heegyu/korquad-chat-v1](https://huggingface.co/datasets/heegyu/korquad-chat-v1) | 9.6K | 멀티턴, 지식기반 | | [lcw99/evolve-instruct](https://github.com/lcw99/evolve-instruct/) | 37K | 싱글턴 | | [HAERAE-HUB/KoInstruct-QA](https://huggingface.co/datasets/HAERAE-HUB/KoInstruct-QA) | 50.3k | 싱글턴 | | [changpt/ko-lima-vicuna](https://huggingface.co/datasets/changpt/ko-lima-vicuna) | 1K | 싱글턴, 멀티턴(극히 일부) | | [nlpai-lab/kullm-v2](https://huggingface.co/datasets/nlpai-lab/kullm-v2) | 15K | 싱글턴 | - KULLM v2 데이터셋에서는 GPT4ALL, Dolly 데이터만 추출해서 사용했습니다. - 다양한 학습 데이터셋은 [HeegyuKim/open-korean-instructions](https://github.com/HeegyuKim/open-korean-instructions) GitHub repository를 참고하세요.
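## Usage (sketch) The card above does not include a loading snippet; below is a minimal Hugging Face Transformers sketch, assuming the model is used as a causal LM like its polyglot-ko base and that a GPU with enough memory for the 12.8B checkpoint is available. The prompt string follows the Korean example above; its exact whitespace and the sampling parameters are illustrative assumptions.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "iknow-lab/AULM-12.8b-v0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

# Prompt format from the example above: system description, then "### 사용자:" / "### 챗봇:" turns
prompt = (
    "당신은 아주대학교의 한국어 챗봇 아우름입니다. 사용자에게 도움이 되고 유익한 내용을 제공해야합니다. "
    "답변은 길고 자세해야 하며, 반드시 친절한 설명을 포함해야합니다.\n\n"
    "### 사용자:\n아주대학교에 대해 알아?\n\n### 챗봇:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.9)
# Print only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```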
2,698
[ [ -0.047882080078125, -0.041351318359375, 0.0188751220703125, 0.02008056640625, -0.03460693359375, 0.00948333740234375, 0.0210723876953125, -0.0285491943359375, 0.052886962890625, 0.028656005859375, -0.021453857421875, -0.034759521484375, -0.05029296875, 0.003...
casperhansen/vicuna-7b-v1.5-awq
2023-10-31T22:08:03.000Z
[ "transformers", "pytorch", "llama", "text-generation", "license:llama2", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
casperhansen
null
null
casperhansen/vicuna-7b-v1.5-awq
4
1,072
transformers
2023-08-19T19:46:12
--- license: llama2 --- To use this model, you must have [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) installed. ``` pip install autoawq ``` Example generation with streaming: ```python from awq import AutoAWQForCausalLM from transformers import AutoTokenizer, TextStreamer quant_path = "casperhansen/vicuna-7b-v1.5-awq" quant_file = "awq_model_w4_g128.pt" # Load model model = AutoAWQForCausalLM.from_quantized(quant_path, quant_file, fuse_layers=True) tokenizer = AutoTokenizer.from_pretrained(quant_path, trust_remote_code=True) streamer = TextStreamer(tokenizer, skip_special_tokens=True) # Convert prompt to tokens prompt_template = """\ A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:""" tokens = tokenizer( prompt_template.format(prompt="How are you today?"), return_tensors='pt' ).input_ids.cuda() # Generate output generation_output = model.generate( tokens, streamer=streamer, max_new_tokens=512 ) ```
1,083
[ [ -0.0222625732421875, -0.054840087890625, 0.0211181640625, 0.01837158203125, -0.036285400390625, 0.0081329345703125, 0.0234527587890625, -0.00704193115234375, 0.0170440673828125, 0.022979736328125, -0.050811767578125, -0.02825927734375, -0.04718017578125, 0.0...
DDSC/roberta-base-danish
2023-03-22T12:19:08.000Z
[ "transformers", "pytorch", "jax", "tensorboard", "safetensors", "roberta", "fill-mask", "danish", "da", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
DDSC
null
null
DDSC/roberta-base-danish
1
1,071
transformers
2022-03-02T23:29:05
--- language: da license: cc-by-4.0 tags: - danish - roberta pipeline_tag: fill-mask widget: - text: På biblioteket kan du låne en <mask>. --- # RøBÆRTa - Danish RoBERTa Base ## Description RøBÆRTa is a Danish pretrained RoBERTa base model. RøBÆRTa was pretrained on the Danish mC4 dataset during the Flax community week. This project was organized by the Dansk Data Science Community (DDSC) 👇 <br><br> https://www.linkedin.com/groups/9017904/ ## Team RøBÆRTa: - Dan Saattrup Nielsen (saattrupdan) - Malte Højmark-Bertelsen (Maltehb) - Morten Kloster Pedersen (MortenKP) - Kasper Junge (Juunge) - Per Egil Kummervold (pere) - Birger Moëll (birgermoell) ---
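## Usage A minimal fill-mask sketch with the Hugging Face Transformers `pipeline` API (the example sentence is the widget text above; `top_k=5` is an illustrative choice):
```python
from transformers import pipeline

# RøBÆRTa is a RoBERTa-style model, so the mask token is <mask>
fill_mask = pipeline("fill-mask", model="DDSC/roberta-base-danish")

for prediction in fill_mask("På biblioteket kan du låne en <mask>.", top_k=5):
    print(prediction["token_str"], round(prediction["score"], 4))
```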
659
[ [ -0.0255889892578125, -0.044921875, 0.0308074951171875, 0.01174163818359375, -0.0215301513671875, 0.007354736328125, -0.00983428955078125, -0.01934814453125, 0.0154571533203125, 0.0259857177734375, -0.042572021484375, -0.04827880859375, -0.0543212890625, 0.01...
timm/tf_efficientnet_b3.aa_in1k
2023-04-27T21:18:24.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:1905.11946", "arxiv:1805.09501", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/tf_efficientnet_b3.aa_in1k
0
1,071
timm
2022-12-13T00:02:35
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for tf_efficientnet_b3.aa_in1k A EfficientNet image classification model. Trained on ImageNet-1k with auto-augment in Tensorflow by paper authors, ported to PyTorch by Ross Wightman. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 12.2 - GMACs: 1.9 - Activations (M): 23.8 - Image size: 300 x 300 - **Papers:** - EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks: https://arxiv.org/abs/1905.11946 - AutoAugment: Learning Augmentation Policies from Data: https://arxiv.org/abs/1805.09501 - **Dataset:** ImageNet-1k - **Original:** https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('tf_efficientnet_b3.aa_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'tf_efficientnet_b3.aa_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 24, 150, 150]) # torch.Size([1, 32, 75, 75]) # torch.Size([1, 48, 38, 38]) # torch.Size([1, 136, 19, 19]) # torch.Size([1, 384, 10, 10]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'tf_efficientnet_b3.aa_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 1536, 10, 10) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). 
## Citation ```bibtex @inproceedings{tan2019efficientnet, title={Efficientnet: Rethinking model scaling for convolutional neural networks}, author={Tan, Mingxing and Le, Quoc}, booktitle={International conference on machine learning}, pages={6105--6114}, year={2019}, organization={PMLR} } ``` ```bibtex @inproceedings{47890, title = {AutoAugment: Learning Augmentation Policies from Data}, author = {Ekin Dogus Cubuk and Barret Zoph and Dandelion Mane and Vijay Vasudevan and Quoc V. Le}, year = {2019}, URL = {https://arxiv.org/pdf/1805.09501.pdf} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
4,494
[ [ -0.02960205078125, -0.041748046875, -0.0088043212890625, 0.0075836181640625, -0.01514434814453125, -0.03057861328125, -0.021820068359375, -0.0318603515625, 0.01348876953125, 0.0245208740234375, -0.0282745361328125, -0.043731689453125, -0.05572509765625, -0.0...
timm/vit_base_patch32_224.augreg_in1k
2023-05-06T00:03:15.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2106.10270", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/vit_base_patch32_224.augreg_in1k
0
1,071
timm
2022-12-22T07:32:24
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for vit_base_patch32_224.augreg_in1k A Vision Transformer (ViT) image classification model. Trained on ImageNet-1k (with additional augmentation and regularization) in JAX by paper authors, ported to PyTorch by Ross Wightman. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 88.2 - GMACs: 4.4 - Activations (M): 4.2 - Image size: 224 x 224 - **Papers:** - How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers: https://arxiv.org/abs/2106.10270 - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 - **Dataset:** ImageNet-1k - **Original:** https://github.com/google-research/vision_transformer ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('vit_base_patch32_224.augreg_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_base_patch32_224.augreg_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 50, 768) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @article{steiner2021augreg, title={How to train your ViT? 
Data, Augmentation, and Regularization in Vision Transformers}, author={Steiner, Andreas and Kolesnikov, Alexander and and Zhai, Xiaohua and Wightman, Ross and Uszkoreit, Jakob and Beyer, Lucas}, journal={arXiv preprint arXiv:2106.10270}, year={2021} } ``` ```bibtex @article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
3,793
[ [ -0.0386962890625, -0.0302276611328125, -0.003604888916015625, 0.006320953369140625, -0.029632568359375, -0.02716064453125, -0.0204315185546875, -0.033905029296875, 0.013519287109375, 0.02447509765625, -0.041168212890625, -0.037811279296875, -0.048065185546875, ...
mssongit/Koala-12.8b-v1
2023-06-02T06:46:25.000Z
[ "transformers", "pytorch", "gpt_neox", "feature-extraction", "polyglot-ko", "gpt-neox", "ko", "dataset:beomi/KoAlpaca-v1.1a", "license:apache-2.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
feature-extraction
mssongit
null
null
mssongit/Koala-12.8b-v1
0
1,071
transformers
2023-05-26T08:05:35
--- license: apache-2.0 datasets: - beomi/KoAlpaca-v1.1a language: - ko tags: - polyglot-ko - gpt-neox --- This model is a fine-tuned version of [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) on the [beomi/KoAlpaca-v1.1a](https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a) dataset. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2.0 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.29.0.dev0 - Pytorch 2.0.0+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
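The card above lists only fine-tuning hyperparameters, with no usage snippet. Below is a minimal loading-and-generation sketch, assuming the checkpoint loads through the standard GPT-NeoX classes in `transformers` (as its base model does); the Korean instruction prompt shown is a hypothetical example, since the card does not state the prompt format used during fine-tuning.

```python
# Hedged sketch: loading mssongit/Koala-12.8b-v1 with standard transformers classes.
# The prompt template below is an assumption, not taken from the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mssongit/Koala-12.8b-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # 12.8B parameters: fp16 plus a large GPU (or device_map="auto") is assumed
    device_map="auto",
)

prompt = "### 질문: 서울에서 가 볼 만한 곳을 추천해 주세요.\n\n### 답변:"  # hypothetical instruction prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```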
634
[ [ -0.054779052734375, -0.061981201171875, 0.0180206298828125, 0.00969696044921875, -0.0404052734375, -0.016448974609375, -0.01316070556640625, -0.03350830078125, 0.0294189453125, 0.0157318115234375, -0.047119140625, -0.0231475830078125, -0.04229736328125, 0.00...
timm/convnext_femto.d1_in1k
2023-03-31T22:04:24.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2201.03545", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/convnext_femto.d1_in1k
0
1,070
timm
2022-12-13T07:08:14
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for convnext_femto.d1_in1k A ConvNeXt image classification model. Trained in `timm` on ImageNet-1k by Ross Wightman. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 5.2 - GMACs: 0.8 - Activations (M): 4.6 - Image size: train = 224 x 224, test = 288 x 288 - **Papers:** - A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545 - **Original:** https://github.com/huggingface/pytorch-image-models - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('convnext_femto.d1_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'convnext_femto.d1_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 48, 56, 56]) # torch.Size([1, 96, 28, 28]) # torch.Size([1, 192, 14, 14]) # torch.Size([1, 384, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'convnext_femto.d1_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 384, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP. 
| model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size| |------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------| | [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 | | [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 | | [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 | | [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 | | [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 | | [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 | | [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 | | [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 | | [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 | | [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 | | [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 | | [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 | | [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 | | [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 | | [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 | | [convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 | | [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 | | [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 | | [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 | | 
[convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 | | [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 | | [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 | | [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 | | [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 | | [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 | | [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 | | [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 | | [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 | | [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 | | [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 | | [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 | | [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 | | [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 | | [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 | | [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 | | [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 | | [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 | | [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 | | [convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 | | [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 | | [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 | | [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 | | [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 
|95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 | | [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 | | [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 | | [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 | | [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 | | [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 | | [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 | | [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 | | [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 | | [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 | | [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 | | [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 | ## Citation ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @article{liu2022convnet, author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, year = {2022}, } ```
15,630
[ [ -0.06744384765625, -0.03314208984375, -0.0028171539306640625, 0.037139892578125, -0.031982421875, -0.01421356201171875, -0.0106201171875, -0.033966064453125, 0.06597900390625, 0.018096923828125, -0.045318603515625, -0.041473388671875, -0.051422119140625, -0....
heegyu/polyglot-ko-5.8b-chat
2023-08-18T05:08:42.000Z
[ "transformers", "pytorch", "gpt_neox", "text-generation", "ko", "dataset:beomi/KoAlpaca-v1.1a", "dataset:dbdu/ShareGPT-74k-ko", "dataset:heegyu/korquad-chat-v1", "dataset:HAERAE-HUB/KoInstruct-QA", "dataset:changpt/ko-lima-vicuna", "dataset:nlpai-lab/kullm-v2", "endpoints_compatible", "text-...
text-generation
heegyu
null
null
heegyu/polyglot-ko-5.8b-chat
0
1,070
transformers
2023-08-18T00:24:41
--- datasets: - beomi/KoAlpaca-v1.1a - dbdu/ShareGPT-74k-ko - heegyu/korquad-chat-v1 - HAERAE-HUB/KoInstruct-QA - changpt/ko-lima-vicuna - nlpai-lab/kullm-v2 language: - ko --- # heegyu/polyglot-ko-5.8b-chat - [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b)를 여러 한국어 instruction 데이터셋으로 학습한 모델 ## 사용한 데이터셋 | Dataset | # instance | 타입 | | --- | --- | --- | | [KoAlpaca v1.1](https://raw.githubusercontent.com/Beomi/KoAlpaca/main/KoAlpaca_v1.1.jsonl) | 50K | 싱글턴 | | [dbdu/ShareGPT-74k-ko 의 part2_ko_uncleaned](https://huggingface.co/datasets/dbdu/ShareGPT-74k-ko/resolve/main/part2_ko_uncleaned.json) | 36K | 멀티턴 | | [heegyu/korquad-chat-v1](https://huggingface.co/datasets/heegyu/korquad-chat-v1) | 9.6K | 멀티턴, 지식기반 | | [lcw99/evolve-instruct](https://github.com/lcw99/evolve-instruct/) | 37K | 싱글턴 | | [HAERAE-HUB/KoInstruct-QA](https://huggingface.co/datasets/HAERAE-HUB/KoInstruct-QA) | 50.3k | 싱글턴 | | [changpt/ko-lima-vicuna](https://huggingface.co/datasets/changpt/ko-lima-vicuna) | 1K | 싱글턴, 멀티턴(극히 일부) | | [nlpai-lab/kullm-v2](https://huggingface.co/datasets/nlpai-lab/kullm-v2) | 15K | 싱글턴 | - KULLM v2 데이터셋에서는 GPT4ALL, Dolly 데이터만 추출해서 사용했습니다. - 다양한 학습 데이터셋은 [HeegyuKim/open-korean-instructions](https://github.com/HeegyuKim/open-korean-instructions) GitHub repository를 참고하세요. ## 생성 Prompt - EOS token(<|endoftext|>)이 나올 때까지 생성하면 됩니다. - 최상단 프롬프트는 있는게 좋은 답변이 더 자주 나오는 것 같아요. ``` 당신은 AI 챗봇입니다. 사용자에게 도움이 되고 유익한 내용을 제공해야합니다. 답변은 길고 자세하며 친절한 설명을 덧붙여서 작성하세요. ### 사용자: 서울 관광지 추천해줘요 ### 챗봇: ``` 생성된 답변: ``` 서울 관광지를 추천하겠습니다. 경복궁 (경복궁 또는 경복궁) 경복궁은 한국 전통 궁궐로서 조선 시대의 역사와 건축 철학을 체험할 수 있는 공간입니다. 궁을 방문하면 조선시대 생활과 통치방식에 대해 배울 수 있으며, 동시에 전통적인 궁궐 문화를 체험할 수 있습니다. 창덕궁 (창덕궁) 창덕궁은 조선시대에 왕실의 정원으로 유명한 궁입니다. 한국의 르네상스 시대를 보여주는 아름다움을 경험할 수 있으며, 전통적인 건축양식을 경험할 수 있는 기회가 주어집니다. 덕수궁 (덕수궁) 덕수궁은 1896년에 설립된 왕이 사는 궁궐입니다. 당시의 아름다움과 건축 철학을 동시에 느낄 수 있는 궁궐입니다. 한국의 궁궐 문화, 역사 및 자연 환경에 대한 학습을 원하는 사람에게 추천됩니다. 종묘 (종묘) 종묘는 조선시대의 왕실 정원으로서 궁궐과 함께 왕의 거처 및 제사를 지내는 곳입니다. 전통적인 의례, 역사 및 건축 철학을 느끼고 체험할 수 있는 곳입니다. 한옥마을 (한옥마을) 한옥마을은 서울 중심부에 위치한 유명한 관광지로, 한국 전통 문화 체험을 원하는 사람과 외국인에게도 인기가 많습니다. 한옥 건물과 전통 정원을 경험할 수 있으며, 도시 속의 전통 문화를 경험할 수 있는 기회를 제공합니다. 인사동 (인사동) 인사동은 한국의 전통적인 문화와 예술, 음식 및 쇼핑을 체험할 수 있는 곳입니다. 한국의 전통문화를 체험하고, 전통 문화의 아름다움을 느낄 수 있는 장소입니다. 위의 관광지를 방문함으로써, 우리는 한국의 전통적인 문화와 역사를 경험하고 배울 수 있습니다. 또한, 아름다움과 건축 철학에 대한 이해도를 높일 수 있으며, 우리 지역의 문화, 전통, 역사를 더욱 이해하고 사랑하게 될 수 있습니다. ``` 멀티턴 대화 예시 ``` 당신은 AI 챗봇입니다. 사용자에게 도움이 되고 유익한 내용을 제공해야합니다. 답변은 길고 자세하며 친절한 설명을 덧붙여서 작성하세요. ### 사용자: 저녁 메뉴 추천해주세요 ### 챗봇: 한식을 선호하세요? ### 사용자: 네네 ### 챗봇: 한식 중에서는 찌게류를 추천드리겠어요. 된장찌개, 김치찌개, 부대찌개와 같은 전통적인 찌게 메뉴를 선택해보세요. 더 좋은 선택을 위해 다양한 종류의 찌게를 시도해보세요. 만약에 다른 음식을 먹고 싶다면 그것도 좋아요! ```
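The card above specifies the chat prompt format and says to generate until the EOS token (`<|endoftext|>`), but gives no code. Below is a minimal generation sketch built around that prompt format; the sampling parameters are illustrative assumptions, not values from the card.

```python
# Hedged sketch: chat generation with heegyu/polyglot-ko-5.8b-chat using the prompt
# format shown in the card. Sampling parameters here are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "heegyu/polyglot-ko-5.8b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = (
    "당신은 AI 챗봇입니다. 사용자에게 도움이 되고 유익한 내용을 제공해야합니다. "
    "답변은 길고 자세하며 친절한 설명을 덧붙여서 작성하세요.\n"
    "### 사용자:\n서울 관광지 추천해줘요\n### 챗봇:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.8,
    eos_token_id=tokenizer.eos_token_id,  # stop at <|endoftext|>, as the card instructs
)
# Print only the newly generated tokens after the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```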
2,681
[ [ -0.046173095703125, -0.04290771484375, 0.021148681640625, 0.0228729248046875, -0.03515625, 0.003276824951171875, 0.0249786376953125, -0.0241546630859375, 0.051025390625, 0.027008056640625, -0.0235137939453125, -0.03717041015625, -0.041961669921875, 0.0027160...
MingZhong/unieval-fact
2022-10-14T01:10:31.000Z
[ "transformers", "pytorch", "t5", "text2text-generation", "arxiv:2210.07197", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
MingZhong
null
null
MingZhong/unieval-fact
1
1,069
transformers
2022-10-12T00:11:36
Pre-trained evaluator in the EMNLP 2022 paper *[Towards a Unified Multi-Dimensional Evaluator for Text Generation](https://arxiv.org/abs/2210.07197)* ## Introduction **Multi-dimensional evaluation** is the dominant paradigm for human evaluation in Natural Language Generation (NLG), i.e., evaluating the generated text from multiple explainable dimensions, such as coherence and fluency. However, automatic evaluation in NLG is still dominated by similarity-based metrics (e.g., ROUGE, BLEU), which are not sufficient to portray the differences between advanced generation models. Therefore, we propose **UniEval** to bridge this gap so that a more comprehensive and fine-grained evaluation of NLG systems can be achieved. ## Pre-trained Evaluator **unieval-fact** is the pre-trained evaluator for the factual consistency detection task. It can evaluate the model output and predict a consistency score. ## Usage Please refer to [our GitHub repository](https://github.com/maszhongming/UniEval).
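The card defers usage to the UniEval repository. Purely as an illustration of how a T5-based boolean-QA evaluator of this kind can be queried, here is a sketch that loads the checkpoint with standard `transformers` classes and compares the probabilities of "Yes" versus "No"; the exact input template and scoring logic are defined in the UniEval codebase, so the template string below is an assumption, not the official API.

```python
# Hedged sketch only: the authoritative input template and scoring come from the
# UniEval repo (github.com/maszhongming/UniEval); the template string below is assumed.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "MingZhong/unieval-fact"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id).eval()

claim = "The company reported record profits in 2021."
source = "The company reported a net loss in 2021 due to restructuring costs."
text = (
    "question: Is this claim consistent with the document? </s> "
    f"claim: {claim} </s> document: {source}"
)

inputs = tokenizer(text, return_tensors="pt", truncation=True)
yes_id = tokenizer("Yes", add_special_tokens=False).input_ids[0]
no_id = tokenizer("No", add_special_tokens=False).input_ids[0]

with torch.no_grad():
    # Score the first decoded token: renormalize P(Yes) against P(No) as a consistency score.
    decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])
    logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits[0, -1]
    probs = torch.softmax(logits[[yes_id, no_id]], dim=-1)

print(f"consistency score: {probs[0].item():.3f}")
```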
1,007
[ [ -0.035980224609375, -0.052825927734375, 0.04217529296875, 0.021453857421875, -0.0122833251953125, 0.0047454833984375, -0.027923583984375, -0.0189971923828125, 0.0054779052734375, 0.038360595703125, -0.034637451171875, -0.045623779296875, -0.041717529296875, ...
TheBloke/MLewd-L2-Chat-13B-AWQ
2023-09-27T12:51:47.000Z
[ "transformers", "safetensors", "llama", "text-generation", "not-for-all-audiences", "nsfw", "license:cc-by-nc-4.0", "text-generation-inference", "region:us" ]
text-generation
TheBloke
null
null
TheBloke/MLewd-L2-Chat-13B-AWQ
2
1,069
transformers
2023-09-19T12:01:21
--- license: cc-by-nc-4.0 tags: - not-for-all-audiences - nsfw model_name: MLewd L2 Chat 13B base_model: Undi95/mlewd-l2-chat-13B inference: false model_creator: Undi95 model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # MLewd L2 Chat 13B - AWQ - Model creator: [Undi95](https://huggingface.co/Undi95) - Original model: [MLewd L2 Chat 13B](https://huggingface.co/Undi95/mlewd-l2-chat-13B) <!-- description start --> ## Description This repo contains AWQ model files for [Undi95's MLewd L2 Chat 13B](https://huggingface.co/Undi95/mlewd-l2-chat-13B). ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference. It is also now supported by continuous batching server [vLLM](https://github.com/vllm-project/vllm), allowing use of AWQ models for high-throughput concurrent inference in multi-user server scenarios. Note that, at the time of writing, overall throughput is still lower than running vLLM with unquantised models, however using AWQ enables using much smaller GPUs which can lead to easier deployment and overall cost savings. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/MLewd-L2-Chat-13B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/MLewd-L2-Chat-13B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/MLewd-L2-Chat-13B-GGUF) * [Undi95's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Undi95/mlewd-l2-chat-13B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license. 
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Undi95's MLewd L2 Chat 13B](https://huggingface.co/Undi95/mlewd-l2-chat-13B). <!-- licensing end --> <!-- README_AWQ.md-provided-files start --> ## Provided files and AWQ parameters For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM. Models are released as sharded safetensors files. | Branch | Bits | GS | AWQ Dataset | Seq Len | Size | | ------ | ---- | -- | ----------- | ------- | ---- | | [main](https://huggingface.co/TheBloke/MLewd-L2-Chat-13B-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.25 GB <!-- README_AWQ.md-provided-files end --> <!-- README_AWQ.md-use-from-vllm start --> ## Serving this model from vLLM Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/). - When using vLLM as a server, pass the `--quantization awq` parameter, for example: ```shell python3 python -m vllm.entrypoints.api_server --model TheBloke/MLewd-L2-Chat-13B-AWQ --quantization awq ``` When using vLLM from Python code, pass the `quantization=awq` parameter, for example: ```python from vllm import LLM, SamplingParams prompts = [ "Hello, my name is", "The president of the United States is", "The capital of France is", "The future of AI is", ] sampling_params = SamplingParams(temperature=0.8, top_p=0.95) llm = LLM(model="TheBloke/MLewd-L2-Chat-13B-AWQ", quantization="awq") outputs = llm.generate(prompts, sampling_params) # Print the outputs. for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}") ``` <!-- README_AWQ.md-use-from-vllm start --> <!-- README_AWQ.md-use-from-python start --> ## How to use this AWQ model from Python code ### Install the necessary packages Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.0.2 or later ```shell pip3 install autoawq ``` If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y autoawq git clone https://github.com/casper-hansen/AutoAWQ cd AutoAWQ pip3 install . ``` ### You can then try the following example code ```python from awq import AutoAWQForCausalLM from transformers import AutoTokenizer model_name_or_path = "TheBloke/MLewd-L2-Chat-13B-AWQ" # Load model model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True, trust_remote_code=False, safetensors=True) tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False) prompt = "Tell me about AI" prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. 
### Instruction: {prompt} ### Response: ''' print("\n\n*** Generate:") tokens = tokenizer( prompt_template, return_tensors='pt' ).input_ids.cuda() # Generate output generation_output = model.generate( tokens, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, max_new_tokens=512 ) print("Output: ", tokenizer.decode(generation_output[0])) # Inference can also be done using transformers' pipeline from transformers import pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_AWQ.md-use-from-python end --> <!-- README_AWQ.md-compatibility start --> ## Compatibility The files provided are tested to work with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), and [vLLM](https://github.com/vllm-project/vllm). [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is not yet compatible with AWQ, but a PR is open which should bring support soon: [TGI PR #781](https://github.com/huggingface/text-generation-inference/issues/781). <!-- README_AWQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. 
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Undi95's MLewd L2 Chat 13B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/v6lvAhmkl02GoUQoarFaG.png) THIS MODEL IS MADE FOR LEWD SEXUAL, CRUDE AND KINKY CONTENT IN OUTPUT CAN AND WILL HAPPEN. YOU'RE WARNED This is an attempt to make an uncensored Llama2-chat that can RP. It's not perfect, but I'm satified. The beginning of the conversation is important for keeping good context and consistancy! <!-- description start --> ## Description This repo contains fp16 files of MLewd-L2-Chat-13B, very hot and lewd model based on Llama2-chat. <!-- description end --> <!-- description start --> ## Models and loras used - Undi95/MLewd-L2-13B-Part3 (checkpoint of MLewd without LORA) - posicube/Llama2-chat-AYT-13B - zattio770/120-Days-of-LORA-v2-13B - royallab/Pygmalion-2-13b-SuperCOT - Undi95/MMSoul-13b-lora <!-- description end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` Special thanks to Sushi and Shena ♥
12,464
[ [ -0.037384033203125, -0.06036376953125, 0.0194854736328125, 0.0114898681640625, -0.019683837890625, -0.0090484619140625, 0.004688262939453125, -0.04119873046875, -0.002017974853515625, 0.0290374755859375, -0.054168701171875, -0.034698486328125, -0.022354125976562...
valhalla/bart-large-finetuned-squadv1
2021-06-14T10:20:35.000Z
[ "transformers", "pytorch", "jax", "bart", "question-answering", "dataset:squad", "arxiv:1910.13461", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
question-answering
valhalla
null
null
valhalla/bart-large-finetuned-squadv1
4
1,068
transformers
2022-03-02T23:29:05
--- datasets: - squad --- # BART-LARGE finetuned on SQuADv1 This is a bart-large model finetuned on the SQuADv1 dataset for the question answering task. ## Model details BART was proposed in the [paper](https://arxiv.org/abs/1910.13461) **BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension**. BART is a seq2seq model intended for both NLG and NLU tasks. To use BART for question answering tasks, we feed the complete document into the encoder and decoder, and use the top hidden state of the decoder as a representation for each word. This representation is used to classify the token. As reported in the paper, bart-large achieves results comparable to RoBERTa on SQuAD. Another notable thing about BART is that it can handle sequences with up to 1024 tokens. | Param | #Value | |---------------------|--------| | encoder layers | 12 | | decoder layers | 12 | | hidden size | 4096 | | num attention heads | 16 | | on disk size | 1.63GB | ## Model training This model was trained on a Google Colab V100 GPU. You can find the fine-tuning colab here [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1I5cK1M_0dLaf5xoewh6swcm5nAInfwHy?usp=sharing). ## Results The results are actually slightly worse than given in the paper. In the paper, the authors mention that bart-large achieves 88.8 EM and 94.6 F1. | Metric | #Value | |--------|--------| | EM | 86.8022| | F1 | 92.7342| ## Model in Action 🚀 ```python3 from transformers import BartTokenizer, BartForQuestionAnswering import torch tokenizer = BartTokenizer.from_pretrained('valhalla/bart-large-finetuned-squadv1') model = BartForQuestionAnswering.from_pretrained('valhalla/bart-large-finetuned-squadv1') question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" encoding = tokenizer(question, text, return_tensors='pt') input_ids = encoding['input_ids'] attention_mask = encoding['attention_mask'] start_scores, end_scores = model(input_ids, attention_mask=attention_mask, output_attentions=False)[:2] all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0]) answer = ' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1]) answer = tokenizer.convert_tokens_to_ids(answer.split()) answer = tokenizer.decode(answer) #answer => 'a nice puppet' ``` > Created with ❤️ by Suraj Patil [![Github icon](https://cdn0.iconfinder.com/data/icons/octicons/1024/mark-github-32.png)](https://github.com/patil-suraj/) [![Twitter icon](https://cdn0.iconfinder.com/data/icons/shift-logotypes/32/Twitter-32.png)](https://twitter.com/psuraj28)
2,699
[ [ -0.052154541015625, -0.07537841796875, 0.0166015625, 0.005695343017578125, -0.020721435546875, 0.019012451171875, -0.0218353271484375, -0.0254669189453125, 0.0286102294921875, 0.0330810546875, -0.05352783203125, -0.0306854248046875, -0.038299560546875, 0.015...
Owesh12/License-plate-detection
2023-11-03T19:33:56.000Z
[ "yolov5", "tensorboard", "yolo", "vision", "object-detection", "pytorch", "dataset:keremberke/license-plate-object-detection", "model-index", "region:us" ]
object-detection
Owesh12
null
null
Owesh12/License-plate-detection
0
1,068
yolov5
2023-11-03T19:32:46
--- tags: - yolov5 - yolo - vision - object-detection - pytorch library_name: yolov5 library_version: 7.0.6 inference: false datasets: - keremberke/license-plate-object-detection model-index: - name: keremberke/yolov5m-license-plate results: - task: type: object-detection dataset: type: keremberke/license-plate-object-detection name: keremberke/license-plate-object-detection split: validation metrics: - type: precision # since mAP@0.5 is not available on hf.co/metrics value: 0.9882982754936463 # min: 0.0 - max: 1.0 name: mAP@0.5 --- <div align="center"> <img width="640" alt="keremberke/yolov5m-license-plate" src="https://huggingface.co/keremberke/yolov5m-license-plate/resolve/main/sample_visuals.jpg"> </div> ### How to use - Install [yolov5](https://github.com/fcakyon/yolov5-pip): ```bash pip install -U yolov5 ``` - Load model and perform prediction: ```python import yolov5 # load model model = yolov5.load('keremberke/yolov5m-license-plate') # set model parameters model.conf = 0.25 # NMS confidence threshold model.iou = 0.45 # NMS IoU threshold model.agnostic = False # NMS class-agnostic model.multi_label = False # NMS multiple labels per box model.max_det = 1000 # maximum number of detections per image # set image img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model(img, size=640) # inference with test time augmentation results = model(img, augment=True) # parse results predictions = results.pred[0] boxes = predictions[:, :4] # x1, y1, x2, y2 scores = predictions[:, 4] categories = predictions[:, 5] # show detection bounding boxes on image results.show() # save results into "results/" folder results.save(save_dir='results/') ``` - Finetune the model on your custom dataset: ```bash yolov5 train --data data.yaml --img 640 --batch 16 --weights keremberke/yolov5m-license-plate --epochs 10 ``` **More models available at: [awesome-yolov5-models](https://github.com/keremberke/awesome-yolov5-models)**
2,082
[ [ -0.068115234375, -0.0277252197265625, 0.036163330078125, -0.0200347900390625, -0.0281829833984375, -0.0271148681640625, 0.008148193359375, -0.044036865234375, 0.01006317138671875, 0.024749755859375, -0.04437255859375, -0.0572509765625, -0.0321044921875, -0.0...
castorini/wiki-all-8-4-multi-dpr2-passage-encoder
2023-01-05T19:26:22.000Z
[ "transformers", "jax", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
feature-extraction
castorini
null
null
castorini/wiki-all-8-4-multi-dpr2-passage-encoder
0
1,067
transformers
2023-01-04T02:34:47
Dense passage retriever (DPR) is a dense retrieval method described in the following paper: > Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih. [Dense Passage Retrieval for Open-Domain Question Answering](https://www.aclweb.org/anthology/2020.emnlp-main.550/). _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_, pages 6769-6781, 2020. We have trained our own DPR models with our Wikipedia corpus variants using the [Tevatron](https://github.com/texttron/tevatron) library. Our own efforts are described in the paper entitled: > Pre-Processing Matters! Improved Wikipedia Corpora for Open-Domain Question Answering. This is the passage encoder portion of a 2nd iteration DPR model for the wiki-all-8-4 corpus variant trained on the amalgamation of the NQ, TriviaQA, WQ, and CuratedTREC datasets.
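The card does not show how to invoke the encoder. Below is a minimal sketch using plain `transformers`; taking the [CLS] hidden state as the passage embedding mirrors the usual DPR/Tevatron convention, but that pooling choice and the use of the generic `AutoModel` class are assumptions here — the authors' own tooling (Pyserini/Tevatron) is the reference implementation. The sample passages are illustrative.

```python
# Hedged sketch: encoding passages with the wiki-all-8-4 DPR passage encoder.
# CLS pooling is assumed (standard for DPR-style bi-encoders); see Pyserini/Tevatron
# for the authors' reference usage.
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "castorini/wiki-all-8-4-multi-dpr2-passage-encoder"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Pass from_flax=True (with flax installed) if the repo only ships JAX/Flax weights.
model = AutoModel.from_pretrained(model_id).eval()

passages = [
    "Dense passage retrieval encodes questions and passages into a shared vector space.",
    "Nearest-neighbor search over passage vectors then retrieves candidate answers.",
]
batch = tokenizer(passages, padding=True, truncation=True, max_length=256, return_tensors="pt")

with torch.no_grad():
    outputs = model(**batch)
    embeddings = outputs.last_hidden_state[:, 0]  # [CLS] token representation per passage

print(embeddings.shape)  # (num_passages, hidden_size)
```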
906
[ [ -0.038177490234375, -0.07257080078125, 0.032470703125, -0.0032138824462890625, -0.0289154052734375, 0.00334930419921875, -0.010498046875, -0.01157379150390625, -0.0016088485717773438, 0.033172607421875, -0.025665283203125, -0.019256591796875, -0.032257080078125,...
oliverwang15/FinGPT_v33_Llama2_13B_Sentiment_Instruction_LoRA_FT_8bit
2023-10-13T03:02:16.000Z
[ "peft", "region:us" ]
null
oliverwang15
null
null
oliverwang15/FinGPT_v33_Llama2_13B_Sentiment_Instruction_LoRA_FT_8bit
10
1,067
peft
2023-10-12T23:28:12
--- library_name: peft --- # FinGPT_v3.3 ## Model info - Base model: Llama2-13B - Training method: Instruction Fine-tuning + LoRA + 8bit - Task: Sentiment Analysis ## Try the model ``` python from transformers import AutoModel, AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM, LlamaTokenizerFast from peft import PeftModel # 0.5.0 # Load Models base_model = "NousResearch/Llama-2-13b-hf" peft_model = "oliverwang15/FinGPT_v33_Llama2_13B_Sentiment_Instruction_LoRA_FT_8bit" tokenizer = LlamaTokenizerFast.from_pretrained(base_model, trust_remote_code=True) tokenizer.pad_token = tokenizer.eos_token model = LlamaForCausalLM.from_pretrained(base_model, trust_remote_code=True, device_map = "cuda:0", load_in_8bit = True,) model = PeftModel.from_pretrained(model, peft_model) model = model.eval() # Make prompts prompt = [ '''Instruction: What is the sentiment of this news? Please choose an answer from {negative/neutral/positive} Input: FINANCING OF ASPOCOMP 'S GROWTH Aspocomp is aggressively pursuing its growth strategy by increasingly focusing on technologically more demanding HDI printed circuit boards PCBs . Answer: ''', '''Instruction: What is the sentiment of this news? Please choose an answer from {negative/neutral/positive} Input: According to Gran , the company has no plans to move all production to Russia , although that is where the company is growing . Answer: ''', '''Instruction: What is the sentiment of this news? Please choose an answer from {negative/neutral/positive} Input: A tinyurl link takes users to a scamming site promising that users can earn thousands of dollars by becoming a Google ( NASDAQ : GOOG ) Cash advertiser . Answer: ''', ] # Generate results tokens = tokenizer(prompt, return_tensors='pt', padding=True, max_length=512) res = model.generate(**tokens, max_length=512) res_sentences = [tokenizer.decode(i) for i in res] out_text = [o.split("Answer: ")[1] for o in res_sentences] # show results for sentiment in out_text: print(sentiment) # Output: # positive # neutral # negative ``` ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.5.0
2,496
[ [ -0.029815673828125, -0.0570068359375, 0.019561767578125, 0.0338134765625, -0.046539306640625, -0.00506591796875, -0.01001739501953125, -0.0166778564453125, 0.0089263916015625, 0.0286865234375, -0.053558349609375, -0.0411376953125, -0.046844482421875, 0.00072...
microsoft/layoutlm-large-uncased
2021-08-11T05:28:26.000Z
[ "transformers", "pytorch", "tf", "layoutlm", "arxiv:1912.13318", "endpoints_compatible", "region:us" ]
null
microsoft
null
null
microsoft/layoutlm-large-uncased
7
1,066
transformers
2022-03-02T23:29:05
# LayoutLM Multimodal (text + layout/format + image) pre-training for document AI [Microsoft Document AI](https://www.microsoft.com/en-us/research/project/document-ai/) | [GitHub](https://aka.ms/layoutlm) ## Model description LayoutLM is a simple but effective pre-training method of text and layout for document image understanding and information extraction tasks, such as form understanding and receipt understanding. LayoutLM achieves SOTA results on multiple datasets. For more details, please refer to our paper: [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou, [KDD 2020](https://www.kdd.org/kdd2020/accepted-papers) ## Training data We pre-train LayoutLM on the IIT-CDIP Test Collection 1.0\* dataset with two settings. * LayoutLM-Base, Uncased (11M documents, 2 epochs): 12-layer, 768-hidden, 12-heads, 113M parameters * LayoutLM-Large, Uncased (11M documents, 2 epochs): 24-layer, 1024-hidden, 16-heads, 343M parameters **(This Model)** ## Citation If you find LayoutLM useful in your research, please cite the following paper: ```latex @misc{xu2019layoutlm, title={LayoutLM: Pre-training of Text and Layout for Document Image Understanding}, author={Yiheng Xu and Minghao Li and Lei Cui and Shaohan Huang and Furu Wei and Ming Zhou}, year={2019}, eprint={1912.13318}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
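The card describes pre-training only. Below is a minimal sketch of a forward pass with the `transformers` LayoutLM classes; the words and bounding boxes are dummy values (LayoutLM expects boxes normalized to a 0–1000 coordinate space), and in practice they would come from an OCR step that is not shown here.

```python
# Hedged sketch: a forward pass through LayoutLM with dummy OCR output.
# Bounding boxes must be normalized to the 0-1000 range expected by LayoutLM.
import torch
from transformers import LayoutLMTokenizer, LayoutLMModel

model_id = "microsoft/layoutlm-large-uncased"
tokenizer = LayoutLMTokenizer.from_pretrained(model_id)
model = LayoutLMModel.from_pretrained(model_id).eval()

# Dummy OCR output: words plus their normalized [x0, y0, x1, y1] boxes.
words = ["Hello", "world"]
normalized_word_boxes = [[637, 773, 693, 782], [698, 773, 733, 782]]

# Expand word-level boxes to wordpiece-token-level boxes, then add [CLS]/[SEP] boxes.
token_boxes = []
for word, box in zip(words, normalized_word_boxes):
    token_boxes.extend([box] * len(tokenizer.tokenize(word)))
token_boxes = [[0, 0, 0, 0]] + token_boxes + [[1000, 1000, 1000, 1000]]

encoding = tokenizer(" ".join(words), return_tensors="pt")
bbox = torch.tensor([token_boxes])

with torch.no_grad():
    outputs = model(
        input_ids=encoding["input_ids"],
        bbox=bbox,
        attention_mask=encoding["attention_mask"],
        token_type_ids=encoding["token_type_ids"],
    )
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 1024) for the Large model
```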
1,497
[ [ -0.0189971923828125, -0.05108642578125, 0.042449951171875, 0.0197906494140625, -0.0194549560546875, -0.00785064697265625, 0.0156402587890625, -0.0106658935546875, -0.0052490234375, 0.026947021484375, -0.03851318359375, -0.04656982421875, -0.03839111328125, -...
baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints
2023-10-07T04:32:12.000Z
[ "en", "zh", "license:other", "region:us" ]
null
baichuan-inc
null
null
baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints
10
1,066
null
2023-09-05T09:35:23
--- language: - en - zh license: other tasks: - text-generation --- <!-- markdownlint-disable first-line-h1 --> <!-- markdownlint-disable html --> <div align="center"> <h1> Baichuan 2 </h1> </div> <div align="center"> <a href="https://github.com/baichuan-inc/Baichuan2" target="_blank">🦉GitHub</a> | <a href="https://github.com/baichuan-inc/Baichuan-7B/blob/main/media/wechat.jpeg?raw=true" target="_blank">💬WeChat</a> </div> <div align="center"> 🚀 <a href="https://www.baichuan-ai.com/" target="_blank">百川大模型在线对话平台</a> 已正式向公众开放 🎉 </div> # 目录/Table of Contents - [📖 模型介绍/Introduction](#Introduction) - [⚙️ 快速开始/Quick Start](#Start) - [📊 Benchmark评估/Benchmark Evaluation](#Benchmark) - [📜 声明与协议/Terms and Conditions](#Terms) # <span id="Introduction">模型介绍/Introduction</span> Baichuan 2 是[百川智能]推出的新一代开源大语言模型,采用 **2.6 万亿** Tokens 的高质量语料训练,在权威的中文和英文 benchmark 上均取得同尺寸最好的效果。本次发布包含有 7B、13B 的 Base 和 Chat 版本,并提供了 Chat 版本的 4bits 量化,所有版本不仅对学术研究完全开放,开发者也仅需[邮件申请]并获得官方商用许可后,即可以免费商用。具体发布版本和下载见下表: Baichuan 2 is the new generation of large-scale open-source language models launched by [Baichuan Intelligence inc.](https://www.baichuan-ai.com/). It is trained on a high-quality corpus with 2.6 trillion tokens and has achieved the best performance in authoritative Chinese and English benchmarks of the same size. This release includes 7B and 13B versions for both Base and Chat models, along with a 4bits quantized version for the Chat model. All versions are fully open to academic research, and developers can also use them for free in commercial applications after obtaining an official commercial license through [email request](mailto:opensource@baichuan-inc.com). The specific release versions and download links are listed in the table below: | | Base Model | Chat Model | 4bits Quantized Chat Model | |:---:|:--------------------:|:--------------------:|:--------------------------:| | 7B | [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) | [Baichuan2-7B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat) | [Baichuan2-7B-Chat-4bits](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base-4bits) | | 13B | [Baichuan2-13B-Base](https://huggingface.co/baichuan-inc/Baichuan2-13B-Base) | [Baichuan2-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat) | [Baichuan2-13B-Chat-4bits](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat-4bits) | # <span id="Start">快速开始/Quick Start</span> 在Baichuan2系列模型中,我们为了加快推理速度使用了Pytorch2.0加入的新功能F.scaled_dot_product_attention,因此模型需要在Pytorch2.0环境下运行。 In the Baichuan 2 series models, we have utilized the new feature `F.scaled_dot_product_attention` introduced in PyTorch 2.0 to accelerate inference speed. Therefore, the model needs to be run in a PyTorch 2.0 environment. **我们将训练中的Checkpoints上传到了本项目中,可以通过指定revision来加载不同step的Checkpoint。** **We have uploaded the checkpoints during training to this project. 
You can load checkpoints from different steps by specifying the revision.** ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints", revision="train_02200B", use_fast=False, trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained("baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints", revision="train_02200B", device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True) inputs = tokenizer('登鹳雀楼->王之涣\n夜雨寄北->', return_tensors='pt') inputs = inputs.to('cuda:0') pred = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.1) print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True)) ``` # <span id="Benchmark">Benchmark 结果/Benchmark Evaluation</span> 我们在[通用]、[法律]、[医疗]、[数学]、[代码]和[多语言翻译]六个领域的中英文权威数据集上对模型进行了广泛测试,更多详细测评结果可查看[GitHub]。 We have extensively tested the model on authoritative Chinese-English datasets across six domains: [General](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#general-domain), [Legal](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#law-and-medicine), [Medical](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#law-and-medicine), [Mathematics](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#mathematics-and-code), [Code](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#mathematics-and-code), and [Multilingual Translation](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#multilingual-translation). For more detailed evaluation results, please refer to [GitHub](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md). ### 7B Model Results | | **C-Eval** | **MMLU** | **CMMLU** | **Gaokao** | **AGIEval** | **BBH** | |:-----------------------:|:----------:|:--------:|:---------:|:----------:|:-----------:|:-------:| | | 5-shot | 5-shot | 5-shot | 5-shot | 5-shot | 3-shot | | **GPT-4** | 68.40 | 83.93 | 70.33 | 66.15 | 63.27 | 75.12 | | **GPT-3.5 Turbo** | 51.10 | 68.54 | 54.06 | 47.07 | 46.13 | 61.59 | | **LLaMA-7B** | 27.10 | 35.10 | 26.75 | 27.81 | 28.17 | 32.38 | | **LLaMA2-7B** | 28.90 | 45.73 | 31.38 | 25.97 | 26.53 | 39.16 | | **MPT-7B** | 27.15 | 27.93 | 26.00 | 26.54 | 24.83 | 35.20 | | **Falcon-7B** | 24.23 | 26.03 | 25.66 | 24.24 | 24.10 | 28.77 | | **ChatGLM2-6B** | 50.20 | 45.90 | 49.00 | 49.44 | 45.28 | 31.65 | | **[Baichuan-7B]** | 42.80 | 42.30 | 44.02 | 36.34 | 34.44 | 32.48 | | **[Baichuan2-7B-Base]** | 54.00 | 54.16 | 57.07 | 47.47 | 42.73 | 41.56 | ### 13B Model Results | | **C-Eval** | **MMLU** | **CMMLU** | **Gaokao** | **AGIEval** | **BBH** | |:---------------------------:|:----------:|:--------:|:---------:|:----------:|:-----------:|:-------:| | | 5-shot | 5-shot | 5-shot | 5-shot | 5-shot | 3-shot | | **GPT-4** | 68.40 | 83.93 | 70.33 | 66.15 | 63.27 | 75.12 | | **GPT-3.5 Turbo** | 51.10 | 68.54 | 54.06 | 47.07 | 46.13 | 61.59 | | **LLaMA-13B** | 28.50 | 46.30 | 31.15 | 28.23 | 28.22 | 37.89 | | **LLaMA2-13B** | 35.80 | 55.09 | 37.99 | 30.83 | 32.29 | 46.98 | | **Vicuna-13B** | 32.80 | 52.00 | 36.28 | 30.11 | 31.55 | 43.04 | | **Chinese-Alpaca-Plus-13B** | 38.80 | 43.90 | 33.43 | 34.78 | 35.46 | 28.94 | | **XVERSE-13B** | 53.70 | 55.21 | 58.44 | 44.69 | 42.54 | 38.06 | | **[Baichuan-13B-Base]** | 52.40 | 51.60 | 55.30 | 49.69 | 43.20 | 43.01 | | **[Baichuan2-13B-Base]** | 58.10 | 59.17 | 61.97 | 54.33 | 48.17 | 48.78 | ## 训练过程模型/Training Dynamics 除了训练了 2.6 万亿 Tokens 的 
[Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) 模型,我们还提供了在此之前的另外 11 个中间过程的模型(分别对应训练了约 0.2 ~ 2.4 万亿 Tokens)供社区研究使用 ([训练过程checkpoint下载](https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints))。下图给出了这些 checkpoints 在 C-Eval、MMLU、CMMLU 三个 benchmark 上的效果变化: In addition to the [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) model trained on 2.6 trillion tokens, we also offer 11 additional intermediate-stage models for community research, corresponding to training on approximately 0.2 to 2.4 trillion tokens each ([Intermediate Checkpoints Download](https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints)). The graph below shows the performance changes of these checkpoints on three benchmarks: C-Eval, MMLU, and CMMLU. ![checkpoint](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/resolve/main/checkpoints.jpeg) # <span id="Terms">声明与协议/Terms and Conditions</span> ## 声明 我们在此声明,我们的开发团队并未基于 Baichuan 2 模型开发任何应用,无论是在 iOS、Android、网页或任何其他平台。我们强烈呼吁所有使用者,不要利用 Baichuan 2 模型进行任何危害国家社会安全或违法的活动。另外,我们也要求使用者不要将 Baichuan 2 模型用于未经适当安全审查和备案的互联网服务。我们希望所有的使用者都能遵守这个原则,确保科技的发展能在规范和合法的环境下进行。 我们已经尽我们所能,来确保模型训练过程中使用的数据的合规性。然而,尽管我们已经做出了巨大的努力,但由于模型和数据的复杂性,仍有可能存在一些无法预见的问题。因此,如果由于使用 Baichuan 2 开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。 We hereby declare that our team has not developed any applications based on Baichuan 2 models, not on iOS, Android, the web, or any other platform. We strongly call on all users not to use Baichuan 2 models for any activities that harm national / social security or violate the law. Also, we ask users not to use Baichuan 2 models for Internet services that have not undergone appropriate security reviews and filings. We hope that all users can abide by this principle and ensure that the development of technology proceeds in a regulated and legal environment. We have done our best to ensure the compliance of the data used in the model training process. However, despite our considerable efforts, there may still be some unforeseeable issues due to the complexity of the model and data. Therefore, if any problems arise due to the use of Baichuan 2 open-source models, including but not limited to data security issues, public opinion risks, or any risks and problems brought about by the model being misled, abused, spread or improperly exploited, we will not assume any responsibility. ## 协议 社区使用 Baichuan 2 模型需要遵循 [Apache 2.0](https://github.com/baichuan-inc/Baichuan2/blob/main/LICENSE) 和[《Baichuan 2 模型社区许可协议》](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/resolve/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf)。Baichuan 2 模型支持商业用途,如果您计划将 Baichuan 2 模型或其衍生品用于商业目的,请您确认您的主体符合以下情况: 1. 您或您的关联方的服务或产品的日均用户活跃量(DAU)低于100万。 2. 您或您的关联方不是软件服务提供商、云服务提供商。 3. 您或您的关联方不存在将授予您的商用许可,未经百川许可二次授权给其他第三方的可能。 在符合以上条件的前提下,您需要通过以下联系邮箱 opensource@baichuan-inc.com ,提交《Baichuan 2 模型社区许可协议》要求的申请材料。审核通过后,百川将特此授予您一个非排他性、全球性、不可转让、不可再许可、可撤销的商用版权许可。 The community usage of Baichuan 2 model requires adherence to [Apache 2.0](https://github.com/baichuan-inc/Baichuan2/blob/main/LICENSE) and [Community License for Baichuan2 Model](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/resolve/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf). The Baichuan 2 model supports commercial use. 
If you plan to use the Baichuan 2 model or its derivatives for commercial purposes, please ensure that your entity meets the following conditions: 1. The Daily Active Users (DAU) of your or your affiliate's service or product is less than 1 million. 2. Neither you nor your affiliates are software service providers or cloud service providers. 3. There is no possibility for you or your affiliates to grant the commercial license given to you, to reauthorize it to other third parties without Baichuan's permission. Upon meeting the above conditions, you need to submit the application materials required by the Baichuan 2 Model Community License Agreement via the following contact email: opensource@baichuan-inc.com. Once approved, Baichuan will hereby grant you a non-exclusive, global, non-transferable, non-sublicensable, revocable commercial copyright license. [GitHub]:https://github.com/baichuan-inc/Baichuan2 [Baichuan2]:https://github.com/baichuan-inc/Baichuan2 [Baichuan-7B]:https://huggingface.co/baichuan-inc/Baichuan-7B [Baichuan2-7B-Base]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Base [Baichuan2-7B-Chat]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat [Baichuan2-7B-Chat-4bits]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat-4bits [Baichuan-13B-Base]:https://huggingface.co/baichuan-inc/Baichuan-13B-Base [Baichuan2-13B-Base]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Base [Baichuan2-13B-Chat]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat [Baichuan2-13B-Chat-4bits]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat-4bits [通用]:https://github.com/baichuan-inc/Baichuan2#%E9%80%9A%E7%94%A8%E9%A2%86%E5%9F%9F [法律]:https://github.com/baichuan-inc/Baichuan2#%E6%B3%95%E5%BE%8B%E5%8C%BB%E7%96%97 [医疗]:https://github.com/baichuan-inc/Baichuan2#%E6%B3%95%E5%BE%8B%E5%8C%BB%E7%96%97 [数学]:https://github.com/baichuan-inc/Baichuan2#%E6%95%B0%E5%AD%A6%E4%BB%A3%E7%A0%81 [代码]:https://github.com/baichuan-inc/Baichuan2#%E6%95%B0%E5%AD%A6%E4%BB%A3%E7%A0%81 [多语言翻译]:https://github.com/baichuan-inc/Baichuan2#%E5%A4%9A%E8%AF%AD%E8%A8%80%E7%BF%BB%E8%AF%91 [《Baichuan 2 模型社区许可协议》]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf [邮件申请]: mailto:opensource@baichuan-inc.com [Email]: mailto:opensource@baichuan-inc.com [opensource@baichuan-inc.com]: mailto:opensource@baichuan-inc.com [训练过程heckpoint下载]: https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints [百川智能]: https://www.baichuan-ai.com
13,270
[ [ -0.0249176025390625, -0.0513916015625, 0.0029315948486328125, 0.02880859375, -0.021087646484375, -0.0033435821533203125, -0.0196990966796875, -0.030670166015625, 0.020294189453125, 0.00737762451171875, -0.03558349609375, -0.037628173828125, -0.04937744140625, ...
Helsinki-NLP/opus-mt-ga-en
2023-08-16T11:37:45.000Z
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ga", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
Helsinki-NLP
null
null
Helsinki-NLP/opus-mt-ga-en
0
1,065
transformers
2022-03-02T23:29:04
--- language: - ga - en tags: - translation license: apache-2.0 --- ### gle-eng * source group: Irish * target group: English * OPUS readme: [gle-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gle-eng/README.md) * model: transformer-align * source language(s): gle * target language(s): eng * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.gle.eng | 51.6 | 0.672 | ### System Info: - hf_name: gle-eng - source_languages: gle - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gle-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ga', 'en'] - src_constituents: {'gle'} - tgt_constituents: {'eng'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.test.txt - src_alpha3: gle - tgt_alpha3: eng - short_pair: ga-en - chrF2_score: 0.672 - bleu: 51.6 - brevity_penalty: 1.0 - ref_len: 11247.0 - src_name: Irish - tgt_name: English - train_date: 2020-06-17 - src_alpha2: ga - tgt_alpha2: en - prefer_old: False - long_pair: gle-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
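The card lists benchmarks but no usage snippet; a minimal sketch with the transformers Marian classes (the Irish input sentence is just an illustrative example):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ga-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate Irish to English
batch = tokenizer(["Tá an aimsir go maith inniu."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```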
2,051
[ [ -0.0213775634765625, -0.041534423828125, 0.0278472900390625, 0.0224456787109375, -0.0234222412109375, -0.0140533447265625, -0.0236053466796875, -0.038116455078125, 0.022247314453125, 0.0135345458984375, -0.04290771484375, -0.053955078125, -0.038818359375, 0....
bhadresh-savani/distilbert-base-uncased-sentiment-sst2
2022-06-15T11:48:33.000Z
[ "transformers", "pytorch", "tf", "jax", "distilbert", "text-classification", "en", "dataset:sst2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
bhadresh-savani
null
null
bhadresh-savani/distilbert-base-uncased-sentiment-sst2
0
1,065
transformers
2022-03-02T23:29:05
--- language: en license: apache-2.0 datasets: - sst2 --- # distilbert-base-uncased-sentiment-sst2 This model classifies whether a sentence expresses positive or negative sentiment. ## Dataset: The Stanford Sentiment Treebank (SST-2) from the GLUE benchmark ## Results:
```
***** eval metrics *****
  epoch                   =        3.0
  eval_accuracy           =     0.9094
  eval_loss               =     0.3514
  eval_runtime            = 0:00:03.60
  eval_samples            =        872
  eval_samples_per_second =    242.129
  eval_steps_per_second   =     30.266
```
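A minimal usage sketch with the 🤗 pipeline API (the example sentence is illustrative, and the exact label names depend on the model config):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="bhadresh-savani/distilbert-base-uncased-sentiment-sst2",
)

print(classifier("This movie was absolutely wonderful."))
# -> [{'label': ..., 'score': ...}]
```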
557
[ [ -0.0186614990234375, -0.08502197265625, 0.0394287109375, 0.0238494873046875, -0.040191650390625, 0.002178192138671875, 0.0023555755615234375, 0.0165557861328125, 0.0262451171875, 0.007793426513671875, -0.03863525390625, -0.0465087890625, -0.059539794921875, ...
Yntec/HitenDiffusion
2023-09-23T23:43:10.000Z
[ "diffusers", "Anime", "andite", "text-to-image", "stable-diffusion", "stable-diffusion-diffusers", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
Yntec
null
null
Yntec/HitenDiffusion
1
1,065
diffusers
2023-09-23T22:43:29
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - Anime - andite - text-to-image - stable-diffusion - stable-diffusion-diffusers - diffusers --- # Hiten Diffusion andite's Hiten model from https://huggingface.co/LMFResearchSociety/andite-finetunes-backup Samples and prompt: ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/nQD-I6F4b_fXZDv6r_wMt.png) ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/4n82KM9OBBrT71UDWGYW4.png) sitting girl, Pretty CUTE LITTLE, gorgeous hair, DETAILED CHIBI EYES, Magazine ad, iconic, 1943, sharp focus. acrylic art by ROSSDRAWS and Clay Mann and ilya kuvshinov
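The card shows samples but no loading code; a minimal diffusers sketch, assuming the repo loads as a standard StableDiffusionPipeline (as its tags indicate):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/HitenDiffusion", torch_dtype=torch.float16
).to("cuda")

prompt = "sitting girl, pretty cute little, gorgeous hair, detailed chibi eyes, sharp focus"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("hiten_sample.png")
```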
726
[ [ -0.0292816162109375, -0.0782470703125, 0.032196044921875, 0.0158538818359375, 0.01001739501953125, -0.01003265380859375, 0.011993408203125, -0.0261993408203125, 0.06341552734375, 0.041229248046875, -0.052764892578125, -0.042694091796875, -0.033050537109375, ...
UCSC-VLAA/ViT-bigG-14-CLIPA-datacomp1B
2023-10-20T21:52:47.000Z
[ "open_clip", "clip", "zero-shot-image-classification", "dataset:mlfoundations/datacomp_1b", "arxiv:2306.15658", "arxiv:2305.07017", "license:apache-2.0", "region:us" ]
zero-shot-image-classification
UCSC-VLAA
null
null
UCSC-VLAA/ViT-bigG-14-CLIPA-datacomp1B
1
1,065
open_clip
2023-10-20T21:43:50
--- tags: - clip library_name: open_clip pipeline_tag: zero-shot-image-classification license: apache-2.0 datasets: - mlfoundations/datacomp_1b --- # Model card for ViT-bigG-14-CLIPA-datacomp1B A CLIPA-v2 model... ## Model Details - **Model Type:** Contrastive Image-Text, Zero-Shot Image Classification. - **Original:** https://github.com/UCSC-VLAA/CLIPA - **Dataset:** mlfoundations/datacomp_1b - **Papers:** - CLIPA-v2: Scaling CLIP Training with 81.1% Zero-shot ImageNet Accuracy within a $10,000 Budget; An Extra $4,000 Unlocks 81.8% Accuracy: https://arxiv.org/abs/2306.15658 - An Inverse Scaling Law for CLIP Training: https://arxiv.org/abs/2305.07017 ## Model Usage ### With OpenCLIP
```python
import torch
import torch.nn.functional as F
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer

# Use the full hub repo id so open_clip can resolve this model
model, preprocess = create_model_from_pretrained('hf-hub:UCSC-VLAA/ViT-bigG-14-CLIPA-datacomp1B')
tokenizer = get_tokenizer('hf-hub:UCSC-VLAA/ViT-bigG-14-CLIPA-datacomp1B')

image = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
image = preprocess(image).unsqueeze(0)

text = tokenizer(["a diagram", "a dog", "a cat", "a beignet"], context_length=model.context_length)

with torch.no_grad(), torch.cuda.amp.autocast():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)

    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probs:", text_probs)  # prints: [[0., 0., 0., 1.0]]
```
## Citation
```bibtex
@article{li2023clipav2, title={CLIPA-v2: Scaling CLIP Training with 81.1% Zero-shot ImageNet Accuracy within a $10,000 Budget; An Extra $4,000 Unlocks 81.8% Accuracy}, author={Xianhang Li and Zeyu Wang and Cihang Xie}, journal={arXiv preprint arXiv:2306.15658}, year={2023}, }
```
```bibtex
@inproceedings{li2023clipa, title={An Inverse Scaling Law for CLIP Training}, author={Xianhang Li and Zeyu Wang and Cihang Xie}, booktitle={NeurIPS}, year={2023}, }
```
2,219
[ [ -0.027984619140625, -0.035247802734375, 0.00798797607421875, 0.019775390625, -0.0305023193359375, -0.0232391357421875, -0.006023406982421875, -0.0281829833984375, 0.036956787109375, 0.01486968994140625, -0.037811279296875, -0.037109375, -0.052520751953125, -...
HuggingFaceH4/mistral-7b-sft-beta
2023-10-26T14:26:06.000Z
[ "transformers", "pytorch", "tensorboard", "mistral", "text-generation", "generated_from_trainer", "en", "dataset:HuggingFaceH4/ultrachat_200k", "license:mit", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
HuggingFaceH4
null
null
HuggingFaceH4/mistral-7b-sft-beta
7
1,065
transformers
2023-10-26T13:43:58
--- license: mit base_model: mistralai/Mistral-7B-v0.1 tags: - generated_from_trainer model-index: - name: mistral-7b-sft-beta results: [] datasets: - HuggingFaceH4/ultrachat_200k language: - en --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Model Card for Mistral 7B SFT β This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the HuggingFaceH4/ultrachat_200k dataset. It is the SFT model that was used to train Zephyr-7B-β with Direct Preference Optimization. It achieves the following results on the evaluation set: - Loss: 0.9399 ## Model description - **Model type:** A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets. - **Language(s) (NLP):** Primarily English - **License:** MIT - **Finetuned from model:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/huggingface/alignment-handbook ## Intended uses & limitations The model was fine-tuned with [🤗 TRL's](https://github.com/huggingface/trl) `SFTTrainer` on a filtered and preprocessed version of the [`UltraChat`](https://huggingface.co/datasets/stingning/ultrachat) dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT. Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate

import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="HuggingFaceH4/mistral-7b-sft-beta", torch_dtype=torch.bfloat16, device_map="auto")

# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {
        "role": "system",
        "content": "You are a friendly chatbot who always responds in the style of a pirate",
    },
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
# <|system|>
# You are a friendly chatbot who always responds in the style of a pirate.</s>
# <|user|>
# How many helicopters can a human eat in one sitting?</s>
# <|assistant|>
# Ah, me hearty matey! But yer question be a puzzler! A human cannot eat a helicopter in one sitting, as helicopters are not edible. They be made of metal, plastic, and other materials, not food!
``` ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - distributed_type: multi-GPU - num_devices: 16 - gradient_accumulation_steps: 4 - total_train_batch_size: 512 - total_eval_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.9367 | 0.67 | 272 | 0.9397 | ### Framework versions - Transformers 4.35.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.14.0
3,671
[ [ -0.026519775390625, -0.06500244140625, -0.0017194747924804688, 0.01558685302734375, -0.02374267578125, -0.009552001953125, -0.0084686279296875, -0.028167724609375, 0.0251312255859375, 0.0272216796875, -0.04296875, -0.0205078125, -0.042724609375, 0.0080566406...
DMetaSoul/sbert-chinese-general-v2
2022-04-04T07:22:23.000Z
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "semantic-search", "chinese", "endpoints_compatible", "region:us", "has_space" ]
sentence-similarity
DMetaSoul
null
null
DMetaSoul/sbert-chinese-general-v2
13
1,062
sentence-transformers
2022-03-25T08:59:33
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers - semantic-search - chinese --- # DMetaSoul/sbert-chinese-general-v2 This model is based on the [bert-base-chinese](https://huggingface.co/bert-base-chinese) BERT model and was trained on the million-scale semantic-similarity dataset [SimCLUE](https://github.com/CLUEbenchmark/SimCLUE). It is intended for **general-purpose semantic matching** scenarios, and our results indicate that it **generalizes better** across a wide range of tasks. Note: a [lightweight distilled version](https://huggingface.co/DMetaSoul/sbert-chinese-general-v2-distill) of this model has also been open-sourced! # Usage ## 1. Sentence-Transformers To use this model via the [sentence-transformers](https://www.SBERT.net) framework, first install it:
```
pip install -U sentence-transformers
```
Then load the model and extract sentence embeddings with the following code:
```python
from sentence_transformers import SentenceTransformer

sentences = ["我的儿子!他猛然间喊道,我的儿子在哪儿?", "我的儿子呢!他突然喊道,我的儿子在哪里?"]

model = SentenceTransformer('DMetaSoul/sbert-chinese-general-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## 2. HuggingFace Transformers If you prefer not to use [sentence-transformers](https://www.SBERT.net), you can also load the model with HuggingFace Transformers and extract sentence embeddings as follows:
```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ["我的儿子!他猛然间喊道,我的儿子在哪儿?", "我的儿子呢!他突然喊道,我的儿子在哪里?"]

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('DMetaSoul/sbert-chinese-general-v2')
model = AutoModel.from_pretrained('DMetaSoul/sbert-chinese-general-v2')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation The model was evaluated on several public semantic-matching datasets by computing the correlation between embedding similarity and the gold labels:
| | **csts_dev** | **csts_test** | **afqmc** | **lcqmc** | **bqcorpus** | **pawsx** | **xiaobu** |
| ---------------------------- | ------------ | ------------- | ---------- | ---------- | ------------ | ---------- | ---------- |
| **sbert-chinese-general-v1** | **84.54%** | **82.17%** | 23.80% | 65.94% | 45.52% | 11.52% | 48.51% |
| **sbert-chinese-general-v2** | 77.20% | 72.60% | **36.80%** | **76.92%** | **49.63%** | **16.24%** | **63.16%** |
This table compares the model with our previously released [sbert-chinese-general-v1](https://huggingface.co/DMetaSoul/sbert-chinese-general-v1); as the results show, this model generalizes better across multiple tasks. ## Citing & Authors E-mail: xiaowenbin@dmetasoul.com
3,020
[ [ -0.022979736328125, -0.042144775390625, 0.00408935546875, 0.031341552734375, -0.037445068359375, -0.021209716796875, -0.01520538330078125, -0.012298583984375, 0.0202484130859375, 0.01861572265625, -0.0450439453125, -0.034088134765625, -0.051971435546875, 0.0...
wanglab/medsam-vit-base
2023-05-03T07:03:04.000Z
[ "transformers", "pytorch", "sam", "mask-generation", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
wanglab
null
null
wanglab/medsam-vit-base
8
1,062
transformers
2023-04-29T15:27:55
--- license: apache-2.0 --- # Model Card for Segment Anything Model (SAM) - ViT Base (ViT-B) version, fine-tuned for medical image segmentation <p> <img src="https://s3.amazonaws.com/moonup/production/uploads/62441d1d9fdefb55a0b7d12c/F1LWM9MXjHJsiAtgBFpDP.png" alt="Model architecture"> <em> Detailed architecture of Segment Anything Model (SAM).</em> </p> # Table of Contents 0. [TL;DR](#TL;DR) 1. [Model Details](#model-details) 2. [Usage](#usage) 3. [Citation](#citation) # TL;DR [Link to original SAM repository](https://github.com/facebookresearch/segment-anything) [Link to original MedSAM repository](https://github.com/bowang-lab/medsam) | <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/sam-beancans.png" alt="Snow" width="600" height="600"> | <img src="https://s3.amazonaws.com/moonup/production/uploads/62441d1d9fdefb55a0b7d12c/wHXbJx1oXqHCYNeUNKHs8.png" alt="Forest" width="600" height="600"> | <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/sam-car-seg.png" alt="Mountains" width="600" height="600"> | |---------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------| The **Segment Anything Model (SAM)** produces high-quality object masks from input prompts such as points or boxes, and it can be used to generate masks for all objects in an image. It has been trained on a [dataset](https://segment-anything.com/dataset/index.html) of 11 million images and 1.1 billion masks, and has strong zero-shot performance on a variety of segmentation tasks. The abstract of the paper states: > We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation. Using our efficient model in a data collection loop, we built the largest segmentation dataset to date (by far), with over 1 billion masks on 11M licensed and privacy respecting images. The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks. We evaluate its capabilities on numerous tasks and find that its zero-shot performance is impressive -- often competitive with or even superior to prior fully supervised results. We are releasing the Segment Anything Model (SAM) and corresponding dataset (SA-1B) of 1B masks and 11M images at [https://segment-anything.com](https://segment-anything.com) to foster research into foundation models for computer vision. **Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy-pasted from the original [SAM model card](https://github.com/facebookresearch/segment-anything). # Model Details The SAM model is made up of 3 modules: - The `VisionEncoder`: a ViT-based image encoder. It computes the image embeddings using attention on patches of the image. Relative Positional Embedding is used. - The `PromptEncoder`: generates embeddings for points and bounding boxes - The `MaskDecoder`: a two-way transformer which performs cross-attention between the image embedding and the point embeddings, and between the point embeddings and the image embedding; its outputs are fed to the `Neck`. - The `Neck`: predicts the output masks based on the contextualized masks produced by the `MaskDecoder`. # Usage Refer to the demo notebooks: - [this one](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/SAM/Run_inference_with_MedSAM_using_HuggingFace_Transformers.ipynb) showcasing inference with MedSAM - [this one](https://github.com/huggingface/notebooks/blob/main/examples/segment_anything.ipynb) showcasing general usage of SAM, as well as the [docs](https://huggingface.co/docs/transformers/main/en/model_doc/sam). # Citation If you use this model, please use the following BibTeX entry.
```
@article{kirillov2023segany, title={Segment Anything}, author={Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Doll{\'a}r, Piotr and Girshick, Ross}, journal={arXiv:2304.02643}, year={2023} }
```
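The card defers usage to notebooks; a minimal bounding-box prompt sketch with the transformers SAM classes follows (the image path and box coordinates are placeholders):

```python
import torch
from PIL import Image
from transformers import SamModel, SamProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
model = SamModel.from_pretrained("wanglab/medsam-vit-base").to(device)
processor = SamProcessor.from_pretrained("wanglab/medsam-vit-base")

image = Image.open("ct_slice.png").convert("RGB")  # placeholder medical image
input_boxes = [[[95.0, 255.0, 190.0, 350.0]]]      # placeholder xyxy box prompt

inputs = processor(image, input_boxes=input_boxes, return_tensors="pt").to(device)
with torch.no_grad():
    outputs = model(**inputs, multimask_output=False)

# Upscale the predicted mask back to the original image resolution
masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(),
    inputs["original_sizes"].cpu(),
    inputs["reshaped_input_sizes"].cpu(),
)
print(masks[0].shape)
```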
4,515
[ [ -0.04345703125, -0.04510498046875, 0.042572021484375, 0.00045108795166015625, -0.03326416015625, -0.018402099609375, 0.027587890625, -0.0360107421875, 0.046142578125, 0.03216552734375, -0.04510498046875, -0.049285888671875, -0.046905517578125, -0.01654052734...
TheBloke/vicuna-13B-v1.5-AWQ
2023-09-27T12:50:21.000Z
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:2307.09288", "arxiv:2306.05685", "license:llama2", "text-generation-inference", "region:us" ]
text-generation
TheBloke
null
null
TheBloke/vicuna-13B-v1.5-AWQ
0
1,062
transformers
2023-09-19T04:22:21
--- license: llama2 model_name: Vicuna 13B v1.5 base_model: lmsys/vicuna-13b-v1.5 inference: false model_creator: lmsys model_type: llama prompt_template: 'A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user''s questions. USER: {prompt} ASSISTANT: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Vicuna 13B v1.5 - AWQ - Model creator: [lmsys](https://huggingface.co/lmsys) - Original model: [Vicuna 13B v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5) <!-- description start --> ## Description This repo contains AWQ model files for [lmsys's Vicuna 13B v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5). ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference. It is also now supported by continuous batching server [vLLM](https://github.com/vllm-project/vllm), allowing use of AWQ models for high-throughput concurrent inference in multi-user server scenarios. Note that, at the time of writing, overall throughput is still lower than running vLLM with unquantised models, however using AWQ enables using much smaller GPUs which can lead to easier deployment and overall cost savings. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/vicuna-13B-v1.5-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/vicuna-13B-v1.5-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/vicuna-13B-v1.5-GGUF) * [lmsys's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/lmsys/vicuna-13b-v1.5) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Vicuna ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT: ``` <!-- prompt-template end --> <!-- README_AWQ.md-provided-files start --> ## Provided files and AWQ parameters For my first release of AWQ models, I am releasing 128g models only. 
I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM. Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/vicuna-13B-v1.5-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.25 GB |
<!-- README_AWQ.md-provided-files end --> <!-- README_AWQ.md-use-from-vllm start --> ## Serving this model from vLLM Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/). - When using vLLM as a server, pass the `--quantization awq` parameter, for example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/vicuna-13B-v1.5-AWQ --quantization awq
```
When using vLLM from Python code, pass the `quantization=awq` parameter, for example:
```python
from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="TheBloke/vicuna-13B-v1.5-AWQ", quantization="awq")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end --> <!-- README_AWQ.md-use-from-python start --> ## How to use this AWQ model from Python code ### Install the necessary packages Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.0.2 or later
```shell
pip3 install autoawq
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### You can then try the following example code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_name_or_path = "TheBloke/vicuna-13B-v1.5-AWQ"

# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
                                          trust_remote_code=False, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)

prompt = "Tell me about AI"
prompt_template = f'''A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:
'''

print("\n\n*** Generate:")

tokens = tokenizer(
    prompt_template,
    return_tensors='pt'
).input_ids.cuda()

# Generate output
generation_output = model.generate(
    tokens,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    max_new_tokens=512
)

print("Output: ", tokenizer.decode(generation_output[0]))

# Inference can also be done using transformers' pipeline
from transformers import pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_AWQ.md-use-from-python end --> <!-- README_AWQ.md-compatibility start --> ## Compatibility The files provided are tested to work with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), and [vLLM](https://github.com/vllm-project/vllm). [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is not yet compatible with AWQ, but a PR is open which should bring support soon: [TGI PR #781](https://github.com/huggingface/text-generation-inference/issues/781). <!-- README_AWQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J.
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: lmsys's Vicuna 13B v1.5 # Vicuna Model Card ## Model Details Vicuna is a chat assistant trained by fine-tuning Llama 2 on user-shared conversations collected from ShareGPT. - **Developed by:** [LMSYS](https://lmsys.org/) - **Model type:** An auto-regressive language model based on the transformer architecture - **License:** Llama 2 Community License Agreement - **Finetuned from model:** [Llama 2](https://arxiv.org/abs/2307.09288) ### Model Sources - **Repository:** https://github.com/lm-sys/FastChat - **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/ - **Paper:** https://arxiv.org/abs/2306.05685 - **Demo:** https://chat.lmsys.org/ ## Uses The primary use of Vicuna is research on large language models and chatbots. The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence. ## How to Get Started with the Model - Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights - APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api ## Training Details Vicuna v1.5 is fine-tuned from Llama 2 with supervised instruction fine-tuning. The training data is around 125K conversations collected from ShareGPT.com. See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf). ## Evaluation ![Evaluation Results](https://github.com/lm-sys/lm-sys.github.io/blob/main/public/images/webdata/vicuna_v1.5_eval.png?raw=true) Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard). ## Difference between different versions of Vicuna See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
12,392
[ [ -0.0400390625, -0.0589599609375, 0.029083251953125, -0.0013341903686523438, -0.01611328125, -0.01416015625, 0.006465911865234375, -0.036773681640625, -0.002147674560546875, 0.02392578125, -0.051055908203125, -0.0350341796875, -0.018218994140625, -0.005107879...
jphme/em_german_mistral_v01
2023-10-12T11:13:24.000Z
[ "transformers", "safetensors", "mistral", "text-generation", "pytorch", "german", "deutsch", "de", "license:apache-2.0", "text-generation-inference", "region:us" ]
text-generation
jphme
null
null
jphme/em_german_mistral_v01
11
1,062
transformers
2023-09-28T09:40:33
--- inference: false language: - de library_name: transformers license: apache-2.0 model_creator: jphme model_name: EM German model_type: mistral pipeline_tag: text-generation prompt_template: 'Du bist ein hilfreicher Assistent. USER: Was ist 1+1? ASSISTANT:' tags: - pytorch - german - deutsch - mistral --- ![EM Logo](em_model_logo_web.jpeg) *Please note that the Mistral architecture is still recent and possibly not yet supported by all libraries. In case of any problems, please try updating your environment or using a different format/base model.* # Table of Contents 1. [Introduction](#introduction) 2. [Links & Demos](#links--demos) - [Model Links](#model-links) - [Demos](#demos) 3. [Prompt Format](#prompt-format) 4. [Example Output](#example-output) 5. [Acknowledgements](#acknowledgements) 6. [Contact](#contact) 7. [Disclaimer](#disclaimer) # Introduction **EM German** is a Llama2/Mistral/LeoLM-based model family, finetuned on a large dataset of various instructions in the German language. The models are optimized for German text, providing proficiency in understanding, generating, and interacting with German language content. We offer versions based on 7b, 13b and 70b Llama-2, Mistral and LeoLM (Llama-2/Mistral with continued pretraining on German texts) models. Please find all information, example outputs, the special RAG prompt format, and eval results for the EM German model family in [our Github Repository](https://github.com/jphme/EM_German) ([Deutsche Version](https://github.com/jphme/EM_German/blob/main/README_DE.md)). You will also find instructions on how to run the models with a GUI (GPT4All/LM Studio). # Links & Demos ## Model Links Should you only try one model version, I strongly recommend the **[LeoLM Mistral](https://huggingface.co/jphme/em_german_leo_mistral)** model, which offers by far the best combination of performance and computing requirements!
| Base Model | HF | GPTQ | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| Llama2 7b | [Link](https://huggingface.co/jphme/em_german_7b_v01) | [Link](https://huggingface.co/TheBloke/em_german_7b_v01-GPTQ) | [Link](https://huggingface.co/TheBloke/em_german_7b_v01-GGUF) | [Link](https://huggingface.co/TheBloke/em_german_7b_v01-AWQ) |
| Llama2 13b | [Link](https://huggingface.co/jphme/em_german_13b_v01) | [Link](https://huggingface.co/TheBloke/em_german_13b_v01-GPTQ) | [Link](https://huggingface.co/TheBloke/em_german_13b_v01-GGUF) | [Link](https://huggingface.co/TheBloke/em_german_13b_v01-AWQ) |
| Llama2 70b | [Link](https://huggingface.co/jphme/em_german_70b_v01) | [Link](https://huggingface.co/TheBloke/em_german_70b_v01-GPTQ) | [Link](https://huggingface.co/TheBloke/em_german_70b_v01-GGUF) | [Link](https://huggingface.co/TheBloke/em_german_70b_v01-AWQ) |
| [Mistral 7b](https://huggingface.co/mistralai/Mistral-7B-v0.1) | [Link](https://huggingface.co/jphme/em_german_mistral_v01) | [Link](https://huggingface.co/TheBloke/em_german_mistral_v01-GPTQ) | [Link](https://huggingface.co/TheBloke/em_german_mistral_v01-GGUF) | [Link](https://huggingface.co/TheBloke/em_german_mistral_v01-AWQ) |
| [LeoLM 7b](https://huggingface.co/LeoLM/leo-hessianai-7b) | [Link](https://huggingface.co/jphme/em_german_7b_leo) | [Link](https://huggingface.co/jphme/em_german_7b_leo_gptq) | [Link](https://huggingface.co/jphme/em_german_7b_leo_gguf) | tbc |
| [LeoLM 13b](https://huggingface.co/LeoLM/leo-hessianai-13b) | soon | soon | [Link](https://huggingface.co/jphme/em_german_13b_leo_gguf) | tbc |
| [LeoLM Mistral](https://huggingface.co/LeoLM/leo-mistral-hessianai-7b) | [Link](https://huggingface.co/jphme/em_german_leo_mistral) | [Link](https://huggingface.co/TheBloke/em_german_leo_mistral-GPTQ) | [Link](https://huggingface.co/TheBloke/em_german_leo_mistral-GGUF) | [Link](https://huggingface.co/TheBloke/em_german_leo_mistral-AWQ) |
### Notes about the different versions: See also the [comparison of example outputs](https://github.com/jphme/EM_German/blob/main/example_output_comparison.md) for a comparison of (7b) model capabilities. If you get unsatisfying results with one or another EM German version, please try a different (and/or larger) model or version for your use case. ## Demos: You can use some of the models with **free** google Colab instances (e.g. the 7b model in 8-bit or the 13b model with GPTQ): * [Example Colab Notebook for 13b with GPTQ](https://colab.research.google.com/drive/1IJfJdVwGkfe5MYOqHptystR3FBeEUdGn?usp=sharing) * [Example Colab Notebook for 7b with 8bit-Loading](https://colab.research.google.com/drive/1bsv6vkLM4AlCpSyXA6ol9P32zxZmf7Zu?usp=sharing) * [Example Colab Notebook for 7b Mistral GGUF with Grammar-based structured output](https://colab.research.google.com/drive/17O-a3D4BnVc_V9Of3dFfed18v1ULu1Sv?usp=sharing) # Prompt Format This model follows the Vicuna format without linebreaks (but should work with linebreaks as well). The format is as follows:
```
Du bist ein hilfreicher Assistent. USER: <instruction> ASSISTANT:
```
You can swap the standard system prompt for a better suited one (see below for RAG-tasks). # Example Output Please find a selection of various example outputs for all 7b versions of EM German [here](https://github.com/jphme/EM_German/blob/main/example_output_comparison.md). **Prompt:** > Erkläre die Bedeutung von drei wesentlichen Elementen der Bilanzierung und des Accountings nach IFRS für Unternehmen, die auf internationaler Ebene tätig sind.
**Antwort EM German Leo Mistral 7b:**
```
Die International Financial Reporting Standards (IFRS) sind eine Reihe von internationalen Rechnungslegungsstandards, die von der International Accounting Standards Board (IASB) entwickelt wurden. Sie sollen dazu beitragen, dass Finanzberichte vergleichbar, zuverlässig und relevant sind.

1. Fair Value: Dieses Konzept ist ein zentraler Bestandteil der IFRS. Es besagt, dass Vermögenswerte und Schulden zum fairen Wert bilanziert werden sollten, was dem Preis entspricht, zu dem sie zwischen Marktteilnehmern gehandelt werden könnten. Dadurch wird sichergestellt, dass das Unternehmen eine genaue Darstellung seiner finanziellen Lage liefert.

2. Konsolidierung: Die IFRS verlangen von Unternehmen, ihre Finanzergebnisse auf konsolidierter Basis darzustellen. Das bedeutet, dass alle Tochtergesellschaften und verbundenen Unternehmen in die Bilanz des Mutterunternehmens aufgenommen werden. Dadurch wird ein klareres Bild von der finanziellen Situation des Konzerns geschaffen.

3. Vollständigkeitsprinzip: Die IFRS verlangen von Unternehmen, alle relevanten Informationen offenzulegen. Dies umfasst sowohl positive als auch negative Faktoren, wie zum Beispiel Gewinne und Verluste, Einnahmen und Ausgaben sowie Risiken und Unsicherheiten. Durch dieses Prinzip wird sichergestellt, dass Investoren, Kreditgeber und andere Stakeholder eine umfassende Einschätzung der finanziellen Lage des Unternehmens vornehmen können.
```
**Prompt:** > Ein Mensch atmet durchschnittlich 16 Mal pro Minute. Wie oft atmet er an einem Tag? **Antwort EM German Leo Mistral 7b:**
```
Zuerst berechnen wir, wie oft eine Person in einer Stunde atmet. Wir wissen, dass eine Stunde 60 Minuten hat, also:

Atemzüge pro Stunde = Atemfrequenz * Minuten pro Stunde
= 16 Atemzüge pro Minute * 60 Minuten
= 960 Atemzüge

Um herauszufinden, wie viele Atemzüge eine Person an einem Tag macht, müssen wir diese stündliche Rate mit der Anzahl der Stunden an einem Tag multiplizieren, also 24:

Atemzüge pro Tag = Atemzüge pro Stunde * Stunden pro Tag
= 960 Atemzüge * 24 Stunden
= 23.040 Atemzüge

Also macht ein durchschnittlicher Mensch etwa 23.040 Atemzüge an einem Tag.
```
(For more examples, please visit [our Github Repository](https://github.com/jphme/EM_German).) # Acknowledgements: Many thanks to [winglian/caseus](https://huggingface.co/winglian) for his great work on Axolotl, which I used to train the EM models. I am also grateful to [Jon Durbin](https://huggingface.co/jondurbin) and his [Airoboros](https://huggingface.co/jondurbin/airoboros-l2-70b-2.2.1) models and code from which I borrowed many ideas and code snippets. Additionally, many thanks to [Björn Plüster](https://huggingface.co/bjoernp) and the LeoLM team for the outstanding pretraining work on LeoLM, and last but not least many, many thanks to [TheBloke](https://huggingface.co/TheBloke) for the preparation of quantized versions in all formats under the sun. The 70b model was trained with support of the [OVH Cloud Startup Program](https://startup.ovhcloud.com/en/). # Contact For detailed feedback & feature requests, please open an issue or get in contact with me via [my website](https://www.jph.me). *PS: We are also always interested in support for our startup [ellamind](https://ellamind.com), which will offer customized models for business applications in the future (we are currently still in stealth mode).
If you use our models for business applications and have advanced needs for specialized capabilities, please get in touch.* # Disclaimer: I am not responsible for the actions of third parties who use this model or the outputs of the model. This model should only be used for research purposes. The original base model license applies and is distributed with the model files.
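For a quick start in Python, a minimal sketch using standard transformers generation with the card's Vicuna-style prompt format (the generation parameters are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jphme/em_german_mistral_v01"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Prompt format from the card: system prompt, then "USER: ... ASSISTANT:" without linebreaks
prompt = "Du bist ein hilfreicher Assistent. USER: Was ist 1+1? ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```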
9,289
[ [ -0.04632568359375, -0.046051025390625, 0.021392822265625, 0.039794921875, -0.028778076171875, -0.0257568359375, -0.0027141571044921875, -0.043426513671875, 0.03826904296875, 0.0007686614990234375, -0.044586181640625, -0.0440673828125, -0.035919189453125, 0.0...
Raspberry-ai/monse_lora_version_1_2
2023-10-17T10:42:23.000Z
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "license:creativeml-openrail-m", "region:us", "has_space" ]
text-to-image
Raspberry-ai
null
null
Raspberry-ai/monse_lora_version_1_2
0
1,061
diffusers
2023-10-17T04:57:35
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - julian-raspberry-ai/monse_lora_v2 These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the Raspberry-ai/monse-v1 dataset. You can find some example images below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
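A minimal sketch for applying these adapter weights on top of the base model with diffusers (the prompt is illustrative; any trigger words are not documented in the card):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA adapter weights from this repo on top of the base model
pipe.load_lora_weights("Raspberry-ai/monse_lora_version_1_2")

image = pipe("a model wearing an elegant designer dress", num_inference_steps=30).images[0]
image.save("lora_sample.png")
```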
543
[ [ -0.0282745361328125, -0.04461669921875, 0.020538330078125, 0.022979736328125, -0.023895263671875, -0.0127105712890625, 0.0272216796875, -0.0136566162109375, 0.043670654296875, 0.046905517578125, -0.0609130859375, -0.028289794921875, -0.057464599609375, -0.01...
stablediffusionapi/animexl-xuebimix
2023-10-17T09:20:12.000Z
[ "diffusers", "stablediffusionapi.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
stablediffusionapi
null
null
stablediffusionapi/animexl-xuebimix
1
1,061
diffusers
2023-10-17T09:16:24
--- license: creativeml-openrail-m tags: - stablediffusionapi.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true --- # AnimeXL-xuebiMIX API Inference ![generated from stablediffusionapi.com](https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/generations/138251871697534084.png) ## Get API Key Get an API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment needed. Replace the key in the code below and change **model_id** to "animexl-xuebimix". Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs) Try the model for free: [Generate Images](https://stablediffusionapi.com/models/animexl-xuebimix) Model link: [View model](https://stablediffusionapi.com/models/animexl-xuebimix) Credits: [View credits](https://civitai.com/?query=AnimeXL-xuebiMIX) View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v4/dreambooth"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "animexl-xuebimix",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
2,486
[ [ -0.03424072265625, -0.058837890625, 0.033538818359375, 0.03204345703125, -0.0267181396484375, 0.00485992431640625, 0.022918701171875, -0.03912353515625, 0.030181884765625, 0.044158935546875, -0.058990478515625, -0.0635986328125, -0.0270538330078125, -0.00014...
Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2
2022-05-26T12:49:05.000Z
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "fi", "finnish", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event", "dataset:mozilla-foundation/common_voice_7_0", "arxiv:2111.09296", "license:apache-2.0", "model-index", "endpoints_compatible", "...
automatic-speech-recognition
Finnish-NLP
null
null
Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2
1
1,060
transformers
2022-03-27T18:10:56
--- license: apache-2.0 language: fi metrics: - wer - cer tags: - automatic-speech-recognition - fi - finnish - generated_from_trainer - hf-asr-leaderboard - robust-speech-event datasets: - mozilla-foundation/common_voice_7_0 model-index: - name: wav2vec2-xlsr-1b-finnish-lm-v2 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 7 type: mozilla-foundation/common_voice_7_0 args: fi metrics: - name: Test WER type: wer value: 4.09 - name: Test CER type: cer value: 0.88 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: FLEURS ASR type: google/fleurs args: fi_fi metrics: - name: Test WER type: wer value: 12.11 - name: Test CER type: cer value: 5.65 --- # Wav2vec2-xls-r-1b for Finnish ASR This acoustic model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) for Finnish ASR. The model has been fine-tuned with 275.6 hours of Finnish transcribed speech data. Wav2Vec2 XLS-R was introduced in [this paper](https://arxiv.org/abs/2111.09296) and first released at [this page](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#wav2vec-20). This repository also includes the Finnish KenLM language model used in the decoding phase with the acoustic model. **Note**: this model is exactly the same as the [aapot/wav2vec2-xlsr-1b-finnish-lm-v2](https://huggingface.co/aapot/wav2vec2-xlsr-1b-finnish-lm-v2) model, which has simply been copied/moved to this `Finnish-NLP` Hugging Face organization. ## Model description Wav2Vec2 XLS-R is Facebook AI's large-scale multilingual pretrained model for speech. It is pretrained on 436k hours of unlabeled speech, including VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107. It uses the wav2vec 2.0 objective, in 128 languages. You can read more about the pretrained model from [this blog](https://ai.facebook.com/blog/xls-r-self-supervised-speech-processing-for-128-languages) and [this paper](https://arxiv.org/abs/2111.09296). This model is a fine-tuned version of the pretrained model (1 billion parameter variant) for Finnish ASR. ## Intended uses & limitations You can use this model for the Finnish ASR (speech-to-text) task. ### How to use Check the [run-finnish-asr-models.ipynb](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2/blob/main/run-finnish-asr-models.ipynb) notebook in this repository for a detailed example of how to use this model. ### Limitations and bias This model was fine-tuned with audio samples whose maximum length was 20 seconds, so it most likely works best for fairly short audio clips of similar length. However, you can also try it with much longer audio and see how it works. If you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking). A vast majority of the data used for fine-tuning was from the Finnish Parliament dataset, so this model may not generalize well to very different domains such as everyday spoken Finnish with dialects. In addition, the audio in the datasets tends to be dominated by adult male speakers, so this model may not work as well for the speech of children and women, for example. The Finnish KenLM language model used in the decoding phase has been trained with text data from the audio transcriptions and from a subset of Finnish Wikipedia.
Thus, the decoder's language model may not generalize to very different language, for example everyday spoken language with dialects (especially because the Wikipedia data contains mostly formal Finnish). It may be beneficial to train your own KenLM language model for your domain language and use that in the decoding. ## Training data This model was fine-tuned with 275.6 hours of Finnish transcribed speech data from the following datasets:
| Dataset | Hours | % of total hours |
|:--------|:-----:|:----------------:|
| [Common Voice 7.0 Finnish train + evaluation + other splits](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | 9.70 h | 3.52 % |
| [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) | 0.24 h | 0.09 % |
| [VoxPopuli Finnish](https://github.com/facebookresearch/voxpopuli) | 21.97 h | 7.97 % |
| [CSS10 Finnish](https://github.com/kyubyong/css10) | 10.32 h | 3.74 % |
| [Aalto Finnish Parliament ASR Corpus](http://urn.fi/urn:nbn:fi:lb-2021051903) | 228.00 h | 82.73 % |
| [Finnish Broadcast Corpus](http://urn.fi/urn:nbn:fi:lb-2016042502) | 5.37 h | 1.95 % |
Datasets were filtered to include audio samples of at most 20 seconds in length. ## Training procedure This model was trained during the [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by Hugging Face. Training was done on a Tesla V100 GPU, sponsored by OVHcloud. The training script was provided by Hugging Face and is available [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/robust-speech-event/run_speech_recognition_ctc_bnb.py). We only modified its data loading for our custom datasets. For the KenLM language model training, we followed the [blog post tutorial](https://huggingface.co/blog/wav2vec2-with-ngram) provided by Hugging Face. Training data for the 5-gram KenLM were the text transcriptions of the audio training data and 100k random samples of the cleaned [Finnish Wikipedia](https://huggingface.co/datasets/wikipedia) (August 2021) dataset.
### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: [8-bit Adam](https://github.com/facebookresearch/bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 - mixed_precision_training: Native AMP The pretrained `facebook/wav2vec2-xls-r-1b` model was initialized with following hyperparameters: - attention_dropout: 0.094 - hidden_dropout: 0.047 - feat_proj_dropout: 0.04 - mask_time_prob: 0.082 - layerdrop: 0.041 - activation_dropout: 0.055 - ctc_loss_reduction: "mean" ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.7778 | 0.17 | 500 | 0.2851 | 0.3572 | | 0.5506 | 0.34 | 1000 | 0.1595 | 0.2130 | | 0.6569 | 0.5 | 1500 | 0.1458 | 0.2046 | | 0.5997 | 0.67 | 2000 | 0.1374 | 0.1975 | | 0.542 | 0.84 | 2500 | 0.1390 | 0.1956 | | 0.4815 | 1.01 | 3000 | 0.1266 | 0.1813 | | 0.6982 | 1.17 | 3500 | 0.1441 | 0.1965 | | 0.4522 | 1.34 | 4000 | 0.1232 | 0.1822 | | 0.4655 | 1.51 | 4500 | 0.1209 | 0.1702 | | 0.4069 | 1.68 | 5000 | 0.1149 | 0.1688 | | 0.4226 | 1.84 | 5500 | 0.1121 | 0.1560 | | 0.3993 | 2.01 | 6000 | 0.1091 | 0.1557 | | 0.406 | 2.18 | 6500 | 0.1115 | 0.1553 | | 0.4098 | 2.35 | 7000 | 0.1144 | 0.1560 | | 0.3995 | 2.51 | 7500 | 0.1028 | 0.1476 | | 0.4101 | 2.68 | 8000 | 0.1129 | 0.1511 | | 0.3636 | 2.85 | 8500 | 0.1025 | 0.1517 | | 0.3534 | 3.02 | 9000 | 0.1068 | 0.1480 | | 0.3836 | 3.18 | 9500 | 0.1072 | 0.1459 | | 0.3531 | 3.35 | 10000 | 0.0928 | 0.1367 | | 0.3649 | 3.52 | 10500 | 0.1042 | 0.1426 | | 0.3645 | 3.69 | 11000 | 0.0979 | 0.1433 | | 0.3685 | 3.85 | 11500 | 0.0947 | 0.1346 | | 0.3325 | 4.02 | 12000 | 0.0991 | 0.1352 | | 0.3497 | 4.19 | 12500 | 0.0919 | 0.1358 | | 0.3303 | 4.36 | 13000 | 0.0888 | 0.1272 | | 0.3323 | 4.52 | 13500 | 0.0888 | 0.1277 | | 0.3452 | 4.69 | 14000 | 0.0894 | 0.1279 | | 0.337 | 4.86 | 14500 | 0.0917 | 0.1289 | | 0.3114 | 5.03 | 15000 | 0.0942 | 0.1313 | | 0.3099 | 5.19 | 15500 | 0.0902 | 0.1239 | | 0.3079 | 5.36 | 16000 | 0.0871 | 0.1256 | | 0.3293 | 5.53 | 16500 | 0.0861 | 0.1263 | | 0.3123 | 5.7 | 17000 | 0.0876 | 0.1203 | | 0.3093 | 5.86 | 17500 | 0.0848 | 0.1226 | | 0.2903 | 6.03 | 18000 | 0.0914 | 0.1221 | | 0.297 | 6.2 | 18500 | 0.0841 | 0.1185 | | 0.2797 | 6.37 | 19000 | 0.0858 | 0.1165 | | 0.2878 | 6.53 | 19500 | 0.0874 | 0.1161 | | 0.2974 | 6.7 | 20000 | 0.0835 | 0.1173 | | 0.3051 | 6.87 | 20500 | 0.0835 | 0.1178 | | 0.2941 | 7.04 | 21000 | 0.0852 | 0.1155 | | 0.258 | 7.21 | 21500 | 0.0832 | 0.1132 | | 0.2778 | 7.37 | 22000 | 0.0829 | 0.1110 | | 0.2751 | 7.54 | 22500 | 0.0822 | 0.1069 | | 0.2887 | 7.71 | 23000 | 0.0819 | 0.1103 | | 0.2509 | 7.88 | 23500 | 0.0787 | 0.1055 | | 0.2501 | 8.04 | 24000 | 0.0807 | 0.1076 | | 0.2399 | 8.21 | 24500 | 0.0784 | 0.1052 | | 0.2539 | 8.38 | 25000 | 0.0772 | 0.1075 | | 0.248 | 8.55 | 25500 | 0.0772 | 0.1055 | | 0.2689 | 8.71 | 26000 | 0.0763 | 0.1027 | | 0.2855 | 8.88 | 26500 | 0.0756 | 0.1035 | | 0.2421 | 9.05 | 27000 | 0.0771 | 0.0998 | | 0.2497 | 9.22 | 27500 | 0.0756 | 0.0971 | | 0.2367 | 9.38 | 28000 | 0.0741 | 0.0974 | | 0.2473 | 9.55 | 28500 | 0.0739 | 0.0982 | | 0.2396 | 9.72 | 29000 | 0.0756 | 0.0991 | | 0.2602 | 9.89 | 29500 | 0.0737 | 0.0975 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0 ## Evaluation results Evaluation was done with the [Common Voice 
7.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0), [Common Voice 9.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0) and with the [FLEURS ASR Finnish test split](https://huggingface.co/datasets/google/fleurs). This model's training data includes the training splits of Common Voice 7.0, but our newer `Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned` and `Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish` models include Common Voice 9.0, so we ran tests for both Common Voice versions. Note: Common Voice doesn't seem to fully preserve the test split as fixed between the dataset versions, so it is possible that some of the training examples of Common Voice 9.0 are in the test split of Common Voice 7.0 and vice versa. Thus, Common Voice test result comparisons are not fully accurate between the models trained with different Common Voice versions, but the comparison should still be meaningful enough. ### Common Voice 7.0 testing To evaluate this model, run the `eval.py` script in this repository: ```bash python3 eval.py --model_id Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 --dataset mozilla-foundation/common_voice_7_0 --config fi --split test ``` This model (the fifth row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models and their parameter counts: | | Model parameters | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) | |-------------------------------------------------------|------------------|---------------|------------------|---------------|------------------| |Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned | 95 million |5.85 |13.52 |1.35 |2.44 | |Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish | 300 million |4.13 |**9.66** |0.90 |1.66 | |Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm | 300 million |8.16 |17.92 |1.97 |3.36 | |Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm | 1000 million |5.65 |13.11 |1.20 |2.23 | |Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 | 1000 million |**4.09** |9.73 |**0.88** |**1.65** | ### Common Voice 9.0 testing To evaluate this model, run the `eval.py` script in this repository: ```bash python3 eval.py --model_id Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 --dataset mozilla-foundation/common_voice_9_0 --config fi --split test ``` This model (the fifth row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models and their parameter counts: | | Model parameters | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) | |-------------------------------------------------------|------------------|---------------|------------------|---------------|------------------| |Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned | 95 million |5.93 |14.08 |1.40 |2.59 | |Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish | 300 million |4.13 |9.83 |0.92 |1.71 | |Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm | 300 million |7.42 |16.45 |1.79 |3.07 | |Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm | 1000 million |5.35 |13.00 |1.14 |2.20 | |Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 | 1000 million |**3.72** |**8.96** |**0.80** |**1.52** | ### FLEURS ASR testing To evaluate this model, run the `eval.py` script in this repository: ```bash python3 eval.py --model_id Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 --dataset google/fleurs --config fi_fi --split test ``` This model (the fifth row of the table) achieves the following WER 
(Word Error Rate) and CER (Character Error Rate) results compared to our other models and their parameter counts: | | Model parameters | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) | |-------------------------------------------------------|------------------|---------------|------------------|---------------|------------------| |Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned | 95 million |13.99 |17.16 |6.07 |6.61 | |Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish | 300 million |12.44 |**14.63** |5.77 |6.22 | |Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm | 300 million |17.72 |23.30 |6.78 |7.67 | |Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm | 1000 million |20.34 |16.67 |6.97 |6.35 | |Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 | 1000 million |**12.11** |14.89 |**5.65** |**6.06** | ## Team Members - Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/) - Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/) Feel free to contact us for more details 🤗
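As a quick usage reference, a minimal inference sketch (assuming the standard 🤗 Transformers ASR pipeline; the audio file name is a placeholder, and LM-boosted decoding is only applied when the optional pyctcdecode and kenlm packages are installed):

```python
from transformers import pipeline

# Minimal sketch: transcribe a 16 kHz Finnish audio file with this model.
# The bundled KenLM decoder is used automatically only if pyctcdecode + kenlm are installed.
asr = pipeline(
    "automatic-speech-recognition",
    model="Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2",
)

result = asr("finnish_sample.wav", chunk_length_s=30)  # placeholder file name
print(result["text"])
```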
16,676
[ [ -0.03375244140625, -0.048980712890625, 0.013641357421875, 0.0135955810546875, -0.0164031982421875, -0.01435089111328125, -0.02392578125, -0.03143310546875, 0.0172882080078125, 0.0296478271484375, -0.046661376953125, -0.04412841796875, -0.047698974609375, -0....
timm/vit_medium_patch16_gap_240.sw_in12k
2023-05-06T00:26:35.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-12k", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/vit_medium_patch16_gap_240.sw_in12k
0
1,059
timm
2022-12-01T01:48:18
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-12k --- # Model card for vit_medium_patch16_gap_240.sw_in12k A Vision Transformer (ViT) image classification model. This is a `timm` specific variation of the architecture with token global average pooling. Trained on ImageNet-12k by Ross Wightman in `timm` using recipe template described below. Recipe details: * Based on Swin Transformer train / pretrain recipe with modifications (related to both DeiT and ConvNeXt recipes) * AdamW optimizer, gradient clipping, EMA weight averaging * Cosine LR schedule with warmup ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 44.4 - GMACs: 8.6 - Activations (M): 12.6 - Image size: 240 x 240 - **Papers:** - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 - **Dataset:** ImageNet-12k - **Original:** https://github.com/huggingface/pytorch-image-models ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('vit_medium_patch16_gap_240.sw_in12k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_medium_patch16_gap_240.sw_in12k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 225, 512) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ```
3,622
[ [ -0.03485107421875, -0.0310821533203125, 0.001617431640625, 0.01306915283203125, -0.024078369140625, -0.0240020751953125, -0.0176544189453125, -0.03521728515625, 0.023223876953125, 0.02001953125, -0.041168212890625, -0.04595947265625, -0.05047607421875, -0.00...
jonatasgrosman/wav2vec2-large-xlsr-53-hungarian
2022-12-14T01:57:43.000Z
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "hu", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
automatic-speech-recognition
jonatasgrosman
null
null
jonatasgrosman/wav2vec2-large-xlsr-53-hungarian
2
1,058
transformers
2022-03-02T23:29:05
--- language: hu datasets: - common_voice metrics: - wer - cer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Hungarian by Jonatas Grosman results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice hu type: common_voice args: hu metrics: - name: Test WER type: wer value: 31.40 - name: Test CER type: cer value: 6.20 --- # Fine-tuned XLSR-53 large model for speech recognition in Hungarian Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Hungarian using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice) and [CSS10](https://github.com/Kyubyong/css10). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :) The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint ## Usage The model can be used directly (without a language model) as follows... Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library: ```python from huggingsound import SpeechRecognitionModel model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-hungarian") audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"] transcriptions = model.transcribe(audio_paths) ``` Writing your own inference script: ```python import torch import librosa from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor LANG_ID = "hu" MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-hungarian" SAMPLES = 5 test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]") processor = Wav2Vec2Processor.from_pretrained(MODEL_ID) model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000) batch["speech"] = speech_array batch["sentence"] = batch["sentence"].upper() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) predicted_sentences = processor.batch_decode(predicted_ids) for i, predicted_sentence in enumerate(predicted_sentences): print("-" * 100) print("Reference:", test_dataset[i]["sentence"]) print("Prediction:", predicted_sentence) ``` | Reference | Prediction | | ------------- | ------------- | | BÜSZKÉK VAGYUNK A MAGYAR EMBEREK NAGYSZERŰ SZELLEMI ALKOTÁSAIRA. | BÜSZKÉK VAGYUNK A MAGYAR EMBEREK NAGYSZERŰ SZELLEMI ALKOTÁSAIRE | | A NEMZETSÉG TAGJAI KÖZÜL EZT TERMESZTIK A LEGSZÉLESEBB KÖRBEN ÍZLETES TERMÉSÉÉRT. | A NEMZETSÉG TAGJAI KÖZÜL ESZSZERMESZTIK A LEGSZELESEBB KÖRBEN IZLETES TERMÉSSÉÉRT | | A VÁROSBA VÁGYÓDOTT A LEGJOBBAN, ÉPPEN MERT ODA NEM JUTHATOTT EL SOHA. | A VÁROSBA VÁGYÓDOTT A LEGJOBBAN ÉPPEN MERT ODA NEM JUTHATOTT EL SOHA | | SÍRJA MÁRA MEGSEMMISÜLT. | SIMGI A MANDO MEG SEMMICSEN | | MINDEN ZENESZÁMOT DRÁGAKŐNEK NEVEZETT. | MINDEN ZENA SZÁMODRAGAKŐNEK NEVEZETT | | ÍGY MÚLT EL A DÉLELŐTT. | ÍGY MÚLT EL A DÍN ELŐTT | | REMEK POFA! | A REMEG PUFO | | SZEMET SZEMÉRT, FOGAT FOGÉRT. 
| SZEMET SZEMÉRT FOGADD FOGÉRT | | BIZTOSAN LAKIK ITT NÉHÁNY ATYÁMFIA. | BIZTOSAN LAKIKÉT NÉHANY ATYAMFIA | | A SOROK KÖZÖTT OLVAS. | A SOROG KÖZÖTT OLVAS | ## Evaluation The model can be evaluated as follows on the Hungarian test data of Common Voice. ```python import torch import re import librosa from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor LANG_ID = "hu" MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-hungarian" DEVICE = "cuda" CHARS_TO_IGNORE = [",", "?", "¿", ".", "!", "¡", ";", ";", ":", '""', "%", '"', "�", "ʿ", "·", "჻", "~", "՞", "؟", "،", "।", "॥", "«", "»", "„", "“", "”", "「", "」", "‘", "’", "《", "》", "(", ")", "[", "]", "{", "}", "=", "`", "_", "+", "<", ">", "…", "–", "°", "´", "ʾ", "‹", "›", "©", "®", "—", "→", "。", "、", "﹂", "﹁", "‧", "~", "﹏", ",", "{", "}", "(", ")", "[", "]", "【", "】", "‥", "〽", "『", "』", "〝", "〟", "⟨", "⟩", "〜", ":", "!", "?", "♪", "؛", "/", "\\", "º", "−", "^", "ʻ", "ˆ"] test_dataset = load_dataset("common_voice", LANG_ID, split="test") wer = load_metric("wer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/wer.py cer = load_metric("cer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/cer.py chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]" processor = Wav2Vec2Processor.from_pretrained(MODEL_ID) model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID) model.to(DEVICE) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): with warnings.catch_warnings(): warnings.simplefilter("ignore") speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000) batch["speech"] = speech_array batch["sentence"] = re.sub(chars_to_ignore_regex, "", batch["sentence"]).upper() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to(DEVICE), attention_mask=inputs.attention_mask.to(DEVICE)).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) predictions = [x.upper() for x in result["pred_strings"]] references = [x.upper() for x in result["sentence"]] print(f"WER: {wer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}") print(f"CER: {cer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}") ``` **Test Result**: In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well (on 2021-04-22). Note that the table below may show different results from those already reported, this may have been caused due to some specificity of the other evaluation scripts used. 
| Model | WER | CER | | ------------- | ------------- | ------------- | | jonatasgrosman/wav2vec2-large-xlsr-53-hungarian | **31.40%** | **6.20%** | | anton-l/wav2vec2-large-xlsr-53-hungarian | 42.39% | 9.39% | | gchhablani/wav2vec2-large-xlsr-hu | 46.42% | 10.04% | | birgermoell/wav2vec2-large-xlsr-hungarian | 46.93% | 10.31% | ## Citation If you want to cite this model you can use this: ```bibtex @misc{grosman2021xlsr53-large-hungarian, title={Fine-tuned {XLSR}-53 large model for speech recognition in {H}ungarian}, author={Grosman, Jonatas}, howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-hungarian}}, year={2021} } ```
7,559
[ [ -0.0229034423828125, -0.051727294921875, 0.01367950439453125, 0.01433563232421875, -0.012176513671875, -0.017242431640625, -0.02874755859375, -0.035888671875, 0.00872802734375, 0.02880859375, -0.046051025390625, -0.058563232421875, -0.037872314453125, -0.001...
rinna/youri-7b
2023-10-31T00:14:00.000Z
[ "transformers", "pytorch", "llama", "text-generation", "ja", "en", "dataset:mc4", "dataset:wikipedia", "dataset:EleutherAI/pile", "dataset:oscar-corpus/colossal-oscar-1.0", "dataset:cc100", "arxiv:2307.09288", "license:llama2", "text-generation-inference", "region:us" ]
text-generation
rinna
null
null
rinna/youri-7b
12
1,057
transformers
2023-10-30T15:12:17
--- thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png license: llama2 datasets: - mc4 - wikipedia - EleutherAI/pile - oscar-corpus/colossal-oscar-1.0 - cc100 language: - ja - en inference: false --- # `rinna/youri-7b` ![rinna-icon](./rinna.png) # Overview We conduct continual pre-training of [llama2-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf) on **40B** tokens from a mixture of Japanese and English datasets. The continual pre-training significantly improves the model's performance on Japanese tasks. The name `youri` comes from the Japanese word [`妖狸/ようり/Youri`](https://ja.wikipedia.org/wiki/%E5%8C%96%E3%81%91%E7%8B%B8), which is a kind of Japanese mythical creature ([`妖怪/ようかい/Youkai`](https://ja.wikipedia.org/wiki/%E5%A6%96%E6%80%AA)). * **Library** The model was trained using code based on [EleutherAI/gpt-neox](https://github.com/EleutherAI/gpt-neox). * **Model architecture** A 32-layer, 4096-hidden-size transformer-based language model. Refer to the [llama2 paper](https://arxiv.org/abs/2307.09288) for architecture details. * **Continual pre-training** The model was initialized with the [llama2-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf) model and continually trained on around **40B** tokens from a mixture of the following corpora - [Japanese CC-100](http://data.statmt.org/cc-100/ja.txt.xz) - [Japanese C4](https://huggingface.co/datasets/mc4) - [Japanese OSCAR](https://huggingface.co/datasets/oscar-corpus/colossal-oscar-1.0) - [The Pile](https://huggingface.co/datasets/EleutherAI/pile) - [Wikipedia](https://dumps.wikimedia.org/other/cirrussearch) - rinna curated Japanese dataset * **Authors** - [Tianyu Zhao](https://huggingface.co/tianyuz) - [Akio Kaga](https://huggingface.co/rakaga) - [Kei Sawada](https://huggingface.co/keisawada) --- # Benchmarking Evaluation experiments suggest that rinna's `youri-7b` series outperforms other open-source Japanese LLMs on Japanese tasks according to our runs. 
| Model | Model type | 4-task score | 6-task score | 8-task score | | :-- | :-- | :-- | :-- | :-- | | rinna/youri-7b-instruction | SFT | 83.88 | 80.93 | 63.63 | | rinna/youri-7b-chat | SFT | 78.29 | 78.47 | 62.18 | | matsuo-lab/weblab-10b-instruction-sft | SFT | 78.75 | 75.05 | 59.11 | | **rinna/youri-7b** | **pre-trained** | **73.32** | **74.58** | **58.87** | | stabilityai/japanese-stablelm-instruct-alpha-7b | SFT | 70.10 | 71.32 | 54.71 | | elyza/ELYZA-japanese-Llama-2-7b | pre-trained | 71.72 | 69.28 | 53.17 | | elyza/ELYZA-japanese-Llama-2-7b-instruct | SFT | 70.57 | 68.12 | 53.14 | | stabilityai/japanese-stablelm-base-alpha-7b | pre-trained | 61.03 | 65.83 | 51.05 | | matsuo-lab/weblab-10b | pre-trained | 66.33 | 65.58 | 50.74 | | meta/llama2-7b | pre-trained | 56.33 | 54.80 | 42.97 | | rinna/japanese-gpt-neox-3.6b | pre-trained | 47.20 | 54.68 | 41.80 | | rinna/bilingual-gpt-neox-4b | pre-trained | 46.60 | 52.04 | 40.03 | --- # How to use the model ~~~~python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("rinna/youri-7b") model = AutoModelForCausalLM.from_pretrained("rinna/youri-7b") if torch.cuda.is_available(): model = model.to("cuda") text = "西田幾多郎は、" token_ids = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt") with torch.no_grad(): output_ids = model.generate( token_ids.to(model.device), max_new_tokens=200, min_new_tokens=200, do_sample=True, temperature=1.0, top_p=0.95, pad_token_id=tokenizer.pad_token_id, bos_token_id=tokenizer.bos_token_id, eos_token_id=tokenizer.eos_token_id ) output = tokenizer.decode(output_ids.tolist()[0]) print(output) """ 西田幾多郎は、プラトンの復権を主張し、対する従来の西洋哲学は、近代の合理主義哲学に委ね、「従来の哲学は破 壊されてしまった」と述べている。 西田幾多郎は、西洋近代哲学の「徹底的な検討」を拒んだ。それは、「現代的理解の脆弱性を補う筈の、従来のヨーロッパに伝わる哲学的な方法では到底それができなかったからである」とい """ ~~~~ --- # Tokenization The model uses the original llama-2 tokenizer. --- # How to cite ~~~ @misc{RinnaYouri7b, url={https://huggingface.co/rinna/youri-7b}, title={rinna/youri-7b}, author={Zhao, Tianyu and Kaga, Akio and Sawada, Kei} } ~~~ --- # Citations ~~~ @software{gpt-neox-library, title = {{GPT-NeoX: Large Scale Autoregressive Language Modeling in PyTorch}}, author = {Andonian, Alex and Anthony, Quentin and Biderman, Stella and Black, Sid and Gali, Preetham and Gao, Leo and Hallahan, Eric and Levy-Kramer, Josh and Leahy, Connor and Nestler, Lucas and Parker, Kip and Pieler, Michael and Purohit, Shivanshu and Songz, Tri and Phil, Wang and Weinbach, Samuel}, url = {https://www.github.com/eleutherai/gpt-neox}, doi = {10.5281/zenodo.5879544}, month = {8}, year = {2021}, version = {0.0.1}, } ~~~ --- # License [The llama2 license](https://ai.meta.com/llama/license/)
4,919
[ [ -0.0291595458984375, -0.045928955078125, 0.016204833984375, 0.0173187255859375, -0.033538818359375, 0.00948333740234375, -0.01336669921875, -0.04345703125, 0.0270843505859375, 0.02032470703125, -0.029754638671875, -0.044403076171875, -0.05242919921875, 0.013...
deepset/gelectra-base-germanquad-distilled
2023-05-05T06:58:40.000Z
[ "transformers", "pytorch", "safetensors", "electra", "question-answering", "exbert", "de", "dataset:deepset/germanquad", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
question-answering
deepset
null
null
deepset/gelectra-base-germanquad-distilled
3
1,056
transformers
2022-03-02T23:29:05
--- language: de datasets: - deepset/germanquad license: mit thumbnail: https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg tags: - exbert --- ![bert_image](https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg) ## Overview **Language model:** gelectra-base-germanquad-distilled **Language:** German **Training data:** GermanQuAD train set (~ 12MB) **Eval data:** GermanQuAD test set (~ 5MB) **Infrastructure**: 1x V100 GPU **Published**: Apr 21st, 2021 ## Details - We trained a German question answering model with a gelectra-base model as its basis. - The dataset is GermanQuAD, a new, German language dataset, which we hand-annotated and published [online](https://deepset.ai/germanquad). - The training dataset is one-way annotated and contains 11518 questions and 11518 answers, while the test dataset is three-way annotated so that there are 2204 questions with 2204·3−76 = 6536 answers, because we removed 76 wrong answers. - In addition to the annotations in GermanQuAD, haystack's distillation feature was used for training. deepset/gelectra-large-germanquad was used as the teacher model. See https://deepset.ai/germanquad for more details and dataset download in SQuAD format. ## Hyperparameters ``` batch_size = 24 n_epochs = 6 max_seq_len = 384 learning_rate = 3e-5 lr_schedule = LinearWarmup embeds_dropout_prob = 0.1 temperature = 2 distillation_loss_weight = 0.75 ``` ## Performance We evaluated the extractive question answering performance on our GermanQuAD test set. Model types and training data are included in the model name. For finetuning XLM-Roberta, we use the English SQuAD v2.0 dataset. The GELECTRA models are warm started on the German translation of SQuAD v1.1 and finetuned on GermanQuAD. The human baseline was computed for the 3-way test set by taking one answer as prediction and the other two as ground truth. ``` "exact": 62.4773139745916 "f1": 80.9488017070188 ``` ![performancetable](https://lh3.google.com/u/0/d/1IFqkq8OZ7TFnGzxmW6eoxXSYa12f2M7O=w1970-h1546-iv1) ## Authors - Timo Möller: `timo.moeller [at] deepset.ai` - Julian Risch: `julian.risch [at] deepset.ai` - Malte Pietsch: `malte.pietsch [at] deepset.ai` - Michel Bartels: `michel.bartels [at] deepset.ai` ## About us ![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo) We bring NLP to the industry via open source! Our focus: Industry specific language models & large scale QA systems. Some of our work: - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) - [FARM](https://github.com/deepset-ai/FARM) - [Haystack](https://github.com/deepset-ai/haystack/) Get in touch: [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
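The card documents training and distillation but includes no inference snippet, so here is a minimal usage sketch (assuming the standard 🤗 Transformers question-answering pipeline; the example question and context are illustrative only):

```python
from transformers import pipeline

# Minimal sketch: extractive QA in German with the distilled model.
qa = pipeline("question-answering", model="deepset/gelectra-base-germanquad-distilled")

context = (
    "GermanQuAD ist ein deutschsprachiger Datensatz für extraktives Question Answering, "
    "der von deepset annotiert und veröffentlicht wurde."
)
result = qa(question="Wer hat GermanQuAD annotiert?", context=context)
print(result["answer"], round(result["score"], 3))
```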
3,233
[ [ -0.039031982421875, -0.07379150390625, 0.0299224853515625, 0.002063751220703125, 0.0027790069580078125, 0.00029468536376953125, -0.02862548828125, -0.035064697265625, 0.0027751922607421875, 0.0274810791015625, -0.0487060546875, -0.0572509765625, -0.0201263427734...
EleutherAI/pythia-12b-deduped-v0
2023-03-29T18:48:21.000Z
[ "transformers", "pytorch", "gpt_neox", "text-generation", "causal-lm", "pythia", "pythia_v0", "en", "dataset:EleutherAI/the_pile_deduplicated", "arxiv:2101.00027", "arxiv:2201.07311", "license:apache-2.0", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
EleutherAI
null
null
EleutherAI/pythia-12b-deduped-v0
26
1,055
transformers
2022-10-18T03:09:06
--- language: - en tags: - pytorch - causal-lm - pythia - pythia_v0 license: apache-2.0 datasets: - EleutherAI/the_pile_deduplicated --- The *Pythia Scaling Suite* is a collection of models developed to facilitate interpretability research. It contains two sets of eight models of sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two models: one trained on the Pile, and one trained on the Pile after the dataset has been globally deduplicated. All 8 model sizes are trained on the exact same data, in the exact same order. All Pythia models are available [on Hugging Face](https://huggingface.co/models?other=pythia). The Pythia model suite was deliberately designed to promote scientific research on large language models, especially interpretability research. Despite not centering downstream performance as a design goal, we find the models <a href="#evaluations">match or exceed</a> the performance of similar and same-sized models, such as those in the OPT and GPT-Neo suites. Please note that all models in the *Pythia* suite were renamed in January 2023. For clarity, a <a href="#naming-convention-and-parameter-count">table comparing the old and new names</a> is provided in this model card, together with exact parameter counts. ## Pythia-12B-deduped ### Model Details - Developed by: [EleutherAI](http://eleuther.ai) - Model type: Transformer-based Language Model - Language: English - Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia) for training procedure, config files, and details on how to use. - Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) - License: Apache 2.0 - Contact: to ask questions about this model, join the [EleutherAI Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`. Please read the existing *Pythia* documentation before asking about it in the EleutherAI Discord. For general correspondence: [contact@eleuther. ai](mailto:contact@eleuther.ai). <figure> | Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models | | -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: | | 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — | | 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M | | 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M | | 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — | | 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B | | 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B | | 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B | | 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — | <figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and non-deduped models of a given size have the same hyperparameters. “Equivalent” models have <b>exactly</b> the same architecture, and the same number of non-embedding parameters.</figcaption> </figure> ### Uses and Limitations #### Intended Use The primary intended use of Pythia is research on the behavior, functionality, and limitations of large language models. This suite is intended to provide a controlled setting for performing scientific experiments. 
To enable the study of how language models change in the course of training, we provide 143 evenly spaced intermediate checkpoints per model. These checkpoints are hosted on Hugging Face as branches. Note that branch `143000` corresponds exactly to the model checkpoint on the `main` branch of each model. You may also further fine-tune and adapt Pythia-12B-deduped for deployment, as long as your use is in accordance with the Apache 2.0 license. Pythia models work with the Hugging Face [Transformers Library](https://huggingface.co/docs/transformers/index). If you decide to use pre-trained Pythia-12B-deduped as a basis for your fine-tuned model, please conduct your own risk and bias assessment. #### Out-of-scope use The Pythia Suite is **not** intended for deployment. It is not in itself a product and cannot be used for human-facing interactions. Pythia models are English-language only, and are not suitable for translation or generating text in other languages. Pythia-12B-deduped has not been fine-tuned for downstream contexts in which language models are commonly deployed, such as writing genre prose, or commercial chatbots. This means Pythia-12B-deduped will **not** respond to a given prompt the way a product like ChatGPT does. This is because, unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human Feedback (RLHF) to better “understand” human instructions. #### Limitations and biases The core functionality of a large language model is to take a string of text and predict the next token. The token deemed statistically most likely by the model need not produce the most “accurate” text. Never rely on Pythia-12B-deduped to produce factually accurate output. This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset known to contain profanity and texts that are lewd or otherwise offensive. See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a discussion of documented biases with regards to gender, religion, and race. Pythia-12B-deduped may produce socially unacceptable or undesirable text, *even if* the prompt itself does not include anything explicitly offensive. If you plan on using text generated through, for example, the Hosted Inference API, we recommend having a human curate the outputs of this language model before presenting it to other people. Please inform your audience that the text was generated by Pythia-12B-deduped. ### Quickstart Pythia models can be loaded and used via the following code, demonstrated here for the third `pythia-70m-deduped` checkpoint: ```python from transformers import GPTNeoXForCausalLM, AutoTokenizer model = GPTNeoXForCausalLM.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) tokenizer = AutoTokenizer.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) inputs = tokenizer("Hello, I am", return_tensors="pt") tokens = model.generate(**inputs) tokenizer.decode(tokens[0]) ``` Revision/branch `step143000` corresponds exactly to the model checkpoint on the `main` branch of each model.<br> For more information on how to use all Pythia models, see [documentation on GitHub](https://github.com/EleutherAI/pythia). ### Training #### Training data Pythia-12B-deduped was trained on the Pile **after the dataset has been globally deduplicated**.<br> [The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in English. 
It was created by EleutherAI specifically for training large language models. It contains texts from 22 diverse sources, roughly broken down into five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub, Enron Emails). See [the Pile paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources, methodology, and a discussion of ethical implications. Consult [the datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation about the Pile and its component datasets. The Pile can be downloaded from the [official website](https://pile.eleuther.ai/), or from a [community mirror](https://the-eye.eu/public/AI/pile/). #### Training procedure All models were trained on the exact same data, in the exact same order. Each model saw 299,892,736,000 tokens during training, and 143 checkpoints for each model are saved every 2,097,152,000 tokens, spaced evenly throughout training. This corresponds to training for just under 1 epoch on the Pile for non-deduplicated models, and about 1.5 epochs on the deduplicated Pile. All *Pythia* models trained for the equivalent of 143000 steps at a batch size of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch size of 4M tokens listed were originally trained for 71500 steps instead, with checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for consistency with all 2M batch models, so `step1000` is the first checkpoint for `pythia-1.4b` that was saved (corresponding to step 500 in training), and `step1000` is likewise the first `pythia-6.9b` checkpoint that was saved (corresponding to 1000 “actual” steps).<br> See [GitHub](https://github.com/EleutherAI/pythia) for more details on training procedure, including [how to reproduce it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br> Pythia uses the same tokenizer as [GPT-NeoX- 20B](https://huggingface.co/EleutherAI/gpt-neox-20b). ### Evaluations All 16 *Pythia* models were evaluated using the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access the results by model and step at `results/json/*` in the [GitHub repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br> Expand the sections below to see plots of evaluation results for all Pythia and Pythia-deduped models compared with OPT and BLOOM. <details> <summary>LAMBADA – OpenAI</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/> </details> <details> <summary>Physical Interaction: Question Answering (PIQA)</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/> </details> <details> <summary>WinoGrande</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/> </details> <details> <summary>AI2 Reasoning Challenge—Challenge Set</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/> </details> <details> <summary>SciQ</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/> </details> ### Naming convention and parameter count *Pythia* models were renamed in January 2023. It is possible that the old naming convention still persists in some documentation by accident. The current naming convention (70M, 160M, etc.) is based on total parameter count. 
<figure style="width:32em"> | current Pythia suffix | old suffix | total params | non-embedding params | | --------------------: | ---------: | -------------: | -------------------: | | 70M | 19M | 70,426,624 | 18,915,328 | | 160M | 125M | 162,322,944 | 85,056,000 | | 410M | 350M | 405,334,016 | 302,311,424 | | 1B | 800M | 1,011,781,632 | 805,736,448 | | 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 | | 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 | | 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 | | 12B | 13B | 11,846,072,320 | 11,327,027,200 | </figure>
11,883
[ [ -0.025238037109375, -0.06427001953125, 0.020538330078125, 0.00473785400390625, -0.0165863037109375, -0.01045989990234375, -0.0159454345703125, -0.03472900390625, 0.01520538330078125, 0.0141143798828125, -0.0253753662109375, -0.024444580078125, -0.03570556640625,...
Lykon/NeverEnding-Dream
2023-05-11T23:43:42.000Z
[ "diffusers", "stable-diffusion", "text-to-image", "art", "artistic", "en", "license:other", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
Lykon
null
null
Lykon/NeverEnding-Dream
159
1,055
diffusers
2023-02-19T17:54:51
--- language: - en license: other tags: - stable-diffusion - text-to-image - art - artistic - diffusers inference: false --- # NeverEnding Dream (NED) ## Official Repository Read more about this model here: https://civitai.com/models/10028/neverending-dream-ned Please also support the model by giving it 5 stars and a heart, which will notify you of new updates. Also consider supporting me on Patreon or BuyMeACoffee - https://www.patreon.com/Lykon275 You can run this model on: - https://sinkin.ai/m/qGdxrYG Some sample output: ![sample 1](https://huggingface.co/Lykon/NeverEnding-Dream/resolve/main/1.png) ![sample 2](https://huggingface.co/Lykon/NeverEnding-Dream/resolve/main/2.png) ![sample 3](https://huggingface.co/Lykon/NeverEnding-Dream/resolve/main/3.png) ![sample 4](https://huggingface.co/Lykon/NeverEnding-Dream/resolve/main/4.png) ![sample 5](https://huggingface.co/Lykon/NeverEnding-Dream/resolve/main/5.png) ![sample 6](https://huggingface.co/Lykon/NeverEnding-Dream/resolve/main/6.jpg)
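A minimal text-to-image sketch (an assumption based on the repo's diffusers/StableDiffusionPipeline tags, not an official snippet from the author; the prompt and sampler settings are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

# Minimal sketch: load the checkpoint with diffusers and render one image.
pipe = StableDiffusionPipeline.from_pretrained(
    "Lykon/NeverEnding-Dream", torch_dtype=torch.float16
).to("cuda")

prompt = "portrait of a woman in a misty forest, highly detailed, soft lighting"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("ned_sample.png")
```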
992
[ [ -0.0205078125, -0.006786346435546875, 0.037689208984375, 0.02667236328125, -0.046539306640625, 0.00794219970703125, 0.0179443359375, -0.035491943359375, 0.050872802734375, 0.0615234375, -0.08135986328125, -0.056854248046875, -0.050048828125, -0.0037689208984...
heegyu/polyglot-ko-3.8b-chat
2023-09-20T01:09:40.000Z
[ "transformers", "pytorch", "safetensors", "gpt_neox", "text-generation", "ko", "dataset:beomi/KoAlpaca-v1.1a", "dataset:dbdu/ShareGPT-74k-ko", "dataset:heegyu/korquad-chat-v1", "dataset:HAERAE-HUB/KoInstruct-QA", "dataset:changpt/ko-lima-vicuna", "dataset:nlpai-lab/kullm-v2", "endpoints_comp...
text-generation
heegyu
null
null
heegyu/polyglot-ko-3.8b-chat
0
1,055
transformers
2023-08-21T04:09:50
--- datasets: - beomi/KoAlpaca-v1.1a - dbdu/ShareGPT-74k-ko - heegyu/korquad-chat-v1 - HAERAE-HUB/KoInstruct-QA - changpt/ko-lima-vicuna - nlpai-lab/kullm-v2 language: - ko --- # heegyu/polyglot-ko-3.8b-chat - [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b)를 여러 한국어 instruction 데이터셋으로 학습한 모델 ## 사용한 데이터셋 | Dataset | # instance | 타입 | | --- | --- | --- | | [KoAlpaca v1.1](https://raw.githubusercontent.com/Beomi/KoAlpaca/main/KoAlpaca_v1.1.jsonl) | 50K | 싱글턴 | | [dbdu/ShareGPT-74k-ko 의 part2_ko_uncleaned](https://huggingface.co/datasets/dbdu/ShareGPT-74k-ko/resolve/main/part2_ko_uncleaned.json) | 36K | 멀티턴 | | [heegyu/korquad-chat-v1](https://huggingface.co/datasets/heegyu/korquad-chat-v1) | 9.6K | 멀티턴, 지식기반 | | [lcw99/evolve-instruct](https://github.com/lcw99/evolve-instruct/) | 37K | 싱글턴 | | [HAERAE-HUB/KoInstruct-QA](https://huggingface.co/datasets/HAERAE-HUB/KoInstruct-QA) | 50.3k | 싱글턴 | | [changpt/ko-lima-vicuna](https://huggingface.co/datasets/changpt/ko-lima-vicuna) | 1K | 싱글턴, 멀티턴(극히 일부) | | [nlpai-lab/kullm-v2](https://huggingface.co/datasets/nlpai-lab/kullm-v2) | 15K | 싱글턴 | - KULLM v2 데이터셋에서는 GPT4ALL, Dolly 데이터만 추출해서 사용했습니다. - 다양한 학습 데이터셋은 [HeegyuKim/open-korean-instructions](https://github.com/HeegyuKim/open-korean-instructions) GitHub repository를 참고하세요. ## 생성 Prompt - EOS token(<|endoftext|>)이 나올 때까지 생성하면 됩니다. - 최상단 프롬프트는 있는게 좋은 답변이 더 자주 나오는 것 같아요. ``` 당신은 AI 챗봇입니다. 사용자에게 도움이 되고 유익한 내용을 제공해야합니다. 답변은 길고 자세하며 친절한 설명을 덧붙여서 작성하세요. ### 사용자: 서울에서 강릉 가려면 어떻게 가야해? ### 챗봇: ``` 생성된 답변: ``` 서울에서 강릉 가는 방법은 다양합니다. 비행기를 타고 가거나 KTX를 이용하거나 자가용으로 이동하는 방법이 있습니다. 1. 비행기: 비행기는 서울에서 양양 또는 원주 공항까지 운항합니다. 소요 시간은 약 2시간이며, 운임은 약 20,000원입니다. 2. KTX: KTX는 서울에서 강릉까지 운행하며, 요금은 일반 및 특실 모두 6,550원으로 통일되어 있습니다. 열차 시간은 서울에서 강릉까지 1시간 30분 정도 소요됩니다. 3. 자가용: 자가용을 이용해 강릉으로 가는 방법도 많이 있습니다. 서울에서 경포에 이르는 길은 다양하며, 도로 상황에 따라 주행 시간 및 이동 거리는 달라질 수 있습니다. 참고할 수 있는 정보는 강릉 관광안내소에서 받으실 수 있습니다. 위 방법을 이용하여 서울에서 강릉까지의 이동 시간과 비용을 고려했습니다. 그러나 교통 상황이나 도로 상황 등에 따라 추가 정보가 필요할 수 있습니다. ``` 대화 예시 (with prompt) ``` 당신은 AI 챗봇입니다. 사용자에게 도움이 되고 유익한 내용을 제공해야합니다. 답변은 길고 자세하며 친절한 설명을 덧붙여서 작성하세요. ### 사용자: 중국의 여행지 추천해줘 ### 챗봇: 아래는 중국의 다양한 여행지 중 몇 가지를 추천해드리겠습니다. - 상하이: 중국의 상하이는 현대적인 건물과 높은 빌딩이 많은 도시입니다. 강을 따라 이동하는 선박은 상하이의 풍경과 문화를 감상할 수 있는 장소입니다. (사진: 중국 사진 제공 사이트) - 치바오: 치바오는 상하이의 한 도시로, 독특한 건축물과 역사적인 유물들이 많이 있는 곳입니다. 아름다운 해변도 있으니 일정을 확인해보시면 좋을 것 같습니다. (사진: 중국 사진 제공 사이트) - 항저우: 항저우는 중국의 4대 정원 중 하나로 유명한 곳입니다. 아름다운 정원, 오래된 건물, 그리고 풍부한 역사 문화 체험을 즐길 수 있는 도시입니다. (사진: 중국 사진 제공 사이트) - 난징: 난징은 중국의 수도인 난징에 위치한 도시로, 역사와 문화를 기반으로 다양한 명소들을 찾을 수 있습니다. (사진: 중국 사진 제공 사이트) - 베이징: 중국의 수도인 베이징은 중국의 문화와 역사를 체험할 수 있는 도시입니다. 다양한 관광 명소와 역사 유물들이 여러분을 기다리고 있을 것입니다. (사진: 중국 사진 제공 사이트) - 베이징의 창춘: 창춘은 중국의 동북 지방에 위치한 도시로, 전통 문화와 역사를 체험할 수 있는 좋은 장소입니다. 이 외에도 중국 문화와 역사에 대한 자세한 정보는 위의 목록을 참고하여 다양한 매체에서 찾아보실 수 있습니다. ``` 멀티턴 대화 예시 (with prompt) ``` 당신은 AI 챗봇입니다. 사용자에게 도움이 되고 유익한 내용을 제공해야합니다. 답변은 길고 자세하며 친절한 설명을 덧붙여서 작성하세요. ### 사용자: 요즘 즐겨하는 컴퓨터 게임이 있어? ### 챗봇: 죄송하지만 질문에 대한 정확한 답변을 위해서는 게임의 유형과 장르를 더 자세히 알려주시는 것이 좋습니다. ### 사용자: 음 RPG 장르 쪽으로 추천받고 싶은데 ### 챗봇: RPG 장르 중에서도 간단한 캐주얼 게임을 선호하신다면, 제가 추천해드릴 수 있습니다. 바로 'Rogue Dragon(가칭)'이라는 게임입니다. 이 게임은 판타지, 액션 등 다양한 장르를 접목하여 즐길 수 있으며, 화려한 그래픽, 탄탄한 스토리에 중독성 강한 타격시스템도 특징입니다. ### 사용자: 어디서 할 수 있어? ### 챗봇: 'Rogue Dragon'은 스팀에서 무료로 플레이하실 수 있습니다. 다른 방법으로는, 공식 웹사이트에서 다운로드하셔서 플레이하실 수 있습니다. ```
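A minimal generation sketch (not part of the original card) that builds the prompt format described above and stops at the EOS token; the sampling settings and user question are illustrative assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "heegyu/polyglot-ko-3.8b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

# Prompt format from the card: system line, then "### 사용자:" / "### 챗봇:" turns.
prompt = (
    "당신은 AI 챗봇입니다. 사용자에게 도움이 되고 유익한 내용을 제공해야합니다. "
    "답변은 길고 자세하며 친절한 설명을 덧붙여서 작성하세요.\n"
    "### 사용자: 서울에서 강릉 가려면 어떻게 가야해?\n"
    "### 챗봇:"
)

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output_ids = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    top_p=0.95,
    temperature=0.8,
    eos_token_id=tokenizer.eos_token_id,  # stop at <|endoftext|> as the card instructs
)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```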
3,349
[ [ -0.04095458984375, -0.037841796875, 0.025970458984375, 0.028564453125, -0.031005859375, 0.0010747909545898438, 0.01959228515625, -0.0205535888671875, 0.042877197265625, 0.02862548828125, -0.0325927734375, -0.038421630859375, -0.037506103515625, 0.01115417480...
digiplay/Pika_v1
2023-07-09T06:44:58.000Z
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
digiplay
null
null
digiplay/Pika_v1
2
1,054
diffusers
2023-06-22T13:13:29
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: https://civitai.com/models/47067?modelVersionId=51650 Original Author's DEMO images : ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/e5f65d46-539f-4b71-cfe0-748300ded200/31490.jpeg) ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/c18ef225-90b1-479f-525a-770c42637500/31488.jpeg)
427
[ [ -0.0361328125, -0.01558685302734375, 0.0345458984375, 0.01505279541015625, -0.0321044921875, -0.01404571533203125, 0.0155029296875, -0.0021724700927734375, 0.043243408203125, 0.039581298828125, -0.056793212890625, -0.0211334228515625, 0.0014190673828125, -0....
Norod78/SDXL-VintageMagStyle-Lora
2023-09-19T16:32:02.000Z
[ "diffusers", "text-to-image", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "stable-diffusion", "lora", "en", "license:mit", "has_space", "region:us" ]
text-to-image
Norod78
null
null
Norod78/SDXL-VintageMagStyle-Lora
8
1,053
diffusers
2023-09-06T12:17:31
--- license: mit base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: VintageMagStyle tags: - text-to-image - stable-diffusion-xl - stable-diffusion-xl-diffusers - stable-diffusion - lora - diffusers widget: - text: A pulp cover poster featuring dora the explorer VintageMagStyle - text: Wonderwoman VintageMagStyle Very detailed, clean, high quality, sharp image - text: A pulp magazine cover featuring detective Pikachu VintageMagStyle - text: An alien woman in pink boots sitting on a chair in front of a computer desk VintageMagStyle inference: true language: - en --- # SDXL Vintage Magazine Style LoRA # Trigger words Use "**VintageMagStyle**" in your prompts Try to use theme-fitting phrases like "A pulp magazine cover", "pulp cover poster", "Vintage comics", "An old movie poster" and such # Examples Pokemon pulp cover VintageMagStyle ![Pokemon](https://huggingface.co/Norod78/SDXL-VintageMagStyle-Lora/resolve/main/Examples/00090-20230906111020-7780-Pokemon%20pulp%20cover%20%20VintageMagStyle%20_lora_SDXL-VintageMagStyle-Lora_1_.jpg) Wonderwoman VintageMagStyle Very detailed, clean, high quality, sharp image ![Wonderwoman](https://huggingface.co/Norod78/SDXL-VintageMagStyle-Lora/resolve/main/Examples/00015-20230906102032-7778-Wonderwoman%20VintageMagStyle%20%20%20_lora_SDXL-VintageMagStyle-Lora_1_%2C%20Very%20detailed%2C%20clean%2C%20high%20quality%2C%20sharp%20image.jpg) A pulp cover poster featuring dora the explorer VintageMagStyle ![Dora](https://huggingface.co/Norod78/SDXL-VintageMagStyle-Lora/resolve/main/Examples/00077-20230906110045-7779-A%20pulp%20cover%20poster%20featuring%20dora%20the%20explorer%20%20VintageMagStyle-before-highres-fix.jpg)
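A minimal sketch (not from the original card) of how such an SDXL LoRA is commonly loaded with diffusers on top of the base model named in the metadata; the prompt and settings are illustrative:

```python
import torch
from diffusers import DiffusionPipeline

# Minimal sketch: load the SDXL base model, attach the LoRA, and use the trigger word.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Norod78/SDXL-VintageMagStyle-Lora")

prompt = "A pulp magazine cover featuring detective Pikachu VintageMagStyle"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("vintage_mag_sample.png")
```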
1,702
[ [ -0.0252532958984375, -0.036041259765625, 0.04656982421875, 0.00933074951171875, -0.034423828125, 0.0163726806640625, 0.028778076171875, -0.012542724609375, 0.08685302734375, 0.06494140625, -0.06829833984375, -0.05487060546875, -0.035003662109375, 0.019760131...
Yntec/GoodLife
2023-09-10T01:49:46.000Z
[ "diffusers", "Anime", "Children", "Book Illustrations", "3D", "Pixar", "Clumsy_Trainer", "DucHaiten", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us...
text-to-image
Yntec
null
null
Yntec/GoodLife
1
1,052
diffusers
2023-09-09T20:04:11
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - Anime - Children - Book Illustrations - 3D - Pixar - Clumsy_Trainer - DucHaiten - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers --- # Good Life DucHaiten-GoldLife merged with the COOLKIDSV2 Lora. The original plan was to make COOLKIDSV2 obsolete, but both models pack their own punch. Comparison: ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/apvHt7VY9Wiz7JJg3oj3F.png) (Click for larger) Sample and prompt: ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/4rHpvgkcx4jo1eMmdDVNs.png) Cartoon CUTE Pretty LITTLE Girl, sitting on a box of bottles, CHIBI EYES, holding PEPSI bottle, gorgeous detailed hair, Ponytail, Magazine ad, iconic, 1940, sharp focus. Illustration By KlaysMoji and artgerm and Clay Mann and and leyendecker and Dave Rapoza # DucHaiten GoodLife (a model not by DucHaiten) - This one was in the half-way point between the models, I don't think it looks bad. ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/hJm_YiO_IFNF-j3oZ990Y.png) Original pages: KIDS ILLUSTRATIONS V2 - https://civitai.com/models/60724?modelVersionId=67980 DucHaiten-GoldenLife - https://tensor.art/models/628276277415133426 # Recipes - Merge Lora 1.0 Primary Model: DucHaiten-GoldenLife Lora: COOL KIDS V2 Output: DucHaiten-GoldLife-COOLKIDSV2 - SuperMerger Weight Sum Train Difference MBW 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1 Model A: DucHaiten-GoldLife-COOLKIDSV2 Model B: DucHaiten-GoldenLife Output: GoodLife - Fp16 no ema Output: GoodLife-mini - SuperMerger Weight Sum Train Difference MBW 0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1 Model A: DucHaiten-GoldLife-COOLKIDSV2 Model B: DucHaiten-GoldenLife Output: DucHaiten-GoodLife
1,957
[ [ -0.055023193359375, -0.035614013671875, -0.003116607666015625, -0.005046844482421875, -0.032012939453125, -0.00905609130859375, 0.0233612060546875, -0.033111572265625, 0.04888916015625, 0.057037353515625, -0.0653076171875, -0.0145263671875, -0.041168212890625, ...
OFA-Sys/chinese-clip-vit-huge-patch14
2022-12-09T06:11:22.000Z
[ "transformers", "pytorch", "chinese_clip", "zero-shot-image-classification", "vision", "arxiv:2211.01335", "endpoints_compatible", "has_space", "region:us" ]
zero-shot-image-classification
OFA-Sys
null
null
OFA-Sys/chinese-clip-vit-huge-patch14
12
1,050
transformers
2022-11-09T09:45:11
--- tags: - vision widget: - src: https://huggingface.co/OFA-Sys/chinese-clip-vit-base-patch16/resolve/main/festival.jpg candidate_labels: 灯笼, 鞭炮, 对联 example_title: festival - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png candidate_labels: 音乐表演, 体育运动 example_title: cat & dog - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg candidate_labels: 梅西, C罗, 马奎尔 example_title: football --- # Chinese-CLIP-ViT-Huge-Patch14 ## Introduction This is the huge-version of the Chinese CLIP, with ViT-H/14 as the image encoder and RoBERTa-wwm-large as the text encoder. Chinese CLIP is a simple implementation of CLIP on a large-scale dataset of around 200 million Chinese image-text pairs. For more details, please refer to our technical report https://arxiv.org/abs/2211.01335 and our official github repo https://github.com/OFA-Sys/Chinese-CLIP (Welcome to star! 🔥🔥) ## Use with the official API We provide a simple code snippet to show how to use the API of Chinese-CLIP to compute the image & text embeddings and similarities. ```python from PIL import Image import requests from transformers import ChineseCLIPProcessor, ChineseCLIPModel model = ChineseCLIPModel.from_pretrained("OFA-Sys/chinese-clip-vit-huge-patch14") processor = ChineseCLIPProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-huge-patch14") url = "https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/pokemon.jpeg" image = Image.open(requests.get(url, stream=True).raw) # Squirtle, Bulbasaur, Charmander, Pikachu in English texts = ["杰尼龟", "妙蛙种子", "小火龙", "皮卡丘"] # compute image feature inputs = processor(images=image, return_tensors="pt") image_features = model.get_image_features(**inputs) image_features = image_features / image_features.norm(p=2, dim=-1, keepdim=True) # normalize # compute text features inputs = processor(text=texts, padding=True, return_tensors="pt") text_features = model.get_text_features(**inputs) text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True) # normalize # compute image-text similarity scores inputs = processor(text=texts, images=image, return_tensors="pt", padding=True) outputs = model(**inputs) logits_per_image = outputs.logits_per_image # this is the image-text similarity score probs = logits_per_image.softmax(dim=1) # probs: [[1.1419e-02, 1.0478e-02, 5.2018e-04, 9.7758e-01]] ``` However, if you are not satisfied with only using the API, feel free to check our github repo https://github.com/OFA-Sys/Chinese-CLIP for more details about training and inference. 
<br><br> ## Results **MUGE Text-to-Image Retrieval**: <table border="1" width="100%"> <tr align="center"> <th>Setup</th><th colspan="4">Zero-shot</th><th colspan="4">Finetune</th> </tr> <tr align="center"> <td>Metric</td><td>R@1</td><td>R@5</td><td>R@10</td><td>MR</td><td>R@1</td><td>R@5</td><td>R@10</td><td>MR</td> </tr> <tr align="center"> <td width="120%">Wukong</td><td>42.7</td><td>69.0</td><td>78.0</td><td>63.2</td><td>52.7</td><td>77.9</td><td>85.6</td><td>72.1</td> </tr> <tr align="center"> <td width="120%">R2D2</td><td>49.5</td><td>75.7</td><td>83.2</td><td>69.5</td><td>60.1</td><td>82.9</td><td>89.4</td><td>77.5</td> </tr> <tr align="center"> <td width="120%">CN-CLIP</td><td>63.0</td><td>84.1</td><td>89.2</td><td>78.8</td><td>68.9</td><td>88.7</td><td>93.1</td><td>83.6</td> </tr> </table> <br> **Flickr30K-CN Retrieval**: <table border="1" width="120%"> <tr align="center"> <th>Task</th><th colspan="6">Text-to-Image</th><th colspan="6">Image-to-Text</th> </tr> <tr align="center"> <th>Setup</th><th colspan="3">Zero-shot</th><th colspan="3">Finetune</th><th colspan="3">Zero-shot</th><th colspan="3">Finetune</th> </tr> <tr align="center"> <td>Metric</td><td>R@1</td><td>R@5</td><td>R@10</td><td>R@1</td><td>R@5</td><td>R@10</td><td>R@1</td><td>R@5</td><td>R@10</td><td>R@1</td><td>R@5</td><td>R@10</td> </tr> <tr align="center"> <td width="120%">Wukong</td><td>51.7</td><td>78.9</td><td>86.3</td><td>77.4</td><td>94.5</td><td>97.0</td><td>76.1</td><td>94.8</td><td>97.5</td><td>92.7</td><td>99.1</td><td>99.6</td> </tr> <tr align="center"> <td width="120%">R2D2</td><td>60.9</td><td>86.8</td><td>92.7</td><td>84.4</td><td>96.7</td><td>98.4</td><td>77.6</td><td>96.7</td><td>98.9</td><td>95.6</td><td>99.8</td><td>100.0</td> </tr> <tr align="center"> <td width="120%">CN-CLIP</td><td>71.2</td><td>91.4</td><td>95.5</td><td>83.8</td><td>96.9</td><td>98.6</td><td>81.6</td><td>97.5</td><td>98.8</td><td>95.3</td><td>99.7</td><td>100.0</td> </tr> </table> <br> **COCO-CN Retrieval**: <table border="1" width="100%"> <tr align="center"> <th>Task</th><th colspan="6">Text-to-Image</th><th colspan="6">Image-to-Text</th> </tr> <tr align="center"> <th>Setup</th><th colspan="3">Zero-shot</th><th colspan="3">Finetune</th><th colspan="3">Zero-shot</th><th colspan="3">Finetune</th> </tr> <tr align="center"> <td>Metric</td><td>R@1</td><td>R@5</td><td>R@10</td><td>R@1</td><td>R@5</td><td>R@10</td><td>R@1</td><td>R@5</td><td>R@10</td><td>R@1</td><td>R@5</td><td>R@10</td> </tr> <tr align="center"> <td width="120%">Wukong</td><td>53.4</td><td>80.2</td><td>90.1</td><td>74.0</td><td>94.4</td><td>98.1</td><td>55.2</td><td>81.0</td><td>90.6</td><td>73.3</td><td>94.0</td><td>98.0</td> </tr> <tr align="center"> <td width="120%">R2D2</td><td>56.4</td><td>85.0</td><td>93.1</td><td>79.1</td><td>96.5</td><td>98.9</td><td>63.3</td><td>89.3</td><td>95.7</td><td>79.3</td><td>97.1</td><td>98.7</td> </tr> <tr align="center"> <td width="120%">CN-CLIP</td><td>69.2</td><td>89.9</td><td>96.1</td><td>81.5</td><td>96.9</td><td>99.1</td><td>63.0</td><td>86.6</td><td>92.9</td><td>83.5</td><td>97.3</td><td>99.2</td> </tr> </table> <br> **Zero-shot Image Classification**: <table border="1" width="100%"> <tr align="center"> <th>Task</th><th>CIFAR10</th><th>CIFAR100</th><th>DTD</th><th>EuroSAT</th><th>FER</th><th>FGVC</th><th>KITTI</th><th>MNIST</th><th>PC</th><th>VOC</th> </tr> <tr align="center"> <td 
width="150%">GIT</td><td>88.5</td><td>61.1</td><td>42.9</td><td>43.4</td><td>41.4</td><td>6.7</td><td>22.1</td><td>68.9</td><td>50.0</td><td>80.2</td> </tr> <tr align="center"> <td width="150%">ALIGN</td><td>94.9</td><td>76.8</td><td>66.1</td><td>52.1</td><td>50.8</td><td>25.0</td><td>41.2</td><td>74.0</td><td>55.2</td><td>83.0</td> </tr> <tr align="center"> <td width="150%">CLIP</td><td>94.9</td><td>77.0</td><td>56.0</td><td>63.0</td><td>48.3</td><td>33.3</td><td>11.5</td><td>79.0</td><td>62.3</td><td>84.0</td> </tr> <tr align="center"> <td width="150%">Wukong</td><td>95.4</td><td>77.1</td><td>40.9</td><td>50.3</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td> </tr> <tr align="center"> <td width="150%">CN-CLIP</td><td>96.0</td><td>79.7</td><td>51.2</td><td>52.0</td><td>55.1</td><td>26.2</td><td>49.9</td><td>79.4</td><td>63.5</td><td>84.9</td> </tr> </table> <br> ## Citation If you find Chinese CLIP helpful, feel free to cite our paper. Thanks for your support! ``` @article{chinese-clip, title={Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese}, author={Yang, An and Pan, Junshu and Lin, Junyang and Men, Rui and Zhang, Yichang and Zhou, Jingren and Zhou, Chang}, journal={arXiv preprint arXiv:2211.01335}, year={2022} } ``` <br>
7,630
[ [ -0.0494384765625, -0.042266845703125, 0.002201080322265625, 0.0253753662109375, -0.0244293212890625, 0.0001748800277709961, -0.01335906982421875, -0.0310821533203125, 0.03302001953125, 0.0002574920654296875, -0.060333251953125, -0.024566650390625, -0.04104614257...
timm/tf_efficientnetv2_m.in1k
2023-04-27T21:43:50.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2104.00298", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/tf_efficientnetv2_m.in1k
0
1,050
timm
2022-12-13T00:17:33
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for tf_efficientnetv2_m.in1k An EfficientNet-v2 image classification model. Trained on ImageNet-1k in TensorFlow by paper authors, ported to PyTorch by Ross Wightman. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 54.1 - GMACs: 15.9 - Activations (M): 57.5 - Image size: train = 384 x 384, test = 480 x 480 - **Papers:** - EfficientNetV2: Smaller Models and Faster Training: https://arxiv.org/abs/2104.00298 - **Dataset:** ImageNet-1k - **Original:** https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('tf_efficientnetv2_m.in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'tf_efficientnetv2_m.in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 24, 192, 192]) # torch.Size([1, 48, 96, 96]) # torch.Size([1, 80, 48, 48]) # torch.Size([1, 176, 24, 24]) # torch.Size([1, 512, 12, 12]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'tf_efficientnetv2_m.in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 1280, 12, 12) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). 
## Citation ```bibtex @inproceedings{tan2021efficientnetv2, title={Efficientnetv2: Smaller models and faster training}, author={Tan, Mingxing and Le, Quoc}, booktitle={International conference on machine learning}, pages={10096--10106}, year={2021}, organization={PMLR} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
4,070
[ [ -0.0266571044921875, -0.033782958984375, -0.00481414794921875, 0.0067291259765625, -0.024322509765625, -0.031982421875, -0.0196380615234375, -0.02813720703125, 0.01308441162109375, 0.02874755859375, -0.0253448486328125, -0.048095703125, -0.054962158203125, -...
lanwuwei/GigaBERT-v3-Arabic-and-English
2023-01-08T00:59:27.000Z
[ "transformers", "pytorch", "jax", "bert", "feature-extraction", "en", "ar", "multilingual", "dataset:gigaword", "dataset:oscar", "dataset:wikipedia", "endpoints_compatible", "region:us" ]
feature-extraction
lanwuwei
null
null
lanwuwei/GigaBERT-v3-Arabic-and-English
0
1,049
transformers
2022-03-02T23:29:05
--- language: - en - ar - multilingual datasets: - gigaword - oscar - wikipedia --- ## GigaBERT-v3 GigaBERT-v3 is a customized bilingual BERT for English and Arabic. It was pre-trained on a large-scale corpus (Gigaword+Oscar+Wikipedia) with ~10B tokens, showing state-of-the-art zero-shot transfer performance from English to Arabic on information extraction (IE) tasks. More details can be found in the following paper: @inproceedings{lan2020gigabert, author = {Lan, Wuwei and Chen, Yang and Xu, Wei and Ritter, Alan}, title = {An Empirical Study of Pre-trained Transformers for Arabic Information Extraction}, booktitle = {Proceedings of The 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)}, year = {2020} } ## Usage ``` from transformers import BertTokenizer, BertForTokenClassification tokenizer = BertTokenizer.from_pretrained("lanwuwei/GigaBERT-v3-Arabic-and-English", do_lower_case=True) model = BertForTokenClassification.from_pretrained("lanwuwei/GigaBERT-v3-Arabic-and-English") ``` More code examples can be found [here](https://github.com/lanwuwei/GigaBERT).
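Since this checkpoint is a base (feature-extraction) model, the token-classification head created above is randomly initialised until it is fine-tuned on a downstream NER/IE dataset. A minimal forward-pass sketch, with a hypothetical `num_labels=9` BIO label set used purely for illustration:

```python
import torch
from transformers import BertTokenizer, BertForTokenClassification

tokenizer = BertTokenizer.from_pretrained("lanwuwei/GigaBERT-v3-Arabic-and-English", do_lower_case=True)
# num_labels=9 is a placeholder label count; the classification head must be fine-tuned before real use.
model = BertForTokenClassification.from_pretrained("lanwuwei/GigaBERT-v3-Arabic-and-English", num_labels=9)

inputs = tokenizer("GigaBERT was trained on English and Arabic text.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, num_labels)
print(logits.argmax(dim=-1))  # per-token label ids (only meaningful after fine-tuning)
```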
1,101
[ [ -0.047271728515625, -0.05023193359375, 0.01422119140625, 0.0174102783203125, -0.0294342041015625, 0.010833740234375, -0.01514434814453125, -0.05078125, 0.0209503173828125, 0.019317626953125, -0.0278472900390625, -0.0372314453125, -0.050079345703125, 0.004940...
digio/Twitter4SSE
2021-12-17T09:01:29.000Z
[ "transformers", "pytorch", "roberta", "feature-extraction", "Pytorch", "Sentence Transformers", "Transformers", "sentence-similarity", "en", "arxiv:2110.02030", "license:apache-2.0", "endpoints_compatible", "region:us" ]
sentence-similarity
digio
null
null
digio/Twitter4SSE
5
1,048
transformers
2022-03-02T23:29:05
--- language: - en pipeline_tag: sentence-similarity tags: - Pytorch - Sentence Transformers - Transformers license: "apache-2.0" --- # Twitter4SSE This model maps texts to 768-dimensional dense embeddings that encode semantic similarity. It was trained with Multiple Negatives Ranking Loss (MNRL) on a Twitter dataset. It was initialized from [BERTweet](https://huggingface.co/vinai/bertweet-base) and trained with [Sentence-transformers](https://www.sbert.net/). ## Usage The model is easiest to use with the sentence-transformers library ``` pip install -U sentence-transformers ``` ``` from sentence_transformers import SentenceTransformer sentences = ["This is the first tweet", "This is the second tweet"] model = SentenceTransformer('digio/Twitter4SSE') embeddings = model.encode(sentences) print(embeddings) ``` Without the sentence-transformers library, please refer to [this repository](https://huggingface.co/sentence-transformers) for detailed instructions on how to use Sentence Transformers on Hugging Face. ## Citing & Authors The official paper [Exploiting Twitter as Source of Large Corpora of Weakly Similar Pairs for Semantic Sentence Embeddings](https://arxiv.org/abs/2110.02030) will be presented at EMNLP 2021. Further details will be available soon. ``` @inproceedings{di-giovanni-brambilla-2021-exploiting, title = "Exploiting {T}witter as Source of Large Corpora of Weakly Similar Pairs for Semantic Sentence Embeddings", author = "Di Giovanni, Marco and Brambilla, Marco", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.780", pages = "9902--9910", } ``` The official code is available on [GitHub](https://github.com/marco-digio/Twitter4SSE)
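For reference, a plain-`transformers` sketch of the usual sentence-transformers recipe is given below; it assumes mean pooling over the token embeddings (the common default), so check the repository's pooling configuration for the exact mode used by this model:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("digio/Twitter4SSE")
model = AutoModel.from_pretrained("digio/Twitter4SSE")

sentences = ["This is the first tweet", "This is the second tweet"]
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state  # (batch, seq_len, 768)

# Mean pooling over non-padding tokens, weighted by the attention mask
mask = encoded["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
print(embeddings.shape)  # (2, 768)
```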
1,965
[ [ -0.0034847259521484375, -0.049224853515625, 0.041046142578125, 0.03668212890625, -0.0177001953125, 0.00640869140625, -0.0254058837890625, -0.0261383056640625, 0.0357666015625, 0.03497314453125, -0.04986572265625, -0.055145263671875, -0.044891357421875, 0.021...
valhalla/t5-small-qg-hl
2021-06-23T14:43:48.000Z
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "question-generation", "dataset:squad", "arxiv:1910.10683", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text2text-generation
valhalla
null
null
valhalla/t5-small-qg-hl
1
1,048
transformers
2022-03-02T23:29:05
--- datasets: - squad tags: - question-generation widget: - text: "<hl> 42 <hl> is the answer to life, the universe and everything. </s>" - text: "Python is a programming language. It is developed by <hl> Guido Van Rossum <hl>. </s>" - text: "Simple is better than <hl> complex <hl>. </s>" license: mit --- ## T5 for question-generation This is a [t5-small](https://arxiv.org/abs/1910.10683) model trained for the answer-aware question generation task. The answer spans are highlighted within the text with special highlight tokens. You can play with the model using the inference API: just highlight the answer spans with `<hl>` tokens and end the text with `</s>`. For example `<hl> 42 <hl> is the answer to life, the universe and everything. </s>` For more details see [this](https://github.com/patil-suraj/question_generation) repo. ### Model in action 🚀 You'll need to clone the [repo](https://github.com/patil-suraj/question_generation). [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb) ```python3 from pipelines import pipeline nlp = pipeline("question-generation") nlp("42 is the answer to life, universe and everything.") => [{'answer': '42', 'question': 'What is the answer to life, universe and everything?'}] ```
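If you prefer not to clone the repo, a minimal plain-`transformers` sketch using the widget input format above may work; note the upstream pipeline can additionally prepend a task prefix, so treat this as an approximation rather than the reference implementation:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("valhalla/t5-small-qg-hl")
model = AutoModelForSeq2SeqLM.from_pretrained("valhalla/t5-small-qg-hl")

# Highlight the answer span with <hl> tokens and end the text with </s>
text = "<hl> 42 <hl> is the answer to life, the universe and everything. </s>"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# expected to resemble: "What is the answer to life, the universe and everything?"
```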
1,367
[ [ -0.031402587890625, -0.0748291015625, 0.03009033203125, 0.012054443359375, -0.0082550048828125, -0.00037932395935058594, 0.0280609130859375, -0.01251983642578125, 0.01045989990234375, 0.037109375, -0.0732421875, -0.016510009765625, -0.005191802978515625, 0.0...
timm/eva_large_patch14_196.in22k_ft_in1k
2023-03-31T06:11:02.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-22k", "arxiv:2211.07636", "license:mit", "region:us" ]
image-classification
timm
null
null
timm/eva_large_patch14_196.in22k_ft_in1k
0
1,048
timm
2022-12-22T07:05:00
--- tags: - image-classification - timm library_name: timm license: mit datasets: - imagenet-1k - imagenet-22k --- # Model card for eva_large_patch14_196.in22k_ft_in1k An EVA image classification model. Pretrained on ImageNet-22k with masked image modeling (using EVA-CLIP as a MIM teacher) and fine-tuned on ImageNet-1k by paper authors. NOTE: `timm` checkpoints are float32 for consistency with other models. Original checkpoints are float16 or bfloat16 in some cases, see originals if that's preferred. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 304.1 - GMACs: 61.6 - Activations (M): 63.5 - Image size: 196 x 196 - **Papers:** - EVA: Exploring the Limits of Masked Visual Representation Learning at Scale: https://arxiv.org/abs/2211.07636 - **Pretrain Dataset:** ImageNet-22k - **Dataset:** ImageNet-1k - **Original:** - https://github.com/baaivision/EVA - https://huggingface.co/BAAI/EVA ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('eva_large_patch14_196.in22k_ft_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'eva_large_patch14_196.in22k_ft_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 197, 1024) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). 
|model |top1 |top5 |param_count|img_size| |-----------------------------------------------|------|------|-----------|--------| |eva02_large_patch14_448.mim_m38m_ft_in22k_in1k |90.054|99.042|305.08 |448 | |eva02_large_patch14_448.mim_in22k_ft_in22k_in1k|89.946|99.01 |305.08 |448 | |eva_giant_patch14_560.m30m_ft_in22k_in1k |89.792|98.992|1014.45 |560 | |eva02_large_patch14_448.mim_in22k_ft_in1k |89.626|98.954|305.08 |448 | |eva02_large_patch14_448.mim_m38m_ft_in1k |89.57 |98.918|305.08 |448 | |eva_giant_patch14_336.m30m_ft_in22k_in1k |89.56 |98.956|1013.01 |336 | |eva_giant_patch14_336.clip_ft_in1k |89.466|98.82 |1013.01 |336 | |eva_large_patch14_336.in22k_ft_in22k_in1k |89.214|98.854|304.53 |336 | |eva_giant_patch14_224.clip_ft_in1k |88.882|98.678|1012.56 |224 | |eva02_base_patch14_448.mim_in22k_ft_in22k_in1k |88.692|98.722|87.12 |448 | |eva_large_patch14_336.in22k_ft_in1k |88.652|98.722|304.53 |336 | |eva_large_patch14_196.in22k_ft_in22k_in1k |88.592|98.656|304.14 |196 | |eva02_base_patch14_448.mim_in22k_ft_in1k |88.23 |98.564|87.12 |448 | |eva_large_patch14_196.in22k_ft_in1k |87.934|98.504|304.14 |196 | |eva02_small_patch14_336.mim_in22k_ft_in1k |85.74 |97.614|22.13 |336 | |eva02_tiny_patch14_336.mim_in22k_ft_in1k |80.658|95.524|5.76 |336 | ## Citation ```bibtex @article{EVA, title={EVA: Exploring the Limits of Masked Visual Representation Learning at Scale}, author={Fang, Yuxin and Wang, Wen and Xie, Binhui and Sun, Quan and Wu, Ledell and Wang, Xinggang and Huang, Tiejun and Wang, Xinlong and Cao, Yue}, journal={arXiv preprint arXiv:2211.07636}, year={2022} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
4,997
[ [ -0.049346923828125, -0.0300140380859375, 0.00760650634765625, 0.00994873046875, -0.02099609375, 0.001468658447265625, -0.0140838623046875, -0.0300140380859375, 0.044403076171875, 0.03271484375, -0.036041259765625, -0.053985595703125, -0.052734375, 0.00414657...
Tanrei/GPTSAN-japanese
2023-04-21T19:04:49.000Z
[ "transformers", "pytorch", "safetensors", "gptsan-japanese", "text2text-generation", "text-generation", "ja", "arxiv:1910.10683", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
text-generation
Tanrei
null
null
Tanrei/GPTSAN-japanese
9
1,048
transformers
2023-01-06T05:41:12
--- license: mit language: - ja pipeline_tag: text-generation --- # Model Card for Tanrei/GPTSAN-japanese ![GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/logo-bk.png?raw=true) General-purpose Switch Transformer-based Japanese language model GPTSAN has some unique features. It has a Prefix-LM model structure. It works as a shifted Masked Language Model for Prefix Input tokens. Un-prefixed inputs behave like normal generative models. The Spout vector is a GPTSAN-specific input. Spout is pre-trained with random inputs, but you can specify a class of text or an arbitrary vector during fine-tuning. This allows you to indicate the tendency of the generated text. GPTSAN has a sparse Feed Forward based on Switch-Transformer. You can also add other layers and train them partially. See the original [GPTSAN repository](https://github.com/tanreinama/GPTSAN/) for details. ## Text Generation ```python >>> from transformers import AutoModel, AutoTokenizer, trainer_utils >>> device = "cuda" >>> model = AutoModel.from_pretrained("Tanrei/GPTSAN-japanese").to(device) >>> tokenizer = AutoTokenizer.from_pretrained("Tanrei/GPTSAN-japanese") >>> x_token = tokenizer("織田信長は、", return_tensors="pt") >>> trainer_utils.set_seed(30) >>> input_ids = x_token.input_ids.to(device) >>> gen_token = model.generate(input_ids, max_new_tokens=50) >>> tokenizer.decode(gen_token[0]) "織田信長は、政治・軍事の中枢まで掌握した政治家であり、日本史上類を見ない驚異的な軍事侵攻を続け..." ``` ## Text Generation with Prefix-LM model ```python >>> from transformers import AutoModel, AutoTokenizer, trainer_utils >>> device = "cuda" >>> model = AutoModel.from_pretrained("Tanrei/GPTSAN-japanese").to(device) >>> tokenizer = AutoTokenizer.from_pretrained("Tanrei/GPTSAN-japanese") >>> x_token = tokenizer("", prefix_text="織田信長は、", return_tensors="pt") >>> trainer_utils.set_seed(30) >>> input_ids = x_token.input_ids.to(device) >>> token_type_ids = x_token.token_type_ids.to(device) >>> gen_token = model.generate(input_ids, token_type_ids=token_type_ids, max_new_tokens=50) >>> tokenizer.decode(gen_token[0]) "織田信長は、政治・外交で数々の戦果を上げるが、1568年からは、いわゆる本能寺の変で細川晴元に暗殺される..." ``` ## Masked Language Model And Text Generation ```python >>> from transformers import AutoModel, AutoTokenizer, trainer_utils >>> device = "cuda" >>> model = AutoModel.from_pretrained("Tanrei/GPTSAN-japanese").to(device) >>> tokenizer = AutoTokenizer.from_pretrained("Tanrei/GPTSAN-japanese") >>> x_token = tokenizer( "", prefix_text="武田信玄は、<|inputmask|>時代ファンならぜひ押さえ<|inputmask|>きたい名将の一人。", return_tensors="pt" ) >>> trainer_utils.set_seed(30) >>> input_ids = x_token.input_ids.to(device) >>> token_type_ids = x_token.token_type_ids.to(device) >>> out_lm_token = model.generate(input_ids, token_type_ids=token_type_ids, max_new_tokens=50) >>> out_mlm_token = model(input_ids, token_type_ids=token_type_ids).logits.argmax(axis=-1) >>> tokenizer.decode(out_mlm_token[0]) "武田信玄は、戦国時代ファンならぜひ押さえておきたい名将の一人。" >>> tokenizer.decode(out_lm_token[0][input_ids.shape[1] :]) "武田氏の三代に渡った武田家のひとり\n甲斐市に住む、日本史上最大の戦国大名。..." ``` # Model Details ## Model Description Japanese language model using Switch Transformer. It has the same structure as the model introduced as `Prefix LM` in the T5 paper, and works with both Text Generation and Masked Language Model. - **Developed by:** Toshiyuki Sakamoto (tanreinama) - **Model type:** Switch Transformer - **Language(s) (NLP):** Japanese - **License:** MIT License ### Prefix-LM Model GPTSAN has the structure of the model named Prefix-LM in the [T5 paper](https://arxiv.org/abs/1910.10683). 
(The original GPTSAN repository calls it `hybrid`) In GPTSAN, the `Prefix` part of Prefix-LM, that is, the input position that can be referenced by both tokens, can be specified with any length. Arbitrary lengths can also be specified differently for each batch. This length applies to the text entered in `prefix_text` for the tokenizer. The tokenizer returns the mask of the `Prefix` part of Prefix-LM as `token_type_ids`. The model treats the part where `token_type_ids` is 1 as a `Prefix` part, that is, the input can refer to both tokens before and after. ### Spout Vector A Spout Vector is a special vector for controlling text generation. This vector is treated as the first embedding in self-attention to bring extraneous attention to the generated tokens. In this pre-trained model, the Spout Vector is a 128-dimensional vector that passes through 8 fully connected layers in the model and is projected into the space acting as external attention. The Spout Vector projected by the fully connected layer is split to be passed to all self-attentions. ## Model Sources <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/tanreinama/GPTSAN
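A sketch of conditioning generation on a Spout Vector is given below. It assumes the `transformers` implementation of this model accepts a `spout` tensor of shape `(batch_size, 128)` (i.e. `config.d_spout`) as described above; a random vector is used purely for illustration, whereas in practice a class-conditioned or learned vector obtained from fine-tuning would be passed:

```python
import torch
from transformers import AutoModel, AutoTokenizer, trainer_utils

device = "cuda"
model = AutoModel.from_pretrained("Tanrei/GPTSAN-japanese").to(device)
tokenizer = AutoTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")

x_token = tokenizer("織田信長は、", return_tensors="pt")
trainer_utils.set_seed(30)

# Hypothetical Spout Vector: a random 128-dimensional input, matching the pre-training setup.
spout = torch.rand((1, 128)).to(device)

gen_token = model.generate(x_token.input_ids.to(device), spout=spout, max_new_tokens=50)
print(tokenizer.decode(gen_token[0]))
```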
4,748
[ [ -0.0278167724609375, -0.05572509765625, 0.022216796875, 0.022430419921875, -0.03729248046875, -0.006549835205078125, -0.023223876953125, -0.0168609619140625, 0.015228271484375, 0.01258087158203125, -0.0548095703125, -0.039581298828125, -0.04888916015625, 0.0...
google/owlv2-large-patch14
2023-10-23T09:17:00.000Z
[ "transformers", "pytorch", "owlv2", "zero-shot-object-detection", "vision", "object-detection", "arxiv:2306.09683", "license:apache-2.0", "region:us" ]
object-detection
google
null
null
google/owlv2-large-patch14
0
1,048
transformers
2023-10-13T10:57:17
--- license: apache-2.0 tags: - vision - object-detection inference: false --- # Model Card: OWLv2 ## Model Details The OWLv2 model (short for Open-World Localization) was proposed in [Scaling Open-Vocabulary Object Detection](https://arxiv.org/abs/2306.09683) by Matthias Minderer, Alexey Gritsenko, Neil Houlsby. OWLv2, like OWL-ViT, is a zero-shot text-conditioned object detection model that can be used to query an image with one or multiple text queries. The model uses CLIP as its multi-modal backbone, with a ViT-like Transformer to get visual features and a causal language model to get the text features. To use CLIP for detection, OWL-ViT removes the final token pooling layer of the vision model and attaches a lightweight classification and box head to each transformer output token. Open-vocabulary classification is enabled by replacing the fixed classification layer weights with the class-name embeddings obtained from the text model. The authors first train CLIP from scratch and fine-tune it end-to-end with the classification and box heads on standard detection datasets using a bipartite matching loss. One or multiple text queries per image can be used to perform zero-shot text-conditioned object detection. ### Model Date June 2023 ### Model Type The model uses a CLIP backbone with a ViT-L/14 Transformer architecture as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss. The CLIP backbone is trained from scratch and fine-tuned together with the box and class prediction heads with an object detection objective. ### Documents - [OWLv2 Paper](https://arxiv.org/abs/2306.09683) ### Use with Transformers ```python3 import requests from PIL import Image import torch from transformers import Owlv2Processor, Owlv2ForObjectDetection processor = Owlv2Processor.from_pretrained("google/owlv2-large-patch14") model = Owlv2ForObjectDetection.from_pretrained("google/owlv2-large-patch14") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) texts = [["a photo of a cat", "a photo of a dog"]] inputs = processor(text=texts, images=image, return_tensors="pt") outputs = model(**inputs) # Target image sizes (height, width) to rescale box predictions [batch_size, 2] target_sizes = torch.Tensor([image.size[::-1]]) # Convert outputs (bounding boxes and class logits) to COCO API results = processor.post_process_object_detection(outputs=outputs, threshold=0.1, target_sizes=target_sizes) i = 0 # Retrieve predictions for the first image for the corresponding text queries text = texts[i] boxes, scores, labels = results[i]["boxes"], results[i]["scores"], results[i]["labels"] # Print detected objects and rescaled box coordinates for box, score, label in zip(boxes, scores, labels): box = [round(i, 2) for i in box.tolist()] print(f"Detected {text[label]} with confidence {round(score.item(), 3)} at location {box}") ``` ## Model Use ### Intended Use The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, text-conditioned object detection. We also hope it can be used for interdisciplinary studies of the potential impact of such models, especially in areas that commonly require identifying objects whose label is unavailable during training. #### Primary intended uses The primary intended users of these models are AI researchers. 
We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models. ## Data The CLIP backbone of the model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/). A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet. The prediction heads of OWL-ViT, along with the CLIP backbone, are fine-tuned on publicly available object detection datasets such as [COCO](https://cocodataset.org/#home) and [OpenImages](https://storage.googleapis.com/openimages/web/index.html). (to be updated for v2) ### BibTeX entry and citation info ```bibtex @misc{minderer2023scaling, title={Scaling Open-Vocabulary Object Detection}, author={Matthias Minderer and Alexey Gritsenko and Neil Houlsby}, year={2023}, eprint={2306.09683}, archivePrefix={arXiv}, primaryClass={cs.CV} } ```
4,810
[ [ -0.0242919921875, -0.05120849609375, 0.0257720947265625, -0.01412200927734375, -0.02142333984375, -0.03558349609375, -0.0032958984375, -0.06805419921875, 0.0025997161865234375, 0.03155517578125, -0.0240631103515625, -0.048004150390625, -0.0479736328125, 0.01...
facebook/xglm-2.9B
2023-01-24T16:35:40.000Z
[ "transformers", "pytorch", "xglm", "text-generation", "multilingual", "en", "ru", "zh", "de", "es", "fr", "ja", "it", "pt", "el", "ko", "fi", "id", "tr", "ar", "vi", "th", "bg", "ca", "hi", "et", "bn", "ta", "ur", "sw", "te", "eu", "my", "ht", "qu"...
text-generation
facebook
null
null
facebook/xglm-2.9B
7
1,047
transformers
2022-03-02T23:29:05
--- language: - multilingual - en - ru - zh - de - es - fr - ja - it - pt - el - ko - fi - id - tr - ar - vi - th - bg - ca - hi - et - bn - ta - ur - sw - te - eu - my - ht - qu license: mit thumbnail: https://huggingface.co/front/thumbnails/facebook.png inference: false --- # XGLM-2.9B XGLM-2.9B is a multilingual autoregressive language model (with 2.9 billion parameters) trained on a balanced corpus of a diverse set of languages totaling 500 billion sub-tokens. It was introduced in the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin\*, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li\* (\*Equal Contribution). The original implementation was released in [this repository](https://github.com/pytorch/fairseq/tree/main/examples/xglm). ## Training Data Statistics The training data statistics of XGLM-2.9B is shown in the table below. | ISO-639-1| family | name | # tokens | ratio | ratio w/ lowRes upsampling | |:--------|:-----------------|:------------------------|-------------:|------------:|-------------:| | en | Indo-European | English | 803526736124 | 0.489906 | 0.3259 | | ru | Indo-European | Russian | 147791898098 | 0.0901079 | 0.0602 | | zh | Sino-Tibetan | Chinese | 132770494630 | 0.0809494 | 0.0483 | | de | Indo-European | German | 89223707856 | 0.0543992 | 0.0363 | | es | Indo-European | Spanish | 87303083105 | 0.0532282 | 0.0353 | | fr | Indo-European | French | 77419639775 | 0.0472023 | 0.0313 | | ja | Japonic | Japanese | 66054364513 | 0.040273 | 0.0269 | | it | Indo-European | Italian | 41930465338 | 0.0255648 | 0.0171 | | pt | Indo-European | Portuguese | 36586032444 | 0.0223063 | 0.0297 | | el | Indo-European | Greek (modern) | 28762166159 | 0.0175361 | 0.0233 | | ko | Koreanic | Korean | 20002244535 | 0.0121953 | 0.0811 | | fi | Uralic | Finnish | 16804309722 | 0.0102455 | 0.0681 | | id | Austronesian | Indonesian | 15423541953 | 0.00940365 | 0.0125 | | tr | Turkic | Turkish | 12413166065 | 0.00756824 | 0.0101 | | ar | Afro-Asiatic | Arabic | 12248607345 | 0.00746791 | 0.0099 | | vi | Austroasiatic | Vietnamese | 11199121869 | 0.00682804 | 0.0091 | | th | Tai–Kadai | Thai | 10842172807 | 0.00661041 | 0.044 | | bg | Indo-European | Bulgarian | 9703797869 | 0.00591635 | 0.0393 | | ca | Indo-European | Catalan | 7075834775 | 0.0043141 | 0.0287 | | hi | Indo-European | Hindi | 3448390110 | 0.00210246 | 0.014 | | et | Uralic | Estonian | 3286873851 | 0.00200399 | 0.0133 | | bn | Indo-European | Bengali, Bangla | 1627447450 | 0.000992245 | 0.0066 | | ta | Dravidian | Tamil | 1476973397 | 0.000900502 | 0.006 | | ur | Indo-European | Urdu | 1351891969 | 0.000824241 | 0.0055 | | sw | Niger–Congo | Swahili | 907516139 | 0.000553307 | 0.0037 | | te | Dravidian | Telugu | 689316485 | 0.000420272 | 0.0028 | | eu | Language isolate | Basque | 105304423 | 6.42035e-05 | 0.0043 | | my | Sino-Tibetan | Burmese | 101358331 | 6.17976e-05 | 0.003 | | ht | Creole | Haitian, Haitian Creole | 86584697 | 5.27902e-05 | 0.0035 | | qu | Quechuan | Quechua | 3236108 | 1.97304e-06 | 0.0001 | ## Model card For intended usage of the model, please refer to the [model card](https://github.com/pytorch/fairseq/blob/main/examples/xglm/model_card.md) released by the XGLM-2.9B development team. 
## Example (COPA) The following snippet shows how to evaluate our models (GPT-3 style, zero-shot) on the Choice of Plausible Alternatives (COPA) task, using examples in English, Chinese and Hindi. ```python import torch import torch.nn.functional as F from transformers import XGLMTokenizer, XGLMForCausalLM tokenizer = XGLMTokenizer.from_pretrained("facebook/xglm-2.9B") model = XGLMForCausalLM.from_pretrained("facebook/xglm-2.9B") data_samples = { 'en': [ { "premise": "I wanted to conserve energy.", "choice1": "I swept the floor in the unoccupied room.", "choice2": "I shut off the light in the unoccupied room.", "question": "effect", "label": "1" }, { "premise": "The flame on the candle went out.", "choice1": "I blew on the wick.", "choice2": "I put a match to the wick.", "question": "cause", "label": "0" } ], 'zh': [ { "premise": "我想节约能源。", "choice1": "我在空着的房间里扫了地板。", "choice2": "我把空房间里的灯关了。", "question": "effect", "label": "1" }, { "premise": "蜡烛上的火焰熄灭了。", "choice1": "我吹灭了灯芯。", "choice2": "我把一根火柴放在灯芯上。", "question": "cause", "label": "0" } ], 'hi': [ { "premise": "M te vle konsève enèji.", "choice1": "Mwen te fin baleye chanm lib la.", "choice2": "Mwen te femen limyè nan chanm lib la.", "question": "effect", "label": "1" }, { "premise": "Flam bouji a te etenn.", "choice1": "Mwen te soufle bouji a.", "choice2": "Mwen te limen mèch bouji a.", "question": "cause", "label": "0" } ] } def get_logprobs(prompt): inputs = tokenizer(prompt, return_tensors="pt") input_ids, output_ids = inputs["input_ids"], inputs["input_ids"][:, 1:] outputs = model(**inputs, labels=input_ids) logits = outputs.logits logprobs = torch.gather(F.log_softmax(logits, dim=2), 2, output_ids.unsqueeze(2)) return logprobs # Zero-shot evaluation for the Choice of Plausible Alternatives (COPA) task. # A return value of 0 indicates that the first alternative is more plausible, # while 1 indicates that the second alternative is more plausible. def COPA_eval(prompt, alternative1, alternative2): lprob1 = get_logprobs(prompt + "\n" + alternative1).sum() lprob2 = get_logprobs(prompt + "\n" + alternative2).sum() return 0 if lprob1 > lprob2 else 1 for lang in data_samples: for idx, example in enumerate(data_samples[lang]): predict = COPA_eval(example["premise"], example["choice1"], example["choice2"]) print(f'{lang}-{idx}', predict, example['label']) # en-0 1 1 # en-1 0 0 # zh-0 1 1 # zh-1 0 0 # hi-0 1 1 # hi-1 0 0 ```
7,585
[ [ -0.0304718017578125, -0.0445556640625, 0.02203369140625, -0.00016891956329345703, -0.01277923583984375, 0.00445556640625, -0.012176513671875, -0.02850341796875, 0.018585205078125, 0.00974273681640625, -0.040069580078125, -0.056610107421875, -0.037017822265625, ...
bigcode/octocoder
2023-08-17T18:06:53.000Z
[ "transformers", "pytorch", "safetensors", "code", "text-generation", "dataset:bigcode/commitpackft", "dataset:bigcode/oasst-octopack", "arxiv:2308.07124", "license:bigcode-openrail-m", "model-index", "endpoints_compatible", "has_space", "region:us" ]
text-generation
bigcode
null
null
bigcode/octocoder
54
1,047
transformers
2023-07-23T19:03:41
--- pipeline_tag: text-generation inference: true widget: - text: 'Question: Please write a function in Python that performs bubble sort.\n\nAnswer:' example_title: Bubble sort group: Python license: bigcode-openrail-m datasets: - bigcode/commitpackft - bigcode/oasst-octopack metrics: - code_eval library_name: transformers tags: - code model-index: - name: OctoCoder results: - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesize Python metrics: - name: pass@1 type: pass@1 value: 46.2 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesize JavaScript metrics: - name: pass@1 type: pass@1 value: 39.2 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesize Java metrics: - name: pass@1 type: pass@1 value: 38.2 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesize Go metrics: - name: pass@1 type: pass@1 value: 30.4 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesize C++ metrics: - name: pass@1 type: pass@1 value: 35.6 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesize Rust metrics: - name: pass@1 type: pass@1 value: 23.4 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesize Average metrics: - name: pass@1 type: pass@1 value: 35.5 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix Python metrics: - name: pass@1 type: pass@1 value: 30.4 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix JavaScript metrics: - name: pass@1 type: pass@1 value: 28.4 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix Java metrics: - name: pass@1 type: pass@1 value: 30.6 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix Go metrics: - name: pass@1 type: pass@1 value: 30.2 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix C++ metrics: - name: pass@1 type: pass@1 value: 26.1 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix Rust metrics: - name: pass@1 type: pass@1 value: 16.5 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix Average metrics: - name: pass@1 type: pass@1 value: 27.0 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain Python metrics: - name: pass@1 type: pass@1 value: 35.1 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain JavaScript metrics: - name: pass@1 type: pass@1 value: 24.5 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain Java metrics: - name: pass@1 type: pass@1 value: 27.3 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain Go metrics: - name: pass@1 type: pass@1 value: 21.1 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain C++ metrics: - name: pass@1 type: pass@1 value: 24.1 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain Rust metrics: - name: pass@1 
type: pass@1 value: 14.8 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain Average metrics: - name: pass@1 type: pass@1 value: 24.5 verified: false --- ![Octopack](https://github.com/bigcode-project/octopack/blob/31f3320f098703c7910e43492c39366eeea68d83/banner.png?raw=true) # Table of Contents 1. [Model Summary](#model-summary) 2. [Use](#use) 3. [Training](#training) 4. [Citation](#citation) # Model Summary > OctoCoder is an instruction tuned model with 15.5B parameters created by finetuning StarCoder on CommitPackFT & OASST as described in the OctoPack paper. - **Repository:** [bigcode-project/octopack](https://github.com/bigcode-project/octopack) - **Paper:** [OctoPack: Instruction Tuning Code Large Language Models](https://arxiv.org/abs/2308.07124) - **Languages:** 80+ Programming languages - **OctoPack🐙🎒:** <table> <tr> <th>Data</t> <th><a href=https://huggingface.co/datasets/bigcode/commitpack>CommitPack</a></th> <td>4TB of GitHub commits across 350 programming languages</td> </tr> <tr> <th></t> <th><a href=https://huggingface.co/datasets/bigcode/commitpackft>CommitPackFT</a></th> <td>Filtered version of CommitPack for high-quality commit messages that resemble instructions</td> </tr> <tr> <th>Model</t> <th><a href=https://huggingface.co/bigcode/octocoder>OctoCoder</a></th> <td>StarCoder (16B parameters) instruction tuned on CommitPackFT + OASST</td> </tr> <tr> <th></t> <th><a href=https://huggingface.co/bigcode/octogeex>OctoGeeX</a></th> <td>CodeGeeX2 (6B parameters) instruction tuned on CommitPackFT + OASST</td> </tr> <tr> <th>Evaluation&nbsp;&nbsp;</t> <th><a href=https://huggingface.co/datasets/bigcode/humanevalpack>HumanEvalPack</a></th> <td>Extension of OpenAI's HumanEval to cover 3 scenarios across 6 languages</td> </tr> </table> # Use ## Intended use The model follows instructions provided in the input. 
You should always preface your input with "Question: " and finish it with "Answer:", for example: "Question: Please write a function in Python that performs bubble sort.\n\nAnswer:" **Feel free to share your generations in the Community tab!** ## Generation ```python # pip install -q transformers from transformers import AutoModelForCausalLM, AutoTokenizer checkpoint = "bigcode/octocoder" device = "cuda" # for GPU usage or "cpu" for CPU usage tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device) inputs = tokenizer.encode("Question: Please write a function in Python that performs bubble sort.\n\nAnswer:", return_tensors="pt").to(device) outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` # Training ## Model - **Architecture:** GPT-2 model with multi-query attention and Fill-in-the-Middle objective - **Steps:** 250k pretraining & 30 instruction tuning - **Pretraining tokens:** 1 trillion pretraining & 2M instruction tuning - **Precision:** bfloat16 ## Hardware - **Pretraining:** - **GPUs:** 512 Tesla A100 - **Training time:** 24 days - **Instruction tuning:** - **GPUs:** 8 Tesla A100 - **Training time:** 4 hours ## Software - **Orchestration:** [Megatron-LM/Transformers](https://github.com/bigcode-project/octopack#training) - **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch) # Citation ```bibtex @article{muennighoff2023octopack, title={OctoPack: Instruction Tuning Code Large Language Models}, author={Niklas Muennighoff and Qian Liu and Armel Zebaze and Qinkai Zheng and Binyuan Hui and Terry Yue Zhuo and Swayam Singh and Xiangru Tang and Leandro von Werra and Shayne Longpre}, journal={arXiv preprint arXiv:2308.07124}, year={2023} } ```
8,339
[ [ -0.04180908203125, -0.05731201171875, 0.004581451416015625, 0.01090240478515625, -0.01317596435546875, -0.01415252685546875, -0.0206298828125, -0.048553466796875, 0.0002620220184326172, 0.03564453125, -0.02227783203125, -0.038055419921875, -0.0499267578125, ...
IUseAMouse/PointConImageModel
2023-10-20T14:04:37.000Z
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dataset:IUseAMouse/PointConImages", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
IUseAMouse
null
null
IUseAMouse/PointConImageModel
0
1,047
diffusers
2023-10-20T13:41:44
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-4 datasets: - IUseAMouse/PointConImages tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- # Text-to-image finetuning - IUseAMouse/PointConImageModel This pipeline was finetuned from **CompVis/stable-diffusion-v1-4** on the **IUseAMouse/PointConImages** dataset. Below are some example images generated with the finetuned pipeline using the following prompts: ['Un patron donne un dossier à un employé']: ![val_imgs_grid](./val_imgs_grid.png) ## Pipeline usage You can use the pipeline like so: ```python from diffusers import DiffusionPipeline import torch pipeline = DiffusionPipeline.from_pretrained("IUseAMouse/PointConImageModel", torch_dtype=torch.float16) prompt = "Un patron donne un dossier à un employé" image = pipeline(prompt).images[0] image.save("my_image.png") ``` ## Training info These are the key hyperparameters used during training: * Epochs: 60 * Learning rate: 1e-05 * Batch size: 1 * Gradient accumulation steps: 4 * Image resolution: 512 * Mixed-precision: fp16 More information on all the CLI arguments and the environment are available on your [`wandb` run page](https://wandb.ai/acadys/text2image-fine-tune/runs/cpxo8qcw).
1,295
[ [ -0.042388916015625, -0.0465087890625, 0.0269775390625, 0.0199737548828125, -0.038604736328125, -0.03765869140625, -0.00716400146484375, 0.01031494140625, -0.01296234130859375, 0.05157470703125, -0.043212890625, -0.055145263671875, -0.0640869140625, -0.001991...
Ahmed9275/Vit-Cifar100
2022-05-19T01:26:45.000Z
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "dataset:cifar100", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
Ahmed9275
null
null
Ahmed9275/Vit-Cifar100
2
1,046
transformers
2022-05-18T22:16:08
--- license: apache-2.0 tags: - image-classification - generated_from_trainer datasets: - cifar100 metrics: - accuracy model-index: - name: vit-base-beans-demo-v5 results: - task: name: Image Classification type: image-classification dataset: name: Cifar100 type: cifar100 args: cifar100 metrics: - name: Accuracy type: accuracy value: 0.8985 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-beans-demo-v5 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Cifar100 dataset. It achieves the following results on the evaluation set: - Loss: 0.4420 - Accuracy: 0.8985 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.08 | 1.0 | 3125 | 0.6196 | 0.8262 | | 0.3816 | 2.0 | 6250 | 0.5322 | 0.8555 | | 0.1619 | 3.0 | 9375 | 0.4817 | 0.8765 | | 0.0443 | 4.0 | 12500 | 0.4420 | 0.8985 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.1 - Tokenizers 0.12.1
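Although the usage sections of this card are left empty, the checkpoint should load with the standard `transformers` image-classification pipeline; a minimal sketch (the image URL below is only an example input, not part of the original card):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Ahmed9275/Vit-Cifar100")
preds = classifier(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png"
)
print(preds)  # top CIFAR-100 labels with scores
```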
1,871
[ [ -0.03302001953125, -0.051483154296875, 0.01416015625, 0.023162841796875, -0.014404296875, -0.0224151611328125, -0.01007843017578125, -0.01277923583984375, 0.018951416015625, 0.0247802734375, -0.039031982421875, -0.03948974609375, -0.0526123046875, -0.0202178...
Fictiverse/Stable_Diffusion_PaperCut_Model
2023-05-07T08:22:26.000Z
[ "diffusers", "text-to-image", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
Fictiverse
null
null
Fictiverse/Stable_Diffusion_PaperCut_Model
360
1,046
diffusers
2022-11-08T12:23:08
--- license: creativeml-openrail-m tags: - text-to-image --- # 🧩 Paper Cut model V1 This is the fine-tuned Stable Diffusion model trained on Paper Cut images. Use **PaperCut** in your prompts. ### Sample images: ![PaperCut.jpg](https://s3.amazonaws.com/moonup/production/uploads/1667910351389-635749860725c2f190a76e88.jpeg) ![PaperCut.jpg](https://s3.amazonaws.com/moonup/production/uploads/1667912285222-635749860725c2f190a76e88.jpeg) Based on StableDiffusion 1.5 model ### 🧨 Diffusers This model can be used just like any other Stable Diffusion model. For more information, please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion). You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX](). ```python from diffusers import StableDiffusionPipeline import torch model_id = "Fictiverse/Stable_Diffusion_PaperCut_Model" pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe = pipe.to("cuda") prompt = "PaperCut R2-D2" image = pipe(prompt).images[0] image.save("./R2-D2.png") ``` ### ✨ Community spotlight : @PiyarSquare : [![PiyarSquare video](https://img.youtube.com/vi/wQWHnZlxFj8/0.jpg)](https://www.youtube.com/watch?v=wQWHnZlxFj8) @MrPlasm0 : [![MrPlasm0 video](https://img.youtube.com/vi/J68hZ_-L6w4/0.jpg)](https://www.youtube.com/watch?v=J68hZ_-L6w4) @omsk13 : [![Omsk Music video](https://img.youtube.com/vi/BCZOD2AQCFg/0.jpg)](https://www.youtube.com/watch?v=BCZOD2AQCFg)
1,606
[ [ -0.0335693359375, -0.062347412109375, 0.0423583984375, 0.0062713623046875, -0.019744873046875, 0.0014066696166992188, 0.01522064208984375, 0.0028095245361328125, 0.00995635986328125, 0.048126220703125, -0.043853759765625, -0.0343017578125, -0.05792236328125, ...
selfrag/selfrag_llama2_7b
2023-10-22T17:47:59.000Z
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:2310.11511", "license:mit", "endpoints_compatible", "text-generation-inference", "region:us", "has_space" ]
text-generation
selfrag
null
null
selfrag/selfrag_llama2_7b
34
1,046
transformers
2023-10-18T00:12:34
--- license: mit --- This model is a 7B [Self-RAG](https://selfrag.github.io/) model that generates outputs to diverse user queries as well as *reflection tokens* to call the retrieval system adaptively and criticize its own output and retrieved passages. Self-RAG is trained on our instruction-following corpora with interleaving passages and reflection tokens using the standard next-token prediction objective, enabling efficient and stable learning with fine-grained feedback. At inference, we leverage reflection tokens covering diverse aspects of generations to sample the output that best aligns with users' preferences. See full descriptions in [our paper](https://arxiv.org/abs/2310.11511). ## Usage Here, we show an easy way to quickly download our model from HuggingFace and run it with `vllm` on pre-given passages. Make sure to install dependencies listed at [self-rag/requirements.txt](https://github.com/AkariAsai/self-rag/blob/main/requirementd.txt). To run our full inference pipeline with a retrieval system and fine-grained tree decoding, please use [our code](https://github.com/AkariAsai/self-rag). ```py from transformers import AutoTokenizer, AutoModelForCausalLM from vllm import LLM, SamplingParams model = LLM("selfrag/selfrag_llama2_7b", download_dir="/gscratch/h2lab/akari/model_cache", dtype="half") sampling_params = SamplingParams(temperature=0.0, top_p=1.0, max_tokens=100, skip_special_tokens=False) def format_prompt(input, paragraph=None): prompt = "### Instruction:\n{0}\n\n### Response:\n".format(input) if paragraph is not None: prompt += "[Retrieval]<paragraph>{0}</paragraph>".format(paragraph) return prompt query_1 = "Leave odd one out: twitter, instagram, whatsapp." query_2 = "Can you tell me the difference between llamas and alpacas?" queries = [query_1, query_2] preds = model.generate([format_prompt(query) for query in queries], sampling_params) for pred in preds: print("Model prediction: {0}".format(pred.outputs[0].text)) # Model prediction: Twitter, Instagram, and WhatsApp are all social media platforms.[No Retrieval]WhatsApp is the odd one out because it is a messaging app, while Twitter and # Instagram are primarily used for sharing photos and videos.[Utility:5]</s> (this query doesn't require factual grounding; just skip retrieval and do normal instruction-following generation) # Model prediction: Sure![Retrieval]<paragraph> ... (this query requires factual grounding, call a retriever) # generate with retrieved passage prompt = format_prompt("Can you tell me the difference between llamas and alpacas?", paragraph="The alpaca (Lama pacos) is a species of South American camelid mammal. It is similar to, and often confused with, the llama. Alpacas are considerably smaller than llamas, and unlike llamas, they were not bred to be working animals, but were bred specifically for their fiber.") preds = model.generate([prompt], sampling_params) print([pred.outputs[0].text for pred in preds]) # ['[Relevant]Alpacas are considerably smaller than llamas, and unlike llamas, they were not bred to be working animals, but were bred specifically for their fiber.[Fully supported][Utility:5]</s>'] ``` ## Input Format As described in the `format_prompt` function, your input should be formed as ``` ### Instruction:\n{instruction}\n\n### Response:\n".format(instruction) ``` or, if you have an additional input, ``` ### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n" ``` 
You can insert paragraphs anywhere after `### Response:\n"`, but make sure to mark paragraphs as paragraph tokens (i.e., `<paragraph>{0}</paragraph>`). ## Training details Our training data is available at the HuggingFace dataset [selfrag_train_data](https://huggingface.co/datasets/selfrag/selfrag_train_data). See our official repository for the training details. We used 8 A100 40GB for training on the Stability HPC server. ## Citation and contact If you use this model, please cite our work: ``` @article{asai2023selfrag, author = {Asai, Akari and Wu, Zeqiu and Wang, Yizhong and Sil, Avirup and Hajishirzi, Hannaneh}, title = {{Self-RAG}: Learning to Retrieve, Generate, and Critique through Self-Reflection}, year = {2023}, journal = { arXiv preprint arXiv:2310.11511 }, URL = {https://arxiv.org/abs/2310.11511} } ```
4,355
[ [ -0.0298004150390625, -0.05291748046875, 0.0093994140625, 0.033966064453125, -0.0181884765625, -0.019134521484375, -0.0003218650817871094, -0.03466796875, 0.038604736328125, 0.0273284912109375, -0.043914794921875, -0.031585693359375, -0.034820556640625, 0.014...
PlanTL-GOB-ES/bsc-bio-es
2022-11-15T15:14:27.000Z
[ "transformers", "pytorch", "roberta", "fill-mask", "biomedical", "clinical", "spanish", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
PlanTL-GOB-ES
null
null
PlanTL-GOB-ES/bsc-bio-es
2
1,042
transformers
2022-04-08T13:15:24
--- language: - es tags: - biomedical - clinical - spanish license: apache-2.0 metrics: - ppl widget: - text: "El único antecedente personal a reseñar era la <mask> arterial." - text: "Las radiologías óseas de cuerpo entero no detectan alteraciones <mask>, ni alteraciones vertebrales." - text: "En el <mask> toraco-abdómino-pélvico no se encontraron hallazgos patológicos de interés." --- # Biomedical language model for Spanish ## Table of contents <details> <summary>Click to expand</summary> - [Model description](#model-description) - [Intended uses and limitations](#intended-use) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Tokenization and model pretraining](#Tokenization-modelpretraining) - [Training corpora and preprocessing](#Trainingcorpora-preprocessing) - [Evaluation](#evaluation) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Citation information](#citation-information) - [Disclaimer](#disclaimer) </details> ## Model description Biomedical pretrained language model for Spanish. For more details about the corpus, the pretraining and the evaluation, check the official [repository](https://github.com/PlanTL-GOB-ES/lm-biomedical-clinical-es). ## Intended uses and limitations The model is ready-to-use only for masked language modelling to perform the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on downstream tasks such as Named Entity Recognition or Text Classification. ## How to use ## Limitations and bias At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. ## Training ### Tokenization and model pretraining This model is a [RoBERTa-based](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model trained on a **biomedical** corpus in Spanish collected from several sources (see next section). The training corpus has been tokenized using a byte version of [Byte-Pair Encoding (BPE)](https://github.com/openai/gpt-2) used in the original [RoBERTA](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model with a vocabulary size of 52,000 tokens. The pretraining consists of a masked language model training at the subword level following the approach employed for the RoBERTa base model with the same hyperparameters as in the original work. The training lasted a total of 48 hours with 16 NVIDIA V100 GPUs of 16GB DDRAM, using Adam optimizer with a peak learning rate of 0.0005 and an effective batch size of 2,048 sentences. ### Training corpora and preprocessing The training corpus is composed of several biomedical corpora in Spanish, collected from publicly available corpora and crawlers. 
To obtain a high-quality training corpus, a cleaning pipeline with the following operations has been applied: - data parsing in different formats - sentence splitting - language detection - filtering of ill-formed sentences - deduplication of repetitive contents - keep the original document boundaries Finally, the corpora are concatenated and further global deduplication among the corpora has been applied. The result is a medium-size biomedical corpus for Spanish composed of about 963M tokens. The table below shows some basic statistics of the individual cleaned corpora: | Name | No. tokens | Description | |-----------------------------------------------------------------------------------------|-------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [Medical crawler](https://zenodo.org/record/4561970) | 903,558,136 | Crawler of more than 3,000 URLs belonging to Spanish biomedical and health domains. | | Clinical cases misc. | 102,855,267 | A miscellany of medical content, essentially clinical cases. Note that a clinical case report is a scientific publication where medical practitioners share patient cases and it is different from a clinical note or document. | | [Scielo](https://zenodo.org/record/2541681#.YlP1DshBwio) | 60,007,289 | Publications written in Spanish crawled from the Spanish SciELO server in 2017. | | [BARR2_background](https://temu.bsc.es/BARR2/downloads/background_set.raw_text.tar.bz2) | 24,516,442 | Biomedical Abbreviation Recognition and Resolution (BARR2) containing Spanish clinical case study sections from a variety of clinical disciplines. | | Wikipedia_life_sciences | 13,890,501 | Wikipedia articles crawled 04/01/2021 with the [Wikipedia API python library](https://pypi.org/project/Wikipedia-API/) starting from the "Ciencias\_de\_la\_vida" category up to a maximum of 5 subcategories. Multiple links to the same articles are then discarded to avoid repeating content. | | Patents | 13,463,387 | Google Patent in Medical Domain for Spain (Spanish). The accepted codes (Medical Domain) for Json files of patents are: "A61B", "A61C","A61F", "A61H", "A61K", "A61L","A61M", "A61B", "A61P". | | [EMEA](http://opus.nlpl.eu/download.php?f=EMEA/v3/moses/en-es.txt.zip) | 5,377,448 | Spanish-side documents extracted from parallel corpora made out of PDF documents from the European Medicines Agency. | | [mespen_Medline](https://zenodo.org/record/3562536#.YTt1fH2xXbR) | 4,166,077 | Spanish-side articles extracted from a collection of Spanish-English parallel corpus consisting of biomedical scientific literature. The collection of parallel resources is aggregated from the MedlinePlus source. | | PubMed | 1,858,966 | Open-access articles from the PubMed repository crawled in 2017. | ## Evaluation The model has been fine-tuned on three Named Entity Recognition (NER) tasks using three clinical NER datasets: - [PharmaCoNER](https://zenodo.org/record/4270158): is a track on chemical and drug mention recognition from Spanish medical texts (for more info see: https://temu.bsc.es/pharmaconer/). - [CANTEMIST](https://zenodo.org/record/3978041#.YTt5qH2xXbQ): is a shared task specifically focusing on named entity recognition of tumour morphology, in Spanish (for more info see: https://zenodo.org/record/3978041#.YTt5qH2xXbQ). 
- ICTUSnet: consists of 1,006 hospital discharge reports of patients admitted for stroke from 18 different Spanish hospitals. It contains more than 79,000 annotations for 51 different kinds of variables.

We addressed the NER task as a token classification problem using a standard linear layer along with the BIO tagging schema. We compared our models with the general-domain Spanish [roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne), the general-domain multilingual model that supports Spanish [mBERT](https://huggingface.co/bert-base-multilingual-cased), the domain-specific English model [BioBERT](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2), and three domain-specific models based on continual pre-training: [mBERT-Galén](https://ieeexplore.ieee.org/document/9430499), [XLM-R-Galén](https://ieeexplore.ieee.org/document/9430499) and [BETO-Galén](https://ieeexplore.ieee.org/document/9430499).

The table below shows the F1 scores obtained:

| Tasks/Models | bsc-bio-es | XLM-R-Galén | BETO-Galén | mBERT-Galén | mBERT  | BioBERT | roberta-base-bne |
|--------------|------------|-------------|------------|-------------|--------|---------|------------------|
| PharmaCoNER  | **0.8907** | 0.8754      | 0.8537     | 0.8594      | 0.8671 | 0.8545  | 0.8474           |
| CANTEMIST    | **0.8220** | 0.8078      | 0.8153     | 0.8168      | 0.8116 | 0.8070  | 0.7875           |
| ICTUSnet     | **0.8727** | 0.8716      | 0.8498     | 0.8509      | 0.8631 | 0.8521  | 0.8677           |

The fine-tuning scripts can be found in the official GitHub [repository](https://github.com/PlanTL-GOB-ES/lm-biomedical-clinical-es).

## Additional information

### Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)

### Contact information
For further information, send an email to <plantl-gob-es@bsc.es>

### Copyright
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)

### Licensing information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)

### Funding
This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.

### Citation information
If you use these models, please cite our work:

```bibtex
@inproceedings{carrino-etal-2022-pretrained,
    title = "Pretrained Biomedical Language Models for Clinical {NLP} in {S}panish",
    author = "Carrino, Casimiro Pio and Llop, Joan and P{\`a}mies, Marc and Guti{\'e}rrez-Fandi{\~n}o, Asier and Armengol-Estap{\'e}, Jordi and Silveira-Ocampo, Joaqu{\'\i}n and Valencia, Alfonso and Gonzalez-Agirre, Aitor and Villegas, Marta",
    booktitle = "Proceedings of the 21st Workshop on Biomedical Language Processing",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.bionlp-1.19",
    doi = "10.18653/v1/2022.bionlp-1.19",
    pages = "193--199",
    abstract = "This work presents the first large-scale biomedical Spanish language models trained from scratch, using large biomedical corpora consisting of a total of 1.1B tokens and an EHR corpus of 95M tokens. We compared them against general-domain and other domain-specific models for Spanish on three clinical NER tasks. As main results, our models are superior across the NER tasks, rendering them more convenient for clinical NLP applications. Furthermore, our findings indicate that when enough data is available, pre-training from scratch is better than continual pre-training when tested on clinical tasks, raising an exciting research question about which approach is optimal. Our models and fine-tuning scripts are publicly available at HuggingFace and GitHub.",
}
```

### Disclaimer

<details>
<summary>Click to expand</summary>

The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.

When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.

In no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.

Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.

Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.

En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.

</details>
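As a complement to the "How to use" section of this card, here is a minimal fill-mask sketch. It assumes the checkpoint is published under the `PlanTL-GOB-ES/bsc-bio-es` identifier used in the evaluation table above (the card itself does not state the repository name), and the predicted fill and score in the comment are illustrative.

```python
from transformers import pipeline

# Model identifier assumed from the evaluation table; adjust if the
# repository is published under a different name.
unmasker = pipeline("fill-mask", model="PlanTL-GOB-ES/bsc-bio-es")

# Example sentence taken from the card's widget examples.
preds = unmasker("El único antecedente personal a reseñar era la <mask> arterial.")
for p in preds:
    print(p["token_str"], p["score"])  # e.g. "hipertensión" with a high score
```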
14,152
[ [ -0.0224456787109375, -0.037933349609375, 0.036407470703125, 0.03631591796875, -0.0224151611328125, 0.01416015625, -0.0016069412231445312, -0.04315185546875, 0.050140380859375, 0.0308380126953125, -0.0269775390625, -0.057220458984375, -0.058929443359375, 0.03...
microsoft/git-base-vqav2
2023-01-24T17:21:03.000Z
[ "transformers", "pytorch", "git", "text-generation", "vision", "visual-question-answering", "en", "arxiv:2205.14100", "license:mit", "has_space", "region:us" ]
visual-question-answering
microsoft
null
null
microsoft/git-base-vqav2
3
1,042
transformers
2022-12-06T11:02:00
---
language: en
license: mit
tags:
- vision
model_name: microsoft/git-base-vqav2
inference: false
pipeline_tag: visual-question-answering
---

# GIT (GenerativeImage2Text), base-sized, fine-tuned on VQAv2

GIT (short for GenerativeImage2Text) model, base-sized version, fine-tuned on VQAv2. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in [this repository](https://github.com/microsoft/GenerativeImage2Text).

Disclaimer: The team releasing GIT did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

GIT is a Transformer decoder conditioned on both CLIP image tokens and text tokens. The model is trained using "teacher forcing" on many (image, text) pairs.

The goal for the model is simply to predict the next text token, given the image tokens and the previous text tokens.

The model has full access to (i.e. a bidirectional attention mask is used for) the image patch tokens, but only has access to the previous text tokens (i.e. a causal attention mask is used for the text tokens) when predicting the next text token.

![GIT architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/git_architecture.jpg)

This allows the model to be used for tasks like:

- image and video captioning
- visual question answering (VQA) on images and videos
- even image classification (by simply conditioning the model on the image and asking it to generate a class for it in text).

## Intended uses & limitations

You can use the raw model for visual question answering (VQA). See the [model hub](https://huggingface.co/models?search=microsoft/git) to look for fine-tuned versions on a task that interests you.

### How to use

For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/model_doc/git#transformers.GitForCausalLM.forward.example-2); a minimal VQA sketch is also included at the end of this card.

## Training data

From the paper:

> We collect 0.8B image-text pairs for pre-training, which include COCO (Lin et al., 2014), Conceptual Captions (CC3M) (Sharma et al., 2018), SBU (Ordonez et al., 2011), Visual Genome (VG) (Krishna et al., 2016), Conceptual Captions (CC12M) (Changpinyo et al., 2021), ALT200M (Hu et al., 2021a), and an extra 0.6B data following a similar collection procedure in Hu et al. (2021a).

=> however, this is for the model referred to as "GIT" in the paper, which is not open-sourced.

This checkpoint is "GIT-base", which is a smaller variant of GIT trained on 10 million image-text pairs.

Next, the model was fine-tuned on VQAv2.

See table 11 in the [paper](https://arxiv.org/abs/2205.14100) for more details.

### Preprocessing

We refer to the original repo regarding details for preprocessing during training.

During validation, one resizes the shorter edge of each image, after which center cropping is performed to a fixed-size resolution. Next, frames are normalized across the RGB channels with the ImageNet mean and standard deviation.

## Evaluation results

For evaluation results, we refer readers to the [paper](https://arxiv.org/abs/2205.14100).
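As referenced in the "How to use" section above, here is a minimal VQA sketch. It follows the generic GIT usage from the `transformers` documentation; the image URL and question are illustrative, and the decoded answer depends on the checkpoint.

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("microsoft/git-base-vqav2")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base-vqav2")

# Illustrative image; any RGB image works.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
pixel_values = processor(images=image, return_tensors="pt").pixel_values

# The question is prefixed with the CLS token; generation then continues with the answer.
question = "how many cats are there?"
input_ids = processor(text=question, add_special_tokens=False).input_ids
input_ids = [processor.tokenizer.cls_token_id] + input_ids
input_ids = torch.tensor(input_ids).unsqueeze(0)

generated_ids = model.generate(pixel_values=pixel_values, input_ids=input_ids, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```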
3,226
[ [ -0.04620361328125, -0.055511474609375, 0.01483917236328125, -0.013885498046875, -0.034759521484375, -0.0035152435302734375, -0.004138946533203125, -0.033966064453125, 0.016571044921875, 0.0335693359375, -0.050079345703125, -0.025848388671875, -0.061431884765625,...
svalabs/infoxlm-german-question-answering
2022-10-05T09:20:54.000Z
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "qa", "de", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
question-answering
svalabs
null
null
svalabs/infoxlm-german-question-answering
2
1,041
transformers
2022-10-05T06:25:08
---
license: cc-by-4.0
language:
- de
task_categories:
- question-answering
tags:
- question-answering
- pytorch
- qa
- de
---

# SVALabs - InfoXLM German QA

In this repository, we present our German question answering model. The trained model is based on [InfoXLM-large](https://huggingface.co/microsoft/infoxlm-large) and was fine-tuned using the [SQuAD](https://huggingface.co/datasets/squad) dataset and the [GermanQuAD](https://huggingface.co/datasets/deepset/germanquad) dataset.

### Model Details

| | Description or Link |
|---|---|
|**Base model** | [```InfoXLM-large```](https://huggingface.co/microsoft/infoxlm-large) |
|**Finetuning task**| Question Answering |
|**Source datasets**| [```SQuAD```](https://huggingface.co/datasets/squad); [```GermanQuAD```](https://huggingface.co/datasets/deepset/germanquad)|

### Performance

The model was tested on 1,692 samples of the GermanQuAD test dataset (the remaining samples were used for validation).

F1-Score: 89.05
EM: 74.76
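### Usage

A minimal extractive-QA sketch using the standard `transformers` pipeline; the question/context pair is illustrative and the exact score will vary.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="svalabs/infoxlm-german-question-answering")

result = qa(
    question="Wann wurde Python veröffentlicht?",
    context="Python wurde von Guido van Rossum entwickelt und 1991 veröffentlicht.",
)
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': '1991'}
```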
987
[ [ -0.033538818359375, -0.053314208984375, 0.0297393798828125, 0.032501220703125, -0.0278472900390625, 0.0093841552734375, 0.02587890625, -0.01953125, 0.003391265869140625, 0.035247802734375, -0.055084228515625, -0.05487060546875, -0.0189971923828125, -0.000405...
timm/efficientnet_es.ra_in1k
2023-04-27T21:12:10.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2110.00476", "arxiv:2003.02838", "arxiv:1905.11946", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/efficientnet_es.ra_in1k
0
1,040
timm
2022-12-12T23:58:07
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---

# Model card for efficientnet_es.ra_in1k

An EfficientNet-EdgeTPU image classification model. Trained on ImageNet-1k in `timm` using the recipe template described below.

Recipe details:
 * RandAugment `RA` recipe. Inspired by and evolved from EfficientNet RandAugment recipes. Published as `B` recipe in [ResNet Strikes Back](https://arxiv.org/abs/2110.00476).
 * RMSProp (TF 1.0 behaviour) optimizer, EMA weight averaging
 * Step (exponential decay w/ staircase) LR schedule with warmup

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 5.4
  - GMACs: 1.8
  - Activations (M): 8.7
  - Image size: 224 x 224
- **Papers:**
  - Accelerator-aware Neural Network Design using AutoML: https://arxiv.org/abs/2003.02838
  - EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks: https://arxiv.org/abs/1905.11946
  - ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/huggingface/pytorch-image-models

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed below for torch.topk

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('efficientnet_es.ra_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'efficientnet_es.ra_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 24, 112, 112])
    #  torch.Size([1, 32, 56, 56])
    #  torch.Size([1, 48, 28, 28])
    #  torch.Size([1, 144, 14, 14])
    #  torch.Size([1, 192, 7, 7])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'efficientnet_es.ra_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1280, 7, 7) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).

## Citation
```bibtex
@article{gupta2020accelerator,
  title={Accelerator-aware neural network design using automl},
  author={Gupta, Suyog and Akin, Berkin},
  journal={arXiv preprint arXiv:2003.02838},
  year={2020}
}
```
```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@inproceedings{tan2019efficientnet,
  title={Efficientnet: Rethinking model scaling for convolutional neural networks},
  author={Tan, Mingxing and Le, Quoc},
  booktitle={International conference on machine learning},
  pages={6105--6114},
  year={2019},
  organization={PMLR}
}
```
```bibtex
@inproceedings{wightman2021resnet,
  title={ResNet strikes back: An improved training procedure in timm},
  author={Wightman, Ross and Touvron, Hugo and Jegou, Herve},
  booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future}
}
```
5,015
[ [ -0.031524658203125, -0.040191650390625, -0.00815582275390625, 0.003520965576171875, -0.016754150390625, -0.032745361328125, -0.02069091796875, -0.02850341796875, 0.0192718505859375, 0.026336669921875, -0.0318603515625, -0.0433349609375, -0.054351806640625, -...
aubmindlab/aragpt2-base
2023-10-30T13:53:25.000Z
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "safetensors", "gpt2", "text-generation", "ar", "dataset:wikipedia", "dataset:Osian", "dataset:1.5B-Arabic-Corpus", "dataset:oscar-arabic-unshuffled", "dataset:Assafir(private)", "arxiv:2012.15520", "endpoints_compatible", "has_s...
text-generation
aubmindlab
null
null
aubmindlab/aragpt2-base
15
1,039
transformers
2022-03-02T23:29:05
---
language: ar
datasets:
- wikipedia
- Osian
- 1.5B-Arabic-Corpus
- oscar-arabic-unshuffled
- Assafir(private)
widget:
- text: "يحكى أن مزارعا مخادعا قام ببيع بئر الماء الموجود في أرضه لجاره مقابل مبلغ كبير من المال"
- text: "القدس مدينة تاريخية، بناها الكنعانيون في"
- text: "كان يا ما كان في قديم الزمان"
---

# Arabic GPT2

<img src="https://raw.githubusercontent.com/aub-mind/arabert/master/AraGPT2.png" width="100" align="left"/>

You can find more information in our paper [AraGPT2](https://arxiv.org/abs/2012.15520).

The code in this repository was used to train all GPT2 variants. The code supports training and fine-tuning GPT2 on GPUs and TPUs via the TPUEstimator API.

GPT2-base and medium use the code from the `gpt2` folder and can train models from the [minimaxir/gpt-2-simple](https://github.com/minimaxir/gpt-2-simple) repository. These models were trained using the `lamb` optimizer, follow the same architecture as `gpt2`, and are fully compatible with the `transformers` library.

GPT2-large and GPT2-mega were trained using the [imcaspar/gpt2-ml](https://github.com/imcaspar/gpt2-ml/) library, and follow the `grover` architecture. You can use the pytorch classes found in `grover/modeling_gpt2.py` as a direct replacement for classes in the `transformers` library (it should support version `v4.x` from `transformers`). Both models are trained using the `adafactor` optimizer, since the `adam` and `lamb` optimizers use too much memory, causing the model to not even fit one batch on a TPU core.

AraGPT2 is trained on the same large Arabic Dataset as AraBERTv2.

# Usage

## Testing the model using `transformers`:

```python
from transformers import GPT2TokenizerFast, pipeline
#for base and medium
from transformers import GPT2LMHeadModel
#for large and mega
# pip install arabert
from arabert.aragpt2.grover.modeling_gpt2 import GPT2LMHeadModel

from arabert.preprocess import ArabertPreprocessor

MODEL_NAME='aubmindlab/aragpt2-base'
arabert_prep = ArabertPreprocessor(model_name=MODEL_NAME)

text=""
text_clean = arabert_prep.preprocess(text)

model = GPT2LMHeadModel.from_pretrained(MODEL_NAME)
tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_NAME)
generation_pipeline = pipeline("text-generation", model=model, tokenizer=tokenizer)

#feel free to try different decoding settings
generation_pipeline(text,
    pad_token_id=tokenizer.eos_token_id,
    num_beams=10,
    max_length=200,
    top_p=0.9,
    repetition_penalty=3.0,
    no_repeat_ngram_size=3)[0]['generated_text']
```

## Fine-tuning using `transformers`:

Follow the guide linked [here](https://towardsdatascience.com/fine-tuning-gpt2-on-colab-gpu-for-free-340468c92ed)

## Fine-tuning using our code with TF 1.15.4:

Create the Training TFRecords:

```bash
python create_pretraining_data.py --input_file=<RAW TEXT FILE with documents/article separated by an empty line> --output_file=<OUTPUT TFRecord> --tokenizer_dir=<Directory with the GPT2 Tokenizer files>
```

Fine-tuning:

```bash
python3 run_pretraining.py \
 --input_file="gs://<GS_BUCKET>/pretraining_data/*" \
 --output_dir="gs://<GS_BUCKET>/pretraining_model/" \
 --config_file="config/small_hparams.json" \
 --batch_size=128 \
 --eval_batch_size=8 \
 --num_train_steps= \
 --num_warmup_steps= \
 --learning_rate= \
 --save_checkpoints_steps= \
 --max_seq_length=1024 \
 --max_eval_steps= \
 --optimizer="lamb" \
 --iterations_per_loop=5000 \
 --keep_checkpoint_max=10 \
 --use_tpu=True \
 --tpu_name=<TPU NAME> \
 --do_train=True \
 --do_eval=False
```

# Model Sizes

Model | Optimizer | Context size | Embedding Size | Num of heads | Num of layers | Model Size / Num of Params |
---|:---:|:---:|:---:|:---:|:---:|:---:
AraGPT2-base | `lamb` | 1024 | 768 | 12 | 12 | 527MB / 135M |
AraGPT2-medium | `lamb` | 1024 | 1024 | 16 | 24 | 1.38G/370M |
AraGPT2-large | `adafactor` | 1024 | 1280 | 20 | 36 | 2.98GB/792M |
AraGPT2-mega | `adafactor` | 1024 | 1536 | 25 | 48 | 5.5GB/1.46B |

All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats.

## Compute

Model | Hardware | num of examples (seq len = 1024) | Batch Size | Num of Steps | Time (in days)
---|:---:|:---:|:---:|:---:|:---:
AraGPT2-base | TPUv3-128 | 9.7M | 1792 | 125K | 1.5
AraGPT2-medium | TPUv3-8 | 9.7M | 1152 | 85K | 1.5
AraGPT2-large | TPUv3-128 | 9.7M | 256 | 220k | 3
AraGPT2-mega | TPUv3-128 | 9.7M | 256 | 780K | 9

# Dataset

The pretraining data used for the new AraGPT2 model is also used for **AraBERTv2 and AraELECTRA**.

The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation).

For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filtered it, to the dataset used in AraBERTv1 but without the websites that we previously crawled:
- OSCAR unshuffled and filtered.
- [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01
- [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4)
- [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619)
- Assafir news articles. Huge thank you to Assafir for giving us the data

# Disclaimer

The text generated by AraGPT2 is automatically generated by a neural network model trained on a large amount of texts, which does not represent the authors' or their institutes' official attitudes and preferences. The text generated by AraGPT2 should only be used for research and scientific purposes. If it infringes on your rights and interests or violates social morality, please do not propagate it.

# If you used this model please cite us as:

```
@inproceedings{antoun-etal-2021-aragpt2,
    title = "{A}ra{GPT}2: Pre-Trained Transformer for {A}rabic Language Generation",
    author = "Antoun, Wissam and Baly, Fady and Hajj, Hazem",
    booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
    month = apr,
    year = "2021",
    address = "Kyiv, Ukraine (Virtual)",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2021.wanlp-1.21",
    pages = "196--207",
}
```

# Acknowledgments

Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks to Habib Rahal (https://www.behance.net/rahalhabib), for putting a face to AraBERT.

# Contacts

**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <wfa07@mail.aub.edu> | <wissam.antoun@gmail.com>

**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <fgb06@mail.aub.edu> | <baly.fady@gmail.com>
7,223
[ [ -0.04412841796875, -0.050628662109375, 0.0177001953125, -0.0028018951416015625, -0.02264404296875, -0.004425048828125, -0.015960693359375, -0.03387451171875, -0.00058746337890625, 0.01226806640625, -0.03680419921875, -0.03619384765625, -0.0645751953125, -0.0...
google/bigbird-pegasus-large-pubmed
2023-01-24T16:36:47.000Z
[ "transformers", "pytorch", "bigbird_pegasus", "text2text-generation", "summarization", "en", "dataset:scientific_papers", "arxiv:2007.14062", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
summarization
google
null
null
google/bigbird-pegasus-large-pubmed
30
1,039
transformers
2022-03-02T23:29:05
---
language: en
license: apache-2.0
datasets:
- scientific_papers
tags:
- summarization
model-index:
- name: google/bigbird-pegasus-large-pubmed
  results:
  - task:
      type: summarization
      name: Summarization
    dataset:
      name: scientific_papers
      type: scientific_papers
      config: pubmed
      split: test
    metrics:
    - name: ROUGE-1
      type: rouge
      value: 40.8966
      verified: true
    - name: ROUGE-2
      type: rouge
      value: 18.1161
      verified: true
    - name: ROUGE-L
      type: rouge
      value: 26.1743
      verified: true
    - name: ROUGE-LSUM
      type: rouge
      value: 34.2773
      verified: true
    - name: loss
      type: loss
      value: 2.1707184314727783
      verified: true
    - name: meteor
      type: meteor
      value: 0.3513
      verified: true
    - name: gen_len
      type: gen_len
      value: 221.2531
      verified: true
  - task:
      type: summarization
      name: Summarization
    dataset:
      name: scientific_papers
      type: scientific_papers
      config: arxiv
      split: test
    metrics:
    - name: ROUGE-1
      type: rouge
      value: 40.3815
      verified: true
    - name: ROUGE-2
      type: rouge
      value: 14.374
      verified: true
    - name: ROUGE-L
      type: rouge
      value: 23.4773
      verified: true
    - name: ROUGE-LSUM
      type: rouge
      value: 33.772
      verified: true
    - name: loss
      type: loss
      value: 3.235051393508911
      verified: true
    - name: gen_len
      type: gen_len
      value: 186.2003
      verified: true
---

# BigBirdPegasus model (large)

BigBird is a sparse-attention-based transformer that extends Transformer-based models, such as BERT, to much longer sequences. Moreover, BigBird comes along with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle.

BigBird was introduced in this [paper](https://arxiv.org/abs/2007.14062) and first released in this [repository](https://github.com/google-research/bigbird).

Disclaimer: The team releasing BigBird did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

BigBird relies on **block sparse attention** instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences, such as long document summarization and question-answering with long contexts.

## How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BigBirdPegasusForConditionalGeneration, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-pubmed")

# by default encoder-attention is `block_sparse` with num_random_blocks=3, block_size=64
model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-pubmed")

# decoder attention type can't be changed & will be "original_full"
# you can change `attention_type` (encoder only) to full attention like this:
model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-pubmed", attention_type="original_full")

# you can change `block_size` & `num_random_blocks` like this:
model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-pubmed", block_size=16, num_random_blocks=2)

text = "Replace me by any text you'd like."
inputs = tokenizer(text, return_tensors='pt')
prediction = model.generate(**inputs)
prediction = tokenizer.batch_decode(prediction)
```

## Training Procedure

This checkpoint is obtained after fine-tuning `BigBirdPegasusForConditionalGeneration` for **summarization** on the **pubmed dataset** from [scientific_papers](https://huggingface.co/datasets/scientific_papers).
## BibTeX entry and citation info ```tex @misc{zaheer2021big, title={Big Bird: Transformers for Longer Sequences}, author={Manzil Zaheer and Guru Guruganesh and Avinava Dubey and Joshua Ainslie and Chris Alberti and Santiago Ontanon and Philip Pham and Anirudh Ravula and Qifan Wang and Li Yang and Amr Ahmed}, year={2021}, eprint={2007.14062}, archivePrefix={arXiv}, primaryClass={cs.LG} } ```
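Beyond the feature-extraction snippet above, the checkpoint can also be driven through the high-level `summarization` pipeline. This is a minimal sketch assuming the standard `transformers` pipeline API; the input text and generation settings are illustrative.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="google/bigbird-pegasus-large-pubmed")

# Stand-in for a long biomedical article; BigBird is designed for long inputs.
long_article = "Replace me by a long biomedical article."
summary = summarizer(long_article, max_length=256, truncation=True)
print(summary[0]["summary_text"])
```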
4,352
[ [ -0.0287933349609375, -0.040252685546875, 0.01239776611328125, 0.023345947265625, -0.0081634521484375, -0.0261383056640625, -0.030242919921875, -0.040313720703125, 0.032073974609375, 0.01056671142578125, -0.049285888671875, -0.01458740234375, -0.054534912109375, ...
stablediffusionapi/meinamix
2023-04-28T10:03:16.000Z
[ "diffusers", "stablediffusionapi.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
stablediffusionapi
null
null
stablediffusionapi/meinamix
3
1,039
diffusers
2023-04-28T10:01:26
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---

# MeinaMix API Inference

![generated from stablediffusionapi.com](https://pub-8b49af329fae499aa563997f5d4068a4.r2.dev/generations/12823269741682675994.png)

## Get API Key

Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed.

Replace Key in below code, change **model_id** to "meinamix"

Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)

Model link: [View model](https://stablediffusionapi.com/models/meinamix)

Credits: [View credits](https://civitai.com/?query=MeinaMix)

View all models: [View Models](https://stablediffusionapi.com/models)

```python
import requests
import json

url = "https://stablediffusionapi.com/api/v3/dreambooth"

payload = json.dumps({
    "key": "",
    "model_id": "meinamix",
    "prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```

> Use this coupon code to get 25% off **DMGG0RBN**
2,394
[ [ -0.035675048828125, -0.052520751953125, 0.0418701171875, 0.0229949951171875, -0.031707763671875, -0.0021839141845703125, 0.0174560546875, -0.031829833984375, 0.035858154296875, 0.04254150390625, -0.06219482421875, -0.07366943359375, -0.0282745361328125, -0.0...
jinaai/jina-embedding-t-en-v1
2023-09-10T06:42:12.000Z
[ "sentence-transformers", "pytorch", "bert", "finetuner", "feature-extraction", "sentence-similarity", "en", "dataset:jinaai/negation-dataset", "arxiv:2307.11224", "license:apache-2.0", "endpoints_compatible", "region:us" ]
sentence-similarity
jinaai
null
null
jinaai/jina-embedding-t-en-v1
26
1,039
sentence-transformers
2023-07-21T15:05:02
---
pipeline_tag: sentence-similarity
tags:
- finetuner
- sentence-transformers
- feature-extraction
- sentence-similarity
datasets:
- jinaai/negation-dataset
language: en
license: apache-2.0
---

<br><br>

<p align="center">
<img src="https://github.com/jina-ai/finetuner/blob/main/docs/_static/finetuner-logo-ani.svg?raw=true" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px">
</p>

<p align="center">
<b>The text embedding set trained by <a href="https://jina.ai/"><b>Jina AI</b></a>, <a href="https://github.com/jina-ai/finetuner"><b>Finetuner</b></a> team.</b>
</p>

## Intended Usage & Model Info

`jina-embedding-t-en-v1` is a tiny language model that has been trained using Jina AI's Linnaeus-Clean dataset. This dataset consists of 380 million pairs of sentences, including query-document pairs. These pairs were obtained from various domains and were carefully selected through a thorough cleaning process. The Linnaeus-Full dataset, from which the Linnaeus-Clean dataset is derived, originally contained 1.6 billion sentence pairs.

The model has a range of use cases, including information retrieval, semantic textual similarity, text reranking, and more.

With a tiny parameter size of just 14 million parameters, the model enables lightning-fast inference on CPU, while still delivering impressive performance.

Additionally, we provide the following options:

- [`jina-embedding-t-en-v1`](https://huggingface.co/jinaai/jina-embedding-t-en-v1): 14 million parameters **(you are here)**.
- [`jina-embedding-s-en-v1`](https://huggingface.co/jinaai/jina-embedding-s-en-v1): 35 million parameters.
- [`jina-embedding-b-en-v1`](https://huggingface.co/jinaai/jina-embedding-b-en-v1): 110 million parameters.
- [`jina-embedding-l-en-v1`](https://huggingface.co/jinaai/jina-embedding-l-en-v1): 330 million parameters.
- `jina-embedding-1b-en-v1`: 1.2 billion parameters, 10 times bert-base (soon).
- `jina-embedding-6b-en-v1`: 6 billion parameters, 30 times bert-base (soon).

## Data & Parameters

Please check out our [technical blog](https://arxiv.org/abs/2307.11224).

## Metrics

We compared the model against `all-minilm-l6-v2`/`all-mpnet-base-v2` from sbert and `text-embeddings-ada-002` from OpenAI:

|Name|param |dimension|
|------------------------------|-----|------|
|all-minilm-l6-v2|23m |384|
|all-mpnet-base-v2 |110m |768|
|ada-embedding-002|Unknown/OpenAI API |1536|
|jina-embedding-t-en-v1|14m |312|
|jina-embedding-s-en-v1|35m |512|
|jina-embedding-b-en-v1|110m |768|
|jina-embedding-l-en-v1|330m |1024|

|Name|STS12|STS13|STS14|STS15|STS16|STS17|TRECOVID|Quora|SciFact|
|------------------------------|-----|-----|-----|-----|-----|-----|--------|-----|-----|
|all-minilm-l6-v2|0.724|0.806|0.756|0.854|0.79 |0.876|0.473 |0.876|0.645 |
|all-mpnet-base-v2|0.726|**0.835**|0.78 |0.857|0.8 |**0.906**|0.513 |0.875|0.656 |
|ada-embedding-002|0.698|0.833|0.761|0.861|**0.86** |0.903|**0.685** |0.876|**0.726** |
|jina-embedding-t-en-v1|0.717|0.773|0.731|0.829|0.777|0.860|0.482 |0.840|0.522 |
|jina-embedding-s-en-v1|0.743|0.786|0.738|0.837|0.80|0.875|0.523 |0.857|0.524 |
|jina-embedding-b-en-v1|**0.751**|0.809|0.761|0.856|0.812|0.890|0.606 |0.876|0.594 |
|jina-embedding-l-en-v1|0.745|0.832|**0.781**|**0.869**|0.837|0.902|0.573 |**0.881**|0.598 |

## Inference Speed

We encoded a single sentence "What is the current weather like today?" 10k times on:

1. cpu: MacBook Pro 2020, 2 GHz Quad-Core Intel Core i5
2. gpu: 1 Nvidia 3090

and recorded the time spent, to demonstrate the embedding speed:

|Name|param |dimension| time@cpu | time@gpu |
|------------------------------|-----|------|-----|-----|
|jina-embedding-t-en-v1|14m |312| 5.78s | 2.36s|
|all-minilm-l6-v2|23m |384| 11.95s | 2.70s |
|jina-embedding-s-en-v1|35m |512| 17.25s | 2.81s |

## Usage

Use with Jina AI Finetuner:

```python
!pip install finetuner

import finetuner

model = finetuner.build_model('jinaai/jina-embedding-t-en-v1')
embeddings = finetuner.encode(
    model=model,
    data=['how is the weather today', 'What is the current weather like today?']
)
print(finetuner.cos_sim(embeddings[0], embeddings[1]))
```

Use with sentence-transformers:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

sentences = ['how is the weather today', 'What is the current weather like today?']

model = SentenceTransformer('jinaai/jina-embedding-t-en-v1')
embeddings = model.encode(sentences)
print(cos_sim(embeddings[0], embeddings[1]))
```

## Fine-tuning

Please consider [Finetuner](https://github.com/jina-ai/finetuner).

## Plans

1. The development of `jina-embedding-s-en-v2` is currently underway with two main objectives: improving performance and increasing the maximum sequence length.
2. We are currently working on a bilingual embedding model that combines English and X language. The upcoming model will be called `jina-embedding-s/b/l-de-v1`.

## Contact

Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas.

## Citation

If you find Jina Embeddings useful in your research, please cite the following paper:

```bibtex
@misc{günther2023jina,
      title={Jina Embeddings: A Novel Set of High-Performance Sentence Embedding Models},
      author={Michael Günther and Louis Milliken and Jonathan Geuter and Georgios Mastrapas and Bo Wang and Han Xiao},
      year={2023},
      eprint={2307.11224},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
5,748
[ [ -0.05328369140625, -0.07177734375, 0.02349853515625, 0.0126190185546875, -0.0183563232421875, -0.0167694091796875, -0.0210418701171875, -0.021728515625, 0.04241943359375, -0.0018014907836914062, -0.0340576171875, -0.037139892578125, -0.048431396484375, 0.005...
MirageML/lowpoly-environment
2023-05-05T21:32:29.000Z
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "text-to-image", "en", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
MirageML
null
null
MirageML/lowpoly-environment
8
1,038
diffusers
2022-11-28T08:25:58
---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: true
---

# Low Poly Environment on Stable Diffusion via Dreambooth

This is the Stable Diffusion model fine-tuned on the Low Poly Environment concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a photo of lowpoly_environment**

# Run on [Mirage](https://app.mirageml.com)

Run this model and explore text-to-3D on [Mirage](https://app.mirageml.com)!

Here is a sample output for this model:

![image 0](https://huggingface.co/MirageML/lowpoly-environment/resolve/main/output.png)

# Share your Results and Reach us on [Discord](https://discord.gg/9B2Pu2bEvj)!

[![Discord Server](https://discord.com/api/guilds/1022387303022338058/widget.png?style=banner2)](https://discord.gg/9B2Pu2bEvj)

[Image Source](https://www.behance.net/gallery/76095417/Game-Environments?tracking_source=search_projects%7Cisometric+building)
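A short text-to-image sketch for this repository, assuming the standard `diffusers` `StableDiffusionPipeline` API applies (the card itself shows no loading code); the prompt suffix and output filename are illustrative.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "MirageML/lowpoly-environment", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# The concept token from the instance_prompt must appear in the prompt.
prompt = "a photo of lowpoly_environment, a mountain village at sunset"
image = pipe(prompt).images[0]
image.save("lowpoly_environment.png")
```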
969
[ [ -0.044952392578125, -0.0850830078125, 0.048492431640625, 0.0160675048828125, -0.01552581787109375, 0.00301361083984375, -0.008575439453125, -0.024169921875, 0.0257568359375, 0.04473876953125, -0.0374755859375, -0.050384521484375, -0.01189422607421875, -0.022...
osiria/deberta-italian-question-answering
2023-08-08T02:12:51.000Z
[ "transformers", "pytorch", "safetensors", "deberta-v2", "question-answering", "it", "dataset:squad_it", "arxiv:2111.09543", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
question-answering
osiria
null
null
osiria/deberta-italian-question-answering
4
1,038
transformers
2023-06-01T22:10:12
--- license: mit language: - it datasets: - squad_it widget: - text: Quale libro fu scritto da Alessandro Manzoni? context: Alessandro Manzoni pubblicò la prima versione dei Promessi Sposi nel 1827 - text: In quali competizioni gareggia la Ferrari? context: La Scuderia Ferrari è una squadra corse italiana di Formula 1 con sede a Maranello - text: Quale sport è riferito alla Serie A? context: Il campionato di Serie A è la massima divisione professionistica del campionato italiano di calcio maschile model-index: - name: osiria/deberta-italian-question-answering results: - task: type: question-answering name: Question Answering dataset: name: squad_it type: squad_it metrics: - type: exact-match value: 0.7004 name: Exact Match - type: f1 value: 0.8097 name: F1 pipeline_tag: question-answering --- -------------------------------------------------------------------------------------------------- <body> <span class="vertical-text" style="background-color:lightgreen;border-radius: 3px;padding: 3px;"> </span> <br> <span class="vertical-text" style="background-color:orange;border-radius: 3px;padding: 3px;">    Task: Question Answering</span> <br> <span class="vertical-text" style="background-color:lightblue;border-radius: 3px;padding: 3px;">    Model: DeBERTa</span> <br> <span class="vertical-text" style="background-color:tomato;border-radius: 3px;padding: 3px;">    Lang: IT</span> <br> <span class="vertical-text" style="background-color:lightgrey;border-radius: 3px;padding: 3px;">  </span> <br> <span class="vertical-text" style="background-color:#CF9FFF;border-radius: 3px;padding: 3px;"> </span> </body> -------------------------------------------------------------------------------------------------- <h3>Model description</h3> This is a <b>DeBERTa</b> <b>[1]</b> model for the <b>Italian</b> language, fine-tuned for <b>Extractive Question Answering</b> on the [SQuAD-IT](https://huggingface.co/datasets/squad_it) dataset <b>[2]</b>. <h3>Training and Performances</h3> The model is trained to perform question answering, given a context and a question (under the assumption that the context contains the answer to the question). It has been fine-tuned for Extractive Question Answering, using the SQuAD-IT dataset, for 2 epochs with a linearly decaying learning rate starting from 3e-5, maximum sequence length of 384 and document stride of 128. <br>The dataset includes 54.159 training instances and 7.609 test instances <b>update: version 2.0</b> The 2.0 version further improves the performances by exploiting a 2-phases fine-tuning strategy: the model is first fine-tuned on the English SQuAD v2 (1 epoch, 20% warmup ratio, and max learning rate of 3e-5) then further fine-tuned on the Italian SQuAD (2 epochs, no warmup, initial learning rate of 3e-5) In order to maximize the benefits of the multilingual procedure, [mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) is used as a pre-trained model. 
When the double fine-tuning is completed, the embedding layer is then compressed as in [deberta-base-italian](https://huggingface.co/osiria/deberta-base-italian) to obtain a mono-lingual model size.

The performances on the test set are reported in the following table (<b>version 2.0</b> performances):

| EM | F1 |
| ------ | ------ |
| 70.04 | 80.97 |

Testing notebook: https://huggingface.co/osiria/deberta-italian-question-answering/blob/main/osiria_deberta_italian_qa_evaluation.ipynb

<h3>Quick usage</h3>

In order to get the best possible outputs from the model, it is recommended to use the following pipeline:

```python
from transformers import DebertaV2TokenizerFast, DebertaV2ForQuestionAnswering
import re
import string
from transformers.pipelines import QuestionAnsweringPipeline

tokenizer = DebertaV2TokenizerFast.from_pretrained("osiria/deberta-italian-question-answering")
model = DebertaV2ForQuestionAnswering.from_pretrained("osiria/deberta-italian-question-answering")

class OsiriaQA(QuestionAnsweringPipeline):

    def __init__(self, punctuation = ',;.:!?()[\]{}', **kwargs):

        QuestionAnsweringPipeline.__init__(self, **kwargs)
        self.post_regex_left = "^[\s" + punctuation + "]+"
        self.post_regex_right = "[\s" + punctuation + "]+$"

    def postprocess(self, output):

        output = QuestionAnsweringPipeline.postprocess(self, model_outputs=output)
        output_length = len(output["answer"])
        output["answer"] = re.sub(self.post_regex_left, "", output["answer"])
        output["start"] = output["start"] + (output_length - len(output["answer"]))
        output_length = len(output["answer"])
        output["answer"] = re.sub(self.post_regex_right, "", output["answer"])
        output["end"] = output["end"] - (output_length - len(output["answer"]))

        return output

pipeline_qa = OsiriaQA(model = model, tokenizer = tokenizer)
pipeline_qa(context = "Alessandro Manzoni è nato a Milano nel 1785",
            question = "Dove è nato Manzoni?")

# {'score': 0.9899800419807434, 'start': 28, 'end': 34, 'answer': 'Milano'}
```

You can also try the model online using this web app: https://huggingface.co/spaces/osiria/deberta-italian-question-answering

<h3>References</h3>

[1] https://arxiv.org/abs/2111.09543

[2] https://link.springer.com/chapter/10.1007/978-3-030-03840-3_29

<h3>Limitations</h3>

This model was trained on the English SQuAD v2 and on SQuAD-IT, which is mainly a machine-translated version of the original SQuAD v1.1. This means that the quality of the training set is limited by the machine translation. Moreover, the model is meant to answer questions under the assumption that the required information is actually contained in the given context (which is the underlying assumption of SQuAD v1.1). If the assumption is violated, the model will try to return an answer in any case, which is going to be incorrect.

<h3>License</h3>

The model is released under the <b>MIT</b> license
6,030
[ [ -0.031768798828125, -0.0826416015625, 0.0287933349609375, 0.0238494873046875, -0.00965118408203125, -0.00220489501953125, 0.002063751220703125, -0.036651611328125, 0.013824462890625, 0.00637054443359375, -0.0714111328125, -0.0222015380859375, -0.038909912109375,...
showlab/show-1-sr2
2023-10-12T05:23:35.000Z
[ "diffusers", "text-to-video", "arxiv:2309.15818", "license:cc-by-nc-4.0", "diffusers:VideoToVideoSDPipeline", "region:us", "has_space" ]
text-to-video
showlab
null
null
showlab/show-1-sr2
5
1,038
diffusers
2023-10-10T17:08:54
---
license: cc-by-nc-4.0
tags:
- text-to-video
---

# show-1-sr2

Pixel-based VDMs can generate motion accurately aligned with the textual prompt but typically demand expensive computational costs in terms of time and GPU memory, especially when generating high-resolution videos. Latent-based VDMs are more resource-efficient because they work in a reduced-dimension latent space. But it is challenging for such a small latent space (e.g., 64×40 for 256×160 videos) to cover the rich yet necessary visual semantic details described by the textual prompt.

To marry the strengths and alleviate the weaknesses of pixel-based and latent-based VDMs, we introduce **Show-1**, an efficient text-to-video model that generates videos with not only decent video-text alignment but also high visual quality.

![](https://showlab.github.io/Show-1/assets/images/method.png)

## Model Details

This is the super-resolution model of Show-1 that upscales videos from a 256x160 resolution to 576x320. The model is finetuned using diffusion timesteps 0-900 on the [WebVid-10M](https://maxbain.com/webvid-dataset/) dataset.

- **Developed by:** [Show Lab, National University of Singapore](https://sites.google.com/view/showlab/home?authuser=0)
- **Model type:** pixel- and latent-based cascaded text-to-video diffusion model
- **Cascade stage:** super-resolution (256x160->576x320)
- **Finetuned from model:** [cerspense/zeroscope_v2_576w](https://huggingface.co/cerspense/zeroscope_v2_576w)
- **License:** Creative Commons Attribution Non Commercial 4.0
- **Resources for more information:** [GitHub](https://github.com/showlab/Show-1), [Website](https://showlab.github.io/Show-1/), [arXiv](https://arxiv.org/abs/2309.15818)

## Usage

Clone the GitHub repository and install the requirements:

```bash
git clone https://github.com/showlab/Show-1.git
pip install -r requirements.txt
```

Run the following command to generate a video from a text prompt. By default, this will automatically download all the model weights from huggingface.

```bash
python run_inference.py
```

You can also download the weights manually and change the `pretrained_model_path` in `run_inference.py` to run the inference.

```bash
git lfs install

# base
git clone https://huggingface.co/showlab/show-1-base
# interp
git clone https://huggingface.co/showlab/show-1-interpolation
# sr1
git clone https://huggingface.co/showlab/show-1-sr1
# sr2
git clone https://huggingface.co/showlab/show-1-sr2
```
3,151
[ [ -0.032623291015625, -0.06622314453125, 0.038848876953125, 0.006378173828125, -0.0289459228515625, -0.0295867919921875, 0.0035552978515625, 0.0027256011962890625, 0.0029144287109375, 0.015472412109375, -0.05609130859375, -0.032928466796875, -0.061614990234375, ...
valhalla/t5-small-e2e-qg
2021-07-30T13:10:33.000Z
[ "transformers", "pytorch", "t5", "text2text-generation", "question-generation", "dataset:squad", "arxiv:1910.10683", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text2text-generation
valhalla
null
null
valhalla/t5-small-e2e-qg
6
1,037
transformers
2022-03-02T23:29:05
---
datasets:
- squad
tags:
- question-generation
widget:
- text: "Python is developed by Guido Van Rossum and released in 1991. </s>"
license: mit
---

## T5 for question-generation

This is a [t5-small](https://arxiv.org/abs/1910.10683) model trained for the end-to-end question generation task. Simply input the text and the model will generate multiple questions.

You can play with the model using the inference API: just put in the text and see the results!

For more details see [this](https://github.com/patil-suraj/question_generation) repo.

### Model in action 🚀

You'll need to clone the [repo](https://github.com/patil-suraj/question_generation).

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb)

```python3
from pipelines import pipeline

text = "Python is an interpreted, high-level, general-purpose programming language. Created by Guido van Rossum \
and first released in 1991, Python's design philosophy emphasizes code \
readability with its notable use of significant whitespace."

nlp = pipeline("e2e-qg")
nlp(text)
=> [
 'Who created Python?',
 'When was Python first released?',
 "What is Python's design philosophy?"
]
```
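For users who prefer not to clone the repo, here is a plain `transformers` sketch. It assumes the input format shown in the widget (text followed by `</s>`) and that generated questions are separated by a `<sep>` token; both are assumptions inferred from the repo, not a documented contract.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("valhalla/t5-small-e2e-qg")
model = T5ForConditionalGeneration.from_pretrained("valhalla/t5-small-e2e-qg")

# Input format assumed from the widget example above.
text = "Python is developed by Guido Van Rossum and released in 1991. </s>"
inputs = tokenizer([text], return_tensors="pt")

outputs = model.generate(**inputs, max_length=128, num_beams=4)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=False)

# Assumed raw output format: "q1<sep> q2<sep> ..." plus padding/EOS tokens.
questions = [
    q.strip()
    for q in decoded.replace("<pad>", "").replace("</s>", "").split("<sep>")
    if q.strip()
]
print(questions)
```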
1,277
[ [ -0.0350341796875, -0.07464599609375, 0.0307769775390625, 0.0054168701171875, 0.00051116943359375, -0.01270294189453125, 0.004947662353515625, -0.01922607421875, -0.0081634521484375, 0.04022216796875, -0.05413818359375, -0.01708984375, -0.01415252685546875, 0...
stablediffusionapi/epicrealism
2023-07-12T11:56:42.000Z
[ "diffusers", "stablediffusionapi.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
stablediffusionapi
null
null
stablediffusionapi/epicrealism
1
1,037
diffusers
2023-05-31T13:46:10
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---

# epiCRealism V1 API Inference

![generated from stablediffusionapi.com](https://cdn.stablediffusionapi.com/generations/3513027121685540721.png)

## Get API Key

Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed.

Replace Key in below code, change **model_id** to "epicrealism"

Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)

Try model for free: [Generate Images](https://stablediffusionapi.com/models/epicrealism)

Model link: [View model](https://stablediffusionapi.com/models/epicrealism)

Credits: [View credits](https://civitai.com/?query=epiCRealism)

View all models: [View Models](https://stablediffusionapi.com/models)

```python
import requests
import json

url = "https://stablediffusionapi.com/api/v3/dreambooth"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "epicrealism",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```

> Use this coupon code to get 25% off **DMGG0RBN**
2,443
[ [ -0.033416748046875, -0.061920166015625, 0.03729248046875, 0.0187835693359375, -0.031646728515625, 0.01262664794921875, 0.02325439453125, -0.0330810546875, 0.048004150390625, 0.04986572265625, -0.05072021484375, -0.0650634765625, -0.02728271484375, 0.00080347...
Dr-BERT/DrBERT-4GB
2023-05-28T17:38:07.000Z
[ "transformers", "pytorch", "tensorboard", "camembert", "fill-mask", "medical", "chemistry", "biomedical", "life science", "fr", "dataset:Dr-BERT/NACHOS", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
Dr-BERT
null
null
Dr-BERT/DrBERT-4GB
1
1,036
transformers
2022-12-25T19:21:28
--- license: apache-2.0 datasets: - Dr-BERT/NACHOS language: - fr library_name: transformers tags: - medical - chemistry - biomedical - life science --- <p align="center"> <img src="https://github.com/qanastek/DrBERT/blob/main/assets/logo.png?raw=true" alt="drawing" width="250"/> </p> # DrBERT: A Robust Pre-trained Model in French for Biomedical and Clinical domains In recent years, pre-trained language models (PLMs) achieve the best performance on a wide range of natural language processing (NLP) tasks. While the first models were trained on general domain data, specialized ones have emerged to more effectively treat specific domains. In this paper, we propose an original study of PLMs in the medical domain on French language. We compare, for the first time, the performance of PLMs trained on both public data from the web and private data from healthcare establishments. We also evaluate different learning strategies on a set of biomedical tasks. Finally, we release the first specialized PLMs for the biomedical field in French, called DrBERT, as well as the largest corpus of medical data under free license on which these models are trained. # 1. DrBERT models **DrBERT** is a French RoBERTa trained on a open source corpus of French medical crawled textual data called NACHOS. Models with different amount of data from differents public and private sources are trained using the CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/jean-zay/) French supercomputer. Only the weights of the models trained using exclusively open-sources data are publicly released to prevent any personnal information leak and to follow the european GDPR laws : | Model name | Corpus | Number of layers | Attention Heads | Embedding Dimension | Sequence Length | Model URL | | :------: | :---: | :---: | :---: | :---: | :---: | :---: | | `DrBERT-7-GB-cased-Large` | NACHOS 7 GB | 24 | 16 | 1024 | 512 | [HuggingFace](https://huggingface.co/Dr-BERT/DrBERT-7GB-Large) | | `DrBERT-7-GB-cased` | NACHOS 7 GB | 12 | 12 | 768 | 512 | [HuggingFace](https://huggingface.co/Dr-BERT/DrBERT-7GB) | | `DrBERT-4-GB-cased` | NACHOS 4 GB | 12 | 12 | 768 | 512 | [HuggingFace](https://huggingface.co/Dr-BERT/DrBERT-4GB) | | `DrBERT-4-GB-cased-CP-CamemBERT` | NACHOS 4 GB | 12 | 12 | 768 | 512 | [HuggingFace](https://huggingface.co/Dr-BERT/DrBERT-4GB-CP-CamemBERT) | | `DrBERT-4-GB-cased-CP-PubMedBERT` | NACHOS 4 GB | 12 | 12 | 768 | 512 | [HuggingFace](https://huggingface.co/Dr-BERT/DrBERT-4GB-CP-PubMedBERT) | # 2. Using DrBERT You can use DrBERT with [Hugging Face's Transformers library](https://github.com/huggingface/transformers) as follow. Loading the model and tokenizer : ```python from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("Dr-BERT/DrBERT-7GB") model = AutoModel.from_pretrained("Dr-BERT/DrBERT-7GB") ``` Perform the mask filling task : ```python from transformers import pipeline fill_mask = pipeline("fill-mask", model="Dr-BERT/DrBERT-7GB", tokenizer="Dr-BERT/DrBERT-7GB") results = fill_mask("La patiente est atteinte d'une <mask>") ``` # 3. 
## 3.4 Preprocessing and tokenization of the dataset

First, set the `tokenizer_path` field of the shell script to the path of the tokenizer directory you downloaded via HuggingFace Git or built yourself.

Run `./preprocessing_dataset.sh` to generate the tokenized dataset using the given tokenizer.

## 3.5 Model training

First, in the shell script `run_training.sh`, set the number of GPUs (`--ntasks=128`) to match your computational capabilities. In our case, we used 128 V100 32 GB GPUs across 32 nodes of 4 GPUs each (`--ntasks-per-node=4` and `--gres=gpu:4`) for 20 hours (`--time=20:00:00`).

If you are using Jean Zay, you also need to change the `-A` flag to one of your `@gpu` profiles capable of running the job. You also need to move **ALL** of your datasets, tokenizer, scripts and outputs to the `$SCRATCH` disk space to prevent other users from suffering I/O issues.

### 3.5.1 Pre-training from scratch

Once the SLURM parameters are updated, change the name of the model architecture in the flag `--model_type="camembert"` and update `--config_overrides=` according to the specifications of the architecture you are trying to train. In our case, RoBERTa had a sequence length of `514`, a vocabulary of `32005` tokens (32K tokens from the tokenizer plus 5 from the model architecture), and the identifiers of the beginning-of-sentence (BOS) and end-of-sentence (EOS) tokens are `5` and `6` respectively.

Then, go to the `./from_scratch/` directory and run `sbatch ./run_training.sh` to send the training job to the SLURM queue.

### 3.5.2 Continued pre-training

Once the SLURM parameters are updated, set `--model_name_or_path=` / `--tokenizer_name=` to the path of the model/tokenizer you want to start from, downloaded from HuggingFace's Git as described in section 3.3.

Then, go to the `./continued_pretraining/` directory and run `sbatch ./run_training.sh` to send the training job to the SLURM queue.

# 4. Fine-tuning on a downstream task

You just need to change the name of the model to `Dr-BERT/DrBERT-7GB` in any of the examples provided by HuggingFace's team [here](https://huggingface.co/docs/transformers/tasks/sequence_classification); a minimal sketch is given below.
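As an illustration, here is a minimal fine-tuning sketch using the `Trainer` API, assuming a hypothetical CSV classification dataset with `text` and `label` columns; the file names, hyperparameters, and number of labels are placeholders, not values from the paper.

```python
# Hedged sketch: fine-tune DrBERT for sequence classification.
# Assumptions: a CSV dataset with "text" and "label" columns and two
# classes; all file names and hyperparameters below are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("Dr-BERT/DrBERT-7GB")
model = AutoModelForSequenceClassification.from_pretrained(
    "Dr-BERT/DrBERT-7GB", num_labels=2)

dataset = load_dataset("csv", data_files={"train": "train.csv",
                                          "validation": "dev.csv"})

def tokenize(batch):
    # Truncate to the 512-token maximum sequence length of DrBERT
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="./drbert-finetuned",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    num_train_epochs=3,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
```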
# Citation BibTeX

```bibtex
@inproceedings{labrak2023drbert,
  title = {{DrBERT: A Robust Pre-trained Model in French for Biomedical and Clinical domains}},
  author = {Labrak, Yanis and Bazoge, Adrien and Dufour, Richard and Rouvier, Mickael and Morin, Emmanuel and Daille, Béatrice and Gourraud, Pierre-Antoine},
  booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL'23), Long Paper},
  month = jul,
  year = 2023,
  address = {Toronto, Canada},
  publisher = {Association for Computational Linguistics}
}
```
7,252
[ [ -0.03326416015625, -0.04339599609375, 0.0218505859375, 0.017333984375, -0.0160064697265625, 0.01035308837890625, -0.01019287109375, -0.02655029296875, 0.03521728515625, 0.032073974609375, -0.046173095703125, -0.0328369140625, -0.06927490234375, 0.00416946411...