| Column | Type | Min | Max |
| --- | --- | --- | --- |
| modelId | string (length) | 4 | 111 |
| lastModified | string (length) | 24 | 24 |
| tags | list | | |
| pipeline_tag | string (length) | 5 | 30 |
| author | string (length) | 2 | 34 |
| config | null | | |
| securityStatus | null | | |
| id | string (length) | 4 | 111 |
| likes | int64 | 0 | 9.53k |
| downloads | int64 | 2 | 73.6M |
| library_name | string (length) | 2 | 84 |
| created | timestamp[us] | | |
| card | string (length) | 101 | 901k |
| card_len | int64 | 101 | 901k |
| embeddings | list | | |
mrm8488/bert-small-finetuned-squadv2
2021-05-20T00:33:09.000Z
[ "transformers", "pytorch", "jax", "bert", "question-answering", "en", "arxiv:1908.08962", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
question-answering
mrm8488
null
null
mrm8488/bert-small-finetuned-squadv2
1
715
transformers
2022-03-02T23:29:05
---
language: en
thumbnail:
---

# BERT-Small fine-tuned on SQuAD v2

[BERT-Small](https://github.com/google-research/bert/) created by [Google Research](https://github.com/google-research) and fine-tuned on [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) for the **Q&A** downstream task.

**Model size** (after training): **109.74 MB**

## Details of BERT-Small and its 'family' (from their documentation)

Released on March 11th, 2020.

This model is one of 24 smaller BERT models (English only, uncased, trained with WordPiece masking) referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962).

The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher.

## Details of the downstream task (Q&A) - Dataset

[SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering.

| Dataset  | Split | # samples |
| -------- | ----- | --------- |
| SQuAD2.0 | train | 130k      |
| SQuAD2.0 | eval  | 12.3k     |

## Model training

The model was trained on a Tesla P100 GPU with 25GB of RAM. The script for fine-tuning can be found [here](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py).

## Results:

| Metric | # Value   |
| ------ | --------- |
| **EM** | **60.49** |
| **F1** | **64.21** |

## Comparison:

| Model | EM | F1 score | SIZE (MB) |
| ------------------------------------------------------------------------------------------- | --------- | --------- | --------- |
| [bert-tiny-finetuned-squadv2](https://huggingface.co/mrm8488/bert-tiny-finetuned-squadv2)    | 48.60     | 49.73     | **16.74** |
| [bert-mini-finetuned-squadv2](https://huggingface.co/mrm8488/bert-mini-finetuned-squadv2)    | 56.31     | 59.65     | 42.63     |
| [bert-small-finetuned-squadv2](https://huggingface.co/mrm8488/bert-small-finetuned-squadv2)  | **60.49** | **64.21** | 109.74    |

## Model in action

Fast usage with **pipelines**:

```python
from transformers import pipeline

qa_pipeline = pipeline(
    "question-answering",
    model="mrm8488/bert-small-finetuned-squadv2",
    tokenizer="mrm8488/bert-small-finetuned-squadv2"
)

qa_pipeline({
    'context': "Manuel Romero has been working hardly in the repository hugginface/transformers lately",
    'question': "Who has been working hard for hugginface/transformers lately?"
})
# Output:
```

```json
{
  "answer": "Manuel Romero",
  "end": 13,
  "score": 0.9939319924374637,
  "start": 0
}
```

### Yes! That was easy 🎉 Let's try with another example

```python
qa_pipeline({
    'context': "Manuel Romero has been working hardly in the repository hugginface/transformers lately",
    'question': "For which company has worked Manuel Romero?"
})
# Output:
```

```json
{
  "answer": "hugginface/transformers",
  "end": 79,
  "score": 0.6024888734447131,
  "start": 56
}
```

### It works!! 🎉 🎉 🎉

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">&hearts;</span> in Spain
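For finer control than the pipeline offers, the same checkpoint can also be loaded directly with the standard `AutoTokenizer`/`AutoModelForQuestionAnswering` classes. The sketch below is not from the card author; the argmax-based span decoding is a deliberate simplification:

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# Load the fine-tuned checkpoint directly (same repo as the pipeline example above)
tokenizer = AutoTokenizer.from_pretrained("mrm8488/bert-small-finetuned-squadv2")
model = AutoModelForQuestionAnswering.from_pretrained("mrm8488/bert-small-finetuned-squadv2")

question = "Who has been working hard for hugginface/transformers lately?"
context = "Manuel Romero has been working hardly in the repository hugginface/transformers lately"

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Naive span decoding: take the most likely start and end token positions
start_idx = int(torch.argmax(outputs.start_logits))
end_idx = int(torch.argmax(outputs.end_logits)) + 1
answer = tokenizer.decode(inputs["input_ids"][0][start_idx:end_idx])
print(answer)  # expected to print something like "manuel romero" (the model is uncased)
```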
3,753
[ [ -0.049346923828125, -0.0556640625, 0.0233612060546875, 0.01256561279296875, -0.0052490234375, 0.00909423828125, -0.0199737548828125, -0.025543212890625, 0.023681640625, 0.012725830078125, -0.0794677734375, -0.01209259033203125, -0.03741455078125, 0.009994506...
nvidia/tts_en_fastpitch
2022-09-23T04:28:43.000Z
[ "nemo", "text-to-speech", "speech", "audio", "Transformer", "pytorch", "NeMo", "Riva", "en", "dataset:ljspeech", "arxiv:2006.06873", "arxiv:2108.10447", "license:cc-by-4.0", "has_space", "region:us" ]
text-to-speech
nvidia
null
null
nvidia/tts_en_fastpitch
28
715
nemo
2022-06-28T17:55:51
---
language:
- en
library_name: nemo
datasets:
- ljspeech
thumbnail: null
tags:
- text-to-speech
- speech
- audio
- Transformer
- pytorch
- NeMo
- Riva
license: cc-by-4.0
---

# NVIDIA FastPitch (en-US)

<style> img { display: inline; } </style>

| [![Model architecture](https://img.shields.io/badge/Model_Arch-FastPitch--Transformer-lightgrey#model-badge)](#model-architecture) | [![Model size](https://img.shields.io/badge/Params-45M-lightgrey#model-badge)](#model-architecture) | [![Language](https://img.shields.io/badge/Language-en--US-lightgrey#model-badge)](#datasets) | [![Riva Compatible](https://img.shields.io/badge/NVIDIA%20Riva-compatible-brightgreen#model-badge)](#deployment-with-nvidia-riva) |

FastPitch [1] is a fully-parallel transformer architecture with prosody control over pitch and individual phoneme duration. Additionally, it uses an unsupervised speech-text aligner [2]. See the [model architecture](#model-architecture) section for complete architecture details.

It is also compatible with NVIDIA Riva for [production-grade server deployments](#deployment-with-nvidia-riva).

## Usage

The model is available for use in the NeMo toolkit [3] and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.

To train, fine-tune, or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.

```
pip install nemo_toolkit['all']
```

### Automatically instantiate the model

Note: This model generates only spectrograms, and a vocoder is needed to convert the spectrograms to waveforms. In this example HiFiGAN is used.

```python
# Load FastPitch
from nemo.collections.tts.models import FastPitchModel
spec_generator = FastPitchModel.from_pretrained("nvidia/tts_en_fastpitch")

# Load vocoder
from nemo.collections.tts.models import HifiGanModel
model = HifiGanModel.from_pretrained(model_name="nvidia/tts_hifigan")
```

### Generate audio

```python
import soundfile as sf
parsed = spec_generator.parse("You can type your sentence here to get nemo to produce speech.")
spectrogram = spec_generator.generate_spectrogram(tokens=parsed)
audio = model.convert_spectrogram_to_audio(spec=spectrogram)
```

### Save the generated audio file

```python
# Save the audio to disk in a file called speech.wav
sf.write("speech.wav", audio.to('cpu').detach().numpy()[0], 22050)
```

### Input

This model accepts batches of text.

### Output

This model generates mel spectrograms.

## Model Architecture

FastPitch is a fully-parallel text-to-speech model based on FastSpeech, conditioned on fundamental frequency contours. The model predicts pitch contours during inference. By altering these predictions, the generated speech can be more expressive, better match the semantics of the utterance, and in the end be more engaging to the listener. FastPitch is based on a fully-parallel Transformer architecture, with a much higher real-time factor than Tacotron2 for the mel-spectrogram synthesis of a typical utterance. It uses an unsupervised speech-text aligner.

## Training

The NeMo toolkit [3] was used to train the models for 1000 epochs. These models are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/tts/fastpitch.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/tts/conf/fastpitch_align_v1.05.yaml).

### Datasets

This model is trained on LJSpeech sampled at 22050Hz, and has been tested on generating female English voices with an American accent.

## Performance

No performance information is available at this time.

## Limitations

This checkpoint only works well with vocoders that were trained on 22050Hz data. Otherwise, the generated audio may be scratchy or choppy-sounding.

## Deployment with NVIDIA Riva

For the best real-time accuracy, latency, and throughput, deploy the model with [NVIDIA Riva](https://developer.nvidia.com/riva), an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, at the edge, and embedded.

Additionally, Riva provides:

* World-class out-of-the-box accuracy for the most common languages with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours
* Best in class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization
* Streaming speech recognition, Kubernetes compatible scaling, and Enterprise-grade support

Check out the [Riva live demo](https://developer.nvidia.com/riva#demos).

## References

- [1] [FastPitch: Parallel Text-to-speech with Pitch Prediction](https://arxiv.org/abs/2006.06873)
- [2] [One TTS Alignment To Rule Them All](https://arxiv.org/abs/2108.10447)
- [3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
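Taken together, the steps above fit into one short script. This is a minimal end-to-end sketch that only reuses the NeMo calls already shown in this card (FastPitchModel, HifiGanModel, and soundfile); the output filename and the example sentence are placeholders:

```python
import soundfile as sf
from nemo.collections.tts.models import FastPitchModel, HifiGanModel

# Spectrogram generator + vocoder, as in the card above
spec_generator = FastPitchModel.from_pretrained("nvidia/tts_en_fastpitch")
vocoder = HifiGanModel.from_pretrained(model_name="nvidia/tts_hifigan")

# Text -> tokens -> mel spectrogram -> waveform
parsed = spec_generator.parse("Hello, this is a test of FastPitch with HiFi-GAN.")
spectrogram = spec_generator.generate_spectrogram(tokens=parsed)
audio = vocoder.convert_spectrogram_to_audio(spec=spectrogram)

# LJSpeech checkpoints are 22050 Hz, so write the file at that rate
sf.write("fastpitch_demo.wav", audio.to("cpu").detach().numpy()[0], 22050)
```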
4,881
[ [ -0.041778564453125, -0.07012939453125, 0.014373779296875, 0.0164642333984375, -0.0219268798828125, -0.0032367706298828125, -0.0198822021484375, -0.040557861328125, 0.019012451171875, 0.010894775390625, -0.044097900390625, -0.041534423828125, -0.039520263671875, ...
Fsoft-AIC/videberta-base
2023-03-01T10:01:43.000Z
[ "transformers", "pytorch", "deberta-v2", "transformer", "vietnamese", "nlp", "bert", "deberta", "deberta-v3", "token-classification", "vi", "endpoints_compatible", "region:us" ]
token-classification
Fsoft-AIC
null
null
Fsoft-AIC/videberta-base
4
715
transformers
2023-03-01T09:40:39
---
language:
- vi
metrics:
- f1
pipeline_tag: token-classification
tags:
- transformer
- vietnamese
- nlp
- bert
- deberta
- deberta-v3
---

# ViDeBERTa: A powerful pre-trained language model for Vietnamese

ViDeBERTa is a new pre-trained monolingual language model for Vietnamese, with three versions - ViDeBERTa_xsmall, ViDeBERTa_base, and ViDeBERTa_large - pre-trained on 138GB of high-quality and diverse Vietnamese text using the DeBERTaV3 architecture. Please check the [official repository][github] for more implementation details and updates.

The DeBERTa V3 xsmall model comes with 12 layers and a hidden size of 384. It has only 22M backbone parameters, with a vocabulary containing 128K tokens which introduces 48M parameters in the embedding layer. This model was trained using the CC100 dataset, which consists of 138 GB of Vietnamese text.

## Fine-tuning on NLU tasks

We present the dev results on the VLSP POS, PhoNER, and ViQuAD datasets.

| Model | #Params(M) | POS | NER | MRC |
|-----------|-------|---------|-----|----------|
| XLM-R-base | 125M | 96.2 | - | 82.0 |
| XLM-R-large | 355M | 96.3 | 93.8 | 87.0 |
| PhoBERT-base | 135M | 96.7 | 80.1 |
| PhoBERT-large | 370M | 96.8 | 83.5 |
| ViT5-base | 310M | - | 94.5 | - |
| ViT5-large | 866M | - | 93.8 | - |
| **ViDeBERTa-xsmall** | **22M** | **96.4** | **93.6** | **81.3** |
| ViDeBERTa-base | 86M | 96.8 | 94.5 | 85.7 |
| ViDeBERTa-large | 304M | 97.2 | 95.3 | 89.9 |

## Citation

If you find ViDeBERTa useful for your work, please cite the following papers:

```latex
@article{dao2023videberta,
  title={ViDeBERTa: A powerful pre-trained language model for Vietnamese},
  author={Dao Tran, Cong and Pham, Nhut Huy and Nguyen, Anh and Son Hy, Truong and Vu, Tu},
  journal={arXiv e-prints},
  pages={arXiv--2301},
  year={2023}
}
```

[github]: https://github.com/HySonLab/ViDeBERTa
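The card itself ships no usage code. As a rough, assumption-level sketch (not from the authors), the checkpoint should load through the standard 🤗 Transformers auto classes, since the repo is tagged as a DeBERTa-v2-style model; the tokenizer typically requires `sentencepiece`, and the example Vietnamese sentence is just a placeholder:

```python
# Assumption-level sketch: load the backbone for feature extraction or further
# fine-tuning with the generic auto classes. `pip install sentencepiece` may be needed.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Fsoft-AIC/videberta-base")
model = AutoModel.from_pretrained("Fsoft-AIC/videberta-base")

inputs = tokenizer("Hà Nội là thủ đô của Việt Nam.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```

Fine-tuning for the POS/NER tasks reported above would then follow the usual token-classification recipe on top of this backbone.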
1,876
[ [ -0.022796630859375, -0.0574951171875, 0.0296630859375, 0.0249176025390625, -0.0281524658203125, -0.001773834228515625, -0.00629425048828125, -0.01544952392578125, 0.025238037109375, 0.047454833984375, -0.0194244384765625, -0.0650634765625, -0.0472412109375, ...
timm/deit3_small_patch16_224.fb_in22k_ft_in1k
2023-03-28T01:27:04.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-22k", "arxiv:2204.07118", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/deit3_small_patch16_224.fb_in22k_ft_in1k
0
715
timm
2023-03-28T01:26:49
---
tags:
- image-classification
- timm
library_tag: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-22k
---

# Model card for deit3_small_patch16_224.fb_in22k_ft_in1k

A DeiT-III image classification model. Pretrained on ImageNet-22k and fine-tuned on ImageNet-1k by paper authors.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 22.1
  - GMACs: 4.6
  - Activations (M): 11.9
  - Image size: 224 x 224
- **Papers:**
  - DeiT III: Revenge of the ViT: https://arxiv.org/abs/2204.07118
- **Original:** https://github.com/facebookresearch/deit
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-22k

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('deit3_small_patch16_224.fb_in22k_ft_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'deit3_small_patch16_224.fb_in22k_ft_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 197, 384) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).

## Citation
```bibtex
@article{Touvron2022DeiTIR,
  title={DeiT III: Revenge of the ViT},
  author={Hugo Touvron and Matthieu Cord and Herve Jegou},
  journal={arXiv preprint arXiv:2204.07118},
  year={2022},
}
```
```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
3,055
[ [ -0.03179931640625, -0.03521728515625, 0.009613037109375, 0.01224517822265625, -0.028472900390625, -0.0226287841796875, -0.004207611083984375, -0.0294952392578125, 0.01523590087890625, 0.021820068359375, -0.041473388671875, -0.053741455078125, -0.046600341796875,...
timm/resnet50.fb_swsl_ig1b_ft_in1k
2023-04-05T18:13:11.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "arxiv:1905.00546", "arxiv:1512.03385", "license:cc-by-nc-4.0", "region:us" ]
image-classification
timm
null
null
timm/resnet50.fb_swsl_ig1b_ft_in1k
0
715
timm
2023-04-05T18:12:55
---
tags:
- image-classification
- timm
library_tag: timm
license: cc-by-nc-4.0
---

# Model card for resnet50.fb_swsl_ig1b_ft_in1k

A ResNet-B image classification model.

This model features:
* ReLU activations
* single layer 7x7 convolution with pooling
* 1x1 convolution shortcut downsample

Pretrained on Instagram-1B hashtags dataset using semi-weakly supervised learning and fine-tuned on ImageNet-1k by paper authors.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 25.6
  - GMACs: 4.1
  - Activations (M): 11.1
  - Image size: 224 x 224
- **Papers:**
  - Billion-scale semi-supervised learning for image classification: https://arxiv.org/abs/1905.00546
  - Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385
- **Original:** https://github.com/facebookresearch/semi-supervised-ImageNet1K-models

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('resnet50.fb_swsl_ig1b_ft_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'resnet50.fb_swsl_ig1b_ft_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 64, 112, 112])
    #  torch.Size([1, 256, 56, 56])
    #  torch.Size([1, 512, 28, 28])
    #  torch.Size([1, 1024, 14, 14])
    #  torch.Size([1, 2048, 7, 7])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'resnet50.fb_swsl_ig1b_ft_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model
results](https://github.com/huggingface/pytorch-image-models/tree/main/results). |model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec| |------------------------------------------|--------|-----|-----|-----------|-----|-----|-------| |[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 | |[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 | |[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 | |[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 | |[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 | |[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 | |[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 | |[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 | |[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 | |[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 | |[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 | |[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 | |[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 | |[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 | |[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 | |[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 | |[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 | |[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 | |[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 | |[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 | |[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 | |[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 | |[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 
|83.68|96.61|93.6 |16.7 |32.0 |986 | |[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 | |[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 | |[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 | |[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 | |[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 | |[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 | |[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 | |[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 | |[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 | |[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 | |[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 | |[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 | |[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 | |[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 | |[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 | |[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 | |[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 | |[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 | |[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 | |[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 | |[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 | |[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 | |[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 | |[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 | |[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 | |[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 | |[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 | |[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 | |[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 | 
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 | |[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 | |[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 | |[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 | |[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 | |[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 | |[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 | |[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 | |[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 | |[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 | |[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 | |[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 | |[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 | |[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 | |[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 | |[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 | |[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 | |[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 | |[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 | |[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 | |[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 | |[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 | |[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 | |[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 | |[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 | |[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 | |[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 | |[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 | |[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 | 
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 | |[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 | |[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 | |[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 | |[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 | |[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 | |[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 | |[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 | |[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 | |[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 | |[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 | |[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 | |[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 | |[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 | |[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 | |[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 | |[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 | |[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 | |[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 | |[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 | |[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 | |[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 | |[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 | |[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 | |[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 | |[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 | |[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 | |[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 | |[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 | |[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 
|10.6 |2349 | |[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 | |[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 | |[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 | |[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 | |[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 | |[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 | |[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 | |[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 | |[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 | |[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 | |[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 | |[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 | |[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 | |[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 | |[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 | |[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 | |[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 | |[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 | |[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 | |[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 | |[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 | |[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 | |[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 | |[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 | |[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 | |[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 | |[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 | |[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 | |[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 | |[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 | 
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 | |[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 | |[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 | |[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 | |[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 | |[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 | |[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 | |[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 | |[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 | |[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 | |[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 | |[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 | |[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 | |[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 | |[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 | |[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 | |[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 | |[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 | |[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 | |[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 | |[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 | |[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 | |[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 | |[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 | |[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 | |[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 | |[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 | |[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 | |[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 | |[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 | |[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 | 
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 | |[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 | |[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 | |[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 | |[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 | |[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 | |[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 | |[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 | |[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 | |[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 | |[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 | |[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 | |[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 | |[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 | |[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 | |[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 | |[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 | |[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 | |[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 | |[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 | |[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 | |[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 | |[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 | |[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 | |[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 | |[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 | |[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 | |[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 | |[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 | |[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 | 
|[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 | |[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 | |[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 | |[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 | |[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 | |[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 | |[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 | |[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 | |[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 | |[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 | |[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 | |[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 | |[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 | |[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 | |[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 | |[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 | |[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 | |[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 | |[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 | |[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 | |[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 | |[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 | |[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 | |[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 | |[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 | |[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 | |[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 | |[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 | |[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 | |[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 | |[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 | |[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 
|75.16|92.18|21.8 |3.7 |3.7 |5994 | |[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 | |[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 | |[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 | |[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 | |[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 | |[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 | |[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 | |[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 | |[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 | |[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 | |[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 | |[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 | |[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 | |[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 | |[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 | |[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 | |[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 | |[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 | |[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 | |[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 | |[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 | ## Citation ```bibtex @misc{yalniz2019billionscale, title={Billion-scale semi-supervised learning for image classification}, author={I. Zeki Yalniz and Hervé Jégou and Kan Chen and Manohar Paluri and Dhruv Mahajan}, year={2019}, eprint={1905.00546}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @article{He2015, author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun}, title = {Deep Residual Learning for Image Recognition}, journal = {arXiv preprint arXiv:1512.03385}, year = {2015} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
38,449
[ [ -0.06463623046875, -0.018310546875, 0.0018529891967773438, 0.030914306640625, -0.0316162109375, -0.0082550048828125, -0.01076507568359375, -0.032257080078125, 0.0849609375, 0.0190277099609375, -0.0491943359375, -0.04095458984375, -0.04473876953125, -0.000450...
Langboat/bloom-1b4-zh
2023-05-08T03:39:33.000Z
[ "transformers", "pytorch", "safetensors", "bloom", "text-generation", "zh", "license:bigscience-bloom-rail-1.0", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
Langboat
null
null
Langboat/bloom-1b4-zh
9
714
transformers
2022-08-31T06:49:32
---
license: bigscience-bloom-rail-1.0
language:
- zh
pipeline_tag: text-generation
widget:
- text: "中国的首都是"
---

This model is based on [bigscience/bloom-1b7](https://huggingface.co/bigscience/bloom-1b7). We pruned its vocabulary from 250880 to 46145 with a Chinese corpus to reduce GPU memory usage, so the total parameter count is now 1.4B.

# How to use

```python
from transformers import BloomTokenizerFast, BloomForCausalLM

tokenizer = BloomTokenizerFast.from_pretrained('Langboat/bloom-1b4-zh')
model = BloomForCausalLM.from_pretrained('Langboat/bloom-1b4-zh')

print(tokenizer.batch_decode(model.generate(tokenizer.encode('中国的首都是', return_tensors='pt'))))
```
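The one-liner above uses the default (greedy) decoding and generation length. A slightly expanded sketch makes the decoding settings explicit; the generation arguments below are standard `generate()` options chosen for illustration, not values from the card:

```python
from transformers import BloomTokenizerFast, BloomForCausalLM

tokenizer = BloomTokenizerFast.from_pretrained('Langboat/bloom-1b4-zh')
model = BloomForCausalLM.from_pretrained('Langboat/bloom-1b4-zh')

inputs = tokenizer('中国的首都是', return_tensors='pt')
outputs = model.generate(
    **inputs,
    max_new_tokens=32,   # how many tokens to generate beyond the prompt
    do_sample=True,      # sample instead of greedy decoding
    top_p=0.9,
    temperature=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```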
663
[ [ -0.034210205078125, -0.040863037109375, 0.0193939208984375, 0.034454345703125, -0.03631591796875, -0.0252532958984375, -0.0288543701171875, -0.01195526123046875, 0.0022792816162109375, 0.0289764404296875, -0.03790283203125, -0.0246429443359375, -0.03131103515625...
timm/flexivit_large.1200ep_in1k
2023-05-05T23:59:42.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2212.08013", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/flexivit_large.1200ep_in1k
0
714
timm
2022-12-22T07:20:33
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---

# Model card for flexivit_large.1200ep_in1k

A FlexiViT image classification model. Trained on ImageNet-1k in JAX by paper authors, ported to PyTorch by Ross Wightman.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 304.4
  - GMACs: 68.5
  - Activations (M): 50.2
  - Image size: 240 x 240
- **Papers:**
  - FlexiViT: One Model for All Patch Sizes: https://arxiv.org/abs/2212.08013
  - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/google-research/big_vision

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('flexivit_large.1200ep_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'flexivit_large.1200ep_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 226, 1024) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).

## Citation
```bibtex
@article{beyer2022flexivit,
  title={FlexiViT: One Model for All Patch Sizes},
  author={Beyer, Lucas and Izmailov, Pavel and Kolesnikov, Alexander and Caron, Mathilde and Kornblith, Simon and Zhai, Xiaohua and Minderer, Matthias and Tschannen, Michael and Alabdulmohsin, Ibrahim and Pavetic, Filip},
  journal={arXiv preprint arXiv:2212.08013},
  year={2022}
}
```
```bibtex
@article{dosovitskiy2020vit,
  title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
  author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
  journal={ICLR},
  year={2021}
}
```
```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
3,704
[ [ -0.03839111328125, -0.028594970703125, 0.00518798828125, 0.0054931640625, -0.0262298583984375, -0.02935791015625, -0.019989013671875, -0.0367431640625, 0.0170135498046875, 0.017303466796875, -0.0421142578125, -0.0411376953125, -0.045562744140625, -0.00184345...
facebook/mms-tts-fra
2023-09-01T12:59:00.000Z
[ "transformers", "pytorch", "safetensors", "vits", "text-to-audio", "mms", "text-to-speech", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "has_space", "region:us" ]
text-to-speech
facebook
null
null
facebook/mms-tts-fra
0
714
transformers
2023-09-01T12:58:25
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---

# Massively Multilingual Speech (MMS): French Text-to-Speech

This repository contains the **French (fra)** language text-to-speech (TTS) model checkpoint. This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to provide speech technology across a diverse range of languages. You can find more details about the supported languages and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html), and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts). MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.

## Model Details

VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.

A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers, much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to synthesise speech with different rhythms from the same input text.

The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training. To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor, the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.

For the MMS project, a separate VITS checkpoint is trained on each language.

## Usage

MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint, first install the latest version of the library:

```
pip install --upgrade transformers accelerate
```

Then, run inference with the following code snippet:

```python
from transformers import VitsModel, AutoTokenizer
import torch

model = VitsModel.from_pretrained("facebook/mms-tts-fra")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-fra")

text = "some example text in the French language"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    output = model(**inputs).waveform
```

The resulting waveform can be saved as a `.wav` file:

```python
import scipy

# the waveform is a (1, num_samples) tensor: drop the batch dimension and convert to NumPy before writing
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().numpy())
```

Or displayed in a Jupyter Notebook / Google Colab:

```python
from IPython.display import Audio

Audio(output.squeeze().numpy(), rate=model.config.sampling_rate)
```

## BibTex citation

This model was developed by Vineel Pratap et al. from Meta AI.
If you use the model, consider citing the MMS paper: ``` @article{pratap2023mms, title={Scaling Speech Technology to 1,000+ Languages}, author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli}, journal={arXiv}, year={2023} } ``` ## License The model is licensed as **CC-BY-NC 4.0**.
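As noted in the Model Details section, the stochastic duration predictor makes generation non-deterministic, so reproducing the exact same waveform requires fixing the random seed before each call. A minimal sketch (the French sentence is just an illustrative input, not from the card):

```python
import torch
from transformers import VitsModel, AutoTokenizer

model = VitsModel.from_pretrained("facebook/mms-tts-fra")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-fra")

inputs = tokenizer("Bonjour, comment allez-vous ?", return_tensors="pt")

torch.manual_seed(555)  # any fixed value; reusing it reproduces the same waveform
with torch.no_grad():
    waveform_a = model(**inputs).waveform

torch.manual_seed(555)
with torch.no_grad():
    waveform_b = model(**inputs).waveform

print(torch.allclose(waveform_a, waveform_b))  # should print True: same seed -> same waveform
```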
3,969
[ [ -0.0257110595703125, -0.059722900390625, 0.0168914794921875, 0.03326416015625, -0.001377105712890625, -0.0047760009765625, -0.022491455078125, -0.0166015625, 0.0242156982421875, 0.01904296875, -0.057281494140625, -0.03466796875, -0.045562744140625, 0.0027027...
nsi319/legal-led-base-16384
2021-03-01T12:33:48.000Z
[ "transformers", "pytorch", "led", "text2text-generation", "summarization", "en", "license:mit", "autotrain_compatible", "has_space", "region:us" ]
summarization
nsi319
null
null
nsi319/legal-led-base-16384
4
713
transformers
2022-03-02T23:29:05
--- language: en tags: summarization metrics: - rouge - precision inference: false license: mit --- ## LED for legal summarization of documents This is a Longformer Encoder Decoder ([led-base-16384](https://huggingface.co/allenai/led-base-16384)) model for the **legal domain**, trained for **long document abstractive summarization** task. The length of the document can be upto 16,384 tokens. ## Training data The **legal-led-base-16384** model was trained on [sec-litigation-releases](https://www.sec.gov/litigation/litreleases.htm) dataset consisting more than 2700 litigation releases and complaints. ## How to use ```Python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("nsi319/legal-led-base-16384") model = AutoModelForSeq2SeqLM.from_pretrained("nsi319/legal-led-base-16384") padding = "max_length" text="""On March 2, 2018, the Securities and Exchange Commission announced securities fraud charges against a U.K.-based broker-dealer and its investment manager in connection with manipulative trading in the securities of HD View 360 Inc., a U.S.-based microcap issuer. The SEC also announced charges against HD View's CEO, another individual, and three entities they control for manipulating HD View's securities as well as the securities of another microcap issuer, West Coast Ventures Group Corp. The SEC further announced the institution of an order suspending trading in the securities of HD View.These charges arise in part from an undercover operation by the Federal Bureau of Investigation, which also resulted in related criminal prosecutions against these defendants by the Office of the United States Attorney for the Eastern District of New York.In a complaint filed in the U.S. District Court for the Eastern District of New York, the SEC alleges that Beaufort Securities Ltd. and Peter Kyriacou, an investment manager at Beaufort, manipulated the market for HD View's common stock. The scheme involved an undercover FBI agent who described his business as manipulating U.S. stocks through pump-and-dump schemes. Kyriacou and the agent discussed depositing large blocks of microcap stock in Beaufort accounts, driving up the price of the stock through promotions, manipulating the stock's price and volume through matched trades, and then selling the shares for a large profit.The SEC's complaint against Beaufort and Kyriacou alleges that they:opened brokerage accounts for the undercover agent in the names of nominees in order to conceal his identity and his connection to the anticipated trading activity in the accounts suggested that the undercover agent could create the false appearance that HD View's stock was liquid in advance of a pump-and-dump by "gam[ing] the market" through matched trades executed multiple purchase orders of HD View shares with the understanding that Beaufort's client had arranged for an associate to simultaneously offer an equivalent number of shares at the same priceA second complaint filed by the SEC in the U.S. District Court for the Eastern District of New York alleges that in a series of recorded telephone conversations with the undercover agent, HD View CEO Dennis Mancino and William T. Hirschy agreed to manipulate HD View's common stock by using the agent's network of brokers to generate fraudulent retail demand for the stock in exchange for a kickback from the trading proceeds. 
According to the complaint, the three men agreed that Mancino and Hirschy would manipulate HD View stock to a higher price before using the agent's brokers to liquidate their positions at an artificially inflated price. The SEC's complaint also alleges that Mancino and Hirschy executed a "test trade" on Jan. 31, 2018, coordinated by the agent, consisting of a sell order placed by the defendants filled by an opposing purchase order placed by a broker into an account at Beaufort. Unbeknownst to Mancino and Hirschy, the Beaufort account used for this trade was a nominal account that was opened and funded by the agent. The SEC's complaint also alleges that, prior to their contact with the undercover agent, Mancino and Hirschy manipulated the market for HD View and for West Coast by using brokerage accounts that they owned, controlled, or were associated with –including TJM Investments Inc., DJK Investments 10 Inc., WT Consulting Group LLC – to effect manipulative "matched trades."The SEC's complaint against Beaufort and Kyriacou charges the defendants with violating Section 10(b) of the Securities Exchange Act of 1934 and Rule 10b-5 thereunder. The SEC also charged Hirschy, Mancino, and their corporate entities with violating Section 17(a)(1) of the Securities Act of 1933, Sections 9(a)(1), 9(a)(2), and 10(b) of the Exchange Act and Rules 10b-5(a) and (c) thereunder. The SEC is seeking injunctions, disgorgement, prejudgment interest, penalties, and penny stock bars from Beaufort and Kyriacou. With respect to Hirschy, Mancino, and their corporate entities, the SEC is seeking injunctions, disgorgement, prejudgment interest, penalties, penny stock bars, and an officer-and-director bar against Mancino.The investigation was conducted in the SEC's New York Regional Office by Tejal Shah and Joseph Darragh, Lorraine Collazo, and Michael D. Paley of the Microcap Fraud Task Force and supervised by Lara S. Mehraban, and in Washington, D.C. by Patrick L. Feeney, Robert Nesbitt, and Kevin Guerrero, and supervised by Antonia Chion. Preethi Krishnamurthy and Ms. Shah will lead the SEC's litigation against Beaufort and Kyriacou. Ann H. Petalas and Mr. Feeney, under the supervision of Cheryl Crumpton, will handle the SEC's litigation against Mancino, Hirschy, and their entities. The SEC appreciates the assistance of the Office of the United States Attorney for the Eastern District of New York, the Federal Bureau of Investigation, the Internal Revenue Service, the Alberta Securities Commission, the Ontario Securities Commission, the Financial Conduct Authority of the United Kingdom, and the Financial Industry Regulatory Authority.The Commission's investigation in this matter is continuing.""" input_tokenized = tokenizer.encode(text, return_tensors='pt',padding=padding,pad_to_max_length=True, max_length=6144,truncation=True) summary_ids = model.generate(input_tokenized, num_beams=4, no_repeat_ngram_size=3, length_penalty=2, min_length=350, max_length=500) summary = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids][0] ### Summary Output # On March 2, 2018, the Securities and Exchange Commission charged Beaufort Securities Ltd. and Peter Kyriacou, an investment manager at Beaufort, with manipulating the market for HD View 360 Inc., a U.S.-based microcap issuer. The SEC also announced charges against HD View's CEO, another individual, and three entities they control for manipulating HD View through pump-and-dump schemes. 
According to the SEC's complaint, the defendants discussed depositing large blocks of microcap stock in Beaufort accounts, driving up the price of the stock through promotions, manipulating the stock's price and volume through matched trades, and then selling the shares for a large profit. In a parallel action, the United States Attorney's Office for the Eastern District of New York announced criminal charges against the defendants. On March 4, the SEC announced the entry of an order suspending trading in the securities of HD View and for West Coast, pending the outcome of a parallel criminal action by the Federal Bureau of Investigation. Following the announcement of the suspension, HD View stock prices and volume increased significantly, and the defendants agreed to pay over $1.5 million in disgorgement, prejudgment interest, penalties, and an officer and director bar. Beaufort agreed to settle the charges without admitting or denying the allegations of the complaint, and to pay a $1 million civil penalty. The SEC's investigation, which is continuing, has been conducted by Patrick McCluskey and Cheryl Crumpton of the SEC Enforcement Division's Market Abuse Unit in the New York Regional Office. The SEC appreciates the assistance of the Financial Industry Regulatory Authority of the United Kingdom, the Canadian Securities Commission, the Alberta Securities Commission and the Ontario Securities Commission. ``` ## Evaluation results When the model is used for summarizing legal documents, it achieves the following results: | Model | rouge1 | rouge1-precision | rouge2 | rouge2-precision | rougeL | rougeL-precision | |:-----------:|:-----:|:-----:|:------:|:-----:|:------:|:-----:| | legal-led-base-16384 | **55.69** | **61.73** | **29.03** | **36.68** | **32.65** | **40.43** | | led-base-16384 | 29.19 | 30.43 | 15.23 | 16.27 | 16.32 | 16.58 |
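As an alternative to calling the tokenizer and model directly, the same checkpoint can be driven through the high-level `summarization` pipeline. A minimal sketch; the generation settings mirror the example above and `text` is the long legal document to summarize (e.g. the litigation release used in the example):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="nsi319/legal-led-base-16384")

# `text` is the long legal document to summarize, as in the example above
summary = summarizer(
    text,
    num_beams=4,
    no_repeat_ngram_size=3,
    length_penalty=2,
    min_length=350,
    max_length=500,
    truncation=True,
)[0]["summary_text"]
print(summary)
```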
9,043
[ [ -0.0081024169921875, -0.0416259765625, 0.02783203125, 0.0125885009765625, -0.030059814453125, 0.0197906494140625, 0.0253143310546875, -0.0489501953125, 0.06146240234375, 0.05645751953125, -0.006519317626953125, -0.03265380859375, -0.0282745361328125, -0.0032...
textattack/albert-base-v2-yelp-polarity
2020-07-06T16:37:10.000Z
[ "transformers", "pytorch", "albert", "text-classification", "endpoints_compatible", "has_space", "region:us" ]
text-classification
textattack
null
null
textattack/albert-base-v2-yelp-polarity
3
713
transformers
2022-03-02T23:29:05
## TextAttack Model Card This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack and the yelp_polarity dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 16, a learning rate of 3e-05, and a maximum sequence length of 512. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.975078947368421, as measured by the eval set accuracy, found after 3 epochs. For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
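The card above describes training only; for inference the checkpoint loads like any Hugging Face sequence-classification model. A minimal sketch — note the config ships generic `LABEL_0`/`LABEL_1` names, and mapping them to negative/positive follows the yelp_polarity label order (an assumption worth spot-checking on a few examples):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="textattack/albert-base-v2-yelp-polarity")

print(classifier("The food was incredible and the staff were lovely."))
# e.g. [{'label': 'LABEL_1', 'score': ...}]
# LABEL_1 is assumed to be the positive class and LABEL_0 the negative class,
# following the yelp_polarity dataset's label order.
```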
628
[ [ -0.01357269287109375, -0.0198822021484375, 0.02862548828125, 0.0024566650390625, -0.03076171875, -0.0025997161865234375, -0.004009246826171875, -0.03741455078125, -0.00989532470703125, 0.03375244140625, -0.03533935546875, -0.0552978515625, -0.038482666015625, ...
timm/vit_small_patch16_384.augreg_in1k
2023-05-06T00:28:49.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2106.10270", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/vit_small_patch16_384.augreg_in1k
0
713
timm
2022-12-22T07:54:33
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for vit_small_patch16_384.augreg_in1k

A Vision Transformer (ViT) image classification model. Trained on ImageNet-1k (with additional augmentation and regularization) in JAX by paper authors, ported to PyTorch by Ross Wightman.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 22.2
  - GMACs: 12.4
  - Activations (M): 24.2
  - Image size: 384 x 384
- **Papers:**
  - How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers: https://arxiv.org/abs/2106.10270
  - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/google-research/vision_transformer

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('vit_small_patch16_384.augreg_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'vit_small_patch16_384.augreg_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 577, 384) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).

## Citation
```bibtex
@article{steiner2021augreg,
  title={How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers},
  author={Steiner, Andreas and Kolesnikov, Alexander and Zhai, Xiaohua and Wightman, Ross and Uszkoreit, Jakob and Beyer, Lucas},
  journal={arXiv preprint arXiv:2106.10270},
  year={2021}
}
```
```bibtex
@article{dosovitskiy2020vit,
  title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
  author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
  journal={ICLR},
  year={2021}
}
```
```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
3,799
[ [ -0.03961181640625, -0.03021240234375, -0.0028228759765625, 0.00402069091796875, -0.028839111328125, -0.0292816162109375, -0.0211029052734375, -0.03448486328125, 0.0161285400390625, 0.023345947265625, -0.041595458984375, -0.03662109375, -0.04669189453125, 0.0...
heegyu/kogpt-j-base
2023-03-05T08:30:14.000Z
[ "transformers", "pytorch", "jax", "gptj", "text-generation", "ko", "dataset:heegyu/korean-petitions", "dataset:heegyu/namuwiki-extracted", "dataset:heegyu/kowikitext", "license:mit", "endpoints_compatible", "region:us" ]
text-generation
heegyu
null
null
heegyu/kogpt-j-base
3
712
transformers
2022-11-22T02:21:12
--- license: mit widget: - text: 오늘 아침 정부는 발표를 통해 - text: | 아 배고프다 datasets: - heegyu/korean-petitions - heegyu/namuwiki-extracted - heegyu/kowikitext language: - ko pipeline_tag: text-generation --- ## 모델 구성 - GPT-J(Flax, Pytorch) - 12 Layers, 768 hidden dim, 3072 intermediate, 12 heads, 51200 vocab size - 1024 max_seq_len - 파라미터 수: 163M ### 성능 벤치마크 <img src="https://github.com/HeegyuKim/language-model/blob/63d8bd7cd39f25e87e0e376cdd18df3f8b460dee/image/benchmark0304.png?raw=true" /> ## 학습 환경 및 하이퍼파라미터 - TPU V2-8 - Learning Rate: 6e-4, Batch Size: 512(=64 accum x 8 devices), Scheduler: Linear, WarmUp: 1000 step - Optimizer: AdamW(adam_beta1=0.9 adam_beta2=0.98, weight_decay=0.01) - Training Steps: 43247 (3 epoch) - 학습 토큰 수: 21.11B (43247 * 512 * 1024seq / 1024^3) - 학습 기간: 2023/2/16 ~ 2023/2/18(2일 22시간 소요) - 학습 코드: https://github.com/HeegyuKim/language-model ## 학습에 사용한 데이터 - AIHub SNS 대화(730MB) - AIHub 구어체(422MB) - AIHub 도서(1.6MB) - AIHub 대규모 웹데이터 기반 한국어 말뭉치(12GB) - 한국어 위키(867MB) - 나무위키(6.4GB) - 국립국어원 메신저 대화(21MB) - 국립국어원 일상대화 말뭉치(23MB) - 국립국어원 문어 말뭉치(3.2GB) - 국립국어원 구어 말뭉치(1.1GB) - 국립국어원 신문 말뭉치(~2022, 17GB) - 청와대 국민청원(525MB) 데이터셋 크기는 전처리한 jsonl파일을 기준으로 함. 총 토큰 수는 약 7B임 ## 사용 예시 ```python from transformers import pipeline model_name = "heegyu/kogpt-j-base" pipe = pipeline('text-generation', model=model_name) print(pipe("안녕하세요", repetition_penalty=1.2, do_sample=True, eos_token_id=1, early_stopping=True, max_new_tokens=128)) print(pipe("오늘 정부 발표에 따르면, ", repetition_penalty=1.2, do_sample=True, eos_token_id=1, early_stopping=True, max_new_tokens=128)) print(pipe("싸늘하다. 가슴에 비수가 날아와 꽂힌다. ", repetition_penalty=1.2, do_sample=True, eos_token_id=1, early_stopping=True, max_new_tokens=128, min_length=64)) ``` 결과 ```bash [{'generated_text': "안녕하세요'라는 제목의 영상에서 (이름)는 '10년, 한 달 전쯤 kbs 9시 뉴스와 11시에 뉴스를 하면서 '아침에는 어떤 내용이고 아침이 왜 안 됐냐', '내가 뭘 잘못했느냐' 이런 이야기를 했다. 그런데 kbs '뉴스9'에 나와서 방송국에 사과해야 하는 게 아니냐고 하더라'고 말했다.. 이어'sns 활동을 하면서 이런 질문을 할 정도로 (진정성 없는 사람이라면 방송을 그만 뒀어야 했다)'며 '당시에는 '나도 방송인이 됐는데 왜 나에게 이런 질문과 비판을 하는가'라고 생각했고, 내가 무슨 잘못을 했다고 이렇게 이야기하냐 했다. 그러나 지금 생각해보면 그때 제가"}] [{'generated_text': '오늘 정부 발표에 따르면, 美 제조업, 내수 모두 하락했다.미 상무부는 지난달 미국의 제조업 수주가 전월대비 0.5% 감소했다고 22일 밝혔다. 이는 직전월과 같은 수치다. 서비스업지수도 0.3% 후퇴하며 증가세를 견인하고 있다. 미국 제조업 수주도 전월에 비해 1포인트 저하한 수준이다.미중 무역전쟁에 따른 불안감이 지속되면서 뉴욕증시가 6주 연속 하락하면서 투자 심리를 악화시켰다.미국 연방준비제도(연준)의 9·12 양적완화(qe) 결정으로 6년반 만에 사상 최대로 늘어난 자금이 시장에 유입되고 있다는 분석이다'}] [{'generated_text': "싸늘하다. 가슴에 비수가 날아와 꽂힌다. 救國者, 이것이 바로 한비자다, 그대의 용사다」\n『太平御覽』 「答薩於蕭金可書記事大會(太平朝覽)」.\n'아타시메시(Triasmehi…)'라고 하는 이 말은 일본어 '하츠네테키'(ハートとキルス)에 더 가까운 말이다. 일본의 역사학자 다카야마 세이시가 쓴 『토쿠이 테루오미(徳井奥義三)』에서 '도적'"}] ``` ## 주의사항 이 모델의 학습 데이터는 각종 차별/혐오 데이터가 포함됐을 수 있으며, 별로의 제거작업을 진행하지 않았습니다. 따라서 모델이 생성하는 문장에 특정 인물이나 인종, 성별, 장애에 따른 차별/혐오발언을 생성할 수 있습니다.
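In addition to the pipeline call above, the checkpoint can be loaded directly with `AutoModelForCausalLM` (GPT-J architecture). A minimal sketch; the prompt is taken from the widget examples above and `eos_token_id=1` follows the card's own settings:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "heegyu/kogpt-j-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

inputs = tokenizer("오늘 아침 정부는 발표를 통해", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        do_sample=True,
        repetition_penalty=1.2,
        eos_token_id=1,        # as in the pipeline example above
        max_new_tokens=128,
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```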
2,765
[ [ -0.034271240234375, -0.059173583984375, 0.026336669921875, 0.02569580078125, -0.030792236328125, -0.00025463104248046875, -0.0062255859375, -0.013702392578125, 0.040679931640625, 0.0196075439453125, -0.0254364013671875, -0.04510498046875, -0.053009033203125, ...
pkshatech/GLuCoSE-base-ja
2023-08-25T02:53:22.000Z
[ "sentence-transformers", "pytorch", "luke", "transformers", "sentence-similarity", "feature-extraction", "ja", "dataset:mc4", "dataset:clips/mqa", "dataset:shunk031/JGLUE", "dataset:paws-x", "dataset:MoritzLaurer/multilingual-NLI-26lang-2mil7", "dataset:castorini/mr-tydi", "dataset:hpprc/j...
sentence-similarity
pkshatech
null
null
pkshatech/GLuCoSE-base-ja
16
712
sentence-transformers
2023-07-16T07:28:46
--- pipeline_tag: sentence-similarity language: ja license: apache-2.0 tags: - transformers - sentence-similarity - feature-extraction - sentence-transformers inference: false datasets: - mc4 - clips/mqa - shunk031/JGLUE - paws-x - MoritzLaurer/multilingual-NLI-26lang-2mil7 - castorini/mr-tydi - hpprc/jsick --- # GLuCoSE (General Luke-based Contrastive Sentence Embedding)-base-Japanese [日本語のREADME/Japanese README](https://huggingface.co/pkshatech/GLuCoSE-base-ja/blob/main/README_JA.md) GLuCoSE (General LUke-based COntrastive Sentence Embedding, "glucose") is a Japanese text embedding model based on [LUKE](https://github.com/studio-ousia/luke). In order to create a general-purpose, user-friendly Japanese text embedding model, GLuCoSE has been trained on a mix of web data and various datasets associated with natural language inference and search. This model is not only suitable for sentence vector similarity tasks but also for semantic search tasks. - Maximum token count: 512 - Output dimension: 768 - Pooling: mean pooling - Supported language: Japanese ## Usage You can use this model easily with [sentence-transformers](https://www.SBERT.net). First, install sentence-transformers with pip as follows: ``` pip install -U sentence-transformers ``` You can load the model and convert sentences into dense vectors as shown below: ```python from sentence_transformers import SentenceTransformer sentences = [ "PKSHA Technologyは機械学習/深層学習技術に関わるアルゴリズムソリューションを展開している。", "この深層学習モデルはPKSHA Technologyによって学習され、公開された。", "広目天は、仏教における四天王の一尊であり、サンスクリット語の「種々の眼をした者」を名前の由来とする。", ] model = SentenceTransformer('pkshatech/GLuCoSE-base-ja') embeddings = model.encode(sentences) print(embeddings) ``` Since the loss function used during training is cosine similarity, we recommend using cosine similarity for downstream tasks. This text embedding model can also be used in LangChain. Please refer to [this page](https://python.langchain.com/docs/modules/data_connection/text_embedding/integrations/sentence_transformers) for more information. ## Resources Used The following resources were used to train this model. ### Pre-trained model - [studio-ousia/luke-japanese-base-lite](https://huggingface.co/studio-ousia/luke-japanese-base-lite) ### Datasets - [mC4](https://huggingface.co/datasets/mc4) - [MQA](https://huggingface.co/datasets/clips/mqa) - [JNLI](https://github.com/yahoojapan/JGLUE) - [JSNLI](https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88) - [PAWS-X](https://huggingface.co/datasets/paws-x) - [JSeM](https://github.com/DaisukeBekki/JSeM) - [MoritzLaurer/multilingual-NLI-26lang-2mil7](https://huggingface.co/datasets/MoritzLaurer/multilingual-NLI-26lang-2mil7) - [MultiNLI](https://huggingface.co/datasets/multi_nli) - [WANLI](https://huggingface.co/datasets/alisawuffles/WANLI) - [FeverNLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) - [LingNLI](https://arxiv.org/pdf/2104.07179.pdf) - [JSICK](https://github.com/verypluming/JSICK) - [Mr.Tidy](https://huggingface.co/datasets/castorini/mr-tydi) - [JSTS](https://github.com/yahoojapan/JGLUE) (used for validation) [^1] ## Benchmarks ### Semantic Similarity Calculation ([JSTS](https://github.com/yahoojapan/JGLUE) dev set) Evaluation by Spearman's correlation coefficient and Pearson's correlation coefficient. 
| Model | Spearman | Pearson |
| --- | --- | --- |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 0.837[^2] | 0.790[^2] |
| [pkshatech/simcse-ja-bert-base-clcmlp](https://huggingface.co/pkshatech/simcse-ja-bert-base-clcmlp)[^3] | 0.850 | 0.801 |
| pkshatech/GLuCoSE-base-ja | **0.864** | **0.818** |

### Zero-shot Search ([AIO3](https://sites.google.com/view/project-aio/competition3?authuser=0) dev set)
Evaluation by top-k retrieval accuracy[^4] (the fraction of questions that have a correct answer in the top-k retrieved documents at least once.)

| Model | Top-1 | Top-5 | Top-10 | Top-50 |
| --- | --- | --- | --- | --- |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 33.50 | 57.80 | 65.10 | 76.60 |
| [pkshatech/simcse-ja-bert-base-clcmlp](https://huggingface.co/pkshatech/simcse-ja-bert-base-clcmlp)[^3] | 30.60 | 54.50 | 62.50 | 76.70 |
| pkshatech/GLuCoSE-base-ja | **36.10** | **59.40** | **66.40** | **78.30** |

# Authors
[Akihiko Fukuchi](https://huggingface.co/akiFQC), [Yuichiro Hoshino](https://huggingface.co/Yuichiroh), [Yotarow Watanabe](https://huggingface.co/yotarow)

## License
This model is published under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).

[^1]: When we trained this model, the test data of JGLUE was not released, so we used the dev set of JGLUE as private evaluation data. Therefore, we selected the checkpoint on the train set of JGLUE instead of its dev set.
[^2]: https://qiita.com/akeyhero/items/ce371bfed64399027c23
[^3]: This is the model we have released before.
[^4]: For more details, please refer to https://arxiv.org/pdf/2004.04906.pdf.
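Since the model was trained with a cosine-similarity loss (see the Usage section above), cosine similarity is the natural way to compare the resulting vectors. A minimal sketch using `sentence_transformers.util.cos_sim` with the sentences from the usage example:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("pkshatech/GLuCoSE-base-ja")

sentences = [
    "PKSHA Technologyは機械学習/深層学習技術に関わるアルゴリズムソリューションを展開している。",
    "この深層学習モデルはPKSHA Technologyによって学習され、公開された。",
    "広目天は、仏教における四天王の一尊であり、サンスクリット語の「種々の眼をした者」を名前の由来とする。",
]

embeddings = model.encode(sentences, convert_to_tensor=True)
similarity = util.cos_sim(embeddings, embeddings)  # (3, 3) pairwise cosine-similarity matrix
print(similarity)
```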
5,174
[ [ -0.023193359375, -0.0587158203125, 0.030548095703125, 0.00489044189453125, -0.01329803466796875, -0.0156097412109375, -0.0218505859375, -0.0274505615234375, 0.03594970703125, 0.013397216796875, -0.02691650390625, -0.05816650390625, -0.047210693359375, 0.0094...
UrukHan/t5-russian-spell
2023-04-05T10:13:50.000Z
[ "transformers", "pytorch", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "dataset:UrukHan/wav2vec2-russian", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
UrukHan
null
null
UrukHan/t5-russian-spell
20
711
transformers
2022-03-29T14:20:26
--- tags: - generated_from_trainer datasets: UrukHan/wav2vec2-russian widget: - text: ывсем привет выныканалетоп армии и это двадцать пятый день спец операций на украине ет самый главной новости российские военные ракетами кинжалы калибр уничтожили крупную военную топливную базу украины ракетным ударом по населенному пункту под жетамиром уничтжены более стаукраинских военных в две тысячи двадцать втором году model-index: - name: t5-russian-spell results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> --- # t5-russian-spell --- модель для исправление текста из распознаного аудио. моя модлеь для распознования аудио https://huggingface.co/UrukHan/wav2vec2-russian и его результаты можно закидывать в эту модель. тестил на видео случайном с ютюба <table border="0"> <tr> <td><b style="font-size:30px">Output wav2vec2</b></td> <td><b style="font-size:30px">Output spell correcor</b></td> </tr> <tr> <td>ывсем привет выныканалетоп армии и это двадцать пятый день спец операций на украине ет самый главной новости российские военные ракетами кинжалы калибр уничтожили крупную военную топливную базу украины ракетным ударом по населенному пункту под жетамиром уничтжены более стаукраинских военных в две тысячи двадцать втором году</td> <td>Всем привет! Вы в курсе новостей от армии. И это 25 день спецопераций на Украине. Есть самые главные новости. Российские военные ракетами «Кинжалы» и «Кинжалы» калибра уничтожили крупную военную топливную базу Украины. Ракетным ударом по населенному пункту под Жетамиром уничтожены более ста украинских военных в 2022г.</td> </tr> </table> # --- Датасеты для обучения: UrukHan/t5-russian-spell_I : https://huggingface.co/datasets/UrukHan/t5-russian-spell_I UrukHan/t5-russian-spell_II : https://huggingface.co/datasets/UrukHan/t5-russian-spell_II UrukHan/t5-russian-spell_III : https://huggingface.co/datasets/UrukHan/t5-russian-spell_III --- # Запуск на вывод результатов пример работы с комментариями в колабе https://colab.research.google.com/drive/1ame2va9_NflYqy4RZ07HYmQ0moJYy7w2?usp=sharing : # ```python # Установим библиотеку трансформеров !pip install transformers # Импортируем библиотеки from transformers import AutoModelForSeq2SeqLM, T5TokenizerFast # Зададим название выбронной модели из хаба MODEL_NAME = 'UrukHan/t5-russian-spell' MAX_INPUT = 256 # Загрузка модели и токенизатора tokenizer = T5TokenizerFast.from_pretrained(MODEL_NAME) model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME) # Входные данные (можно массив фраз или текст) input_sequences = ['сеглдыя хорош ден', 'когд а вы прдет к нам в госи'] # или можно использовать одиночные фразы: input_sequences = 'сеглдыя хорош ден' task_prefix = "Spell correct: " # Токенизирование данных if type(input_sequences) != list: input_sequences = [input_sequences] encoded = tokenizer( [task_prefix + sequence for sequence in input_sequences], padding="longest", max_length=MAX_INPUT, truncation=True, return_tensors="pt", ) predicts = model.generate(encoded) # # Прогнозирование tokenizer.batch_decode(predicts, skip_special_tokens=True) # Декодируем данные ``` # --- #Настроенный блокнот для запуска обучения и сохранения модели в свой репозиторий на huggingface hub: #https://colab.research.google.com/drive/1H4IoasDqa2TEjGivVDp-4Pdpm0oxrCWd?usp=sharing # ```python # Установка библиотек !pip install datasets !apt install git-lfs !pip install transformers !pip install sentencepiece !pip 
install rouge_score # Импорт библиотек import numpy as np from datasets import Dataset import tensorflow as import nltk from transformers import T5TokenizerFast, Seq2SeqTrainingArguments, Seq2SeqTrainer, AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq import torch from transformers.optimization import Adafactor, AdafactorSchedule from datasets import load_dataset, load_metric # загрузка параметров raw_datasets = load_dataset("xsum") metric = load_metric("rouge") nltk.download('punkt') # Ввести свой ключ huggingface hyb from huggingface_hub import notebook_login notebook_login() # Определение параметров REPO = "t5-russian-spell" # Введите наазвание название репозитория MODEL_NAME = "UrukHan/t5-russian-spell" # Введите наазвание выбранной модели из хаба MAX_INPUT = 256 # Введите максимальную длинну входных данных в токенах (длинна входных фраз в словах (можно считать полслова токен)) MAX_OUTPUT = 256 # Введите максимальную длинну прогнозов в токенах (можно уменьшить для задач суммризации или других задач где выход короче) BATCH_SIZE = 8 DATASET = 'UrukHan/t5-russian-spell_I' # Введите наазвание название датасета # Загрузка датасета использование других типов данных опишу ниже data = load_dataset(DATASET) # Загрузка модели и токенизатора tokenizer = T5TokenizerFast.from_pretrained(MODEL_NAME) model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME) model.config.max_length = MAX_OUTPUT # по умолчанию 20, поэтому во всех моделях прогнозы обрезаются выходные последовательности # Закоментить после первого соъранения в репозиторий свой необъязательно tokenizer.push_to_hub(repo_name) train = data['train'] test = data['test'].train_test_split(0.02)['test'] # Уменьшил так тестовыу. выборку чтоб не ждать долго расчет ошибок между эпохами data_collator = DataCollatorForSeq2Seq(tokenizer, model=model) #return_tensors="tf" def compute_metrics(eval_pred): predictions, labels = eval_pred decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True) # Replace -100 in the labels as we can't decode them. labels = np.where(labels != -100, labels, tokenizer.pad_token_id) decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) # Rouge expects a newline after each sentence decoded_preds = ["\n".join(nltk.sent_tokenize(pred.strip())) for pred in decoded_preds] decoded_labels = ["\n".join(nltk.sent_tokenize(label.strip())) for label in decoded_labels] result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True) # Extract a few results result = {key: value.mid.fmeasure * 100 for key, value in result.items()} # Add mean generated length prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions] result["gen_len"] = np.mean(prediction_lens) return {k: round(v, 4) for k, v in result.items()} training_args = Seq2SeqTrainingArguments( output_dir = REPO, #overwrite_output_dir=True, evaluation_strategy='steps', #learning_rate=2e-5, eval_steps=5000, save_steps=5000, num_train_epochs=1, predict_with_generate=True, per_device_train_batch_size=BATCH_SIZE, per_device_eval_batch_size=BATCH_SIZE, fp16=True, save_total_limit=2, #generation_max_length=256, #generation_num_beams=4, weight_decay=0.005, #logging_dir='logs', push_to_hub=True, ) # Выберем вручную оптимизатор. 
Т5 в оригинальной архитектуре использует Адафактор оптимизатор optimizer = Adafactor( model.parameters(), lr=1e-5, eps=(1e-30, 1e-3), clip_threshold=1.0, decay_rate=-0.8, beta1=None, weight_decay=0.0, relative_step=False, scale_parameter=False, warmup_init=False, ) lr_scheduler = AdafactorSchedule(optimizer) trainer = Seq2SeqTrainer( model=model, args=training_args, train_dataset = train, eval_dataset = test, optimizers = (optimizer, lr_scheduler), tokenizer = tokenizer, compute_metrics=compute_metrics ) trainer.train() trainer.push_to_hub() ``` # --- # Пример конвертации массивов для данной сети # ```python input_data = ['удач почти отнее отвернулась', 'в хааоде проведения чемпиониавта мира дветысячивосемнандцтая лгодаа'] output_data = ['Удача почти от нее отвернулась', 'в ходе проведения чемпионата мира две тысячи восемнадцатого года'] # Токенизируем входные данные task_prefix = "Spell correct: " input_sequences = input_data encoding = tokenizer( [task_prefix + sequence for sequence in input_sequences], padding="longest", max_length=MAX_INPUT, truncation=True, return_tensors="pt", ) input_ids, attention_mask = encoding.input_ids, encoding.attention_mask # Токенизируем выходные данные target_encoding = tokenizer(output_data, padding="longest", max_length=MAX_OUTPUT, truncation=True) labels = target_encoding.input_ids # replace padding token id's of the labels by -100 labels = torch.tensor(labels) labels[labels == tokenizer.pad_token_id] = -100''' # Конвертируем наши данные в формат dataset data = Dataset.from_pandas(pd.DataFrame({'input_ids': list(np.array(input_ids)), 'attention_mask': list(np.array(attention_mask)), 'labels': list(np.array(labels))})) data = data.train_test_split(0.02) # и получим на вход сети для нашешго trainer: train_dataset = data['train'], eval_dataset = data['test']
8,928
[ [ -0.01910400390625, -0.0367431640625, 0.00724029541015625, 0.01186370849609375, -0.024688720703125, -0.003322601318359375, -0.0180816650390625, -0.00968170166015625, 0.0220947265625, 0.00968170166015625, -0.051055908203125, -0.055206298828125, -0.042266845703125,...
nvidia/stt_en_conformer_transducer_xlarge
2022-10-29T00:15:28.000Z
[ "nemo", "automatic-speech-recognition", "speech", "audio", "Transducer", "Conformer", "Transformer", "pytorch", "NeMo", "hf-asr-leaderboard", "en", "dataset:librispeech_asr", "dataset:fisher_corpus", "dataset:Switchboard-1", "dataset:WSJ-0", "dataset:WSJ-1", "dataset:National-Singapo...
automatic-speech-recognition
nvidia
null
null
nvidia/stt_en_conformer_transducer_xlarge
51
711
nemo
2022-06-13T20:21:18
--- language: - en library_name: nemo datasets: - librispeech_asr - fisher_corpus - Switchboard-1 - WSJ-0 - WSJ-1 - National-Singapore-Corpus-Part-1 - National-Singapore-Corpus-Part-6 - vctk - VoxPopuli-(EN) - Europarl-ASR-(EN) - Multilingual-LibriSpeech-(2000-hours) - mozilla-foundation/common_voice_8_0 - MLCommons/peoples_speech thumbnail: null tags: - automatic-speech-recognition - speech - audio - Transducer - Conformer - Transformer - pytorch - NeMo - hf-asr-leaderboard license: cc-by-4.0 widget: - example_title: Librispeech sample 1 src: https://cdn-media.huggingface.co/speech_samples/sample1.flac - example_title: Librispeech sample 2 src: https://cdn-media.huggingface.co/speech_samples/sample2.flac model-index: - name: stt_en_conformer_transducer_xlarge results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (clean) type: librispeech_asr config: clean split: test args: language: en metrics: - name: Test WER type: wer value: 1.62 - task: type: Automatic Speech Recognition name: automatic-speech-recognition dataset: name: LibriSpeech (other) type: librispeech_asr config: other split: test args: language: en metrics: - name: Test WER type: wer value: 3.01 - task: type: Automatic Speech Recognition name: automatic-speech-recognition dataset: name: Multilingual LibriSpeech type: facebook/multilingual_librispeech config: english split: test args: language: en metrics: - name: Test WER type: wer value: 5.32 - task: type: Automatic Speech Recognition name: automatic-speech-recognition dataset: name: Mozilla Common Voice 7.0 type: mozilla-foundation/common_voice_7_0 config: en split: test args: language: en metrics: - name: Test WER type: wer value: 5.13 - task: type: Automatic Speech Recognition name: automatic-speech-recognition dataset: name: Mozilla Common Voice 8.0 type: mozilla-foundation/common_voice_8_0 config: en split: test args: language: en metrics: - name: Test WER type: wer value: 6.46 - task: type: Automatic Speech Recognition name: automatic-speech-recognition dataset: name: Wall Street Journal 92 type: wsj_0 args: language: en metrics: - name: Test WER type: wer value: 1.17 - task: type: Automatic Speech Recognition name: automatic-speech-recognition dataset: name: Wall Street Journal 93 type: wsj_1 args: language: en metrics: - name: Test WER type: wer value: 2.05 - task: type: Automatic Speech Recognition name: automatic-speech-recognition dataset: name: National Singapore Corpus type: nsc_part_1 args: language: en metrics: - name: Test WER type: wer value: 5.7 --- # NVIDIA Conformer-Transducer X-Large (en-US) <style> img { display: inline; } </style> | [![Model architecture](https://img.shields.io/badge/Model_Arch-Conformer--Transducer-lightgrey#model-badge)](#model-architecture) | [![Model size](https://img.shields.io/badge/Params-600M-lightgrey#model-badge)](#model-architecture) | [![Language](https://img.shields.io/badge/Language-en--US-lightgrey#model-badge)](#datasets) This model transcribes speech in lower case English alphabet along with spaces and apostrophes. It is an "extra-large" versions of Conformer-Transducer (around 600M parameters) model. See the [model architecture](#model-architecture) section and [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#conformer-transducer) for complete architecture details. ## NVIDIA NeMo: Training To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). 
We recommend you install it after you've installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
# if the above causes an error, try:
pip install nemo_toolkit[all]
```

## How to Use this Model

The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.

### Automatically instantiate the model

```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecRNNTBPEModel.from_pretrained("nvidia/stt_en_conformer_transducer_xlarge")
```

### Transcribing using Python
First, let's get a sample
```
wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```
Then simply do:
```
asr_model.transcribe(['2086-149220-0033.wav'])
```

### Transcribing many audio files

```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
  pretrained_name="nvidia/stt_en_conformer_transducer_xlarge" \
  audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```

### Input

This model accepts 16 kHz mono-channel audio (wav files) as input.

### Output

This model provides transcribed speech as a string for a given audio sample.

## Model Architecture

Conformer-Transducer model is an autoregressive variant of Conformer model [1] for Automatic Speech Recognition which uses Transducer loss/decoding instead of CTC loss. You may find more info on the details of this model here: [Conformer-Transducer Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html).

## Training

The NeMo toolkit [3] was used for training the models for over several hundred epochs. These models are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_transducer/speech_to_text_rnnt_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/conformer/conformer_transducer_bpe.yaml).

The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).

### Datasets

All the models in this collection are trained on a composite dataset (NeMo ASRSET) comprising several thousand hours of English speech:

- Librispeech 960 hours of English speech
- Fisher Corpus
- Switchboard-1 Dataset
- WSJ-0 and WSJ-1
- National Speech Corpus (Part 1, Part 6)
- VCTK
- VoxPopuli (EN)
- Europarl-ASR (EN)
- Multilingual Librispeech (MLS EN) - 2,000 hrs subset
- Mozilla Common Voice (v8.0)
- People's Speech - 12,000 hrs subset

Note: older versions of the model may have trained on a smaller set of datasets.

## Performance

The list of the available models in this collection is shown in the following table. Performances of the ASR models are reported in terms of Word Error Rate (WER%) with greedy decoding.

| Version | Tokenizer | Vocabulary Size | LS test-other | LS test-clean | WSJ Eval92 | WSJ Dev93 | NSC Part 1 | MLS Test | MLS Dev | MCV Test 8.0 | Train Dataset |
|---------|-----------------------|------|------|------|------|------|------|------|------|------|-----------------|
| 1.10.0  | SentencePiece Unigram | 1024 | 3.01 | 1.62 | 1.17 | 2.05 | 5.70 | 5.32 | 4.59 | 6.46 | NeMo ASRSET 3.0 |

## Limitations

Since this model was trained on publicly available speech datasets, the performance of this model might degrade for speech which includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse for accented speech.
## NVIDIA Riva: Deployment [NVIDIA Riva](https://developer.nvidia.com/riva), is an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, on edge, and embedded. Additionally, Riva provides: * World-class out-of-the-box accuracy for the most common languages with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours * Best in class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization * Streaming speech recognition, Kubernetes compatible scaling, and enterprise-grade support Although this model isn’t supported yet by Riva, the [list of supported models is here](https://huggingface.co/models?other=Riva). Check out [Riva live demo](https://developer.nvidia.com/riva#demos). ## References [1] [Conformer: Convolution-augmented Transformer for Speech Recognition](https://arxiv.org/abs/2005.08100) [2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece) [3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo) ## Licence License to use this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/). By downloading the public and release version of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license.
9,095
[ [ -0.03155517578125, -0.045501708984375, 0.005168914794921875, -0.0059356689453125, -0.0206756591796875, -0.01065826416015625, -0.0140380859375, -0.046112060546875, -0.0031948089599609375, 0.02392578125, -0.03607177734375, -0.032440185546875, -0.049560546875, ...
Raspberry-ai/monse_lora_v3_500_epochs
2023-10-18T00:15:40.000Z
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "license:creativeml-openrail-m", "region:us" ]
text-to-image
Raspberry-ai
null
null
Raspberry-ai/monse_lora_v3_500_epochs
0
711
diffusers
2023-10-17T18:52:15
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---

# LoRA text2image fine-tuning - julian-raspberry-ai/monse_lora_v3_500_epochs

These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the Raspberry-ai/monse-v3 dataset. You can find some example images below.

![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)
![img_3](./image_3.png)
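The card does not include a usage snippet; below is a minimal sketch for loading these LoRA weights on top of the stated base model, assuming the repository contains the standard `pytorch_lora_weights` file produced by the diffusers LoRA training script. The prompt is a placeholder — the trigger words used during fine-tuning are not stated in the card.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
# attach the LoRA attention weights from this repository to the base UNet
pipe.unet.load_attn_procs("Raspberry-ai/monse_lora_v3_500_epochs")
pipe = pipe.to("cuda")

# placeholder prompt -- the training trigger words are not documented in this card
image = pipe("a dress design in the monse style", num_inference_steps=30).images[0]
image.save("example.png")
```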
554
[ [ -0.0276031494140625, -0.045867919921875, 0.02667236328125, 0.0220794677734375, -0.0192413330078125, -0.0186309814453125, 0.0273284912109375, -0.01611328125, 0.037872314453125, 0.04638671875, -0.057098388671875, -0.03033447265625, -0.05010986328125, -0.011917...
castorini/afriberta_large
2023-01-13T20:01:15.000Z
[ "transformers", "pytorch", "tf", "xlm-roberta", "fill-mask", "om", "am", "rw", "rn", "ha", "ig", "so", "sw", "ti", "yo", "pcm", "multilingual", "dataset:castorini/afriberta-corpus", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
castorini
null
null
castorini/afriberta_large
6
710
transformers
2022-03-02T23:29:05
--- language: - om - am - rw - rn - ha - ig - so - sw - ti - yo - pcm - multilingual license: mit datasets: - castorini/afriberta-corpus --- # afriberta_large ## Model description AfriBERTa large is a pretrained multilingual language model with around 126 million parameters. The model has 10 layers, 6 attention heads, 768 hidden units and 3072 feed forward size. The model was pretrained on 11 African languages namely - Afaan Oromoo (also called Oromo), Amharic, Gahuza (a mixed language containing Kinyarwanda and Kirundi), Hausa, Igbo, Nigerian Pidgin, Somali, Swahili, Tigrinya and Yorùbá. The model has been shown to obtain competitive downstream performances on text classification and Named Entity Recognition on several African languages, including those it was not pretrained on. ## Intended uses & limitations #### How to use You can use this model with Transformers for any downstream task. For example, assuming we want to finetune this model on a token classification task, we do the following: ```python >>> from transformers import AutoTokenizer, AutoModelForTokenClassification >>> model = AutoModelForTokenClassification.from_pretrained("castorini/afriberta_large") >>> tokenizer = AutoTokenizer.from_pretrained("castorini/afriberta_large") # we have to manually set the model max length because it is an imported sentencepiece model, which huggingface does not properly support right now >>> tokenizer.model_max_length = 512 ``` #### Limitations and bias - This model is possibly limited by its training dataset which are majorly obtained from news articles from a specific span of time. Thus, it may not generalize well. - This model is trained on very little data (less than 1 GB), hence it may not have seen enough data to learn very complex linguistic relations. ## Training data The model was trained on an aggregation of datasets from the BBC news website and Common Crawl. ## Training procedure For information on training procedures, please refer to the AfriBERTa [paper]() or [repository](https://github.com/keleog/afriberta) ### BibTeX entry and citation info ``` @inproceedings{ogueji-etal-2021-small, title = "Small Data? No Problem! Exploring the Viability of Pretrained Multilingual Language Models for Low-resourced Languages", author = "Ogueji, Kelechi and Zhu, Yuxin and Lin, Jimmy", booktitle = "Proceedings of the 1st Workshop on Multilingual Representation Learning", month = nov, year = "2021", address = "Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.mrl-1.11", pages = "116--126", } ```
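Since the checkpoint is published as a fill-mask model, it can also be tried directly with the fill-mask pipeline before any fine-tuning. A minimal sketch; the Swahili example sentence ("The capital city of Kenya is <mask>.") is ours, not from the paper, and the mask token is read from the tokenizer rather than hard-coded:

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="castorini/afriberta_large")

# illustrative Swahili sentence (our own example)
masked = f"Mji mkuu wa Kenya ni {unmasker.tokenizer.mask_token}."
print(unmasker(masked))
```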
2,674
[ [ -0.05224609375, -0.05224609375, 0.0085296630859375, 0.033843994140625, -0.01181793212890625, -0.0227813720703125, -0.04364013671875, -0.042144775390625, 0.030792236328125, 0.0305328369140625, -0.035003662109375, -0.020660400390625, -0.057037353515625, 0.0125...
wavymulder/timeless-diffusion
2023-05-05T21:59:12.000Z
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "safetensors", "en", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
wavymulder
null
null
wavymulder/timeless-diffusion
50
710
diffusers
2022-12-30T20:21:03
--- language: - en thumbnail: "https://huggingface.co/wavymulder/timeless-diffusion/resolve/main/imgs/page1.jpg" license: creativeml-openrail-m tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - safetensors - diffusers inference: true --- **Timeless Diffusion** ![Header](https://huggingface.co/wavymulder/timeless-diffusion/resolve/main/imgs/page1.jpg) [*CKPT DOWNLOAD LINK*](https://huggingface.co/wavymulder/timeless-diffusion/resolve/main/timeless-1.0.ckpt) - This is a dreambooth model trained on a diverse set of colourized photographs from the 1880s-1980s. Use the activation token **timeless style** in your prompt (I recommend at the start) The goal of this model was to create striking images with rich tones and an anachronistic feel. When using this model, I typically use **painted illustration blur haze monochrome** in my negative prompt. I encourage you to experiment and see what works well for you. Trained from 1.5 with VAE. Please see [this document where I share the parameters (prompt, sampler, seed, etc.) used for all example images.](https://huggingface.co/wavymulder/timeless-diffusion/resolve/main/parameters_for_samples.txt) We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run Timeless Diffusion: [![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/wavymulder/timeless-diffusion) ![Environments Example](https://huggingface.co/wavymulder/timeless-diffusion/resolve/main/imgs/page2.jpg) ![Characters Example](https://huggingface.co/wavymulder/timeless-diffusion/resolve/main/imgs/page3.jpg)
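The repository ships diffusers weights, so the model can also be loaded with `StableDiffusionPipeline`. A minimal sketch using the **timeless style** activation token and the negative prompt recommended above; the prompt content itself is a placeholder, and sampler/seed choices are up to you (see the linked parameters file for the samples):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "wavymulder/timeless-diffusion", torch_dtype=torch.float16
).to("cuda")

prompt = "timeless style portrait of a woman in a garden"   # activation token at the start, as recommended
negative_prompt = "painted illustration blur haze monochrome"

image = pipe(prompt, negative_prompt=negative_prompt, num_inference_steps=30).images[0]
image.save("timeless_example.png")
```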
1,812
[ [ -0.0377197265625, -0.07159423828125, 0.0521240234375, 0.02362060546875, -0.035980224609375, 0.00505828857421875, 0.03167724609375, -0.03692626953125, 0.042572021484375, 0.04693603515625, -0.050079345703125, -0.04888916015625, -0.0260772705078125, -0.02644348...
hyperalchemy/cat-face
2023-03-05T16:05:10.000Z
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
hyperalchemy
null
null
hyperalchemy/cat-face
0
710
diffusers
2023-03-05T16:01:36
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### cat-face Dreambooth model trained by hyperalchemy with [buildspace's DreamBooth](https://colab.research.google.com/github/buildspace/diffusers/blob/main/examples/dreambooth/DreamBooth_Stable_Diffusion.ipynb) notebook Build your own using the [AI Avatar project](https://buildspace.so/builds/ai-avatar)! To get started head over to the [project dashboard](https://buildspace.so/p/build-ai-avatars). Sample pictures of this concept:
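A minimal diffusers sketch for trying the concept. Note the card does not state the instance prompt/identifier used during DreamBooth training, so the prompt below is a placeholder you will likely need to adjust:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "hyperalchemy/cat-face", torch_dtype=torch.float16
).to("cuda")

# placeholder prompt -- the DreamBooth instance token is not documented in this card
image = pipe("a portrait photo of cat-face").images[0]
image.save("cat-face.png")
```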
521
[ [ -0.034027099609375, -0.04742431640625, 0.033203125, 0.041412353515625, -0.005641937255859375, 0.042633056640625, 0.0272369384765625, -0.029449462890625, 0.0765380859375, 0.0360107421875, -0.032806396484375, -0.0251617431640625, -0.0153961181640625, -0.019241...
gerulata/slovakbert
2023-07-11T18:36:33.000Z
[ "transformers", "pytorch", "tf", "safetensors", "roberta", "fill-mask", "SlovakBERT", "sk", "dataset:wikipedia", "dataset:opensubtitles", "dataset:oscar", "dataset:gerulatawebcrawl", "dataset:gerulatamonitoring", "dataset:blbec.online", "arxiv:2109.15254", "license:mit", "autotrain_c...
fill-mask
gerulata
null
null
gerulata/slovakbert
12
709
transformers
2022-03-02T23:29:05
--- language: sk tags: - SlovakBERT license: mit datasets: - wikipedia - opensubtitles - oscar - gerulatawebcrawl - gerulatamonitoring - blbec.online --- # SlovakBERT (base-sized model) SlovakBERT pretrained model on Slovak language using a masked language modeling (MLM) objective. This model is case-sensitive: it makes a difference between slovensko and Slovensko. ## Intended uses & limitations You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. **IMPORTANT**: The model was not trained on the “ and ” (direct quote) character -> so before tokenizing the text, it is advised to replace all “ and ” (direct quote marks) with a single "(double quote marks). ### How to use You can use this model directly with a pipeline for masked language modeling: ```python from transformers import pipeline unmasker = pipeline('fill-mask', model='gerulata/slovakbert') unmasker("Deti sa <mask> na ihrisku.") [{'sequence': 'Deti sa hrali na ihrisku.', 'score': 0.6355380415916443, 'token': 5949, 'token_str': ' hrali'}, {'sequence': 'Deti sa hrajú na ihrisku.', 'score': 0.14731724560260773, 'token': 9081, 'token_str': ' hrajú'}, {'sequence': 'Deti sa zahrali na ihrisku.', 'score': 0.05016357824206352, 'token': 32553, 'token_str': ' zahrali'}, {'sequence': 'Deti sa stretli na ihrisku.', 'score': 0.041727423667907715, 'token': 5964, 'token_str': ' stretli'}, {'sequence': 'Deti sa učia na ihrisku.', 'score': 0.01886524073779583, 'token': 18099, 'token_str': ' učia'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('gerulata/slovakbert') model = RobertaModel.from_pretrained('gerulata/slovakbert') text = "Text ktorý sa má embedovať." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('gerulata/slovakbert') model = TFRobertaModel.from_pretrained('gerulata/slovakbert') text = "Text ktorý sa má embedovať." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` Or extract information from the model like this: ```python from transformers import pipeline unmasker = pipeline('fill-mask', model='gerulata/slovakbert') unmasker("Slovenské národne povstanie sa uskutočnilo v roku <mask>.") [{'sequence': 'Slovenske narodne povstanie sa uskutočnilo v roku 1944.', 'score': 0.7383289933204651, 'token': 16621, 'token_str': ' 1944'},...] ``` # Training data The SlovakBERT model was pretrained on these datasets: - Wikipedia (326MB of text), - OpenSubtitles (415MB of text), - Oscar (4.6GB of text), - Gerulata WebCrawl (12.7GB of text) , - Gerulata Monitoring (214 MB of text), - blbec.online (4.5GB of text) The text was then processed with the following steps: - URL and email addresses were replaced with special tokens ("url", "email"). - Elongated interpunction was reduced (e.g. -- to -). - Markdown syntax was deleted. - All text content in braces f.g was eliminated to reduce the amount of markup and programming language text. We segmented the resulting corpus into sentences and removed duplicates to get 181.6M unique sentences. In total, the final corpus has 19.35GB of text. # Pretraining The model was trained in **fairseq** on 4 x Nvidia A100 GPUs for 300K steps with a batch size of 512 and a sequence length of 512. 
The optimizer used is Adam with a learning rate of 5e-4, \\(\beta_{1} = 0.9\\), \\(\beta_{2} = 0.98\\) and \\(\epsilon = 1e-6\\), a weight decay of 0.01, dropout rate 0.1, learning rate warmup for 10k steps and linear decay of the learning rate after. We used 16-bit float precision. ## About us <a href="https://www.gerulata.com/"> <img width="300px" src="https://www.gerulata.com/assets/images/Logo_Blue.svg"> </a> Gerulata Technologies is a tech company on a mission to provide tools for fighting disinformation and hostile propaganda. At Gerulata, we focus on providing state-of-the-art AI-powered tools that empower human analysts and provide them with the ability to make informed decisions. Our tools allow for the monitoring and analysis of online activity, as well as the detection and tracking of disinformation and hostile propaganda campaigns. With our products, our clients are better equipped to identify and respond to threats in real-time. ### BibTeX entry and citation info If you find our resource or paper is useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2109.15254 ``` @misc{pikuliak2021slovakbert, title={SlovakBERT: Slovak Masked Language Model}, author={Matúš Pikuliak and Štefan Grivalský and Martin Konôpka and Miroslav Blšták and Martin Tamajka and Viktor Bachratý and Marián Šimko and Pavol Balážik and Michal Trnka and Filip Uhlárik}, year={2021}, eprint={2109.15254}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
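Returning to the quote-mark caveat in the "Intended uses & limitations" section above, here is a minimal, hedged sketch of that preprocessing step; the helper function name is purely illustrative and not part of the original card, while the fill-mask call mirrors the examples shown earlier.

```python
from transformers import pipeline

def normalize_quotes(text: str) -> str:
    # Illustrative helper (an assumption, not from the card's code): the model was
    # not trained on the curly “ and ” characters, so map them to a plain " first.
    return text.replace("\u201c", '"').replace("\u201d", '"')

unmasker = pipeline('fill-mask', model='gerulata/slovakbert')
raw_text = "Povedal: “Deti sa <mask> na ihrisku.”"
print(unmasker(normalize_quotes(raw_text)))
```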
5,133
[ [ -0.0225372314453125, -0.0634765625, 0.01389312744140625, 0.0125885009765625, -0.0254669189453125, 0.002864837646484375, -0.042144775390625, -0.019561767578125, 0.0232696533203125, 0.031768798828125, -0.038330078125, -0.06182861328125, -0.05615234375, 0.00777...
heegyu/ajoublue-gpt2-medium
2023-03-04T14:10:27.000Z
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "ko", "dataset:heegyu/korean-petitions", "dataset:heegyu/namuwiki-extracted", "dataset:heegyu/kowikitext", "license:mit", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
heegyu
null
null
heegyu/ajoublue-gpt2-medium
2
709
transformers
2023-01-17T07:45:42
--- license: mit widget: - text: 오늘 아침 정부는 발표를 통해 - text: | 아 배고프다 datasets: - heegyu/korean-petitions - heegyu/namuwiki-extracted - heegyu/kowikitext language: - ko pipeline_tag: text-generation --- ## 모델 구성 - GPT2(Flax, Pytorch) - 24 Layers, 1024 hidden dim, 4096 intermediate, 16 heads, 51200 vocab size - 1024 max_seq_len - 파라미터 수: 355M ### 성능 벤치마크 <img src="https://github.com/HeegyuKim/language-model/blob/63d8bd7cd39f25e87e0e376cdd18df3f8b460dee/image/benchmark0304.png?raw=true" /> ## 학습 환경 및 하이퍼파라미터 - TPU V2-8 - Learning Rate: 3e-4, Batch Size: 512(=64 accum x 8 devices), Scheduler: Linear, WarmUp: 1000 step - Optimizer: AdamW(adam_beta1=0.9 adam_beta2=0.98, weight_decay=0.01) - bfloat16 - Training Steps: 43247 (3 epoch) - 학습 토큰 수: 21.11B (43247 * 512 * 1024seq / 1024^3) - 학습 기간: 2023/1/30 ~ 2023/2/5(6일 11시간 소요) - 학습 코드: https://github.com/HeegyuKim/language-model ## 학습에 사용한 데이터 - AIHub SNS 대화(730MB) - AIHub 구어체(422MB) - AIHub 도서(1.6MB) - AIHub 대규모 웹데이터 기반 한국어 말뭉치(12GB) - 한국어 위키(867MB) - 나무위키(6.4GB) - 국립국어원 메신저 대화(21MB) - 국립국어원 일상대화 말뭉치(23MB) - 국립국어원 문어 말뭉치(3.2GB) - 국립국어원 구어 말뭉치(1.1GB) - 국립국어원 신문 말뭉치(~2022, 17GB) - 청와대 국민청원(525MB) 데이터셋 크기는 전처리한 jsonl파일을 기준으로 함. 총 토큰 수는 약 7B임 ## 사용 예시 ```python from transformers import pipeline model_name = "heegyu/ajoublue-gpt2-medium" pipe = pipeline('text-generation', model=model_name) print(pipe("안녕하세요", repetition_penalty=1.2, do_sample=True, eos_token_id=1, early_stopping=True, max_new_tokens=128)) print(pipe("오늘 정부 발표에 따르면, ", repetition_penalty=1.2, do_sample=True, eos_token_id=1, early_stopping=True, max_new_tokens=128)) print(pipe("싸늘하다. 가슴에 비수가 날아와 꽂힌다. ", repetition_penalty=1.2, do_sample=True, eos_token_id=1, early_stopping=True, max_new_tokens=128, min_length=64)) ``` 결과 ```bash [{'generated_text': '안녕하세요 라고 말하거나 그 뒤에 뭐~ 어떤 사람이 어~ 그렇게 얘기를 해 놓고 있습니다.\n예?\n그리고 이분이 또 제가 아는 사이인데 이런 말이 있어요. 자기한테 무슨 부탁을 했다고 그러는데 내가 어떻게 하냐면 자기가 인제 술 먹고 한 번 더 술을 먹자 그랬더니 내가 한 번은 잘했지 이렇게 말을 하는 거예요. 그니까 약간 그런 거죠. 그래서 내가 그냥 다른 사람 앞에서 할 말은 하지 말자라는 생각이 들어서 그냥 그렇게 넘어가게 되는 거구요.(웃음) 이제 저 개인적으로는 조금 안타까운 게 이게 뭐냐하면은 어쨌든 저는 원래 남자다 보니까 결혼 생활에 있어서는 정말 안 되려고 해요. 왜냐면 내 자신을 사랑하고 그게 제일 중요한 부분이기 때문에'}] [{'generated_text': "오늘 정부 발표에 따르면, 對韓 외국인투자가는 지난해 875억달러에서 3년만에 60억달러로 40%가까이 줄어들 것으로 예상됐다. 이런 추세가 지속된다면 올 하반기 중 300달러 이하로 떨어질 확률이 높다.. ◇외국인 주가순자산비율 5년來 최저치=지난해 금융위기로 증시가 폭락한 여파로 국내주식형펀드는 같은 기간 동안 24조7000억원 순유출됐으나 해외 주식형과 채권형은 각각 11조원 이상 늘어났다.(이름) kb국민은행 연구원은 '작년 말 미국의 기준금리 추가 인상과 한 해 뒤 중국 경기회복 기대 등으로 신흥국 중심으로 투자자금이 확대되면서 글로벌 자금의 안전자산 선호"}] [{'generated_text': '싸늘하다. 가슴에 비수가 날아와 꽂힌다. 苟哭易到之鼓(청야옹도지니)는 명언이라, 이 시를 쓰는 그 마음이 여간 편치 않다 한다"고 하여 이를 "사사로운 일 때문에 시비를 가리기는 힘듭니다만 고인의 말씀처럼 내 한 몸을 지키어 순결하게 살아가고자 합니다."라고 풀이하였다.(『中國書』 「答耕志」에 나온 것으로 \'순결한 삶을 위해\'라는 뜻이다.)\n또한 박인호는 그의 시의 기본 테마를 \'시조를 깊이 있게 탐할 수는 있으나 시로써의 문학성을 논하는 것에는 이르'}] ``` ## 주의사항 이 모델의 학습 데이터는 각종 차별/혐오 데이터가 포함됐을 수 있으며, 별도의 제거작업을 진행하지 않았습니다. 따라서 모델이 생성하는 문장에 특정 인물이나 인종, 성별, 장애에 따른 차별/혐오발언을 생성할 수 있습니다.
2,922
[ [ -0.035614013671875, -0.057342529296875, 0.0225677490234375, 0.0260009765625, -0.0266876220703125, -0.001373291015625, -0.0026111602783203125, -0.0137786865234375, 0.035980224609375, 0.0181121826171875, -0.0271148681640625, -0.0401611328125, -0.0555419921875, ...
facebook/convnextv2-base-1k-224
2023-02-20T13:00:30.000Z
[ "transformers", "pytorch", "convnextv2", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:2301.00808", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
facebook
null
null
facebook/convnextv2-base-1k-224
2
709
transformers
2023-02-17T14:12:51
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # ConvNeXt V2 (base-sized model) ConvNeXt V2 model pretrained using the FCMAE framework and fine-tuned on the ImageNet-1K dataset at resolution 224x224. It was introduced in the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Woo et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt-V2). Disclaimer: The team releasing ConvNeXT V2 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXt V2 is a pure convolutional model (ConvNet) that introduces a fully convolutional masked autoencoder framework (FCMAE) and a new Global Response Normalization (GRN) layer to ConvNeXt. ConvNeXt V2 significantly improves the performance of pure ConvNets on various recognition benchmarks. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnextv2_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnextv2) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import AutoImageProcessor, ConvNextV2ForImageClassification import torch from datasets import load_dataset dataset = load_dataset("huggingface/cats-image") image = dataset["test"]["image"][0] preprocessor = AutoImageProcessor.from_pretrained("facebook/convnextv2-base-1k-224") model = ConvNextV2ForImageClassification.from_pretrained("facebook/convnextv2-base-1k-224") inputs = preprocessor(image, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits # model predicts one of the 1000 ImageNet classes predicted_label = logits.argmax(-1).item() print(model.config.id2label[predicted_label]), ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnextv2). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2301-00808, author = {Sanghyun Woo and Shoubhik Debnath and Ronghang Hu and Xinlei Chen and Zhuang Liu and In So Kweon and Saining Xie}, title = {ConvNeXt {V2:} Co-designing and Scaling ConvNets with Masked Autoencoders}, journal = {CoRR}, volume = {abs/2301.00808}, year = {2023}, url = {https://doi.org/10.48550/arXiv.2301.00808}, doi = {10.48550/arXiv.2301.00808}, eprinttype = {arXiv}, eprint = {2301.00808}, timestamp = {Tue, 10 Jan 2023 15:10:12 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2301-00808.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
3,370
[ [ -0.051513671875, -0.028350830078125, -0.0294189453125, 0.016510009765625, -0.0281829833984375, -0.0204315185546875, -0.011199951171875, -0.060394287109375, 0.02392578125, 0.0341796875, -0.042694091796875, -0.00958251953125, -0.04534912109375, -0.007198333740...
deepmind/optical-flow-perceiver
2021-12-11T13:28:43.000Z
[ "transformers", "pytorch", "perceiver", "dataset:autoflow", "arxiv:2107.14795", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
deepmind
null
null
deepmind/optical-flow-perceiver
9
708
transformers
2022-03-02T23:29:05
--- license: apache-2.0 tags: datasets: - autoflow --- # Perceiver IO for optical flow Perceiver IO model trained on [AutoFlow](https://autoflow-google.github.io/). It was introduced in the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Jaegle et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/perceiver). Optical flow is a decades-old open problem in computer vision. Given two images of the same scene (e.g. two consecutive frames of a video), the task is to estimate the 2D displacement for each pixel in the first image. This has many broader applications, such as navigation and visual odometry in robots, estimation of 3D geometry, and even to aid transfer of more complex, learned inference such as 3D human pose estimation from synthetic to real images. Disclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description Perceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs. To decode, the authors employ so-called decoder queries, which allow to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For optical flow, the output is a tensor containing the predicted flow of shape (batch_size, height, width, 2). <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/perceiver_architecture.jpg" alt="drawing" width="600"/> <small> Perceiver IO architecture.</small> As the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model on raw pixel values, by concatenating a pair of images and extracting a 3x3 patch around each pixel. The model obtains state-of-the-art results on important optical flow benchmarks, including [Sintel](http://sintel.is.tue.mpg.de/) and [KITTI](http://www.cvlibs.net/datasets/kitti/eval_scene_flow.php?benchmark=flow). ## Intended uses & limitations You can use the raw model for predicting optical flow between a pair of images. See the [model hub](https://huggingface.co/models?search=deepmind/perceiver) to look for other versions on a task that may interest you. ### How to use We refer to the [tutorial notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/Perceiver/Perceiver_for_Optical_Flow.ipynb) regarding using the Perceiver for optical flow. ## Training data This model was trained on [AutoFlow](https://autoflow-google.github.io/), a synthetic dataset consisting of 400,000 annotated image pairs. ## Training procedure ### Preprocessing Frames are resized to a resolution of 368x496. The authors concatenate the frames along the channel dimension and extract a 3x3 patch around each pixel (leading to 3x3x3x2 = 54 values for each pixel). ### Pretraining Hyperparameter details can be found in Appendix E of the [paper](https://arxiv.org/abs/2107.14795). ## Evaluation results The model achieves a average end-point error (EPE) of 1.81 on Sintel.clean, 2.42 on Sintel.final and 4.98 on KITTI. 
For evaluation results, we refer to table 4 of the [paper](https://arxiv.org/abs/2107.14795). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2107-14795, author = {Andrew Jaegle and Sebastian Borgeaud and Jean{-}Baptiste Alayrac and Carl Doersch and Catalin Ionescu and David Ding and Skanda Koppula and Daniel Zoran and Andrew Brock and Evan Shelhamer and Olivier J. H{\'{e}}naff and Matthew M. Botvinick and Andrew Zisserman and Oriol Vinyals and Jo{\~{a}}o Carreira}, title = {Perceiver {IO:} {A} General Architecture for Structured Inputs {\&} Outputs}, journal = {CoRR}, volume = {abs/2107.14795}, year = {2021}, url = {https://arxiv.org/abs/2107.14795}, eprinttype = {arXiv}, eprint = {2107.14795}, timestamp = {Tue, 03 Aug 2021 14:53:34 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2107-14795.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
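Since the card only points to an external tutorial notebook, a minimal sketch of running the model through 🤗 Transformers may be useful. It assumes the `PerceiverForOpticalFlow` class and the input convention described in the Transformers documentation (a tensor of 3x3 patches with shape (batch, 2 frames, 27 values per pixel, height, width) at the 368x496 training resolution), and uses random values as a stand-in for properly extracted patches.

```python
import torch
from transformers import PerceiverForOpticalFlow

model = PerceiverForOpticalFlow.from_pretrained("deepmind/optical-flow-perceiver")

# Each pixel is represented by a 3x3 patch over 3 color channels (3*3*3 = 27 values),
# for two consecutive frames, at the 368x496 resolution used during training.
# Random data is used here only as a placeholder for real preprocessed frames.
patches = torch.randn(1, 2, 27, 368, 496)

with torch.no_grad():
    outputs = model(inputs=patches)

flow = outputs.logits  # expected shape: (1, 368, 496, 2), the per-pixel 2D displacement
```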
4,756
[ [ -0.042724609375, -0.04718017578125, 0.03912353515625, 0.00437164306640625, -0.0230865478515625, -0.02825927734375, 0.01396942138671875, -0.06378173828125, 0.01544952392578125, 0.0167694091796875, -0.050201416015625, -0.007541656494140625, -0.04901123046875, ...
SCUT-DLVCLab/lilt-infoxlm-base
2023-09-11T13:20:42.000Z
[ "transformers", "pytorch", "safetensors", "lilt", "feature-extraction", "vision", "arxiv:2202.13669", "license:mit", "endpoints_compatible", "has_space", "region:us" ]
feature-extraction
SCUT-DLVCLab
null
null
SCUT-DLVCLab/lilt-infoxlm-base
4
708
transformers
2022-10-10T14:19:02
--- license: mit tags: - vision --- # LiLT-InfoXLM (base-sized model) A Language-Independent Layout Transformer - InfoXLM model, created by stitching a pre-trained InfoXLM encoder and a pre-trained Language-Independent Layout Transformer (LiLT) together. It was introduced in the paper [LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding](https://arxiv.org/abs/2202.13669) by Wang et al. and first released in [this repository](https://github.com/jpwang/lilt). Disclaimer: The team releasing LiLT did not write a model card for this model, so this model card has been written by the Hugging Face team. ## Model description The Language-Independent Layout Transformer (LiLT) makes it possible to combine any pre-trained RoBERTa encoder from the hub (hence, in any language) with a lightweight Layout Transformer to obtain a LayoutLM-like model for any language. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/lilt_architecture.jpg" alt="drawing" width="600"/> ## Intended uses & limitations The model is meant to be fine-tuned on tasks like document image classification, document parsing and document QA. See the [model hub](https://huggingface.co/models?search=lilt) to look for fine-tuned versions on a task that interests you. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/lilt.html). ### BibTeX entry and citation info ```bibtex @misc{https://doi.org/10.48550/arxiv.2202.13669, doi = {10.48550/ARXIV.2202.13669}, url = {https://arxiv.org/abs/2202.13669}, author = {Wang, Jiapeng and Jin, Lianwen and Ding, Kai}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding}, publisher = {arXiv}, year = {2022}, copyright = {arXiv.org perpetual, non-exclusive license} } ```
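Because the card defers code examples to the documentation, here is a minimal, self-contained feature-extraction sketch. The words and bounding boxes are made-up placeholders (in practice they would come from an OCR engine, with boxes normalized to a 0-1000 coordinate space), and the word-to-token box expansion shown is just one reasonable way to align boxes with subword tokens.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("SCUT-DLVCLab/lilt-infoxlm-base")
model = AutoModel.from_pretrained("SCUT-DLVCLab/lilt-infoxlm-base")

# Placeholder OCR output: words plus boxes normalized to a 0-1000 coordinate space.
words = ["Invoice", "no.", "12345"]
boxes = [[110, 84, 230, 116], [240, 84, 290, 116], [300, 84, 420, 116]]

encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")

# Expand word-level boxes to token-level boxes; special tokens get a zero box.
token_boxes = [
    boxes[idx] if idx is not None else [0, 0, 0, 0]
    for idx in encoding.word_ids(batch_index=0)
]
bbox = torch.tensor([token_boxes])

with torch.no_grad():
    outputs = model(
        input_ids=encoding["input_ids"],
        attention_mask=encoding["attention_mask"],
        bbox=bbox,
    )

features = outputs.last_hidden_state  # (1, seq_len, hidden_size)
```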
2,068
[ [ -0.031524658203125, -0.0576171875, 0.023529052734375, 0.0266571044921875, -0.006450653076171875, -0.0121307373046875, 0.0030536651611328125, -0.028564453125, 0.018280029296875, 0.0233001708984375, -0.050506591796875, -0.0284576416015625, -0.042205810546875, ...
shi-labs/dinat-mini-in1k-224
2022-11-18T23:10:49.000Z
[ "transformers", "pytorch", "dinat", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:2209.15001", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
shi-labs
null
null
shi-labs/dinat-mini-in1k-224
1
708
transformers
2022-11-14T22:27:14
--- license: mit tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # DiNAT (mini variant) DiNAT-Mini trained on ImageNet-1K at 224x224 resolution. It was introduced in the paper [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) by Hassani et al. and first released in [this repository](https://github.com/SHI-Labs/Neighborhood-Attention-Transformer). ## Model description DiNAT is a hierarchical vision transformer based on Neighborhood Attention (NA) and its dilated variant (DiNA). Neighborhood Attention is a restricted self attention pattern in which each token's receptive field is limited to its nearest neighboring pixels. NA and DiNA are therefore sliding-window attention patterns, and as a result are highly flexible and maintain translational equivariance. They come with PyTorch implementations through the [NATTEN](https://github.com/SHI-Labs/NATTEN/) package. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/dilated-neighborhood-attention-pattern.jpg) [Source](https://paperswithcode.com/paper/dilated-neighborhood-attention-transformer) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=dinat) to look for fine-tuned versions on a task that interests you. ### Example Here is how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import AutoImageProcessor, DinatForImageClassification from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = AutoImageProcessor.from_pretrained("shi-labs/dinat-mini-in1k-224") model = DinatForImageClassification.from_pretrained("shi-labs/dinat-mini-in1k-224") inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` For more examples, please refer to the [documentation](https://huggingface.co/transformers/model_doc/dinat.html#). ### Requirements Other than transformers, this model requires the [NATTEN](https://shi-labs.com/natten) package. If you're on Linux, you can refer to [shi-labs.com/natten](https://shi-labs.com/natten) for instructions on installing with pre-compiled binaries (just select your torch build to get the correct wheel URL). You can alternatively use `pip install natten` to compile on your device, which may take up to a few minutes. Mac users only have the latter option (no pre-compiled binaries). Refer to [NATTEN's GitHub](https://github.com/SHI-Labs/NATTEN/) for more information. ### BibTeX entry and citation info ```bibtex @article{hassani2022dilated, title = {Dilated Neighborhood Attention Transformer}, author = {Ali Hassani and Humphrey Shi}, year = 2022, url = {https://arxiv.org/abs/2209.15001}, eprint = {2209.15001}, archiveprefix = {arXiv}, primaryclass = {cs.CV} } ```
3,570
[ [ -0.054351806640625, -0.041900634765625, 0.0002944469451904297, 0.005641937255859375, -0.01432037353515625, -0.0138397216796875, -0.00203704833984375, -0.03094482421875, 0.01885986328125, 0.017181396484375, -0.046173095703125, -0.03314208984375, -0.04904174804687...
thibaud/controlnet-sd21-canny-diffusers
2023-08-14T07:45:22.000Z
[ "diffusers", "art", "stable diffusion", "controlnet", "en", "license:other", "has_space", "diffusers:ControlNetModel", "region:us" ]
null
thibaud
null
null
thibaud/controlnet-sd21-canny-diffusers
4
708
diffusers
2023-03-09T08:18:19
--- license: other language: - en tags: - art - diffusers - stable diffusion - controlnet --- This is the first version of ControlNet for Stable Diffusion 2.1, packaged for Diffusers. It was trained on a subset of laion/laion-art. License: follows the licenses of the respective preprocessors. ### Canny: ![<canny> 0](https://huggingface.co/thibaud/controlnet-sd21/resolve/main/example_canny.png) ### Misuse, Malicious Use, and Out-of-Scope Use The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. Thanks - https://huggingface.co/lllyasviel/ControlNet for the implementation and the release of the 1.5 models. - https://huggingface.co/thepowefuldeez for the conversion script to diffusers
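The card does not include a Diffusers usage snippet, so here is a hedged sketch. The pairing with `stabilityai/stable-diffusion-2-1-base`, the fp16/GPU setup, the Canny thresholds, and the reuse of the card's example image as a conditioning source are all assumptions on my part rather than settings documented by the author.

```python
import cv2
import numpy as np
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline, UniPCMultistepScheduler
from diffusers.utils import load_image
from PIL import Image

# Assumed base checkpoint: an SD 2.1 (512px) base model; adjust to your setup.
controlnet = ControlNetModel.from_pretrained(
    "thibaud/controlnet-sd21-canny-diffusers", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU is available

# Build a Canny edge map as the conditioning image (thresholds are illustrative).
source = load_image("https://huggingface.co/thibaud/controlnet-sd21/resolve/main/example_canny.png")
edges = cv2.Canny(np.array(source), 100, 200)
canny_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

result = pipe("a house in the forest", image=canny_image, num_inference_steps=30).images[0]
result.save("controlnet_canny_sd21.png")
```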
912
[ [ -0.0148773193359375, -0.01580810546875, 0.00304412841796875, 0.043914794921875, -0.038360595703125, -0.034454345703125, 0.00926971435546875, -0.032562255859375, 0.01338958740234375, 0.055328369140625, -0.0256195068359375, -0.031005859375, -0.0640869140625, -...
rinna/youri-7b-chat
2023-10-31T00:34:10.000Z
[ "transformers", "pytorch", "llama", "text-generation", "ja", "en", "dataset:databricks/databricks-dolly-15k", "dataset:kunishou/databricks-dolly-15k-ja", "dataset:izumi-lab/llm-japanese-dataset", "arxiv:2307.09288", "license:llama2", "text-generation-inference", "region:us", "has_space" ]
text-generation
rinna
null
null
rinna/youri-7b-chat
10
708
transformers
2023-10-30T15:12:30
--- thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png license: llama2 language: - ja - en inference: false datasets: - databricks/databricks-dolly-15k - kunishou/databricks-dolly-15k-ja - izumi-lab/llm-japanese-dataset --- # `rinna/youri-7b-chat` ![rinna-icon](./rinna.png) # Overview The model is the instruction-tuned version of [`rinna/youri-7b`](https://huggingface.co/rinna/youri-7b). It adopts a chat-style input format. * **Model architecture** A 32-layer, 4096-hidden-size transformer-based language model. Refer to the [llama2 paper](https://arxiv.org/abs/2307.09288) for architecture details. * **Fine-tuning** The fine-tuning data is the subset of the following datasets. * [Databricks Dolly data](https://huggingface.co/datasets/databricks/databricks-dolly-15k) * [Japanese Databricks Dolly data](https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja) * [Anthropic HH RLHF data](https://huggingface.co/datasets/Anthropic/hh-rlhf) and its Japanese translation * [FLAN Instruction Tuning data](https://github.com/google-research/FLAN) and its Japanese translation * [Izumi lab LLM Japanese dataset](https://github.com/masanorihirano/llm-japanese-dataset/tree/main) * The following sections are used * alt * aozora-txt * CourseraParallel * ParaNatCom * Tab-delimited_Bilingual_Sentence_Pairs * tanaka-corpus * wikinews * wordnet * yasashi-japanese * The [remaining sections](https://github.com/masanorihirano/llm-japanese-dataset/tree/main/datasets-cc-by-sa) contain commonly used evaluation corpora so they are skipped to prevent data leak. * **Authors** - [Tianyu Zhao](https://huggingface.co/tianyuz) - [Kei Sawada](https://huggingface.co/keisawada) --- # Benchmarking Evaluation experiments suggest that rinna's `youri-7b` series outperforms other open-source Japanese LLMs on Japanese tasks according to our runs. 
| Model | Model type | 4-task score | 6-task score | 8-task score | | :-- | :-- | :-- | :-- | :-- | | rinna/youri-7b-instruction | SFT | 83.88 | 80.93 | 63.63 | | **rinna/youri-7b-chat** | **SFT** | **78.29** | **78.47** | **62.18** | | matsuo-lab/weblab-10b-instruction-sft | SFT | 78.75 | 75.05 | 59.11 | | rinna/youri-7b | pre-trained | 73.32 | 74.58 | 58.87 | | stabilityai/japanese-stablelm-instruct-alpha-7b | SFT | 70.10 | 71.32 | 54.71 | | elyza/ELYZA-japanese-Llama-2-7b | pre-trained | 71.72 | 69.28 | 53.17 | | elyza/ELYZA-japanese-Llama-2-7b-instruct | SFT | 70.57 | 68.12 | 53.14 | | stabilityai/japanese-stablelm-base-alpha-7b | pre-trained | 61.03 | 65.83 | 51.05 | | matsuo-lab/weblab-10b | pre-trained | 66.33 | 65.58 | 50.74 | | meta/llama2-7b | pre-trained | 56.33 | 54.80 | 42.97 | | rinna/japanese-gpt-neox-3.6b | pre-trained | 47.20 | 54.68 | 41.80 | | rinna/bilingual-gpt-neox-4b | pre-trained | 46.60 | 52.04 | 40.03 | --- # How to use the model ~~~~python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("rinna/youri-7b-chat") model = AutoModelForCausalLM.from_pretrained("rinna/youri-7b-chat") if torch.cuda.is_available(): model = model.to("cuda") instruction = "次の日本語を英語に翻訳してください。" input = "自然言語による指示に基づきタスクが解けるよう学習させることを Instruction tuning と呼びます。" context = [ { "speaker": "設定", "text": instruction }, { "speaker": "ユーザー", "text": input } ] prompt = [ f"{uttr['speaker']}: {uttr['text']}" for uttr in context ] prompt = "\n".join(prompt) prompt = ( prompt + "\n" + "システム: " ) token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt") with torch.no_grad(): output_ids = model.generate( token_ids.to(model.device), max_new_tokens=200, do_sample=True, temperature=0.5, pad_token_id=tokenizer.pad_token_id, bos_token_id=tokenizer.bos_token_id, eos_token_id=tokenizer.eos_token_id ) output = tokenizer.decode(output_ids.tolist()[0]) print(output) """ 設定: 次の日本語を英語に翻訳してください。 ユーザー: 自然言語による指示に基づきタスクが解けるよう学習させることを Instruction tuning と呼びます。 システム: Learning to solve tasks based on natural language instructions is called instruction tuning.</s> """ output = output[len(prompt):-len("</s>")].strip() input = "大規模言語モデル(だいきぼげんごモデル、英: large language model、LLM)は、多数のパラメータ(数千万から数十億)を持つ人工ニューラルネットワークで構成されるコンピュータ言語モデルで、膨大なラベルなしテキストを使用して自己教師あり学習または半教師あり学習によって訓練が行われる。" context.extend([ { "speaker": "システム", "text": output }, { "speaker": "ユーザー", "text": input } ]) prompt = [ f"{uttr['speaker']}: {uttr['text']}" for uttr in context ] prompt = "\n".join(prompt) prompt = ( prompt + "\n" + "システム: " ) token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt") with torch.no_grad(): output_ids = model.generate( token_ids.to(model.device), max_new_tokens=200, do_sample=True, temperature=0.5, pad_token_id=tokenizer.pad_token_id, bos_token_id=tokenizer.bos_token_id, eos_token_id=tokenizer.eos_token_id ) output = tokenizer.decode(output_ids.tolist()[0]) print(output) """ 設定: 次の日本語を英語に翻訳してください。 ユーザー: 自然言語による指示に基づきタスクが解けるよう学習させることを Instruction tuning と呼びます。 システム: Learning to solve tasks based on natural language instructions is called instruction tuning. 
ユーザー: 大規模言語モデル(だいきぼげんごモデル、英: large language model、LLM)は、多数のパラメータ(数千万から数十億)を持つ人工ニューラルネットワークで構成されるコンピュータ言語モデルで、膨大なラベルなしテ キストを使用して自己教師あり学習または半教師あり学習によって訓練が行われる。 システム: Large language models (LLMs) are computer language models consisting of a deep artificial neural network with millions to billions of parameters that are trained by self-supervised learning or semi-supervised learning using vast unlabeled text corpora.</s> """ ~~~~ --- # Tokenization The model uses the original llama-2 tokenizer. --- # How to cite ~~~ @misc{RinnaYouri7bChat, url={https://huggingface.co/rinna/youri-7b-chat}, title={rinna/youri-7b-chat}, author={Zhao, Tianyu and Sawada, Kei} } ~~~ --- # License [The llama2 license](https://ai.meta.com/llama/license/)
6,246
[ [ -0.0237884521484375, -0.071044921875, 0.01030731201171875, 0.01611328125, -0.0227203369140625, -0.0012340545654296875, -0.028289794921875, -0.0278167724609375, 0.01357269287109375, 0.025177001953125, -0.04840087890625, -0.0546875, -0.03759765625, 0.010841369...
Maltehb/aelaectra-danish-electra-small-cased-ner-dane
2021-07-26T08:48:30.000Z
[ "transformers", "pytorch", "tf", "electra", "token-classification", "ælæctra", "danish", "ELECTRA-Small", "replaced token detection", "da", "dataset:DAGW", "arxiv:2003.10555", "arxiv:1810.04805", "arxiv:2005.03521", "license:mit", "autotrain_compatible", "endpoints_compatible", "re...
token-classification
Maltehb
null
null
Maltehb/aelaectra-danish-electra-small-cased-ner-dane
1
707
transformers
2022-03-02T23:29:04
--- language: "da" tags: - ælæctra - pytorch - danish - ELECTRA-Small - replaced token detection license: "mit" datasets: - DAGW widget: - text: "Chili Jensen, som bor på Danmarksgade 12, køber chilifrugter fra Netto." metrics: - f1 --- # Ælæctra - Finetuned for Named Entity Recognition on the [DaNE dataset](https://danlp.alexandra.dk/304bd159d5de/datasets/ddt.zip) (Hvingelby et al., 2020) by Malte Højmark-Bertelsen. **Ælæctra** is a Danish Transformer-based language model created to enhance the variety of Danish NLP resources with a more efficient model compared to previous state-of-the-art (SOTA) models. Ælæctra was pretrained with the ELECTRA-Small (Clark et al., 2020) pretraining approach by using the Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020) and evaluated on Named Entity Recognition (NER) tasks. Since NER only presents a limited picture of Ælæctra's capabilities I am very interested in further evaluations. Therefore, if you employ it for any task, feel free to hit me up your findings! Ælæctra was, as mentioned, created to enhance the Danish NLP capabilties and please do note how this GitHub still does not support the Danish characters "*Æ, Ø and Å*" as the title of this repository becomes "*-l-ctra*". How ironic.🙂 Here is an example on how to load the finetuned Ælæctra-cased model for Named Entity Recognition in [PyTorch](https://pytorch.org/) using the [🤗Transformers](https://github.com/huggingface/transformers) library: ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("Maltehb/-l-ctra-danish-electra-small-cased-ner-dane") model = AutoModelForTokenClassification.from_pretrained("Maltehb/-l-ctra-danish-electra-small-cased-ner-dane") ``` ### Evaluation of current Danish Language Models Ælæctra, Danish BERT (DaBERT) and multilingual BERT (mBERT) were evaluated: | Model | Layers | Hidden Size | Params | AVG NER micro-f1 (DaNE-testset) | Average Inference Time (Sec/Epoch) | Download | | --- | --- | --- | --- | --- | --- | --- | | Ælæctra Uncased | 12 | 256 | 13.7M | 78.03 (SD = 1.28) | 10.91 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) | | Ælæctra Cased | 12 | 256 | 14.7M | 80.08 (SD = 0.26) | 10.92 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) | | DaBERT | 12 | 768 | 110M | 84.89 (SD = 0.64) | 43.03 | [Link for model](https://www.dropbox.com/s/19cjaoqvv2jicq9/danish_bert_uncased_v2.zip?dl=1) | | mBERT Uncased | 12 | 768 | 167M | 80.44 (SD = 0.82) | 72.10 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_03/multilingual_L-12_H-768_A-12.zip) | | mBERT Cased | 12 | 768 | 177M | 83.79 (SD = 0.91) | 70.56 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip) | On [DaNE](https://danlp.alexandra.dk/304bd159d5de/datasets/ddt.zip) (Hvingelby et al., 2020) without the *MISC-tag*, Ælæctra scores slightly worse than both cased and uncased Multilingual BERT (Devlin et al., 2019) and Danish BERT (Danish BERT, 2019/2020), however, Ælæctra is less than one third the size, and uses significantly fewer computational resources to pretrain and instantiate. ### Pretraining To pretrain Ælæctra it is recommended to build a Docker Container from the [Dockerfile](https://github.com/MalteHB/Ælæctra/tree/master/notebooks/fine-tuning/). 
Next, simply follow the [pretraining notebooks](https://github.com/MalteHB/Ælæctra/tree/master/infrastructure/Dockerfile/) The pretraining was done by utilizing a single NVIDIA Tesla V100 GPU with 16 GiB, endowed by the Danish data company [KMD](https://www.kmd.dk/). The pretraining took approximately 4 days and 9.5 hours for both the cased and uncased model ### Fine-tuning To fine-tune any Ælæctra model follow the [fine-tuning notebooks](https://github.com/MalteHB/Ælæctra/tree/master/notebooks/fine-tuning/) ### References Clark, K., Luong, M.-T., Le, Q. V., & Manning, C. D. (2020). ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ArXiv:2003.10555 [Cs]. http://arxiv.org/abs/2003.10555 Danish BERT. (2020). BotXO. https://github.com/botxo/nordic_bert (Original work published 2019) Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805 [Cs]. http://arxiv.org/abs/1810.04805 Hvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. https://www.aclweb.org/anthology/2020.lrec-1.565 Strømberg-Derczynski, L., Baglini, R., Christiansen, M. H., Ciosici, M. R., Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A., Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Petersen, M. L., Rystrøm, J. H., & Varab, D. (2020). The Danish Gigaword Project. ArXiv:2005.03521 [Cs]. http://arxiv.org/abs/2005.03521 #### Acknowledgements As the majority of this repository is build upon [the works](https://github.com/google-research/electra) by the team at Google who created ELECTRA, a HUGE thanks to them is in order. A Giga thanks also goes out to the incredible people who collected The Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020). Furthermore, I would like to thank my supervisor [Riccardo Fusaroli](https://github.com/fusaroli) for the support with the thesis, and a special thanks goes out to [Kenneth Enevoldsen](https://github.com/KennethEnevoldsen) for his continuous feedback. Lastly, i would like to thank KMD, my colleagues from KMD, and my peers and co-students from Cognitive Science for encouriging me to keep on working hard and holding my head up high! #### Contact For help or further information feel free to connect with the author Malte Højmark-Bertelsen on [hjb@kmd.dk](mailto:hjb@kmd.dk?subject=[GitHub]%20ÆlæctraCasedNER) or any of the following platforms: [<img align="left" alt="MalteHB | Twitter" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/twitter.svg" />][twitter] [<img align="left" alt="MalteHB | LinkedIn" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/linkedin.svg" />][linkedin] [<img align="left" alt="MalteHB | Instagram" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/instagram.svg" />][instagram] <br /> </details> [twitter]: https://twitter.com/malteH_B [instagram]: https://www.instagram.com/maltemusen/ [linkedin]: https://www.linkedin.com/in/malte-h%C3%B8jmark-bertelsen-9a618017b/
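To complement the loading snippet earlier in this card, here is a minimal inference sketch using the token-classification pipeline under the repository id this card is hosted at. The Danish example sentence is the one from the card's widget, and the aggregation setting is just one reasonable choice for merging subword pieces into whole entities.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Maltehb/aelaectra-danish-electra-small-cased-ner-dane",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

text = "Chili Jensen, som bor på Danmarksgade 12, køber chilifrugter fra Netto."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```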
6,678
[ [ -0.0484619140625, -0.040283203125, 0.038909912109375, 0.016754150390625, -0.0251007080078125, -0.007450103759765625, -0.042694091796875, -0.03985595703125, 0.027496337890625, 0.0162506103515625, -0.0251617431640625, -0.0455322265625, -0.033660888671875, 0.01...
shashanksrinath/News_Sentiment_Analysis
2022-08-03T08:34:50.000Z
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "endpoints_compatible", "region:us" ]
text-classification
shashanksrinath
null
null
shashanksrinath/News_Sentiment_Analysis
4
707
transformers
2022-08-01T13:01:39
--- tags: - generated_from_trainer model-index: - name: News_Sentiment_Analysis results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # News_Sentiment_Analysis This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
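Since the auto-generated card above contains no usage example, here is a minimal inference sketch; the headline is made up, and the label names emitted depend on the fine-tuned head (the base sentiment model uses negative/neutral/positive), so inspect the output rather than assuming them.

```python
from transformers import pipeline

sentiment = pipeline(
    "text-classification",
    model="shashanksrinath/News_Sentiment_Analysis",
)

headline = "Markets rally after central bank signals pause in rate hikes."
print(sentiment(headline))
# e.g. [{'label': '...', 'score': 0.97}]; label names come from the fine-tuned head
```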
1,061
[ [ -0.036956787109375, -0.05877685546875, 0.0139923095703125, 0.02850341796875, -0.038543701171875, -0.0223846435546875, -0.0197601318359375, -0.0104217529296875, 0.0220489501953125, 0.0189666748046875, -0.059417724609375, -0.06396484375, -0.055023193359375, -0...
timm/flexivit_large.600ep_in1k
2023-05-05T23:59:29.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2212.08013", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/flexivit_large.600ep_in1k
0
707
timm
2022-12-22T07:18:07
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for flexivit_large.600ep_in1k A FlexiViT image classification model. Trained on ImageNet-1k in JAX by paper authors, ported to PyTorch by Ross Wightman. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 304.4 - GMACs: 68.5 - Activations (M): 50.2 - Image size: 240 x 240 - **Papers:** - FlexiViT: One Model for All Patch Sizes: https://arxiv.org/abs/2212.08013 - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 - **Dataset:** ImageNet-1k - **Original:** https://github.com/google-research/big_vision ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('flexivit_large.600ep_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'flexivit_large.600ep_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 226, 1024) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). 
## Citation ```bibtex @article{beyer2022flexivit, title={FlexiViT: One Model for All Patch Sizes}, author={Beyer, Lucas and Izmailov, Pavel and Kolesnikov, Alexander and Caron, Mathilde and Kornblith, Simon and Zhai, Xiaohua and Minderer, Matthias and Tschannen, Michael and Alabdulmohsin, Ibrahim and Pavetic, Filip}, journal={arXiv preprint arXiv:2212.08013}, year={2022} } ``` ```bibtex @article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
3,701
[ [ -0.03839111328125, -0.028289794921875, 0.004978179931640625, 0.005199432373046875, -0.025909423828125, -0.0293426513671875, -0.0196990966796875, -0.0367431640625, 0.0170135498046875, 0.0174102783203125, -0.0418701171875, -0.0413818359375, -0.045318603515625, ...
timm/regnety_016.tv2_in1k
2023-03-21T06:37:55.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "arxiv:2003.13678", "license:bsd-3-clause", "region:us" ]
image-classification
timm
null
null
timm/regnety_016.tv2_in1k
0
706
timm
2023-03-21T06:37:43
--- tags: - image-classification - timm library_tag: timm license: bsd-3-clause --- # Model card for regnety_016.tv2_in1k A RegNetY-1.6GF image classification model. Pretrained on ImageNet-1k by torchvision contributors (see ImageNet1K-V2 weight details https://github.com/pytorch/vision/issues/3995#new-recipe). The `timm` RegNet implementation includes a number of enhancements not present in other implementations, including: * stochastic depth * gradient checkpointing * layer-wise LR decay * configurable output stride (dilation) * configurable activation and norm layers * option for a pre-activation bottleneck block used in RegNetV variant * only known RegNetZ model definitions with pretrained weights ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 11.2 - GMACs: 1.6 - Activations (M): 8.0 - Image size: 224 x 224 - **Papers:** - Designing Network Design Spaces: https://arxiv.org/abs/2003.13678 - **Original:** https://github.com/pytorch/vision ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('regnety_016.tv2_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'regnety_016.tv2_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 32, 112, 112]) # torch.Size([1, 48, 56, 56]) # torch.Size([1, 120, 28, 28]) # torch.Size([1, 336, 14, 14]) # torch.Size([1, 888, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'regnety_016.tv2_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 888, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset 
and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). For the comparison summary below, the ra_in1k, ra3_in1k, ch_in1k, sw_*, and lion_* tagged weights are trained in `timm`. |model |img_size|top1 |top5 |param_count|gmacs|macts | |-------------------------|--------|------|------|-----------|-----|------| |[regnety_1280.swag_ft_in1k](https://huggingface.co/timm/regnety_1280.swag_ft_in1k)|384 |88.228|98.684|644.81 |374.99|210.2 | |[regnety_320.swag_ft_in1k](https://huggingface.co/timm/regnety_320.swag_ft_in1k)|384 |86.84 |98.364|145.05 |95.0 |88.87 | |[regnety_160.swag_ft_in1k](https://huggingface.co/timm/regnety_160.swag_ft_in1k)|384 |86.024|98.05 |83.59 |46.87|67.67 | |[regnety_160.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.sw_in12k_ft_in1k)|288 |86.004|97.83 |83.59 |26.37|38.07 | |[regnety_1280.swag_lc_in1k](https://huggingface.co/timm/regnety_1280.swag_lc_in1k)|224 |85.996|97.848|644.81 |127.66|71.58 | |[regnety_160.lion_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.lion_in12k_ft_in1k)|288 |85.982|97.844|83.59 |26.37|38.07 | |[regnety_160.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.sw_in12k_ft_in1k)|224 |85.574|97.666|83.59 |15.96|23.04 | |[regnety_160.lion_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.lion_in12k_ft_in1k)|224 |85.564|97.674|83.59 |15.96|23.04 | |[regnety_120.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_120.sw_in12k_ft_in1k)|288 |85.398|97.584|51.82 |20.06|35.34 | |[regnety_2560.seer_ft_in1k](https://huggingface.co/timm/regnety_2560.seer_ft_in1k)|384 |85.15 |97.436|1282.6 |747.83|296.49| |[regnetz_e8.ra3_in1k](https://huggingface.co/timm/regnetz_e8.ra3_in1k)|320 |85.036|97.268|57.7 |15.46|63.94 | |[regnety_120.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_120.sw_in12k_ft_in1k)|224 |84.976|97.416|51.82 |12.14|21.38 | |[regnety_320.swag_lc_in1k](https://huggingface.co/timm/regnety_320.swag_lc_in1k)|224 |84.56 |97.446|145.05 |32.34|30.26 | |[regnetz_040_h.ra3_in1k](https://huggingface.co/timm/regnetz_040_h.ra3_in1k)|320 |84.496|97.004|28.94 |6.43 |37.94 | |[regnetz_e8.ra3_in1k](https://huggingface.co/timm/regnetz_e8.ra3_in1k)|256 |84.436|97.02 |57.7 |9.91 |40.94 | |[regnety_1280.seer_ft_in1k](https://huggingface.co/timm/regnety_1280.seer_ft_in1k)|384 |84.432|97.092|644.81 |374.99|210.2 | |[regnetz_040.ra3_in1k](https://huggingface.co/timm/regnetz_040.ra3_in1k)|320 |84.246|96.93 |27.12 |6.35 |37.78 | |[regnetz_d8.ra3_in1k](https://huggingface.co/timm/regnetz_d8.ra3_in1k)|320 |84.054|96.992|23.37 |6.19 |37.08 | |[regnetz_d8_evos.ch_in1k](https://huggingface.co/timm/regnetz_d8_evos.ch_in1k)|320 |84.038|96.992|23.46 |7.03 |38.92 | |[regnetz_d32.ra3_in1k](https://huggingface.co/timm/regnetz_d32.ra3_in1k)|320 |84.022|96.866|27.58 |9.33 |37.08 | |[regnety_080.ra3_in1k](https://huggingface.co/timm/regnety_080.ra3_in1k)|288 |83.932|96.888|39.18 |13.22|29.69 | |[regnety_640.seer_ft_in1k](https://huggingface.co/timm/regnety_640.seer_ft_in1k)|384 |83.912|96.924|281.38 |188.47|124.83| |[regnety_160.swag_lc_in1k](https://huggingface.co/timm/regnety_160.swag_lc_in1k)|224 |83.778|97.286|83.59 |15.96|23.04 | |[regnetz_040_h.ra3_in1k](https://huggingface.co/timm/regnetz_040_h.ra3_in1k)|256 |83.776|96.704|28.94 |4.12 |24.29 | |[regnetv_064.ra3_in1k](https://huggingface.co/timm/regnetv_064.ra3_in1k)|288 |83.72 |96.75 |30.58 |10.55|27.11 | |[regnety_064.ra3_in1k](https://huggingface.co/timm/regnety_064.ra3_in1k)|288 |83.718|96.724|30.58 |10.56|27.11 | 
|[regnety_160.deit_in1k](https://huggingface.co/timm/regnety_160.deit_in1k)|288 |83.69 |96.778|83.59 |26.37|38.07 | |[regnetz_040.ra3_in1k](https://huggingface.co/timm/regnetz_040.ra3_in1k)|256 |83.62 |96.704|27.12 |4.06 |24.19 | |[regnetz_d8.ra3_in1k](https://huggingface.co/timm/regnetz_d8.ra3_in1k)|256 |83.438|96.776|23.37 |3.97 |23.74 | |[regnetz_d32.ra3_in1k](https://huggingface.co/timm/regnetz_d32.ra3_in1k)|256 |83.424|96.632|27.58 |5.98 |23.74 | |[regnetz_d8_evos.ch_in1k](https://huggingface.co/timm/regnetz_d8_evos.ch_in1k)|256 |83.36 |96.636|23.46 |4.5 |24.92 | |[regnety_320.seer_ft_in1k](https://huggingface.co/timm/regnety_320.seer_ft_in1k)|384 |83.35 |96.71 |145.05 |95.0 |88.87 | |[regnetv_040.ra3_in1k](https://huggingface.co/timm/regnetv_040.ra3_in1k)|288 |83.204|96.66 |20.64 |6.6 |20.3 | |[regnety_320.tv2_in1k](https://huggingface.co/timm/regnety_320.tv2_in1k)|224 |83.162|96.42 |145.05 |32.34|30.26 | |[regnety_080.ra3_in1k](https://huggingface.co/timm/regnety_080.ra3_in1k)|224 |83.16 |96.486|39.18 |8.0 |17.97 | |[regnetv_064.ra3_in1k](https://huggingface.co/timm/regnetv_064.ra3_in1k)|224 |83.108|96.458|30.58 |6.39 |16.41 | |[regnety_040.ra3_in1k](https://huggingface.co/timm/regnety_040.ra3_in1k)|288 |83.044|96.5 |20.65 |6.61 |20.3 | |[regnety_064.ra3_in1k](https://huggingface.co/timm/regnety_064.ra3_in1k)|224 |83.02 |96.292|30.58 |6.39 |16.41 | |[regnety_160.deit_in1k](https://huggingface.co/timm/regnety_160.deit_in1k)|224 |82.974|96.502|83.59 |15.96|23.04 | |[regnetx_320.tv2_in1k](https://huggingface.co/timm/regnetx_320.tv2_in1k)|224 |82.816|96.208|107.81 |31.81|36.3 | |[regnety_032.ra_in1k](https://huggingface.co/timm/regnety_032.ra_in1k)|288 |82.742|96.418|19.44 |5.29 |18.61 | |[regnety_160.tv2_in1k](https://huggingface.co/timm/regnety_160.tv2_in1k)|224 |82.634|96.22 |83.59 |15.96|23.04 | |[regnetz_c16_evos.ch_in1k](https://huggingface.co/timm/regnetz_c16_evos.ch_in1k)|320 |82.634|96.472|13.49 |3.86 |25.88 | |[regnety_080_tv.tv2_in1k](https://huggingface.co/timm/regnety_080_tv.tv2_in1k)|224 |82.592|96.246|39.38 |8.51 |19.73 | |[regnetx_160.tv2_in1k](https://huggingface.co/timm/regnetx_160.tv2_in1k)|224 |82.564|96.052|54.28 |15.99|25.52 | |[regnetz_c16.ra3_in1k](https://huggingface.co/timm/regnetz_c16.ra3_in1k)|320 |82.51 |96.358|13.46 |3.92 |25.88 | |[regnetv_040.ra3_in1k](https://huggingface.co/timm/regnetv_040.ra3_in1k)|224 |82.44 |96.198|20.64 |4.0 |12.29 | |[regnety_040.ra3_in1k](https://huggingface.co/timm/regnety_040.ra3_in1k)|224 |82.304|96.078|20.65 |4.0 |12.29 | |[regnetz_c16.ra3_in1k](https://huggingface.co/timm/regnetz_c16.ra3_in1k)|256 |82.16 |96.048|13.46 |2.51 |16.57 | |[regnetz_c16_evos.ch_in1k](https://huggingface.co/timm/regnetz_c16_evos.ch_in1k)|256 |81.936|96.15 |13.49 |2.48 |16.57 | |[regnety_032.ra_in1k](https://huggingface.co/timm/regnety_032.ra_in1k)|224 |81.924|95.988|19.44 |3.2 |11.26 | |[regnety_032.tv2_in1k](https://huggingface.co/timm/regnety_032.tv2_in1k)|224 |81.77 |95.842|19.44 |3.2 |11.26 | |[regnetx_080.tv2_in1k](https://huggingface.co/timm/regnetx_080.tv2_in1k)|224 |81.552|95.544|39.57 |8.02 |14.06 | |[regnetx_032.tv2_in1k](https://huggingface.co/timm/regnetx_032.tv2_in1k)|224 |80.924|95.27 |15.3 |3.2 |11.37 | |[regnety_320.pycls_in1k](https://huggingface.co/timm/regnety_320.pycls_in1k)|224 |80.804|95.246|145.05 |32.34|30.26 | |[regnetz_b16.ra3_in1k](https://huggingface.co/timm/regnetz_b16.ra3_in1k)|288 |80.712|95.47 |9.72 |2.39 |16.43 | |[regnety_016.tv2_in1k](https://huggingface.co/timm/regnety_016.tv2_in1k)|224 |80.66 |95.334|11.2 |1.63 
|8.04 | |[regnety_120.pycls_in1k](https://huggingface.co/timm/regnety_120.pycls_in1k)|224 |80.37 |95.12 |51.82 |12.14|21.38 | |[regnety_160.pycls_in1k](https://huggingface.co/timm/regnety_160.pycls_in1k)|224 |80.288|94.964|83.59 |15.96|23.04 | |[regnetx_320.pycls_in1k](https://huggingface.co/timm/regnetx_320.pycls_in1k)|224 |80.246|95.01 |107.81 |31.81|36.3 | |[regnety_080.pycls_in1k](https://huggingface.co/timm/regnety_080.pycls_in1k)|224 |79.882|94.834|39.18 |8.0 |17.97 | |[regnetz_b16.ra3_in1k](https://huggingface.co/timm/regnetz_b16.ra3_in1k)|224 |79.872|94.974|9.72 |1.45 |9.95 | |[regnetx_160.pycls_in1k](https://huggingface.co/timm/regnetx_160.pycls_in1k)|224 |79.862|94.828|54.28 |15.99|25.52 | |[regnety_064.pycls_in1k](https://huggingface.co/timm/regnety_064.pycls_in1k)|224 |79.716|94.772|30.58 |6.39 |16.41 | |[regnetx_120.pycls_in1k](https://huggingface.co/timm/regnetx_120.pycls_in1k)|224 |79.592|94.738|46.11 |12.13|21.37 | |[regnetx_016.tv2_in1k](https://huggingface.co/timm/regnetx_016.tv2_in1k)|224 |79.44 |94.772|9.19 |1.62 |7.93 | |[regnety_040.pycls_in1k](https://huggingface.co/timm/regnety_040.pycls_in1k)|224 |79.23 |94.654|20.65 |4.0 |12.29 | |[regnetx_080.pycls_in1k](https://huggingface.co/timm/regnetx_080.pycls_in1k)|224 |79.198|94.55 |39.57 |8.02 |14.06 | |[regnetx_064.pycls_in1k](https://huggingface.co/timm/regnetx_064.pycls_in1k)|224 |79.064|94.454|26.21 |6.49 |16.37 | |[regnety_032.pycls_in1k](https://huggingface.co/timm/regnety_032.pycls_in1k)|224 |78.884|94.412|19.44 |3.2 |11.26 | |[regnety_008_tv.tv2_in1k](https://huggingface.co/timm/regnety_008_tv.tv2_in1k)|224 |78.654|94.388|6.43 |0.84 |5.42 | |[regnetx_040.pycls_in1k](https://huggingface.co/timm/regnetx_040.pycls_in1k)|224 |78.482|94.24 |22.12 |3.99 |12.2 | |[regnetx_032.pycls_in1k](https://huggingface.co/timm/regnetx_032.pycls_in1k)|224 |78.178|94.08 |15.3 |3.2 |11.37 | |[regnety_016.pycls_in1k](https://huggingface.co/timm/regnety_016.pycls_in1k)|224 |77.862|93.73 |11.2 |1.63 |8.04 | |[regnetx_008.tv2_in1k](https://huggingface.co/timm/regnetx_008.tv2_in1k)|224 |77.302|93.672|7.26 |0.81 |5.15 | |[regnetx_016.pycls_in1k](https://huggingface.co/timm/regnetx_016.pycls_in1k)|224 |76.908|93.418|9.19 |1.62 |7.93 | |[regnety_008.pycls_in1k](https://huggingface.co/timm/regnety_008.pycls_in1k)|224 |76.296|93.05 |6.26 |0.81 |5.25 | |[regnety_004.tv2_in1k](https://huggingface.co/timm/regnety_004.tv2_in1k)|224 |75.592|92.712|4.34 |0.41 |3.89 | |[regnety_006.pycls_in1k](https://huggingface.co/timm/regnety_006.pycls_in1k)|224 |75.244|92.518|6.06 |0.61 |4.33 | |[regnetx_008.pycls_in1k](https://huggingface.co/timm/regnetx_008.pycls_in1k)|224 |75.042|92.342|7.26 |0.81 |5.15 | |[regnetx_004_tv.tv2_in1k](https://huggingface.co/timm/regnetx_004_tv.tv2_in1k)|224 |74.57 |92.184|5.5 |0.42 |3.17 | |[regnety_004.pycls_in1k](https://huggingface.co/timm/regnety_004.pycls_in1k)|224 |74.018|91.764|4.34 |0.41 |3.89 | |[regnetx_006.pycls_in1k](https://huggingface.co/timm/regnetx_006.pycls_in1k)|224 |73.862|91.67 |6.2 |0.61 |3.98 | |[regnetx_004.pycls_in1k](https://huggingface.co/timm/regnetx_004.pycls_in1k)|224 |72.38 |90.832|5.16 |0.4 |3.14 | |[regnety_002.pycls_in1k](https://huggingface.co/timm/regnety_002.pycls_in1k)|224 |70.282|89.534|3.16 |0.2 |2.17 | |[regnetx_002.pycls_in1k](https://huggingface.co/timm/regnetx_002.pycls_in1k)|224 |68.752|88.556|2.68 |0.2 |2.16 | ## Citation ```bibtex @InProceedings{Radosavovic2020, title = {Designing Network Design Spaces}, author = {Ilija Radosavovic and Raj Prateek Kosaraju and Ross Girshick and Kaiming 
He and Piotr Doll{\'a}r}, booktitle = {CVPR}, year = {2020} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
15,537
[ [ -0.059661865234375, -0.0163116455078125, -0.0138092041015625, 0.03564453125, -0.034149169921875, -0.0085601806640625, -0.01104736328125, -0.038421630859375, 0.0733642578125, 0.00655364990234375, -0.04962158203125, -0.03759765625, -0.049224853515625, 0.001854...
Salesforce/codegen2-7B
2023-07-06T10:48:47.000Z
[ "transformers", "pytorch", "codegen", "text-generation", "custom_code", "arxiv:2305.02309", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
text-generation
Salesforce
null
null
Salesforce/codegen2-7B
23
706
transformers
2023-04-26T16:04:49
--- license: apache-2.0 --- # CodeGen2 (CodeGen2-7B) ## Model description [CodeGen2](https://github.com/salesforce/CodeGen2) is a family of autoregressive language models for **program synthesis**, introduced in the paper: [CodeGen2: Lessons for Training LLMs on Programming and Natural Languages](https://arxiv.org/abs/2305.02309) by Erik Nijkamp\*, Hiroaki Hayashi\*, Caiming Xiong, Silvio Savarese, Yingbo Zhou. Unlike the original CodeGen model family (i.e., CodeGen1), CodeGen2 is capable of infilling, and supports more programming languages. Four model sizes are released: `1B`, `3.7B`, `7B`, `16B`. ## How to use This model can be easily loaded using the `AutoModelForCausalLM` functionality. ### Causal sampling For regular causal sampling, simply generate completions given the context: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen2-7B") model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen2-7B", trust_remote_code=True, revision="main") text = "def hello_world():" input_ids = tokenizer(text, return_tensors="pt").input_ids generated_ids = model.generate(input_ids, max_length=128) print(tokenizer.decode(generated_ids[0], skip_special_tokens=True)) ``` ### Infill sampling For **infill** sampling, we introduce three new special token types: * `<mask_N>`: N-th span to be masked. In practice, use `<mask_1>` to where you want to sample infill. * `<sep>`: Separator token between the suffix and the infilled sample. See below. * `<eom>`: "End-Of-Mask" token that model will output at the end of infilling. You may use this token to truncate the output. For example, if we want to generate infill for the following cursor position of a function: ```python def hello_world(): | return name ``` we construct an input to the model by 1. Inserting `<mask_1>` token in place of cursor position 2. Append `<sep>` token to indicate the boundary 3. Insert another `<mask_1>` to indicate which mask we want to infill. The final snippet looks as follows: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen2-7B") model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen2-7B", trust_remote_code=True, revision="main") def format(prefix, suffix): return prefix + "<mask_1>" + suffix + "<|endoftext|>" + "<sep>" + "<mask_1>" prefix = "def hello_world():\n " suffix = " return name" text = format(prefix, suffix) input_ids = tokenizer(text, return_tensors="pt").input_ids generated_ids = model.generate(input_ids, max_length=128) print(tokenizer.decode(generated_ids[0], skip_special_tokens=False)[len(text):]) ``` You might want to truncate the model output with `<eom>`. ## Training data This checkpoint is trained on the stricter permissive subset of [the deduplicated version of the Stack dataset (v1.1)](https://huggingface.co/datasets/bigcode/the-stack-dedup). Supported languages (and frameworks) are as follows: `c`, `c++`, `c-sharp`, `dart`, `go`, `java`, `javascript`, `kotlin`, `lua`, `php`, `python`, `ruby`, `rust`, `scala`, `shell`, `sql`, `swift`, `typescript`, `vue`. ## Training procedure CodeGen2 was trained using cross-entropy loss to maximize the likelihood of sequential inputs. The input sequences are formatted in two ways: (1) causal language modeling and (2) file-level span corruption. Please refer to the paper for more details. ## Evaluation results We evaluate our models on HumanEval and HumanEval-Infill. 
Please refer to the [paper](https://arxiv.org/abs/2305.02309) for more details. ## Intended use and limitations As an autoregressive language model, CodeGen2 is capable of extracting features from given natural language and programming language texts, and calculating the likelihood of them. However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well. ## BibTeX entry and citation info ```bibtex @article{Nijkamp2023codegen2, title={CodeGen2: Lessons for Training LLMs on Programming and Natural Languages}, author={Nijkamp, Erik and Hayashi, Hiroaki and Xiong, Caiming and Savarese, Silvio and Zhou, Yingbo}, journal={arXiv preprint}, year={2023} } ```
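The note above about truncating the output at `<eom>` can be made concrete. A minimal sketch, reusing `tokenizer`, `generated_ids`, `text`, `prefix`, and `suffix` from the infill example:

```python
# Decode without skipping special tokens so <eom> survives, drop the prompt,
# then keep only the infill generated for <mask_1>.
completion = tokenizer.decode(generated_ids[0], skip_special_tokens=False)[len(text):]
if "<eom>" in completion:
    completion = completion.split("<eom>")[0]
print(prefix + completion + suffix)  # the reassembled function
```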
4,421
[ [ -0.0170135498046875, -0.04962158203125, 0.0017538070678710938, 0.024505615234375, -0.00690460205078125, 0.0103759765625, -0.01177978515625, -0.04022216796875, -0.0153961181640625, 0.033294677734375, -0.04156494140625, -0.0257720947265625, -0.035675048828125, ...
natsusakiyomi/AnzuMix
2023-11-06T17:11:39.000Z
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "ja", "en", "license:creativeml-openrail-m", "has_space", "region:us" ]
text-to-image
natsusakiyomi
null
null
natsusakiyomi/AnzuMix
20
706
diffusers
2023-07-30T13:10:10
--- license: creativeml-openrail-m language: - ja - en pipeline_tag: text-to-image tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image library_name: diffusers --- <div class="flex justify-center"> <div class="container p-0 w-100"> <img class="mt-0 object-cover rounded-t-lg w-100" style="height: 320px;" src="https://pbs.twimg.com/media/F2SsrV1boAE2a05?format=jpg&name=large" width="100%"/> <div class="flex px-4"> <div class="flex-auto"> <h1 class="mb-2 text-3xl font-bold leading-tight" style="color: rgb(252, 238, 235/var(--tw-text-opacity));"> AnzuMixSeries </h1> <p class="mb-4 text-base text-neutral-600 dark:text-neutral-200"> VAEの内臓はないぞ!と言わせないぞ!!!! </p> </div> <div> <a href="https://twitter.com/min__san" class="mb-2 inline-block rounded px-6 py-2.5 text-white shadow-md" style="background-color: #1da1f2"> <svg xmlns="http://www.w3.org/2000/svg" class="h-3.5 w-3.5" fill="currentColor" viewBox="0 0 24 24"> <path d="M24 4.557c-.883.392-1.832.656-2.828.775 1.017-.609 1.798-1.574 2.165-2.724-.951.564-2.005.974-3.127 1.195-.897-.957-2.178-1.555-3.594-1.555-3.179 0-5.515 2.966-4.797 6.045-4.091-.205-7.719-2.165-10.148-5.144-1.29 2.213-.669 5.108 1.523 6.574-.806-.026-1.566-.247-2.229-.616-.054 2.281 1.581 4.415 3.949 4.89-.693.188-1.452.232-2.224.084.626 1.956 2.444 3.379 4.6 3.419-2.07 1.623-4.678 2.348-7.29 2.04 2.179 1.397 4.768 2.212 7.548 2.212 9.142 0 14.307-7.721 13.995-14.646.962-.695 1.797-1.562 2.457-2.549z" /> </svg> </a> </div> </div> </div> </div> <h4>📄 ライセンス / License</h4> <div class="px-2"> <table class="table-fixed border mt-0 text-xs"> <tbody> <tr> <td class="px-4 text-base" colspan="2"> <a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license"> 修正 CreativeML OpenRAIL-M ライセンス / Modified CreativeML OpenRAIL-M license </a> </td> </tr> <tr> <td class="align-middle px-2 w-8"> <span class="text-green-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" /> </svg> </span> </td> <td> このモデルのクレジットを入れずに使用する<br> Use the model without crediting the creator </td> </tr> <tr> <td class="align-middle px-2 w-8"> <span class="text-green-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" /> </svg> </span> </td> <td> このモデルで生成した画像を商用利用する<br> Sell images they generate </td> </tr> <tr class="bg-danger-100"> <td class="align-middle px-2 w-8"> <span class="text-red-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" /> </svg> </span> </td> <td> このモデルを商用の画像生成サービスで利用する</br> Run on services that generate images for money </td> </tr> <tr> <td class="align-middle px-2 w-8"> <span class="text-green-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" /> </svg> </span> </td> <td> このモデルを使用したマージモデルを共有する<br> Share merges using this model </td> </tr> <tr class="bg-danger-100"> <td class="align-middle px-2 w-8"> <span class="text-red-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" 
stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" /> </svg> </span> </td> <td> このモデル、またはこのモデルをマージしたモデルを販売する</br> Sell this model or merges using this model </td> </tr> <tr class="bg-danger-100"> <td class="align-middle px-2 w-8"> <span class="text-red-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" /> </svg> </span> </td> <td> このモデルをマージしたモデルに異なる権限を設定する</br> Have different permissions when sharing merges </td> </tr> </tbody> </table> </div> </div> <h4>🖼️ 例 / Examples(※他の人が生成した物を表示している場合は本人の許諾を得て表示しています)</h4> <div class="container mx-auto px-2"> <div class="flex flex-wrap min-w-min items-baseline"> <div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;"> <div class="flex-1"> <img alt="gallery" class="block h-full w-full rounded-t-lg object-contain object-center" src="https://pbs.twimg.com/media/F2X1PnKWIAAr-sw?format=jpg&name=large" loading="lazy" /> </div> <div class="w-full"> <pre class="w-full" style="white-space: pre-line;"> ぽょん/Poyonさん作 <a href="https://twitter.com/Poyon_NA" target="_blank">ぽょんさんのTwitter</a> </pre> </div> </div> <div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;"> <div class="w-full"> <img alt="gallery" class="block h-full w-full rounded-t-lg object-contain object-center" src="https://pbs.twimg.com/media/F2YVBTSakAA39-9?format=jpg&name=large" loading="lazy" /> </div> <div class="w-full"> <pre class="w-full" style="white-space: pre-line;"> 奈津野さん作 <a href="https://twitter.com/natsuno0911" target="_blank">奈津野さんTwitter</a> </pre> </div> </div> <div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;"> <div class="w-full"> <img alt="gallery" class="block h-full w-full rounded-t-lg object-contain object-center" src="https://pbs.twimg.com/media/F2cv4xgaMAAgJEk?format=jpg&name=large" loading="lazy" /> </div> <div class="w-full"> <pre class="w-full" style="white-space: pre-line;"> み ゆ きさん作 <a href="https://twitter.com/Miyuki_Symphony" target="_blank">み ゆ きさんのTwitter</a> </pre> </div> </div> <div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;"> <div class="w-full"> <img alt="gallery" class="block h-full w-full rounded-t-lg object-contain object-center" src="https://pbs.twimg.com/media/F2nQT8TaAAAZEqj?format=webp&name=large" loading="lazy" /> </div> <div class="w-full"> <pre class="w-full" style="white-space: pre-line;"> 水無月麻耶さん作 <a href="https://twitter.com/maya_minaduki" target="_blank">水無月麻耶さんのTwitter</a> </pre> </div> </div> <div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;"> <div class="w-full"> <img alt="gallery" class="block h-full w-full rounded-t-lg object-contain object-center" src="https://pbs.twimg.com/media/F2mMMQzbsAAdJ7N?format=jpg&name=medium" loading="lazy" /> </div> <div class="w-full"> <pre class="w-full" style="white-space: pre-line;"> おるとさん作 <a href="https://twitter.com/Gameillust_AI" target="_blank">おるとさんのTwitter</a> </pre> </div> </div> <div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;"> <div class="w-full"> <img alt="gallery" class="block h-full w-full rounded-t-lg object-contain object-center" src="https://pbs.twimg.com/media/F2o3wvMbgAE_nIp?format=jpg&name=4096x4096" loading="lazy" /> </div> <div class="w-full"> <pre class="w-full" style="white-space: pre-line;"> ぺす/Vulpesさん作 
<a href="https://twitter.com/Xperia_1V" target="_blank">ぺす/VulpesのTwitter</a> </pre> </div> </div> <div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;"> <div class="w-full"> <img alt="gallery" class="block h-full w-full rounded-t-lg object-contain object-center" src="https://pbs.twimg.com/media/F2miwlhbMAAYd4A?format=jpg&name=large" loading="lazy" /> </div> <div class="w-full"> <pre class="w-full" style="white-space: pre-line;"> animel7316さん作 <a href="https://twitter.com/animel7316" target="_blank">animel7316さんのTwitter</a> </pre> </div> </div> <div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;"> <div class="flex-1"> <img alt="gallery" class="block h-full w-full rounded-t-lg object-contain object-center" src="https://pbs.twimg.com/media/F2a7NqpaMAANR4l?format=jpg&name=large" loading="lazy" /> </div> <div class="w-full"> <pre class="w-full" style="white-space: pre-line;"> </pre> </div> </div> <div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;"> <div class="w-full"> <img alt="gallery" class="block h-full w-full rounded-t-lg object-contain object-center" src="https://pbs.twimg.com/media/F2WHeKgaEAErMj9?format=jpg&name=large" loading="lazy" /> </div> <div class="w-full"> <pre class="w-full" style="white-space: pre-line;"> </pre> </div> </div> </div> </div> </div>
10,956
[ [ -0.0517578125, -0.040740966796875, 0.01483154296875, 0.01275634765625, -0.026641845703125, 0.0234832763671875, 0.008636474609375, -0.04486083984375, 0.0452880859375, 0.0016994476318359375, -0.0859375, -0.046600341796875, -0.03350830078125, 0.017913818359375,...
tifa-benchmark/llama2_tifa_question_generation
2023-08-24T21:28:03.000Z
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "llama2", "text-to-image", "en", "dataset:TIFA", "arxiv:2303.11897", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
tifa-benchmark
null
null
tifa-benchmark/llama2_tifa_question_generation
5
706
transformers
2023-08-16T00:41:50
--- license: apache-2.0 inference: true widget: - text: "<s>[INST] <<SYS>>\nGiven an image description, generate one or two multiple-choice questions that verifies if the image description is correct.\nClassify each concept into a type (object, human, animal, food, activity, attribute, counting, color, material, spatial, location, shape, other), and then generate a question for each type.\n\n<</SYS>>\n\nDescription: a blue rabbit and a red plane [/INST] Entities:" pipeline_tag: text-generation tags: - text-generation-inference - llama2 - text-to-image datasets: - TIFA language: - en --- Project page: <https://tifa-benchmark.github.io/> This is the text parsing and question generation model for the ICCV 2023 paper [TIFA: Accurate and Interpretable Text-to-Image Faithfulness Evaluation with Question Answering](https://arxiv.org/abs/2303.11897) We introduce TIFA (Text-to-Image Faithfulness evaluation with question Answering), an automatic evaluation metric that measures the faithfulness of a generated image to its text input via visual question answering (VQA). Specifically, given a text input, we automatically generate several question-answer pairs using a language model. We calculate image faithfulness by checking whether existing VQA models can answer these questions using the generated image. Specifically, this fine-tuned LLaMA 2 model is the substitute for the GPT-3 model in the paper. It can parse an arbitrary prompt into visual entities, attributes, relations, etc. and generate question-answer tuples for each of them. See examples below. # QuickStart All code is from <https://github.com/Yushi-Hu/tifa>. Clone this repo to easily use this model together with other modules (e.g. VQA) provided in TIFA. Please follow the prompt format, which will give the best performance. ```python import torch import transformers # prepare the LLaMA 2 model model_name = "tifa-benchmark/llama2_tifa_question_generation" pipeline = transformers.pipeline( "text-generation", model=model_name, torch_dtype=torch.float16, device_map="auto", ) # formatting prompt following LLaMA 2 style def create_qg_prompt(caption): INTRO_BLURB = "Given an image description, generate one or two multiple-choice questions that verifies if the image description is correct.\nClassify each concept into a type (object, human, animal, food, activity, attribute, counting, color, material, spatial, location, shape, other), and then generate a question for each type.\n" formated_prompt = f"<s>[INST] <<SYS>>\n{INTRO_BLURB}\n<</SYS>>\n\n" formated_prompt += f"Description: {caption} [/INST] Entities:" return formated_prompt test_caption = "a blue rabbit and a red plane" # create prompt prompt = create_qg_prompt(test_caption) # text completion sequences = pipeline( prompt, do_sample=False, num_beams=5, num_return_sequences=1, max_length=512) output = sequences[0]['generated_text'][len(prompt):] output = output.split('\n\n')[0] # output print(output) #### Expected output ### # rabbit, plane # Activites: # Colors: blue, red # Counting: # Other attributes: # About rabbit (animal): # Q: is this a rabbit? # Choices: yes, no # A: yes # About rabbit (animal): # Q: what animal is in the picture? # Choices: rabbit, dog, cat, fish # A: rabbit # About plane (object): # Q: is this a plane? # Choices: yes, no # A: yes # About plane (object): # Q: what type of vehicle is this? # Choices: plane, car, motorcycle, bus # A: plane # About blue (color): # Q: is the rabbit blue? # Choices: yes, no # A: yes # About blue (color): # Q: what color is the rabbit? 
# Choices: blue, red, yellow, green # A: blue # About red (color): # Q: is the plane red? # Choices: yes, no # A: yes # About red (color): # Q: what color is the plane? # Choices: red, blue, yellow, green # A: red ``` # Use this LM under tifascore package tifascore provides extra functions to parse this output etc. First install tifascore according to <https://github.com/Yushi-Hu/tifa>. Then the usage is below ```python from tifascore import get_llama2_pipeline, get_llama2_question_and_answers pipeline = get_llama2_pipeline("tifa-benchmark/llama2_tifa_question_generation") print(get_llama2_question_and_answers(pipeline, "a blue rabbit and a red plane")) #### Expected output ### # [{'caption': 'a blue rabbit and a red plane', 'element': 'rabbit', 'question': 'what animal is in the picture?', 'choices': ['rabbit', 'dog', 'cat', 'fish'], 'answer': 'rabbit', 'element_type': 'animal/human'}, {'caption': 'a blue rabbit and a red plane', 'element': 'plane', 'question': 'is this a plane?', 'choices': ['yes', 'no'], 'answer': 'yes', 'element_type': 'object'}, {'caption': 'a blue rabbit and a red plane', 'element': 'plane', 'question': 'what type of vehicle is this?', 'choices': ['plane', 'car', 'motorcycle', 'bus'], 'answer': 'plane', 'element_type': 'object'}, {'caption': 'a blue rabbit and a red plane', 'element': 'blue', 'question': 'is the rabbit blue?', 'choices': ['yes', 'no'], 'answer': 'yes', 'element_type': 'color'}, {'caption': 'a blue rabbit and a red plane', 'element': 'blue', 'question': 'what color is the rabbit?', 'choices': ['blue', 'red', 'yellow', 'green'], 'answer': 'blue', 'element_type': 'color'}, {'caption': 'a blue rabbit and a red plane', 'element': 'red', 'question': 'is the plane red?', 'choices': ['yes', 'no'], 'answer': 'yes', 'element_type': 'color'}, {'caption': 'a blue rabbit and a red plane', 'element': 'red', 'question': 'what color is the plane?', 'choices': ['red', 'blue', 'yellow', 'green'], 'answer': 'red', 'element_type': 'color'}] ``` ## Bibtex ``` @article{hu2023tifa, title={Tifa: Accurate and interpretable text-to-image faithfulness evaluation with question answering}, author={Hu, Yushi and Liu, Benlin and Kasai, Jungo and Wang, Yizhong and Ostendorf, Mari and Krishna, Ranjay and Smith, Noah A}, journal={arXiv preprint arXiv:2303.11897}, year={2023} } ```
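The faithfulness score itself is just the fraction of these generated questions that a VQA model answers correctly on the generated image. A minimal sketch of that loop, reusing `pipeline` and `get_llama2_question_and_answers` from the snippet above; `answer_question` and the image path are hypothetical placeholders, and the maintained implementation lives in `tifascore`:

```python
def answer_question(image_path, question, choices):
    """Hypothetical VQA helper: pick one of `choices` for (image, question).
    Swap in a real VQA model, e.g. one of those bundled with tifascore."""
    raise NotImplementedError

def tifa_style_score(image_path, qa_pairs):
    """Fraction of generated questions answered correctly for this image."""
    correct = sum(
        answer_question(image_path, qa["question"], qa["choices"]) == qa["answer"]
        for qa in qa_pairs
    )
    return correct / len(qa_pairs)

qa_pairs = get_llama2_question_and_answers(pipeline, "a blue rabbit and a red plane")
print(tifa_style_score("generated_image.png", qa_pairs))  # placeholder image path
```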
5,953
[ [ -0.039764404296875, -0.06658935546875, 0.041473388671875, 0.0028533935546875, -0.00788116455078125, 0.0032444000244140625, 0.0294036865234375, -0.01812744140625, 0.002117156982421875, 0.029754638671875, -0.047515869140625, -0.031829833984375, -0.0204315185546875...
TheBloke/dolphin-2.0-mistral-7B-GPTQ
2023-10-03T19:58:06.000Z
[ "transformers", "safetensors", "mistral", "text-generation", "en", "dataset:ehartford/dolphin", "dataset:jondurbin/airoboros-2.2.1", "license:apache-2.0", "text-generation-inference", "region:us" ]
text-generation
TheBloke
null
null
TheBloke/dolphin-2.0-mistral-7B-GPTQ
8
706
transformers
2023-10-02T22:27:42
--- base_model: ehartford/dolphin-2.0-mistral-7b datasets: - ehartford/dolphin - jondurbin/airoboros-2.2.1 inference: false language: - en license: apache-2.0 model_creator: Eric Hartford model_name: Dolphin 2.0 Mistral 7B model_type: mistral prompt_template: '<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Dolphin 2.0 Mistral 7B - GPTQ - Model creator: [Eric Hartford](https://huggingface.co/ehartford) - Original model: [Dolphin 2.0 Mistral 7B](https://huggingface.co/ehartford/dolphin-2.0-mistral-7b) <!-- description start --> ## Description This repo contains GPTQ model files for [Eric Hartford's Dolphin 2.0 Mistral 7B](https://huggingface.co/ehartford/dolphin-2.0-mistral-7b). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/dolphin-2.0-mistral-7B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/dolphin-2.0-mistral-7B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/dolphin-2.0-mistral-7B-GGUF) * [Eric Hartford's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/dolphin-2.0-mistral-7b) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: ChatML ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` <!-- prompt-template end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. 
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/dolphin-2.0-mistral-7B-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.16 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/dolphin-2.0-mistral-7B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.57 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/dolphin-2.0-mistral-7B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.52 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/dolphin-2.0-mistral-7B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.68 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | | [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/dolphin-2.0-mistral-7B-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 8.17 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. | | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/dolphin-2.0-mistral-7B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.29 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. 
| <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/dolphin-2.0-mistral-7B-GPTQ` in the "Download model" box. To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/dolphin-2.0-mistral-7B-GPTQ:gptq-4bit-32g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `dolphin-2.0-mistral-7B-GPTQ`: ```shell mkdir dolphin-2.0-mistral-7B-GPTQ huggingface-cli download TheBloke/dolphin-2.0-mistral-7B-GPTQ --local-dir dolphin-2.0-mistral-7B-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir dolphin-2.0-mistral-7B-GPTQ huggingface-cli download TheBloke/dolphin-2.0-mistral-7B-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir dolphin-2.0-mistral-7B-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir dolphin-2.0-mistral-7B-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/dolphin-2.0-mistral-7B-GPTQ --local-dir dolphin-2.0-mistral-7B-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/dolphin-2.0-mistral-7B-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) 
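If you prefer to script downloads rather than shell out to the CLI, a hedged Python equivalent of the commands above, assuming `huggingface_hub` 0.17 or later:

```python
from huggingface_hub import snapshot_download

# Download one quant branch; use revision="main" for the default 4-bit/128g files.
snapshot_download(
    repo_id="TheBloke/dolphin-2.0-mistral-7B-GPTQ",
    revision="gptq-4bit-32g-actorder_True",
    local_dir="dolphin-2.0-mistral-7B-GPTQ",
    local_dir_use_symlinks=False,
)
```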
<!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui). Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/dolphin-2.0-mistral-7B-GPTQ`. - To download from a specific branch, enter for example `TheBloke/dolphin-2.0-mistral-7B-GPTQ:gptq-4bit-32g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `dolphin-2.0-mistral-7B-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/dolphin-2.0-mistral-7B-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" system_message = "You are a helpful assistant." prompt_template=f'''<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt_template, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: {response}") ``` <!-- README_GPTQ.md-use-from-tgi end --> <!-- README_GPTQ.md-use-from-python start --> ## How to use this GPTQ model from Python code ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install transformers optimum pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7 ``` If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.4.2 pip3 install . 
``` ### You can then use the following code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/dolphin-2.0-mistral-7B-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-32g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" system_message = "You are a helpful assistant." prompt_template=f'''<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI). [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility. [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models. <!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Eric Hartford's Dolphin 2.0 Mistral 7B Dolphin 2.0 🐬 https://erichartford.com/dolphin Dolphin-2.0-mistral-7b's training was sponsored by [a16z](https://a16z.com/supporting-the-open-source-ai-community/). This model is based on mistralAI, so it is suitable for commercial or non-commercial use. This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant to any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models You are responsible for any content you create using this model. Enjoy responsibly. ## Dataset This dataset is Dolphin, an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/) I modified the dataset for uncensoring, deduping, cleaning, and quality. I added Jon Durbin's excellent Airoboros dataset to increase creativity. ## Training It took 48 hours to train 10 epochs on 4x A100s. Prompt format: This model (and all my future releases) use [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) prompt format. ``` <|im_start|>system You are Dolphin, a helpful AI assistant.<|im_end|> <|im_start|>user {prompt}<|im_end|> ``` Example: ``` <|im_start|>system you are an expert dolphin trainer<|im_end|> <|im_start|>user What is the best way to train a dolphin to obey me? 
Please answer step by step.<|im_end|> ``` ## Gratitude - This model was made possible by the generous sponsorship of a16z. - Thank you to Microsoft for authoring the Orca paper and inspiring this work. - Special thanks to WingLian, and TheBloke for helpful advice - Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way. ## Example Output ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/xnz5M1lYd4oGVATSDRkQ-.png) [Buy me a coffee](https://www.buymeacoffee.com/ehartford)
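Because every snippet in this card hard-codes the ChatML template, a small helper sketch for assembling it may be convenient; the default system message below is taken from the example above:

```python
def chatml_prompt(user_message, system_message="You are Dolphin, a helpful AI assistant."):
    """Build a single-turn ChatML prompt in the format this model expects."""
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt("What is the best way to train a dolphin to obey me?"))
```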
21,057
[ [ -0.052642822265625, -0.054351806640625, 0.007465362548828125, 0.0167236328125, -0.0242767333984375, -0.0191497802734375, 0.008087158203125, -0.046600341796875, 0.0235595703125, 0.03369140625, -0.047149658203125, -0.0284576416015625, -0.0269622802734375, 0.00...
Celestinian/Kalista-Diffusion-Legacy
2023-04-20T09:05:42.000Z
[ "diffusers", "art", "stable-diffusion", "Automatic1111", ".ckpt", "stable-diffusion-diffusers", "text-to-image", "kalista", "license:creativeml-openrail-m", "region:us" ]
text-to-image
Celestinian
null
null
Celestinian/Kalista-Diffusion-Legacy
2
705
diffusers
2023-01-19T06:14:55
--- license: creativeml-openrail-m tags: - art - stable-diffusion - Automatic1111 - .ckpt - stable-diffusion-diffusers - text-to-image - diffusers - kalista inference: false --- Kalista was a state-of-the-art realism model that is extremely consistent in generating high quality outputs. This model is now archived and this repository will not be maintained. ![kalista](https://huggingface.co/skylersterling/kalista/resolve/main/images/4.png) ![Woman](https://huggingface.co/skylersterling/kalista/resolve/main/images/6.png) ![Xample](https://huggingface.co/skylersterling/kalista/resolve/main/images/3.png) ![Xample2](https://huggingface.co/skylersterling/kalista/resolve/main/images/2.png) All the example images provided are 100% raw and were generated in succession from one another. Kalista was meant to be a quick experiment on 'simulating' additional training steps by finding the mean values of multiple different checkpoints that were trained on separate datasets. More photo examples can be found in the images folder of this repository. ## License This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content 2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
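The "mean values of multiple checkpoints" experiment described above boils down to averaging state dicts. A rough sketch of a 50/50 merge, assuming two Stable Diffusion `.ckpt` files with matching keys (both file names are placeholders):

```python
import torch

# Placeholder paths; the checkpoints must share the same architecture and key names.
sd_a = torch.load("checkpoint_a.ckpt", map_location="cpu")["state_dict"]
sd_b = torch.load("checkpoint_b.ckpt", map_location="cpu")["state_dict"]

# Element-wise mean of every shared tensor, keeping the original dtype.
merged = {
    k: ((sd_a[k].float() + sd_b[k].float()) / 2).to(sd_a[k].dtype)
    for k in sd_a.keys() & sd_b.keys()
}

torch.save({"state_dict": merged}, "kalista_style_merge.ckpt")
```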
1,919
[ [ -0.01739501953125, -0.0261077880859375, 0.03875732421875, 0.0170135498046875, -0.03594970703125, -0.022857666015625, 0.0026264190673828125, -0.02618408203125, 0.01332855224609375, 0.041259765625, -0.05194091796875, -0.0396728515625, -0.043975830078125, -0.00...
iszotic/Yaoi-diffusion
2023-04-16T15:49:55.000Z
[ "diffusers", "license:openrail", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
iszotic
null
null
iszotic/Yaoi-diffusion
2
705
diffusers
2023-01-23T19:52:04
--- license: openrail library_name: diffusers --- # Yaoi diffusion @768: Stable diffusion model of yiffy-e18@0.5 and NAI@0.5 finetuned on 45000+ images of yaoi/bara/shota/furry and real life males, tagged with blip, deepdanbooru (used both e621 and wd14-vit) @768: Current version: YaoiDiffusionV1.ckpt more details and terms of use at: https://civitai.com/models/35801/yaoi-diffusion ![card.png](https://s3.amazonaws.com/moonup/production/uploads/6303f49b1dd5d3c624856a04/aKVNaPLysp8I0QyheTBGj.png)
506
[ [ -0.048248291015625, -0.050323486328125, 0.0192108154296875, 0.035186767578125, -0.019805908203125, -0.00717926025390625, 0.01299285888671875, -0.034698486328125, 0.047882080078125, 0.013580322265625, -0.036376953125, -0.01959228515625, -0.025360107421875, -0...
Trelis/Llama-2-7b-chat-hf-hosted-inference-8bit
2023-09-07T14:32:21.000Z
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "facebook", "meta", "llama-2", "hosted inference", "8 bit", "8bit", "8-bit", "en", "arxiv:2307.09288", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
Trelis
null
null
Trelis/Llama-2-7b-chat-hf-hosted-inference-8bit
4
705
transformers
2023-08-12T17:08:46
--- language: - en pipeline_tag: text-generation inference: true tags: - facebook - meta - pytorch - llama - llama-2 - hosted inference - 8 bit - 8bit - 8-bit --- # Llama 2 - hosted inference This is simply an 8-bit version of the Llama-2-7B model. - 8-bit quantization allows the model to be below 10 GB - This allows for hosted inference of the model on the model's home page - Note that inference may be slow unless you have a HuggingFace Pro plan. If you want to run inference yourself (e.g. in a Colab notebook) you can try: ``` !pip install -q -U git+https://github.com/huggingface/accelerate.git !pip install -q -U bitsandbytes !pip install -q -U git+https://github.com/huggingface/transformers.git model_id = 'Trelis/Llama-2-7b-chat-hf-hosted-inference-8bit' import transformers from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig, pipeline, TextStreamer model = AutoModelForCausalLM.from_pretrained(model_id, device_map='auto') tokenizer = AutoTokenizer.from_pretrained(model_id) #Llama 2 Inference def stream(user_prompt): system_prompt = 'You are a helpful assistant that provides accurate and concise responses' B_INST, E_INST = "[INST]", "[/INST]" B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n" prompt = f"{B_INST} {B_SYS}{system_prompt.strip()}{E_SYS}{user_prompt.strip()} {E_INST}\n\n" inputs = tokenizer([prompt], return_tensors="pt").to("cuda:0") streamer = TextStreamer(tokenizer) # Despite returning the usual output, the streamer will also print the generated text to stdout. _ = model.generate(**inputs, streamer=streamer, max_new_tokens=500) stream('Count to ten') ``` Below follows information on the original Llama 2 model... ~ # **Llama 2** Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom. ## Model Details *Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.* Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. **Model Developers** Meta **Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations. **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. 
||Training Data|Params|Content Length|GQA|Tokens|LR| |---|---|---|---|---|---|---| |Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>| *Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models - 70B -- use Grouped-Query Attention (GQA) for improved inference scalability. **Model Dates** Llama 2 was trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) **Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](arxiv.org/abs/2307.09288) ## Intended Use **Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212). **Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws).Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program. ||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)| |---|---|---|---| |Llama 2 7B|184320|400|31.22| |Llama 2 13B|368640|400|62.44| |Llama 2 70B|1720320|400|291.42| |Total|3311616||539.00| **CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. 
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023. ## Evaluation Results In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks.For all the evaluations, we use our internal evaluations library. |Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval| |---|---|---|---|---|---|---|---|---|---| |Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9| |Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9| |Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7| |Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6| |Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3| |Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1| |Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**| **Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1. |||TruthfulQA|Toxigen| |---|---|---|---| |Llama 1|7B|27.42|23.00| |Llama 1|13B|41.74|23.08| |Llama 1|33B|44.19|22.57| |Llama 1|65B|48.71|21.77| |Llama 2|7B|33.29|**21.25**| |Llama 2|13B|41.86|26.10| |Llama 2|70B|**50.18**|24.60| **Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better). |||TruthfulQA|Toxigen| |---|---|---|---| |Llama-2-Chat|7B|57.04|**0.00**| |Llama-2-Chat|13B|62.18|**0.00**| |Llama-2-Chat|70B|**64.14**|0.01| **Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above. ## Ethical Considerations and Limitations Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. 
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide) ## Reporting Issues Please report any software “bug,” or other problems with the models through one of the following means: - Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) - Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Llama Model Index |Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf| |---|---|---|---|---| |7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)| |13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)| |70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
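The snippet near the top of this card imports `BitsAndBytesConfig` but never uses it; if you want to request 8-bit loading explicitly, a minimal sketch (it assumes a CUDA GPU with `bitsandbytes` installed, and uses a simplified prompt rather than the full `[INST] <<SYS>>` template shown above):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Trelis/Llama-2-7b-chat-hf-hosted-inference-8bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

inputs = tokenizer("[INST] Count to ten [/INST]", return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```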
11,149
[ [ -0.0191497802734375, -0.06365966796875, 0.0272216796875, 0.0198822021484375, -0.0268707275390625, 0.012298583984375, -0.00688934326171875, -0.0531005859375, 0.01788330078125, 0.0284576416015625, -0.054534912109375, -0.0300445556640625, -0.050689697265625, 0....
facebook/convnext-base-384-22k-1k
2022-03-02T19:03:18.000Z
[ "transformers", "pytorch", "tf", "convnext", "image-classification", "vision", "dataset:imagenet-21k", "dataset:imagenet-1k", "arxiv:2201.03545", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
facebook
null
null
facebook/convnext-base-384-22k-1k
1
704
transformers
2022-03-02T23:29:05
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-21k - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # ConvNeXT (base-sized model) ConvNeXT model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 384x384. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt). Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import ConvNextFeatureExtractor, ConvNextForImageClassification import torch from datasets import load_dataset dataset = load_dataset("huggingface/cats-image") image = dataset["test"]["image"][0] feature_extractor = ConvNextFeatureExtractor.from_pretrained("facebook/convnext-base-384-22k-1k") model = ConvNextForImageClassification.from_pretrained("facebook/convnext-base-384-22k-1k") inputs = feature_extractor(image, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits # model predicts one of the 1000 ImageNet classes predicted_label = logits.argmax(-1).item() print(model.config.id2label[predicted_label]), ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2201-03545, author = {Zhuang Liu and Hanzi Mao and Chao{-}Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {CoRR}, volume = {abs/2201.03545}, year = {2022}, url = {https://arxiv.org/abs/2201.03545}, eprinttype = {arXiv}, eprint = {2201.03545}, timestamp = {Thu, 20 Jan 2022 14:21:35 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
3,140
[ [ -0.052490234375, -0.036224365234375, -0.0135955810546875, 0.009796142578125, -0.02545166015625, -0.01983642578125, -0.00710296630859375, -0.056976318359375, 0.03277587890625, 0.035400390625, -0.048126220703125, -0.0220947265625, -0.038665771484375, -0.003383...
mideind/IceBERT
2023-04-17T07:55:54.000Z
[ "transformers", "pytorch", "safetensors", "roberta", "fill-mask", "icelandic", "masked-lm", "is", "arxiv:2201.05601", "license:agpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
mideind
null
null
mideind/IceBERT
3
704
transformers
2022-03-02T23:29:05
--- language: is widget: - text: Má bjóða þér <mask> í kvöld? - text: Forseti <mask> er ágæt. - text: Súpan var <mask> á bragðið. tags: - roberta - icelandic - masked-lm - pytorch license: agpl-3.0 --- # IceBERT This model was trained with fairseq using the RoBERTa-base architecture. It is one of many models we have trained for Icelandic, see the paper referenced below for further details. The training data used is shown in the table below. | Dataset | Size | Tokens | |------------------------------------------------------|---------|--------| | Icelandic Gigaword Corpus v20.05 (IGC) | 8.2 GB | 1,388M | | Icelandic Common Crawl Corpus (IC3) | 4.9 GB | 824M | | Greynir News articles | 456 MB | 76M | | Icelandic Sagas | 9 MB | 1.7M | | Open Icelandic e-books (Rafbókavefurinn) | 14 MB | 2.6M | | Data from the medical library of Landspitali | 33 MB | 5.2M | | Student theses from Icelandic universities (Skemman) | 2.2 GB | 367M | | Total | 15.8 GB | 2,664M | ## Citation The model is described in this paper [https://arxiv.org/abs/2201.05601](https://arxiv.org/abs/2201.05601). Please cite the paper if you make use of the model. ``` @inproceedings{snaebjarnarson-etal-2022-warm, title = "A Warm Start and a Clean Crawled Corpus - A Recipe for Good Language Models", author = "Sn{\ae}bjarnarson, V{\'e}steinn and S{\'\i}monarson, Haukur Barri and Ragnarsson, P{\'e}tur Orri and Ing{\'o}lfsd{\'o}ttir, Svanhv{\'\i}t Lilja and J{\'o}nsson, Haukur and Thorsteinsson, Vilhjalmur and Einarsson, Hafsteinn", booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference", month = jun, year = "2022", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://aclanthology.org/2022.lrec-1.464", pages = "4356--4366", abstract = "We train several language models for Icelandic, including IceBERT, that achieve state-of-the-art performance in a variety of downstream tasks, including part-of-speech tagging, named entity recognition, grammatical error detection and constituency parsing. To train the models we introduce a new corpus of Icelandic text, the Icelandic Common Crawl Corpus (IC3), a collection of high quality texts found online by targeting the Icelandic top-level-domain .is. Several other public data sources are also collected for a total of 16GB of Icelandic text. To enhance the evaluation of model performance and to raise the bar in baselines for Icelandic, we manually translate and adapt the WinoGrande commonsense reasoning dataset. Through these efforts we demonstrate that a properly cleaned crawled corpus is sufficient to achieve state-of-the-art results in NLP applications for low to medium resource languages, by comparison with models trained on a curated corpus. We further show that initializing models using existing multilingual models can lead to state-of-the-art results for some downstream tasks.", } ```
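The IceBERT card above documents the training data and citation but includes no usage snippet. As a minimal sketch that is not part of the original card, the checkpoint can be queried through the `transformers` fill-mask pipeline; the example sentence reuses one of the widget prompts from the card, and the printing loop is an illustrative choice.

```python
from transformers import pipeline

# Load the IceBERT checkpoint (RoBERTa architecture) as a fill-mask pipeline.
unmasker = pipeline("fill-mask", model="mideind/IceBERT")

# One of the widget examples from the card: "Má bjóða þér <mask> í kvöld?"
predictions = unmasker("Má bjóða þér <mask> í kvöld?")

# Each prediction contains the filled-in sequence, the predicted token, and a score.
for p in predictions:
    print(f"{p['sequence']}  (score={p['score']:.3f})")
```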
3,239
[ [ -0.0145263671875, -0.0208282470703125, 0.01198577880859375, -0.0021228790283203125, -0.01995849609375, -0.00873565673828125, -0.0167388916015625, -0.0214691162109375, -0.00249481201171875, 0.029144287109375, -0.01593017578125, -0.051788330078125, -0.051666259765...
classla/wav2vec2-xls-r-juznevesti-sr
2022-08-18T12:47:32.000Z
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "sr", "dataset:juznevesti-sr", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
classla
null
null
classla/wav2vec2-xls-r-juznevesti-sr
0
704
transformers
2022-08-18T08:38:24
--- language: sr datasets: - juznevesti-sr tags: - audio - automatic-speech-recognition widget: - example_title: Croatian example 1 src: https://huggingface.co/classla/wav2vec2-xls-r-parlaspeech-hr/raw/main/1800.m4a - example_title: Croatian example 2 src: https://huggingface.co/classla/wav2vec2-xls-r-parlaspeech-hr/raw/main/00020578b.flac.wav - example_title: Croatian example 3 src: https://huggingface.co/classla/wav2vec2-xls-r-parlaspeech-hr/raw/main/00020570a.flac.wav --- # wav2vec2-xls-r-juznevesti This model for Serbian ASR is based on the [facebook/wav2vec2-xls-r-300m model](https://huggingface.co/facebook/wav2vec2-xls-r-300m) and was fine-tuned with 58 hours of audio and transcripts from [Južne vesti](https://www.juznevesti.com/), programme '15 minuta'. For more info on the dataset creation see [this repo](https://github.com/clarinsi/parlaspeech/tree/main/juzne_vesti). ## Metrics Evaluation is performed on the dev and test portions of the JuzneVesti dataset | | dev | test | |:----|---------:|---------:| | WER | 0.295206 | 0.290094 | | CER | 0.140766 | 0.137642 | ## Usage in `transformers` Tested with `transformers==4.18.0`, `torch==1.11.0`, and `SoundFile==0.10.3.post1`. ```python from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC import soundfile as sf import torch import os device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") # load model and tokenizer processor = Wav2Vec2Processor.from_pretrained( "classla/wav2vec2-xls-r-juznevesti-sr") model = Wav2Vec2ForCTC.from_pretrained("classla/wav2vec2-xls-r-juznevesti-sr") # download the example wav files: os.system("wget https://huggingface.co/classla/wav2vec2-xls-r-parlaspeech-hr/raw/main/00020570a.flac.wav") # read the wav file speech, sample_rate = sf.read("00020570a.flac.wav") input_values = processor(speech, sampling_rate=sample_rate, return_tensors="pt").input_values.to(device) # remove the raw wav file os.system("rm 00020570a.flac.wav") # retrieve logits logits = model.to(device)(input_values).logits # take argmax and decode predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.decode(predicted_ids[0]) transcription # 'velik broj poslovnih subjekata posluje sa minosom velik deo' ``` ## Training hyperparameters In fine-tuning, the following arguments were used: | arg | value | |-------------------------------|-------| | `per_device_train_batch_size` | 16 | | `gradient_accumulation_steps` | 4 | | `num_train_epochs` | 20 | | `learning_rate` | 3e-4 | | `warmup_steps` | 500 |
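The fine-tuning arguments above are given only as a bare table. As a rough, hypothetical sketch, here is what those values look like when expressed as `transformers.TrainingArguments`; the card does not state which training script or framework was actually used, and `output_dir` plus every argument not listed in the table is an assumption.

```python
from transformers import TrainingArguments

# Sketch of the fine-tuning arguments listed in the card; any value not in the
# table (e.g. output_dir, fp16, evaluation strategy) is an assumption.
training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-juznevesti-sr",  # hypothetical output path
    per_device_train_batch_size=16,
    gradient_accumulation_steps=4,
    num_train_epochs=20,
    learning_rate=3e-4,
    warmup_steps=500,
)
print(training_args)
```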
2,645
[ [ -0.0311737060546875, -0.050079345703125, 0.006549835205078125, 0.02020263671875, -0.01129913330078125, -0.01129913330078125, -0.03375244140625, -0.03021240234375, 0.0024700164794921875, 0.0217437744140625, -0.06561279296875, -0.039886474609375, -0.04501342773437...
yiyanghkust/finbert-esg-9-categories
2022-10-17T00:34:01.000Z
[ "transformers", "pytorch", "bert", "text-classification", "financial-text-analysis", "esg", "environmental-social-corporate-governance", "en", "endpoints_compatible", "region:us" ]
text-classification
yiyanghkust
null
null
yiyanghkust/finbert-esg-9-categories
23
704
transformers
2022-10-14T01:16:21
--- language: "en" tags: - financial-text-analysis - esg - environmental-social-corporate-governance widget: - text: "For 2002, our total net emissions were approximately 60 million metric tons of CO2 equivalents for all businesses and operations we have financial interests in, based on its equity share in those businesses and operations. " --- ESG analysis can help investors determine a business' long-term sustainability and identify associated risks. **FinBERT-esg-9-categories** is a FinBERT model fine-tuned on about 14,000 manually annotated sentences from firms' ESG reports and annual reports. **finbert-esg-9-categories** classifies a text into nine fine-grained ESG topics: *Climate Change, Natural Capital, Pollution & Waste, Human Capital, Product Liability, Community Relations, Corporate Governance, Business Ethics & Values, and Non-ESG*. This model complements [**finbert-esg**](https://huggingface.co/yiyanghkust/finbert-esg) which classifies a text into four coarse-grained ESG themes (*E, S, G or None*). Detailed description of the nine fine-grained ESG topic definition, some examples for each topic, training sample, and the model’s performance can be found [**here**](https://www.allenhuang.org/uploads/2/6/5/5/26555246/esg_9-class_descriptions.pdf). **Input**: A text. **Output**: Climate Change, Natural Capital, Pollution & Waste, Human Capital, Product Liability, Community Relations, Corporate Governance, Business Ethics & Values, or Non-ESG. # How to use You can use this model with Transformers pipeline for fine-grained ESG 9 categories classification. ```python from transformers import BertTokenizer, BertForSequenceClassification, pipeline finbert = BertForSequenceClassification.from_pretrained('yiyanghkust/finbert-esg-9-categories',num_labels=9) tokenizer = BertTokenizer.from_pretrained('yiyanghkust/finbert-esg-9-categories') nlp = pipeline("text-classification", model=finbert, tokenizer=tokenizer) results = nlp('For 2002, our total net emissions were approximately 60 million metric tons of CO2 equivalents for all businesses and operations we have financial interests in, based on its equity share in those businesses and operations.') print(results) # [{'label': 'Climate Change', 'score': 0.9955655932426453}] ``` If you use the model in your academic work, please cite the following paper: Huang, Allen H., Hui Wang, and Yi Yang. "FinBERT: A Large Language Model for Extracting Information from Financial Text." *Contemporary Accounting Research* (2022).
2,534
[ [ -0.032135009765625, -0.06280517578125, 0.02294921875, -0.0013837814331054688, -0.0210113525390625, 0.007419586181640625, -0.009918212890625, -0.052093505859375, 0.0212860107421875, 0.055206298828125, -0.0350341796875, -0.049041748046875, -0.053436279296875, ...
artificialguybr/IconsMI-AppIconsModelforSD
2023-05-16T09:26:33.000Z
[ "diffusers", "text-to-image", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
artificialguybr
null
null
artificialguybr/IconsMI-AppIconsModelforSD
137
703
diffusers
2022-11-10T03:39:52
--- license: creativeml-openrail-m tags: - text-to-image --- To use this model you have to include the trigger word ''IconsMi'' in the prompt. From my tests the images look better with this prompt: highly detailed, trending on artstation, ios icon app, IconsMi For negative prompts I got better results with: out of frame, duplicate, watermark, signature, text, ugly, sketch, deformed, mutated, blurry, mutilated, ugly sketch Rather than naming the kind of app you want (e.g. news app, music app, sports app), describe what you want to see in the image, for example ''a reporter microphone''. The results are better that way; SD doesn't understand these abstractions yet. The model was trained for 7200 steps, with checkpoints saved every 700 steps after step 2000. In my tests the 2k checkpoint was the most creative and had the most variety of themes in its generations, while the 5500 checkpoint had the best image quality but did not know how to abstract and be creative. I included both checkpoints for you to test; both use the same prompt ''IconsMi''. A minimal usage sketch is shown below. You can support me on: Patreon: https://www.patreon.com/user?u=81570187 Ko-Fi: https://ko-fi.com/jvkape Buy me a coffee: https://www.buymeacoffee.com/JVKAPE All money will go towards the creation of new models.
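As referenced above, here is a minimal usage sketch that is not written by the original author: it loads the repository with the `diffusers` `StableDiffusionPipeline` (the repo is tagged as diffusers-compatible) and builds a prompt from the card's recommendations. Scheduler, seed, and which of the two included checkpoints ends up loaded are left to the repository defaults.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the model from the Hub; this assumes the repo hosts diffusers-format weights,
# as its diffusers:StableDiffusionPipeline tag suggests.
pipe = StableDiffusionPipeline.from_pretrained(
    "artificialguybr/IconsMI-AppIconsModelforSD",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Prompt built from the card's advice: trigger word "IconsMi" plus a concrete
# description instead of an abstract app category.
prompt = "highly detailed, trending on artstation, ios icon app, IconsMi, a reporter microphone"
negative_prompt = ("out of frame, duplicate, watermark, signature, text, ugly, sketch, "
                   "deformed, mutated, blurry, mutilated, ugly sketch")

image = pipe(prompt, negative_prompt=negative_prompt).images[0]
image.save("icon.png")
```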
1,278
[ [ -0.0548095703125, -0.0287017822265625, 0.0321044921875, 0.031982421875, -0.056365966796875, 0.0094451904296875, 0.029693603515625, -0.0272064208984375, 0.060516357421875, 0.021087646484375, -0.050933837890625, -0.03369140625, -0.031951904296875, 0.0153274536...
ckip-joint/bloom-3b-zh
2023-05-22T07:00:48.000Z
[ "transformers", "pytorch", "bloom", "text-generation", "zh", "arxiv:2303.04715", "license:bigscience-bloom-rail-1.0", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
ckip-joint
null
null
ckip-joint/bloom-3b-zh
26
703
transformers
2023-04-07T09:33:31
--- license: bigscience-bloom-rail-1.0 language: - zh pipeline_tag: text-generation widget: - text: "四月的某一天,天氣晴朗寒冷," - text: "問:台灣最高的建築物是?答:" --- <h1 style='text-align: center '>BLOOM-zh</h1> <h2 style='text-align: center '><em>Traditional Chinese-enhanced BLOOM language model</em> </h2> <h3 style='text-align: center '>Model Card</h3> Version 1.0 / 10.Apr.2023 BLOOM-zh is a joint collaboration between CKIP lab at Academia Sinica ([link](https://ckip.iis.sinica.edu.tw/)), MediaTek Research ([連結](https://www.mtkresearch.com/), [连结](https://www.mtkresearch.com/zh-hans/), [link](https://www.mtkresearch.com/en/)), and National Academy for Educational Research ([link](https://www.naer.edu.tw/)). This model is released for **non-commercial research purposes only**. ## Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Training Data](#training-data) 4. [Risks and Limitations](#risks-and-limitations) 5. [Recommendations](#recommendations) 6. [Model Card Authors](#model-card-authors) ## Model Details BLOOM-zh is a language model with enhanced Traditional Chinese capability. It is derived from [BLOOMZ](https://huggingface.co/bigscience/bloomz-3b). BLOOM-zh is further trained on a large amount of Traditional Chinese text data. ### Basics * **Developed by:** MediaTek Research * **Model Type:** Transformer-based Language Model * **Version:** 1.0.0 * **Languages:** Multiple; see [training data](#training-data) * **License:** MEDIATEK RESEARCH License ([link](https://huggingface.co/ckip-joint/bloom-3b-zh/blob/main/LICENSE_MR.md)) and RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license)) * **Release Date Estimate:** Monday, 10.April.2023 * **Send Questions to:** info@mtkresearch.com * **Paper:** [https://arxiv.org/abs/2303.04715](https://arxiv.org/abs/2303.04715) * **Cite as:** MediaTek Research: Traditional Chinese-enhanced BLOOM language model. International, February 2023. * **Organizations of contributors:** * MediaTek Research * Academia Sinica * National Academy for Educational Research ### Technical Specifications *This section provides information for people who work on model development.* For technical specifications, please refer to [BLOOM](https://huggingface.co/bigscience/bloom-3b#model-details). ### Environmental Impact For environmental impact, please refer to [BLOOM](https://huggingface.co/bigscience/bloom-3b#model-details). ## Uses *This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model. It provides information for anyone considering using the model or who is affected by the model.* For the uses of the model, please refer to [BLOOM](https://huggingface.co/bigscience/bloom-3b#uses). ## Training Data *This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.* We trained the 3B parameter model on a total of 13 billion tokens of mostly high-quality Traditional Chinese text. Details are provided in the [paper](https://arxiv.org/abs/2303.04715). ## Risks and Limitations *This section identifies foreseeable harms and misunderstandings.* For risks and limitations, please refer to [BLOOM](https://huggingface.co/bigscience/bloom-3b#risks-and-limitations). ### Factors *This section lists some different aspects of BLOOM models. 
Its focus is on those aspects that are likely to give rise to high variance in model behavior.* - The model is trained on Traditional Chinese. However, the pretrained weights capture more than 40 different languages. - The model is trained on web crawled data, news articles, novels, knowledge sources (encyclopedia, education sector) and instructions. ## Recommendations *This section provides information on warnings and potential mitigations.* For recommendations, please refer to [BLOOM](https://huggingface.co/bigscience/bloom-3b#recommendations). ## Model Card Authors *Ordered roughly chronologically and by amount of time spent.* Philipp Ennen, Po-Chun Hsu, Chan-Jan Hsu, Chang-Le Liu, Yen-Chen Wu, Yin-Hsiang Liao, Chin-Tung Lin, Chi-Ming Chung, Yi-Chang Chen, Da-Shan Shiu, Wei-Yun Ma <!-- # Bloom_eval -->
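The BLOOM-zh card above defers usage instructions to the BLOOM documentation and contains no code snippet. Below is a minimal sketch, not part of the original card, of how one might generate text from this checkpoint with `transformers`; the prompt reuses a widget example from the card, and the sampling settings are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ckip-joint/bloom-3b-zh"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")

# One of the widget prompts from the card.
prompt = "四月的某一天,天氣晴朗寒冷,"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generation settings here are illustrative assumptions, not recommendations from the authors.
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```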
4,375
[ [ -0.006900787353515625, -0.054443359375, 0.0184326171875, 0.039764404296875, -0.013580322265625, -0.033660888671875, -0.03033447265625, -0.061614990234375, -0.01299285888671875, 0.01861572265625, -0.03948974609375, -0.039703369140625, -0.01361083984375, -0.01...
Yntec/COOLKIDSV2
2023-09-10T01:24:59.000Z
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "Children", "Book illustrations", "Clumsy_Trainer", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
Yntec
null
null
Yntec/COOLKIDSV2
2
703
diffusers
2023-07-15T20:51:44
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - stable-diffusion - stable-diffusion-diffusers - diffusers - text-to-image - Children - Book illustrations - Clumsy_Trainer --- # COOL KIDS Children's book illustration style by Clumsy_Trainer. This was originally a LoRA and was converted into a full model. Original page: https://civitai.com/models/60724 A minimal usage sketch is shown below.
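As referenced above, here is a minimal, hypothetical sketch of loading the merged checkpoint with `diffusers`; it is not from the original author, and the prompt is an invented illustration of the children's-book style rather than a recommended prompt.

```python
import torch
from diffusers import StableDiffusionPipeline

# The repo is tagged with diffusers:StableDiffusionPipeline, so it should load directly.
pipe = StableDiffusionPipeline.from_pretrained("Yntec/COOLKIDSV2", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Hypothetical prompt in the children's-book illustration style the model targets.
image = pipe("a cool kid riding a bicycle, children's book illustration").images[0]
image.save("cool_kid.png")
```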
382
[ [ -0.01190948486328125, -0.029937744140625, -0.0218658447265625, -0.0062103271484375, -0.033172607421875, -0.0080108642578125, 0.0291290283203125, -0.037933349609375, 0.052215576171875, 0.037109375, -0.06903076171875, -0.0223388671875, -0.0170440673828125, -0....
gokul8967/Loki-lora
2023-10-14T07:34:30.000Z
[ "peft", "region:us" ]
null
gokul8967
null
null
gokul8967/Loki-lora
0
703
peft
2023-10-08T19:09:41
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
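The quantization settings above are listed only as key-value pairs. The sketch below shows, under stated assumptions, how that same configuration could be written as a `BitsAndBytesConfig` and how the LoRA adapter from this repository could be attached with PEFT; the card does not name the base model, so the base model identifier is a placeholder.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# The quantization settings listed in the card, expressed as a BitsAndBytesConfig.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# NOTE: the base model below is a placeholder assumption; the card does not name it.
base_model = AutoModelForCausalLM.from_pretrained(
    "base-model-name-here",
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the LoRA adapter from this repository on top of the quantized base model.
model = PeftModel.from_pretrained(base_model, "gokul8967/Loki-lora")
```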
795
[ [ -0.04937744140625, -0.04913330078125, 0.0239715576171875, 0.034698486328125, -0.04168701171875, 0.004543304443359375, 0.006931304931640625, -0.0208740234375, -0.0097808837890625, 0.026275634765625, -0.048858642578125, -0.01384735107421875, -0.037750244140625, ...
meta-math/MetaMath-Mistral-7B
2023-10-22T23:55:38.000Z
[ "transformers", "pytorch", "mistral", "text-generation", "dataset:meta-math/MetaMathQA", "arxiv:2309.12284", "license:apache-2.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
meta-math
null
null
meta-math/MetaMath-Mistral-7B
21
703
transformers
2023-10-22T11:12:15
--- license: apache-2.0 datasets: - meta-math/MetaMathQA --- See our paper at https://arxiv.org/abs/2309.12284 and the project page: https://meta-math.github.io/ ## Model Details MetaMath-Mistral-7B is fully fine-tuned on the MetaMathQA dataset and based on the powerful Mistral-7B model. We are glad to see that using the MetaMathQA dataset and changing the base model from LLaMA-2-7B to Mistral-7B boosts the GSM8K performance from 66.5 to **77.7**. To fine-tune Mistral-7B, we suggest using a smaller learning rate (usually 1/5 to 1/10 of the learning rate used for LLaMA-2-7B) and keeping the other training arguments unchanged. More training details and scripts can be found at https://github.com/meta-math/MetaMath ## Installation ``` pip install transformers==4.35.0 pip install torch==2.0.1 pip install sentencepiece==0.1.99 pip install tokenizers==0.13.3 pip install accelerate==0.21.0 pip install bitsandbytes==0.40.0 pip install vllm pip install fraction pip install protobuf ``` ## Model Usage Prompting template: ''' "Below is an instruction that describes a task. " "Write a response that appropriately completes the request.\n\n" "### Instruction:\n{instruction}\n\n### Response: Let's think step by step." ''' where you replace {instruction} with your query question. A minimal sketch of applying this template is shown below. There is another interesting repo about Arithmo-Mistral-7B at https://huggingface.co/akjindal53244/Arithmo-Mistral-7B, where they combine our MetaMathQA dataset and the MathInstruct dataset to train a powerful model. Thanks again for their contributions. We will also try training on the combination of the **MetaMathQA** and **MathInstruct** datasets, and will open all the results and training details. ## Experiments | Model | GSM8k Pass@1 | MATH Pass@1 | |---------------------|--------------|-------------| | MPT-7B | 6.8 | 3.0 | | Falcon-7B | 6.8 | 2.3 | | LLaMA-1-7B | 11.0 | 2.9 | | LLaMA-2-7B | 14.6 | 2.5 | | MPT-30B | 15.2 | 3.1 | | LLaMA-1-13B | 17.8 | 3.9 | | GPT-Neo-2.7B | 19.5 | -- | | Falcon-40B | 19.6 | 2.5 | | Baichuan-chat-13B | 23.9 | -- | | Vicuna-v1.3-13B | 27.6 | -- | | LLaMA-2-13B | 28.7 | 3.9 | | InternLM-7B | 31.2 | -- | | ChatGLM-2-6B | 32.4 | -- | | GPT-J-6B | 34.9 | -- | | LLaMA-1-33B | 35.6 | 3.9 | | LLaMA-2-34B | 42.2 | 6.24 | | RFT-7B | 50.3 | -- | | LLaMA-1-65B | 50.9 | 10.6 | | Qwen-7B | 51.6 | -- | | WizardMath-7B | 54.9 | 10.7 | | LLaMA-2-70B | 56.8 | 13.5 | | WizardMath-13B | 63.9 | 14.0 | | MAmmoTH-7B (COT) | 50.5 | 10.4 | | MAmmoTH-7B (POT+COT)| 53.6 | 31.5 | | Arithmo-Mistral-7B | 74.7 | 25.3 | | MetaMath-7B | 66.5 | 19.8 | | MetaMath-13B | 72.3 | 22.4 | | 🔥 **MetaMath-Mistral-7B** | **77.7** | **28.2** | ## Citation ```bibtex @article{yu2023metamath, title={MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models}, author={Yu, Longhui and Jiang, Weisen and Shi, Han and Yu, Jincheng and Liu, Zhengying and Zhang, Yu and Kwok, James T and Li, Zhenguo and Weller, Adrian and Liu, Weiyang}, journal={arXiv preprint arXiv:2309.12284}, year={2023} } ``` ```bibtex @article{jiang2023mistral, title={Mistral 7B}, author={Jiang, Albert Q and Sablayrolles, Alexandre and Mensch, Arthur and Bamford, Chris and Chaplot, Devendra Singh and Casas, Diego de las and Bressand, Florian and Lengyel, Gianna and Lample, Guillaume and Saulnier, Lucile and others}, journal={arXiv preprint arXiv:2310.06825}, year={2023} } ```
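As referenced above, here is a minimal sketch (not taken from the MetaMath repository) of filling the prompting template and generating an answer with `transformers`; the example question and the generation settings are illustrative assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-math/MetaMath-Mistral-7B"

# The prompting template from the card, with {instruction} left as a placeholder.
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response: Let's think step by step."
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Replace {instruction} with the actual question (this one is just an example).
question = ("Natalia sold clips to 48 of her friends in April, and then she sold half as many "
            "clips in May. How many clips did Natalia sell altogether in April and May?")
prompt = PROMPT_TEMPLATE.format(instruction=question)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# max_new_tokens and greedy decoding are illustrative choices, not the authors' settings.
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```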
4,034
[ [ -0.0439453125, -0.040985107421875, 0.0163116455078125, 0.005924224853515625, -0.0115509033203125, -0.01003265380859375, -0.002391815185546875, -0.02032470703125, 0.024169921875, 0.0246429443359375, -0.04718017578125, -0.04351806640625, -0.047607421875, 0.005...
Salesforce/codegen25-7b-instruct
2023-07-13T16:42:11.000Z
[ "transformers", "pytorch", "llama", "text-generation", "code", "dataset:bigcode/starcoderdata", "arxiv:2305.02309", "license:other", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
Salesforce
null
null
Salesforce/codegen25-7b-instruct
22
702
transformers
2023-07-06T17:49:25
--- license: other datasets: - bigcode/starcoderdata language: - code pipeline_tag: text-generation --- # CodeGen2.5-7B-instruct Title: [**CodeGen2.5: Small, but mighty**](https://blog.salesforceairesearch.com/codegen25) Authors: [Erik Nijkamp](https://eriknijkamp.com)\*, [Hiroaki Hayashi](https://hiroakih.me)\*, Yingbo Zhou, Caiming Xiong (\* equal contribution) ## Model description [CodeGen2.5](https://github.com/salesforce/CodeGen) is a family of autoregressive language models for **program synthesis**. Building upon [CodeGen2](https://arxiv.org/abs/2305.02309), the model is trained on [StarCoderData](https://huggingface.co/datasets/bigcode/starcoderdata) for 1.4T tokens, achieving competitive results compared to StarCoderBase-15.5B with less than half the size. Like CodeGen2, this model is capable of infilling, and supports multiple programming languages. We then further train on Python, then on instruction data. We release all the models as follows: * **CodeGen2.5-7B-multi**: Trained on StarCoderData. Licensed under Apache-2.0. * **CodeGen2.5-7B-mono**: Further trained on additional Python tokens. Licensed under Apache-2.0. * **CodeGen2.5-7B-instruct** (this repo): Further trained from CodeGen2.5-7B-mono on instruction data. *Research purposes only*. ## How to use This model can be easily loaded using the `AutoModelForCausalLM` functionality. ### Pre-requisite Please install OpenAI `tiktoken` for the tokenizer. ```bash pip install tiktoken==0.4.0 ``` ### Causal sampling (code autocompletion) For regular causal sampling, simply generate completions given the context: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen25-7b-instruct", trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen25-7b-instruct") text = "def hello_world():" input_ids = tokenizer(text, return_tensors="pt").input_ids generated_ids = model.generate(input_ids, max_length=128) print(tokenizer.decode(generated_ids[0], skip_special_tokens=True)) ``` ### Infill sampling For **infill** sampling, we follow the CodeGen2 format: * `<mask_N>`: N-th span to be masked. In practice, use `<mask_1>` to where you want to sample infill. * `<sep>`: Separator token between the suffix and the infilled sample. See below. * `<eom>`: "End-Of-Mask" token that model will output at the end of infilling. You may use this token to truncate the output. For example, if we want to generate infill for the following cursor position of a function: ```python def hello_world(): | return name ``` we construct an input to the model by 1. Inserting `<mask_1>` token in place of cursor position 2. Append `<sep>` token to indicate the boundary 3. Insert another `<mask_1>` to indicate which mask we want to infill. The final snippet looks as follows: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen25-7b-instruct", trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen25-7b-instruct") def format(prefix, suffix): return prefix + "<mask_1>" + suffix + "<|endoftext|>" + "<sep>" + "<mask_1>" prefix = "def hello_world():\n " suffix = " return name" text = format(prefix, suffix) input_ids = tokenizer(text, return_tensors="pt").input_ids generated_ids = model.generate(input_ids, max_length=128) print(tokenizer.decode(generated_ids[0], skip_special_tokens=False)[len(text):]) ``` You might want to truncate the model output with `<eom>`. 
## Evaluation results We evaluate our models on HumanEval and HumanEval-Infill. Please refer to the [blog](https://blog.salesforceairesearch.com/codegen25) for more details. ## Intended use and limitations As an autoregressive language model, CodeGen2.5 is capable of extracting features from given natural language and programming language texts, and calculating the likelihood of them. However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well. ## Attribution & Other Requirements The pretraining dataset of the model was filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or other specific requirements that must be respected. The data provider BigCode provides a [search index](https://huggingface.co/spaces/bigcode/starcoder-search) that lets you search through the pretraining data to identify where generated code came from and apply the proper attribution to your code. ## BibTeX entry and citation info Please cite CodeGen2 paper: ```bibtex @article{Nijkamp2023codegen2, title={CodeGen2: Lessons for Training LLMs on Programming and Natural Languages}, author={Nijkamp, Erik and Hayashi, Hiroaki and Xiong, Caiming and Savarese, Silvio and Zhou, Yingbo}, journal={arXiv preprint}, year={2023} } ```
5,124
[ [ -0.0220794677734375, -0.04559326171875, 0.003971099853515625, 0.0215606689453125, -0.009552001953125, 0.0038661956787109375, -0.02777099609375, -0.043060302734375, -0.0054779052734375, 0.0304412841796875, -0.0399169921875, -0.0294036865234375, -0.0391845703125, ...
femboysLover/RealisticStockPhoto-fp16
2023-09-04T11:13:32.000Z
[ "diffusers", "text-to-image", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
femboysLover
null
null
femboysLover/RealisticStockPhoto-fp16
1
702
diffusers
2023-09-04T11:07:34
--- library_name: diffusers pipeline_tag: text-to-image --- original model weights https://civitai.com/models/139565/realistic-stock-photo version 1.0
151
[ [ -0.01314544677734375, -0.0010395050048828125, 0.0028553009033203125, 0.030426025390625, -0.0310821533203125, -0.021728515625, 0.0273284912109375, -0.0216217041015625, 0.0579833984375, 0.040496826171875, -0.047637939453125, 0.00004106760025024414, -0.016250610351...
google/realm-cc-news-pretrained-embedder
2022-01-05T18:47:59.000Z
[ "transformers", "pytorch", "realm", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
google
null
null
google/realm-cc-news-pretrained-embedder
1
701
transformers
2022-03-02T23:29:05
--- language: en license: apache-2.0 --- # realm-cc-news-pretrained-embedder ## Model description The REALM checkpoint pretrained with CC-News as target corpus and Wikipedia as knowledge corpus, converted from the TF checkpoint provided by Google Language. The original paper, code, and checkpoints can be found [here](https://github.com/google-research/language/tree/master/language/realm). ## Usage ```python from transformers import RealmEmbedder embedder = RealmEmbedder.from_pretrained("qqaatw/realm-cc-news-pretrained-embedder") ```
546
[ [ -0.013031005859375, -0.044891357421875, 0.02215576171875, -0.004238128662109375, -0.0095672607421875, 0.0135955810546875, 0.0026378631591796875, 0.01332855224609375, 0.02301025390625, 0.03863525390625, -0.059539794921875, -0.0316162109375, -0.0263214111328125, ...
josu/roberta-pt-br
2021-12-12T20:15:09.000Z
[ "transformers", "pytorch", "roberta", "fill-mask", "portuguese", "brazil", "pt_BR", "pt", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
josu
null
null
josu/roberta-pt-br
6
701
transformers
2022-03-02T23:29:05
--- language: pt tags: - portuguese - brazil - pt_BR widget: - text: Brasilia é a capital do <mask> --- ``` python from transformers import pipeline unmasker = pipeline('fill-mask', model='josu/roberta-pt-br') text = 'Brasilia é a capital do <mask>' [{'sequence': 'Brasilia é a capital do Brasil', 'score': 0.24386335909366608, 'token': 707, 'token_str': ' Brasil'}, {'sequence': 'Brasilia é a capital do estado', 'score': 0.2320091277360916, 'token': 1031, 'token_str': ' estado'}, {'sequence': 'Brasilia é a capital do país', 'score': 0.0665697380900383, 'token': 998, 'token_str': ' país'}, {'sequence': 'Brasilia é a capital do Rio', 'score': 0.05980581417679787, 'token': 993, 'token_str': ' Rio'}, {'sequence': 'Brasilia é a capital do capital', 'score': 0.058453518897295, 'token': 2027, 'token_str': ' capital'}] ```
867
[ [ -0.0160369873046875, -0.01355743408203125, 0.00106048583984375, 0.02679443359375, -0.034210205078125, 0.023834228515625, -0.0011491775512695312, 0.0088653564453125, 0.027130126953125, 0.036163330078125, -0.0345458984375, -0.036041259765625, -0.07952880859375, ...
prithivida/active_to_passive_styletransfer
2021-06-23T13:43:58.000Z
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text2text-generation
prithivida
null
null
prithivida/active_to_passive_styletransfer
4
701
transformers
2022-03-02T23:29:05
## This model belongs to the Styleformer project [Please refer to github page](https://github.com/PrithivirajDamodaran/Styleformer)
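The card itself only points to the Styleformer GitHub project, whose wrapper is the supported interface. As a heavily hedged sketch of direct use with `transformers`: the model is a T5-style text2text checkpoint, and the task prefix used below is an assumption based on the Styleformer project rather than something documented in this card, so please verify it against the linked repository.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "prithivida/active_to_passive_styletransfer"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# ASSUMPTION: the "transfer Active to Passive: " prefix mirrors what the Styleformer
# wrapper prepends; check the GitHub project linked above for the supported interface.
text = "transfer Active to Passive: The cat chased the mouse."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```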
134
[ [ 0.01119232177734375, -0.0225830078125, 0.01119232177734375, 0.0287628173828125, -0.01194000244140625, 0.01467132568359375, 0.031036376953125, 0.0174407958984375, 0.0252685546875, 0.05841064453125, -0.05780029296875, -0.0211944580078125, -0.0167236328125, -0....
timm/flexivit_base.300ep_in1k
2023-05-05T23:58:32.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2212.08013", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/flexivit_base.300ep_in1k
0
701
timm
2022-12-22T07:12:14
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for flexivit_base.300ep_in1k A FlexiViT image classification model. Trained on ImageNet-1k in JAX by paper authors, ported to PyTorch by Ross Wightman. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 86.6 - GMACs: 19.4 - Activations (M): 18.9 - Image size: 240 x 240 - **Papers:** - FlexiViT: One Model for All Patch Sizes: https://arxiv.org/abs/2212.08013 - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 - **Dataset:** ImageNet-1k - **Original:** https://github.com/google-research/big_vision ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('flexivit_base.300ep_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'flexivit_base.300ep_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 226, 768) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). 
## Citation ```bibtex @article{beyer2022flexivit, title={FlexiViT: One Model for All Patch Sizes}, author={Beyer, Lucas and Izmailov, Pavel and Kolesnikov, Alexander and Caron, Mathilde and Kornblith, Simon and Zhai, Xiaohua and Minderer, Matthias and Tschannen, Michael and Alabdulmohsin, Ibrahim and Pavetic, Filip}, journal={arXiv preprint arXiv:2212.08013}, year={2022} } ``` ```bibtex @article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
3,696
[ [ -0.037811279296875, -0.027923583984375, 0.00440216064453125, 0.005218505859375, -0.0255126953125, -0.02880859375, -0.0187225341796875, -0.035888671875, 0.015533447265625, 0.0168609619140625, -0.04302978515625, -0.040863037109375, -0.045074462890625, -0.00232...
timm/vit_giant_patch14_dinov2.lvd142m
2023-09-03T22:00:05.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "arxiv:2304.07193", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/vit_giant_patch14_dinov2.lvd142m
0
701
timm
2023-05-09T20:50:13
--- tags: - image-classification - timm library_name: timm license: apache-2.0 --- # Model card for vit_giant_patch14_dinov2.lvd142m A Vision Transformer (ViT) image feature model. Pretrained on LVD-142M with self-supervised DINOv2 method. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 1136.5 - GMACs: 1784.2 - Activations (M): 2757.9 - Image size: 518 x 518 - **Papers:** - DINOv2: Learning Robust Visual Features without Supervision: https://arxiv.org/abs/2304.07193 - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 - **Original:** https://github.com/facebookresearch/dinov2 - **Pretrain Dataset:** LVD-142M ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('vit_giant_patch14_dinov2.lvd142m', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_giant_patch14_dinov2.lvd142m', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 1370, 1536) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @misc{oquab2023dinov2, title={DINOv2: Learning Robust Visual Features without Supervision}, author={Oquab, Maxime and Darcet, Timothée and Moutakanni, Theo and Vo, Huy V. 
and Szafraniec, Marc and Khalidov, Vasil and Fernandez, Pierre and Haziza, Daniel and Massa, Francisco and El-Nouby, Alaaeldin and Howes, Russell and Huang, Po-Yao and Xu, Hu and Sharma, Vasu and Li, Shang-Wen and Galuba, Wojciech and Rabbat, Mike and Assran, Mido and Ballas, Nicolas and Synnaeve, Gabriel and Misra, Ishan and Jegou, Herve and Mairal, Julien and Labatut, Patrick and Joulin, Armand and Bojanowski, Piotr}, journal={arXiv:2304.07193}, year={2023} } ``` ```bibtex @article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
3,993
[ [ -0.036895751953125, -0.02972412109375, 0.0047149658203125, 0.00426483154296875, -0.03472900390625, -0.0243377685546875, -0.020721435546875, -0.0340576171875, 0.01430511474609375, 0.0213623046875, -0.032440185546875, -0.037872314453125, -0.0513916015625, -0.0...
digiplay/Opiate_v2
2023-07-15T05:07:02.000Z
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
digiplay
null
null
digiplay/Opiate_v2
2
701
diffusers
2023-07-15T04:16:25
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: https://civitai.com/models/69587?modelVersionId=98101 Original Author's DEMO images : ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/5f81c93a-d9eb-4399-8362-95681d8f9d87/OpiateV2.jpeg) ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/991410e6-a9b8-4027-8582-10ef89ac22d3/00260-4105401889.jpeg) ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/c6227ce2-66cc-4532-b012-78291681b13d/00004-2061204743.jpeg) ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/2cb82af3-bac6-44c1-94f8-e3898e7daa74/00021-3738425758.jpeg)
669
[ [ -0.020904541015625, -0.0180206298828125, 0.04376220703125, 0.01114654541015625, -0.0181732177734375, -0.0145721435546875, 0.01248931884765625, -0.0020313262939453125, 0.033660888671875, 0.0294036865234375, -0.045745849609375, -0.03338623046875, -0.01055145263671...
yukiyan/resnet-50-finetuned-eurosat
2023-09-28T07:06:33.000Z
[ "transformers", "pytorch", "resnet", "image-classification", "generated_from_trainer", "dataset:cifar10", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
yukiyan
null
null
yukiyan/resnet-50-finetuned-eurosat
0
701
transformers
2023-09-28T06:36:23
--- license: apache-2.0 base_model: microsoft/resnet-50 tags: - generated_from_trainer datasets: - cifar10 metrics: - accuracy model-index: - name: resnet-50-finetuned-eurosat results: - task: name: Image Classification type: image-classification dataset: name: cifar10 type: cifar10 config: plain_text split: train args: plain_text metrics: - name: Accuracy type: accuracy value: 0.852 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # resnet-50-finetuned-eurosat This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the cifar10 dataset. It achieves the following results on the evaluation set: - Loss: 0.5331 - Accuracy: 0.852 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.6163 | 1.0 | 351 | 1.3104 | 0.665 | | 1.0927 | 2.0 | 703 | 0.6382 | 0.8286 | | 1.0099 | 2.99 | 1053 | 0.5331 | 0.852 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
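The card above lists the training setup and results but leaves the usage sections as "More information needed". A minimal inference sketch, assuming the checkpoint is used like any other `transformers` image-classification model, is shown below; the example image URL is just a placeholder.

```python
from transformers import pipeline
from PIL import Image
import requests

# Load the fine-tuned checkpoint as an image-classification pipeline.
classifier = pipeline("image-classification", model="yukiyan/resnet-50-finetuned-eurosat")

# Any RGB image works; this URL is only an illustrative placeholder.
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png"
image = Image.open(requests.get(url, stream=True).raw)

for prediction in classifier(image):
    print(prediction["label"], round(prediction["score"], 3))
```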
1,891
[ [ -0.04107666015625, -0.0233612060546875, -0.003978729248046875, 0.01003265380859375, -0.0145263671875, -0.0255889892578125, -0.01409149169921875, -0.0211029052734375, 0.0097198486328125, 0.01605224609375, -0.053009033203125, -0.034820556640625, -0.038055419921875...
timm/eva_giant_patch14_plus_clip_224.merged2b_s11b_b114k
2023-04-10T23:10:04.000Z
[ "open_clip", "zero-shot-image-classification", "clip", "license:mit", "region:us", "has_space" ]
zero-shot-image-classification
timm
null
null
timm/eva_giant_patch14_plus_clip_224.merged2b_s11b_b114k
1
700
open_clip
2023-04-10T22:59:37
--- tags: - zero-shot-image-classification - clip library_tag: open_clip license: mit --- # Model card for eva_giant_patch14_plus_clip_224.merged2b_s11b_b114k
154
[ [ -0.035797119140625, -0.010833740234375, 0.0145263671875, 0.03558349609375, -0.04315185546875, 0.025543212890625, 0.0228271484375, -0.0021686553955078125, 0.06707763671875, 0.06903076171875, -0.051666259765625, -0.0231475830078125, -0.0433349609375, 0.0003736...
h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b
2023-06-28T14:38:16.000Z
[ "transformers", "pytorch", "llama", "text-generation", "gpt", "llm", "large language model", "h2o-llmstudio", "en", "dataset:OpenAssistant/oasst1", "license:apache-2.0", "has_space", "text-generation-inference", "region:us" ]
text-generation
h2oai
null
null
h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b
10
700
transformers
2023-06-09T09:28:29
--- language: - en library_name: transformers tags: - gpt - llm - large language model - h2o-llmstudio inference: false thumbnail: >- https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico license: apache-2.0 datasets: - OpenAssistant/oasst1 --- # Model Card ## Summary This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio). - Base model: [openlm-research/open_llama_7b](https://huggingface.co/openlm-research/open_llama_7b) - Dataset preparation: [OpenAssistant/oasst1](https://github.com/h2oai/h2o-llmstudio/blob/1935d84d9caafed3ee686ad2733eb02d2abfce57/app_utils/utils.py#LL1896C5-L1896C28) ## Usage To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed. ```bash pip install transformers==4.30.2 pip install accelerate==0.19.0 pip install torch==2.0.0 ``` ```python import torch from transformers import pipeline generate_text = pipeline( model="h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b", torch_dtype="auto", trust_remote_code=True, use_fast=False, device_map={"": "cuda:0"}, ) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=1024, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` You can print a sample prompt after the preprocessing step to see how it is feed to the tokenizer: ```python print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"]) ``` ```bash <|prompt|>Why is drinking water so healthy?</s><|answer|> ``` Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`. ```python import torch from h2oai_pipeline import H2OTextGenerationPipeline from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( "h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b", use_fast=False, padding_side="left", trust_remote_code=False, ) model = AutoModelForCausalLM.from_pretrained( "h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b", torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=False, ) generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=1024, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b" # either local folder or huggingface model name # Important: The prompt needs to be in the same format the model was trained with. # You can find an example prompt in the experiment logs. 
prompt = "<|prompt|>How are you?</s><|answer|>" tokenizer = AutoTokenizer.from_pretrained( model_name, use_fast=False, trust_remote_code=False, ) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=False, ) model.cuda().eval() inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda") # generate configuration can be modified to your needs tokens = model.generate( **inputs, min_new_tokens=2, max_new_tokens=1024, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True )[0] tokens = tokens[inputs["input_ids"].shape[1]:] answer = tokenizer.decode(tokens, skip_special_tokens=True) print(answer) ``` ## Model Architecture ``` LlamaForCausalLM( (model): LlamaModel( (embed_tokens): Embedding(32000, 4096, padding_idx=0) (layers): ModuleList( (0-31): 32 x LlamaDecoderLayer( (self_attn): LlamaAttention( (q_proj): Linear(in_features=4096, out_features=4096, bias=False) (k_proj): Linear(in_features=4096, out_features=4096, bias=False) (v_proj): Linear(in_features=4096, out_features=4096, bias=False) (o_proj): Linear(in_features=4096, out_features=4096, bias=False) (rotary_emb): LlamaRotaryEmbedding() ) (mlp): LlamaMLP( (gate_proj): Linear(in_features=4096, out_features=11008, bias=False) (down_proj): Linear(in_features=11008, out_features=4096, bias=False) (up_proj): Linear(in_features=4096, out_features=11008, bias=False) (act_fn): SiLUActivation() ) (input_layernorm): LlamaRMSNorm() (post_attention_layernorm): LlamaRMSNorm() ) ) (norm): LlamaRMSNorm() ) (lm_head): Linear(in_features=4096, out_features=32000, bias=False) ) ``` ## Model Configuration This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models. ## Disclaimer Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions. - Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints. - Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion. - Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model. - Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities. 
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues. - Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes. By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
7,819
[ [ -0.0116729736328125, -0.05621337890625, 0.02984619140625, 0.01366424560546875, -0.0262908935546875, -0.006561279296875, -0.01556396484375, -0.021636962890625, 0.007061004638671875, 0.0213165283203125, -0.032196044921875, -0.044891357421875, -0.049468994140625, ...
line-corporation/japanese-large-lm-1.7b-instruction-sft
2023-08-14T08:19:11.000Z
[ "transformers", "pytorch", "gpt2", "text-generation", "ja", "license:apache-2.0", "has_space", "text-generation-inference", "region:us" ]
text-generation
line-corporation
null
null
line-corporation/japanese-large-lm-1.7b-instruction-sft
6
700
transformers
2023-08-03T01:34:01
--- license: apache-2.0 inference: false language: ja --- # japanese-large-lm-1.7b-instruction-sft This repository provides a 1.7B parameters Japanese language model, fine-tuned and trained by [LINE Corporation](https://linecorp.com/ja/). ## For Japanese 詳細な説明や実験に関しては「[Instruction Tuningにより対話性能を向上させた3.6B日本語言語モデルを公開します](https://engineering.linecorp.com/ja/blog/3.6b-japanese-language-model-with-improved-dialog-performance-by-instruction-tuning)」をご覧ください。 ## How to use ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model = AutoModelForCausalLM.from_pretrained("line-corporation/japanese-large-lm-1.7b-instruction-sft") tokenizer = AutoTokenizer.from_pretrained("line-corporation/japanese-large-lm-1.7b-instruction-sft", use_fast=False) generator = pipeline("text-generation", model=model, tokenizer=tokenizer, device=0) input_text = """四国の県名を全て列挙してください。""" text = generator( f"ユーザー: {input_text}\nシステム: ", max_length = 256, do_sample = True, temperature = 0.7, top_p = 0.9, top_k = 0, repetition_penalty = 1.1, num_beams = 1, pad_token_id = tokenizer.pad_token_id, num_return_sequences = 1, ) print(text) # [{'generated_text': 'ユーザー: 四国の県名を全て列挙してください。\nシステム: 香川県、徳島県、愛媛県、高知県'}] ``` ## Tokenization We use a sentencepiece tokenizer with a unigram language model and byte-fallback. We **do not** apply pre-tokenization with Japanese tokenizer. Thus, a user may directly feed raw sentences into the tokenizer. ## License [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
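Because no Japanese pre-tokenization is applied, a raw sentence can be passed straight to the sentencepiece tokenizer. The following is a minimal sketch (not part of the original card) illustrating this:

```python
from transformers import AutoTokenizer

# Minimal illustration: raw Japanese text is fed directly to the unigram
# sentencepiece tokenizer with byte-fallback; no pre-tokenization is needed.
tokenizer = AutoTokenizer.from_pretrained(
    "line-corporation/japanese-large-lm-1.7b-instruction-sft", use_fast=False
)

sentence = "四国の県名を全て列挙してください。"
print(tokenizer.tokenize(sentence))      # sentencepiece pieces produced from the raw string
print(tokenizer(sentence)["input_ids"])  # ids ready to feed to the model
```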
1,596
[ [ -0.020538330078125, -0.0748291015625, 0.022430419921875, 0.037841796875, -0.04833984375, -0.0080718994140625, -0.024169921875, 0.0012044906616210938, 0.0081634521484375, 0.05279541015625, -0.049896240234375, -0.0445556640625, -0.03851318359375, 0.01092529296...
google/bert_uncased_L-12_H-512_A-8
2021-05-19T17:26:55.000Z
[ "transformers", "pytorch", "jax", "bert", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
google
null
null
google/bert_uncased_L-12_H-512_A-8
0
699
transformers
2022-03-02T23:29:05
--- thumbnail: https://huggingface.co/front/thumbnails/google.png license: apache-2.0 --- BERT Miniatures === This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking). We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity. You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below: | |H=128|H=256|H=512|H=768| |---|:---:|:---:|:---:|:---:| | **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]| | **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]| | **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]| | **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]| | **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]| | **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]| Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model. 
Here are the corresponding GLUE scores on the test set: |Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX| |---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| |BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0| |BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1| |BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6| |BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5| For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs: - batch sizes: 8, 16, 32, 64, 128 - learning rates: 3e-4, 1e-4, 5e-5, 3e-5 If you use these models, please cite the following paper: ``` @article{turc2019, title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models}, author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina}, journal={arXiv preprint arXiv:1908.08962v2 }, year={2019} } ``` [2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2 [2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4 [2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8 [2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12 [4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2 [4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4 [4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8 [4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12 [6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2 [6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4 [6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8 [6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12 [8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2 [8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4 [8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8 [8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12 [10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2 [10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4 [10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8 [10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12 [12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2 [12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4 [12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8 [12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
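As a rough illustration of the fine-tuning recipe above, any of the 24 checkpoints can be loaded with a fresh classification head and trained with the listed batch sizes and learning rates. This is a hedged sketch, not code from the original release; the example sentence is arbitrary:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Sketch only: substitute any of the 24 miniature checkpoints listed above.
model_id = "google/bert_uncased_L-12_H-512_A-8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

inputs = tokenizer("This film was surprisingly good.", return_tensors="pt")
logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 2]) -- randomly initialized head, ready for fine-tuning
```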
4,617
[ [ -0.053558349609375, -0.035491943359375, 0.0239410400390625, 0.0131683349609375, -0.02374267578125, -0.016937255859375, -0.024017333984375, -0.031219482421875, 0.04376220703125, -0.006099700927734375, -0.06103515625, -0.030670166015625, -0.05206298828125, -0....
CompVis/stable-diffusion-v1-3
2023-07-05T16:17:35.000Z
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "arxiv:2207.12598", "arxiv:2112.10752", "arxiv:2103.00020", "arxiv:2205.11487", "arxiv:1910.09700", "license:creativeml-openrail-m", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
CompVis
null
null
CompVis/stable-diffusion-v1-3
30
699
diffusers
2022-08-18T17:54:56
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: false
extra_gated_prompt: |-
  This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
  The CreativeML OpenRAIL License specifies:

  1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
  2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
  3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
  Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
extra_gated_heading: Please read the LICENSE to access this model
---

# Stable Diffusion v1-3 Model Card

Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion with D🧨iffusers blog](https://huggingface.co/blog/stable_diffusion).

The **Stable-Diffusion-v1-3** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2) checkpoint and subsequently fine-tuned for 195,000 steps at resolution `512x512` on "laion-improved-aesthetics", with 10 % of the text-conditioning dropped to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). For more information, please refer to [Training](#training).

The weights here are intended to be used with the D🧨iffusers library. If you are looking for the weights to be loaded into the CompVis Stable Diffusion codebase, [come here](https://huggingface.co/CompVis/stable-diffusion-v-1-3-original).

## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:** @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } ## Examples We recommend using [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion. ```bash pip install --upgrade diffusers transformers scipy ``` Running the pipeline with the default PNDM scheduler: ```python import torch from torch import autocast from diffusers import StableDiffusionPipeline model_id = "CompVis/stable-diffusion-v1-3" device = "cuda" pipe = StableDiffusionPipeline.from_pretrained(model_id) pipe = pipe.to(device) prompt = "a photo of an astronaut riding a horse on mars" with autocast("cuda"): image = pipe(prompt, guidance_scale=7.5)["sample"][0] image.save("astronaut_rides_horse.png") ``` **Note**: If you are limited by GPU memory and have less than 10GB of GPU RAM available, please make sure to load the StableDiffusionPipeline in float16 precision instead of the default float32 precision as done above. You can do so by telling diffusers to expect the weights to be in float16 precision: ```py import torch pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe = pipe.to(device) prompt = "a photo of an astronaut riding a horse on mars" with autocast("cuda"): image = pipe(prompt, guidance_scale=7.5)["sample"][0] image.save("astronaut_rides_horse.png") ``` To swap out the noise scheduler, pass it to `from_pretrained`: ```python from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler model_id = "CompVis/stable-diffusion-v1-3" # Use the K-LMS scheduler here instead scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000) pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, use_auth_token=True) pipe = pipe.to("cuda") prompt = "a photo of an astronaut riding a horse on mars" with autocast("cuda"): image = pipe(prompt, guidance_scale=7.5)["sample"][0] image.save("astronaut_rides_horse.png") ``` # Uses ## Direct Use The model is intended for research purposes only. Possible research areas and tasks include - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models. - Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. - Research on generative models. Excluded uses are described below. ### Misuse, Malicious Use, and Out-of-Scope Use _Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_. The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. #### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. 
#### Misuse and Malicious Use Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to: - Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc. - Intentionally promoting or propagating discriminatory content or harmful stereotypes. - Impersonating individuals without their consent. - Sexual content without consent of the people who might see it. - Mis- and disinformation - Representations of egregious violence and gore - Sharing of copyrighted or licensed material in violation of its terms of use. - Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use. ## Limitations and Bias ### Limitations - The model does not achieve perfect photorealism - The model cannot render legible text - The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere” - Faces and people in general may not be generated properly. - The model was trained mainly with English captions and will not work as well in other languages. - The autoencoding part of the model is lossy - The model was trained on a large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material and is not fit for product use without additional safety mechanisms and considerations. - No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data. The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images. ### Bias While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), which consists of images that are primarily limited to English descriptions. Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model, as white and western cultures are often set as the default. Further, the ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts. ## Training ### Training Data The model developers used the following dataset for training the model: - LAION-2B (en) and subsets thereof (see next section) ### Training Procedure Stable Diffusion v1-4 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training, - Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 - Text prompts are encoded through a ViT-L/14 text-encoder. - The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention. - The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet. We currently provide four checkpoints, which were trained as follows. 
- [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en). 194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`). - [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`. 515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en, filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)). - [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2`. 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). - [**`stable-diffusion-v1-4`**](https://huggingface.co/CompVis/stable-diffusion-v1-4) Resumed from `stable-diffusion-v1-2`.225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). ### Training details - **Hardware:** 32 x 8 x A100 GPUs - **Optimizer:** AdamW - **Gradient Accumulations**: 2 - **Batch:** 32 x 8 x 2 x 4 = 2048 - **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant ## Evaluation Results Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling steps show the relative improvements of the checkpoints: ![pareto](https://huggingface.co/CompVis/stable-diffusion/resolve/main/v1-variants-scores.jpg) Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores. ## Environmental Impact **Stable Diffusion v1** **Estimated Emissions** Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact. - **Hardware Type:** A100 PCIe 40GB - **Hours used:** 150000 - **Cloud Provider:** AWS - **Compute Region:** US-east - **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq. ## Citation ```bibtex @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } ``` *This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
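To make the autoencoder description under "Training Procedure" concrete, here is a minimal sketch (not from the original card) of the 8x spatial downsampling into 4-channel latents. It assumes a recent diffusers version where `AutoencoderKL.encode` returns a `latent_dist`:

```python
import torch
from diffusers import AutoencoderKL

# Hedged sketch: encode a dummy 512x512 image with the pipeline's VAE to show
# the H x W x 3 -> H/8 x W/8 x 4 mapping described in the training procedure.
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-3", subfolder="vae")

image = torch.randn(1, 3, 512, 512)  # stand-in for a preprocessed image scaled to [-1, 1]
with torch.no_grad():
    latents = vae.encode(image).latent_dist.sample()

print(latents.shape)  # torch.Size([1, 4, 64, 64])
```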
13,976
[ [ -0.027923583984375, -0.06005859375, 0.0293731689453125, 0.017791748046875, -0.020233154296875, -0.027923583984375, 0.0017299652099609375, -0.0245208740234375, -0.004596710205078125, 0.03216552734375, -0.0230712890625, -0.039306640625, -0.053253173828125, -0....
NlpHUST/vi-word-segmentation
2022-10-30T09:45:24.000Z
[ "transformers", "pytorch", "electra", "token-classification", "word segmentation", "vi", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
token-classification
NlpHUST
null
null
NlpHUST/vi-word-segmentation
1
699
transformers
2022-10-30T04:48:30
--- widget: - text: "Phát biểu tại phiên thảo luận về tình hình kinh tế xã hội của Quốc hội sáng 28/10 , Bộ trưởng Bộ LĐ-TB&XH Đào Ngọc Dung khái quát , tại phiên khai mạc kỳ họp , lãnh đạo chính phủ đã báo cáo , đề cập tương đối rõ ràng về việc thực hiện các chính sách an sinh xã hội" tags: - word segmentation language: - vi metrics: - precision - recall - f1 - accuracy model-index: - name: vi-word-segmentation results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vi-word-segmentation This model is a fine-tuned version of [NlpHUST/electra-base-vn](https://huggingface.co/NlpHUST/electra-base-vn) on an vlsp 2013 vietnamese word segmentation dataset. It achieves the following results on the evaluation set: - Loss: 0.0501 - Precision: 0.9833 - Recall: 0.9838 - F1: 0.9835 - Accuracy: 0.9911 ## Model description More information needed ## Intended uses & limitations You can use this model with Transformers *pipeline* for NER. ```python from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline tokenizer = AutoTokenizer.from_pretrained("NlpHUST/vi-word-segmentation") model = AutoModelForTokenClassification.from_pretrained("NlpHUST/vi-word-segmentation") nlp = pipeline("token-classification", model=model, tokenizer=tokenizer) example = "Phát biểu tại phiên thảo luận về tình hình kinh tế xã hội của Quốc hội sáng 28/10 , Bộ trưởng Bộ LĐ-TB&XH Đào Ngọc Dung khái quát , tại phiên khai mạc kỳ họp , lãnh đạo chính phủ đã báo cáo , đề cập tương đối rõ ràng về việc thực hiện các chính sách an sinh xã hội" ner_results = nlp(example) example_tok = "" for e in ner_results: if "##" in e["word"]: example_tok = example_tok + e["word"].replace("##","") elif e["entity"] =="I": example_tok = example_tok + "_" + e["word"] else: example_tok = example_tok + " " + e["word"] print(example_tok) Phát_biểu tại phiên thảo_luận về tình_hình kinh_tế xã_hội của Quốc_hội sáng 28 / 10 , Bộ_trưởng Bộ LĐ - TB [UNK] XH Đào_Ngọc_Dung khái_quát , tại phiên khai_mạc kỳ họp , lãnh_đạo chính_phủ đã báo_cáo , đề_cập tương_đối rõ_ràng về việc thực_hiện các chính_sách an_sinh xã_hội ``` ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0168 | 1.0 | 4712 | 0.0284 | 0.9813 | 0.9825 | 0.9819 | 0.9904 | | 0.0107 | 2.0 | 9424 | 0.0350 | 0.9789 | 0.9814 | 0.9802 | 0.9895 | | 0.005 | 3.0 | 14136 | 0.0364 | 0.9826 | 0.9843 | 0.9835 | 0.9909 | | 0.0033 | 4.0 | 18848 | 0.0434 | 0.9830 | 0.9831 | 0.9830 | 0.9908 | | 0.0017 | 5.0 | 23560 | 0.0501 | 0.9833 | 0.9838 | 0.9835 | 0.9911 | ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
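For reference, the hyperparameters listed above correspond roughly to the following `TrainingArguments`; this is a sketch only, since the actual training script is not included in the card, and `output_dir` is a placeholder:

```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters for token-classification fine-tuning.
training_args = TrainingArguments(
    output_dir="vi-word-segmentation",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,  # total train batch size 16
    num_train_epochs=5.0,
    lr_scheduler_type="linear",
    seed=42,
)
```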
3,519
[ [ -0.03472900390625, -0.05426025390625, 0.0158538818359375, 0.007076263427734375, -0.0254974365234375, -0.030029296875, -0.00849151611328125, -0.01493072509765625, 0.026763916015625, 0.0274658203125, -0.034759521484375, -0.050628662109375, -0.048065185546875, ...
multimodalart/sd-fine-tunable
2022-11-26T14:17:28.000Z
[ "diffusers", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
null
multimodalart
null
null
multimodalart/sd-fine-tunable
6
699
diffusers
2022-11-17T00:51:44
---
license: creativeml-openrail-m
---

Stable Diffusion v1-5 with the fine-tuned VAE `sd-vae-ft-mse`, plus configuration modifications that make it easier to fine-tune, prepared by [fast-stable-diffusion by TheLastBen](https://github.com/TheLastBen/fast-stable-diffusion). It is meant to be used with the [fast-DreamBooth Colab Notebook](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) and the [Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training).

It is not suited for inference, and training it elsewhere is at your own risk. The [model LICENSE](https://huggingface.co/spaces/CompVis/stable-diffusion-license) still applies normally for this use case.

Refer to the [original repository](https://huggingface.co/runwayml/stable-diffusion-v1-5) for the model card.
837
[ [ -0.037322998046875, -0.06060791015625, 0.033050537109375, 0.044708251953125, -0.039794921875, -0.0008716583251953125, 0.01296234130859375, -0.0313720703125, 0.03564453125, 0.01812744140625, -0.031402587890625, -0.033935546875, -0.03814697265625, -0.018661499...
timm/beit_large_patch16_224.in22k_ft_in22k
2023-05-08T23:23:11.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-22k", "arxiv:2106.08254", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/beit_large_patch16_224.in22k_ft_in22k
0
698
timm
2022-12-23T02:26:33
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-22k
---
# Model card for beit_large_patch16_224.in22k_ft_in22k

A BEiT image classification model. Trained on ImageNet-22k with self-supervised masked image modelling (MIM) using a DALL-E dVAE as visual tokenizer. Fine-tuned on ImageNet-22k.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 325.8
  - GMACs: 61.6
  - Activations (M): 63.5
  - Image size: 224 x 224
- **Papers:**
  - BEiT: BERT Pre-Training of Image Transformers: https://arxiv.org/abs/2106.08254
  - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-22k
- **Pretrain Dataset:** ImageNet-22k
- **Original:** https://github.com/microsoft/unilm/tree/master/beit

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('beit_large_patch16_224.in22k_ft_in22k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'beit_large_patch16_224.in22k_ft_in22k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 197, 1024) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation ```bibtex @article{bao2021beit, title={Beit: Bert pre-training of image transformers}, author={Bao, Hangbo and Dong, Li and Piao, Songhao and Wei, Furu}, journal={arXiv preprint arXiv:2106.08254}, year={2021} } ``` ```bibtex @article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
3,705
[ [ -0.043609619140625, -0.027252197265625, 0.002819061279296875, 0.0123138427734375, -0.0304718017578125, -0.017120361328125, -0.0208587646484375, -0.040069580078125, 0.0176544189453125, 0.026885986328125, -0.040802001953125, -0.047943115234375, -0.05712890625, ...
digiplay/MGM
2023-08-18T17:26:49.000Z
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
digiplay
null
null
digiplay/MGM
1
698
diffusers
2023-08-16T19:39:37
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info : https://civitai.com/models/109568/mgmv1 Sample image generated by huggingface's API : ![d62d6ad6-7630-4862-902a-04fe8cbcc9eb.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/m49ZPTU-UKk5DbyeC4T-Z.jpeg) ![aa4c19ec-5792-41c0-98ba-7b7e426155f9.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/oMMNN240wk_xxq2K9NuqN.jpeg) ![79d3ed81-6560-4c6d-9384-652b231ad76e.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/SL7RPJXsisjBSAO1EVV7m.jpeg) ![c166a1a1-2d4a-4faa-9df0-6157611cbf02.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/nhuJ-jzf85Q4KewHddVOT.jpeg)
834
[ [ -0.04632568359375, -0.0386962890625, 0.0280303955078125, 0.034393310546875, -0.0260162353515625, -0.002777099609375, 0.031890869140625, -0.0263519287109375, 0.054412841796875, 0.030853271484375, -0.0830078125, -0.042022705078125, -0.04107666015625, 0.0001024...
timm/tf_efficientnet_cc_b1_8e.in1k
2023-04-27T21:28:54.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:1904.04971", "arxiv:1905.11946", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/tf_efficientnet_cc_b1_8e.in1k
0
697
timm
2022-12-13T00:08:17
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for tf_efficientnet_cc_b1_8e.in1k

An EfficientNet-CondConv image classification model. Trained on ImageNet-1k in TensorFlow by paper authors, ported to PyTorch by Ross Wightman.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 39.7
  - GMACs: 0.7
  - Activations (M): 15.4
  - Image size: 240 x 240
- **Papers:**
  - CondConv: Conditionally Parameterized Convolutions for Efficient Inference: https://arxiv.org/abs/1904.04971
  - EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks: https://arxiv.org/abs/1905.11946
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('tf_efficientnet_cc_b1_8e.in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'tf_efficientnet_cc_b1_8e.in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 16, 120, 120])
    #  torch.Size([1, 24, 60, 60])
    #  torch.Size([1, 40, 30, 30])
    #  torch.Size([1, 112, 15, 15])
    #  torch.Size([1, 320, 8, 8])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'tf_efficientnet_cc_b1_8e.in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1280, 8, 8) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation ```bibtex @article{yang2019condconv, title={Condconv: Conditionally parameterized convolutions for efficient inference}, author={Yang, Brandon and Bender, Gabriel and Le, Quoc V and Ngiam, Jiquan}, journal={Advances in Neural Information Processing Systems}, volume={32}, year={2019} } ``` ```bibtex @inproceedings{tan2019efficientnet, title={Efficientnet: Rethinking model scaling for convolutional neural networks}, author={Tan, Mingxing and Le, Quoc}, booktitle={International conference on machine learning}, pages={6105--6114}, year={2019}, organization={PMLR} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
4,518
[ [ -0.030517578125, -0.039276123046875, 0.00026869773864746094, 0.0074310302734375, -0.0236053466796875, -0.0364990234375, -0.0272674560546875, -0.0263214111328125, 0.01068878173828125, 0.027069091796875, -0.0318603515625, -0.052215576171875, -0.058685302734375, ...
Norod78/SDXL-StickerSheet-Lora
2023-08-31T09:12:04.000Z
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "en", "license:mit", "region:us", "has_space" ]
text-to-image
Norod78
null
null
Norod78/SDXL-StickerSheet-Lora
14
697
diffusers
2023-08-31T09:04:29
--- license: mit base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: StickerSheet tags: - text-to-image - stable-diffusion - lora - diffusers widget: - text: Cute sparkle pink barbie StickerSheet - text: Cthulhu StickerSheet based on H.P Lovecraft stories - text: Cute sparkle rainbow kitten StickerSheet, Eric Wallis - text: Cute socially awkward potato StickerSheet inference: true language: - en --- # Trigger words Use "StickerSheet" in your prompts # Examples Cute sparkle pink barbie StickerSheet, Very detailed, clean, high quality, sharp image, Eric Wallis ![Sparkle pink barbie](https://huggingface.co/Norod78/SDXL-StickerSheet-Lora/resolve/main/Examples/00076-20230831113822-7778-Cute%20sparkle%20pink%20barbie%20StickerSheet%20%20%2C%20Very%20detailed%2C%20clean%2C%20high%20quality%2C%20sharp%20image%2C%20Eric%20Wallis-before-highres-fix.jpg) Cthulhu StickerSheet, based on H.P Lovecraft stories, Very detailed, clean, high quality, sharp image ![Cthulhu](https://huggingface.co/Norod78/SDXL-StickerSheet-Lora/resolve/main/Examples/00073-20230831113700-7780-Cthulhu%20StickerSheet%20%20_lora_SDXL-StickerSheet-Lora_1_%2C%20based%20on%20H.P%20Lovecraft%20stories%2C%20Very%20detailed%2C%20clean%2C%20high%20quality%2C%20sharp.jpg)
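The following is a minimal diffusers sketch (not part of the original card) for applying this LoRA on top of the SDXL base model named in the metadata. It assumes a recent diffusers release and that `load_lora_weights` can auto-detect the LoRA file in the repository; otherwise pass `weight_name=` explicitly:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Hedged example: load the SDXL base model, attach the sticker-sheet LoRA,
# and include the "StickerSheet" trigger word in the prompt.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Norod78/SDXL-StickerSheet-Lora")

image = pipe("Cute sparkle rainbow kitten StickerSheet, Eric Wallis").images[0]
image.save("sticker_sheet.png")
```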
1,264
[ [ -0.0316162109375, -0.05133056640625, 0.0269622802734375, 0.032562255859375, -0.0270233154296875, 0.004566192626953125, 0.01348114013671875, -0.00649261474609375, 0.038055419921875, 0.046661376953125, -0.04754638671875, -0.047454833984375, -0.059173583984375, ...
ProomptEngineer/pe-neon-sign-style
2023-09-11T15:21:13.000Z
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "license:other", "has_space", "region:us" ]
text-to-image
ProomptEngineer
null
null
ProomptEngineer/pe-neon-sign-style
1
697
diffusers
2023-09-11T15:21:08
---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: PENeonSign
widget:
- text: PENeonSign
---

# PE Neon Sign [Style]

![Image 0](2266232.jpeg)

Your favorite character as a neon sign...

Weights: 0.8-1

## If you want to donate:

## [https://ko-fi.com/proomptengineer](https://ko-fi.com/proomptengineer)

## Image examples for the model:

![Image 1](2266242.jpeg)
![Image 2](2266231.jpeg)
![Image 3](2266233.jpeg)
![Image 4](2266230.jpeg)
![Image 5](2266236.jpeg)
![Image 6](2266235.jpeg)
![Image 7](2266239.jpeg)
![Image 8](2266238.jpeg)
![Image 9](2266240.jpeg)
760
[ [ -0.0192413330078125, -0.010528564453125, 0.03643798828125, 0.0136566162109375, -0.05767822265625, 0.003345489501953125, 0.03228759765625, -0.01214599609375, 0.056640625, 0.042449951171875, -0.039215087890625, -0.016510009765625, -0.0400390625, 0.019577026367...
digiplay/OldFish_v1.1
2023-09-19T21:25:44.000Z
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
digiplay
null
null
digiplay/OldFish_v1.1
3
697
diffusers
2023-09-19T19:13:21
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---

Model info: https://civitai.com/models/14978?modelVersionId=22052

Note: the API version is a special build for diffusers and is not the same as the original version; you can download both the diffusers version and the original version here:
https://huggingface.co/digiplay/OldFish_v1.1/tree/main

Generated by Hugging Face's API:

![f52a8ef2-eac9-4788-9d7a-080a731b6d0a.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/voXwR7xe58t1eVlp4dAAe.jpeg)

Prompt: beautiful women sitting in many books, golden hair, high res, just two hands
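A minimal diffusers sketch (not part of the original card) for trying the diffusers build of this checkpoint with the sample prompt above; fp16 and a CUDA device are assumptions, not requirements:

```python
import torch
from diffusers import StableDiffusionPipeline

# Hedged example using the diffusers-compatible files in this repository.
pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/OldFish_v1.1", torch_dtype=torch.float16
).to("cuda")

prompt = "beautiful women sitting in many books, golden hair, high res, just two hands"
image = pipe(prompt).images[0]
image.save("oldfish_sample.png")
```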
670
[ [ -0.041015625, -0.04779052734375, 0.0285797119140625, 0.033111572265625, -0.033203125, -0.01181793212890625, 0.0308380126953125, -0.049346923828125, 0.04437255859375, 0.042633056640625, -0.051422119140625, -0.0153350830078125, -0.029815673828125, 0.0129241943...
nateraw/vit-base-patch16-224-cifar10
2022-01-28T10:22:01.000Z
[ "transformers", "pytorch", "vit", "image-classification", "vision", "dataset:cifar10", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
nateraw
null
null
nateraw/vit-base-patch16-224-cifar10
6
696
transformers
2022-03-02T23:29:05
--- tags: - image-classification - vision - pytorch license: apache-2.0 datasets: - cifar10 metrics: - accuracy thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4 --- # Vision Transformer Fine Tuned on CIFAR10 Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) and **fine-tuned on CIFAR10** at resolution 224x224. Check out the code at my [my Github repo](https://github.com/nateraw/huggingface-vit-finetune). ## Usage ```python from transformers import ViTFeatureExtractor, ViTForImageClassification from PIL import Image import requests url = 'https://www.cs.toronto.edu/~kriz/cifar-10-sample/dog10.png' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = ViTFeatureExtractor.from_pretrained('nateraw/vit-base-patch16-224-cifar10') model = ViTForImageClassification.from_pretrained('nateraw/vit-base-patch16-224-cifar10') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) preds = outputs.logits.argmax(dim=1) classes = [ 'airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck' ] classes[preds[0]] ``` ## Model description The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. Note that this model does not provide any fine-tuned heads, as these were zero'd by Google researchers. However, the model does include the pre-trained pooler, which can be used for downstream tasks (such as image classification). By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
2,441
[ [ -0.06072998046875, -0.0162811279296875, 0.0058746337890625, 0.0019073486328125, -0.0360107421875, -0.008758544921875, 0.000621795654296875, -0.038330078125, 0.0231170654296875, 0.03863525390625, -0.03204345703125, -0.01065826416015625, -0.058441162109375, -0...
timm/flexivit_base.600ep_in1k
2023-05-05T23:58:42.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2212.08013", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/flexivit_base.600ep_in1k
0
696
timm
2022-12-22T07:13:29
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for flexivit_base.600ep_in1k

A FlexiViT image classification model. Trained on ImageNet-1k in JAX by paper authors, ported to PyTorch by Ross Wightman.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 86.6
  - GMACs: 19.4
  - Activations (M): 18.9
  - Image size: 240 x 240
- **Papers:**
  - FlexiViT: One Model for All Patch Sizes: https://arxiv.org/abs/2212.08013
  - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/google-research/big_vision

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('flexivit_base.600ep_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'flexivit_base.600ep_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 226, 768) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation ```bibtex @article{beyer2022flexivit, title={FlexiViT: One Model for All Patch Sizes}, author={Beyer, Lucas and Izmailov, Pavel and Kolesnikov, Alexander and Caron, Mathilde and Kornblith, Simon and Zhai, Xiaohua and Minderer, Matthias and Tschannen, Michael and Alabdulmohsin, Ibrahim and Pavetic, Filip}, journal={arXiv preprint arXiv:2212.08013}, year={2022} } ``` ```bibtex @article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
3,696
[ [ -0.037628173828125, -0.027984619140625, 0.00424957275390625, 0.0048828125, -0.0256195068359375, -0.0289154052734375, -0.018280029296875, -0.03631591796875, 0.0158233642578125, 0.0167694091796875, -0.042694091796875, -0.040985107421875, -0.045074462890625, -0...
TheBloke/Xwin-LM-13B-v0.2-AWQ
2023-10-15T01:16:48.000Z
[ "transformers", "safetensors", "llama", "text-generation", "license:llama2", "text-generation-inference", "region:us" ]
text-generation
TheBloke
null
null
TheBloke/Xwin-LM-13B-v0.2-AWQ
1
696
transformers
2023-10-15T00:54:33
--- base_model: Xwin-LM/Xwin-LM-13B-V0.2 inference: false license: llama2 model_creator: Xwin-LM model_name: Xwin LM 13B v0.2 model_type: llama prompt_template: 'A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user''s questions. USER: {prompt} ASSISTANT: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Xwin LM 13B v0.2 - AWQ - Model creator: [Xwin-LM](https://huggingface.co/Xwin-LM) - Original model: [Xwin LM 13B v0.2](https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.2) <!-- description start --> ## Description This repo contains AWQ model files for [Xwin-LM's Xwin LM 13B v0.2](https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.2). ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference. It is also now supported by continuous batching server [vLLM](https://github.com/vllm-project/vllm), allowing use of Llama AWQ models for high-throughput concurrent inference in multi-user server scenarios. As of September 25th 2023, preliminary Llama-only AWQ support has also been added to [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference). Note that, at the time of writing, overall throughput is still lower than running vLLM or TGI with unquantised models, however using AWQ enables using much smaller GPUs which can lead to easier deployment and overall cost savings. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Xwin-LM-13B-v0.2-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Xwin-LM-13B-v0.2-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Xwin-LM-13B-v0.2-GGUF) * [Xwin-LM's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.2) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Vicuna ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. 
USER: {prompt} ASSISTANT:

```

<!-- prompt-template end -->

<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters

For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM.

Models are released as sharded safetensors files.

| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Xwin-LM-13B-v0.2-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.25 GB |

<!-- README_AWQ.md-provided-files end -->

<!-- README_AWQ.md-use-from-vllm start -->
## Serving this model from vLLM

Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).

Note: at the time of writing, vLLM has not yet done a new release with AWQ support. If you try the vLLM examples below and get an error about `quantization` being unrecognised, or other AWQ-related issues, please install vLLM from GitHub source.

When using vLLM as a server, pass the `--quantization awq` parameter, for example:

```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Xwin-LM-13B-v0.2-AWQ --quantization awq --dtype half
```

When using vLLM from Python code, pass the `quantization=awq` parameter, for example:

```python
from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="TheBloke/Xwin-LM-13B-v0.2-AWQ", quantization="awq", dtype="half")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->

<!-- README_AWQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)

Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/Xwin-LM-13B-v0.2-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```

Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
prompt_template=f'''A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: {prompt} ASSISTANT: ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: {response}") ``` <!-- README_AWQ.md-use-from-tgi end --> <!-- README_AWQ.md-use-from-python start --> ## How to use this AWQ model from Python code ### Install the necessary packages Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.1 or later ```shell pip3 install autoawq ``` If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y autoawq git clone https://github.com/casper-hansen/AutoAWQ cd AutoAWQ pip3 install . ``` ### You can then try the following example code ```python from awq import AutoAWQForCausalLM from transformers import AutoTokenizer model_name_or_path = "TheBloke/Xwin-LM-13B-v0.2-AWQ" # Load model model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True, trust_remote_code=False, safetensors=True) tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False) prompt = "Tell me about AI" prompt_template=f'''A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT: ''' print("\n\n*** Generate:") tokens = tokenizer( prompt_template, return_tensors='pt' ).input_ids.cuda() # Generate output generation_output = model.generate( tokens, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, max_new_tokens=512 ) print("Output: ", tokenizer.decode(generation_output[0])) """ # Inference should be possible with transformers pipeline as well in future # But currently this is not yet supported by AutoAWQ (correct as of September 25th 2023) from transformers import pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) """ ``` <!-- README_AWQ.md-use-from-python end --> <!-- README_AWQ.md-compatibility start --> ## Compatibility The files provided are tested to work with: - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - [vLLM](https://github.com/vllm-project/vllm) - [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) TGI merged AWQ support on September 25th, 2023: [TGI PR #1054](https://github.com/huggingface/text-generation-inference/pull/1054). Use the `:latest` Docker container until the next TGI release is made. <!-- README_AWQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. 
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Xwin-LM's Xwin LM 13B v0.2 <h3 align="center"> Xwin-LM: Powerful, Stable, and Reproducible LLM Alignment </h3> <p align="center"> <a href="https://github.com/Xwin-LM/Xwin-LM"><img src="https://img.shields.io/badge/GitHub-yellow.svg?style=social&logo=github"></a><a href="https://huggingface.co/Xwin-LM"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue"></a> </p> **Step up your LLM alignment with Xwin-LM!** Xwin-LM aims to develop and open-source alignment technologies for large language models, including supervised fine-tuning (SFT), reward models (RM), reject sampling, reinforcement learning from human feedback (RLHF), etc. Our first release, built-upon on the Llama2 base models, ranked **TOP-1** on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/). Notably, it's **the first to surpass GPT-4** on this benchmark. The project will be continuously updated. ## News - 💥 [Oct 12, 2023] [Xwin-LM-7B-V0.2](https://huggingface.co/Xwin-LM/Xwin-LM-7B-V0.2) and [Xwin-LM-13B-V0.2](https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.2) have been released, with improved comparison data and RL training (i.e., PPO). Their winrates v.s. GPT-4 have increased significantly, reaching **59.83%** (7B model) and **70.36%** (13B model) respectively. The 70B model will be released soon. 
- 💥 [Sep, 2023] We released [Xwin-LM-70B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1), which has achieved a win-rate against Davinci-003 of **95.57%** on the [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) benchmark, ranking as **TOP-1** on AlpacaEval. **It was the FIRST model surpassing GPT-4** on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/). Also note that its win-rate v.s. GPT-4 is **60.61%**. - 🔍 [Sep, 2023] RLHF plays a crucial role in the strong performance of the Xwin-LM-V0.1 release! - 💥 [Sep, 2023] We released [Xwin-LM-13B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.1), which has achieved a **91.76%** win-rate on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/), ranking as **top-1** among all 13B models. - 💥 [Sep, 2023] We released [Xwin-LM-7B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-7B-V0.1), which has achieved an **87.82%** win-rate on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/), ranking as **top-1** among all 7B models. ## Model Card | Model | Checkpoint | Report | License | |------------|------------|-------------|------------------| |Xwin-LM-7B-V0.2| 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-LM-7B-V0.2" target="_blank">HF Link</a> | 📃**Coming soon (Stay tuned)** | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License| |Xwin-LM-13B-V0.2| 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.2" target="_blank">HF Link</a> | | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License| |Xwin-LM-7B-V0.1| 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-LM-7B-V0.1" target="_blank">HF Link</a> | | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License| |Xwin-LM-13B-V0.1| 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.1" target="_blank">HF Link</a> | | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License| |Xwin-LM-70B-V0.1| 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1" target="_blank">HF Link</a> | | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License| ## Benchmarks ### Xwin-LM performance on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/). The table below displays the performance of Xwin-LM on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/), which evaluates its win-rate against Text-Davinci-003 across 805 questions. To provide a comprehensive evaluation, we present, for the first time, the win-rate against ChatGPT and GPT-4 as well. Our Xwin-LM model family establishes a new state-of-the-art performance across all metrics. Notably, Xwin-LM-70B-V0.1 has eclipsed GPT-4 for the first time, achieving an impressive win-rate of **95.57%** against Text-Davinci-003 and **60.61%** against GPT-4. | **Model** | **AlpacaEval (winrate %)** | **AlpacaEval (winrate %)** |**AlpacaEval (winrate %)** | |----------------------------------|------------|----------|-------------| | | **v.s. Text-Davinci-003** | **v.s. ChatGPT** | **v.s. 
GPT4**| | **Xwin-LM-70B-V0.1** | **95.57** | **87.50** | **60.61** | | GPT-4 | 95.28 | 84.66 | 50.00 | | **Xwin-LM-13B-V0.2** | **93.22** | **87.14** | **70.36** | | WizardLM-70B-V1.0 | 92.91 | 80.19 | 46.70 | | Llama-2-70B-Chat | 92.66 | 81.97 | 51.19 | | **Xwin-LM-13B-V0.1** | **91.76** | **81.79** | **55.30** | | ChatGPT | 89.37 | 50.00 | 16.60 | | **Xwin-LM-7B-V0.2** | **89.31** | **79.60** | **59.83** | | WizardLM-13B-V1.2 | 89.17 | 75.75 | 41.91 | | **Xwin-LM-7B-V0.1** | **87.35** | **76.40** | **47.57** | | Llama-2-13B-Chat | 81.09 | 64.22 | 30.92 | ### Case study on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) | Prompts | GPT4 response | Xwin-LM response | |----------------|------------|--------| |I've put a plate on top of an apple, and take the plate from kitchen to dinning room. Where is the apple now|The apple is now in the dining room, as it was underneath the plate when you moved the plate from the kitchen to the dining room.|The apple is still in the kitchen, as you have only moved the plate. The apple is under the plate, so if you want to bring the apple to the dining room, you should either move the apple directly or put the apple on the plate and then move the plate to the dining room.| | Answer the following question.<br> How do you say "good evening" in French. | In French, "good evening" is said as "bonsoir." | In French, "good evening" is translated to "bonsoir." This greeting is used in the evening and nighttime hours, typically after 6 p.m. It is a combination of "bon," which means "good," and "soir," which means "evening." To pronounce it, say "bone-swahr." | ### Xwin-LM performance on NLP foundation tasks. The following table provides a comparison of Xwin-LMs with other LLMs on NLP foundation tasks in [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). | Model | MMLU 5-shot | ARC 25-shot | TruthfulQA 0-shot | HellaSwag 10-shot | Average | |------------------|-------------|-------------|-------------------|-------------------|------------| | Text-davinci-003 | 56.9 | **85.2** | 59.3 | 82.2 | 70.9 | |Vicuna-13b 1.1 | 51.3 | 53.0 | 51.8 | 80.1 | 59.1 | |Guanaco 30B | 57.6 | 63.7 | 50.7 | 85.1 | 64.3 | | WizardLM-7B 1.0 | 42.7 | 51.6 | 44.7 | 77.7 | 54.2 | | WizardLM-13B 1.0 | 52.3 | 57.2 | 50.5 | 81.0 | 60.2 | | WizardLM-30B 1.0 | 58.8 | 62.5 | 52.4 | 83.3 | 64.2| | Llama-2-7B-Chat | 48.3 | 52.9 | 45.6 | 78.6 | 56.4 | | Llama-2-13B-Chat | 54.6 | 59.0 | 44.1 | 81.9 | 59.9 | | Llama-2-70B-Chat | 63.9 | 64.6 | 52.8 | 85.9 | 66.8 | | **Xwin-LM-7B-V0.1** | 49.7 | 56.2 | 48.1 | 79.5 | 58.4 | | **Xwin-LM-13B-V0.1** | 56.6 | 62.4 | 45.5 | 83.0 | 61.9 | | **Xwin-LM-70B-V0.1** | **69.6** | 70.5 | **60.1** | **87.1** | **71.8** | | **Xwin-LM-7B-V0.2** | 50.0 | 56.4 | 49.5 | 78.9 | 58.7 | | **Xwin-LM-13B-V0.2** | 56.6 | 61.5 | 43.8 | 82.9 | 61.2 | ## Inference ### Conversation Template To obtain desired results, please strictly follow the conversation templates when utilizing our model for inference. Our model adopts the prompt format established by [Vicuna](https://github.com/lm-sys/FastChat) and is equipped to support **multi-turn** conversations. ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi! ASSISTANT: Hello.</s>USER: Who are you? ASSISTANT: I am Xwin-LM.</s>...... 
``` ### HuggingFace Example ```python from transformers import AutoTokenizer, AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("Xwin-LM/Xwin-LM-7B-V0.1") tokenizer = AutoTokenizer.from_pretrained("Xwin-LM/Xwin-LM-7B-V0.1") ( prompt := "A chat between a curious user and an artificial intelligence assistant. " "The assistant gives helpful, detailed, and polite answers to the user's questions. " "USER: Hello, can you help me? " "ASSISTANT:" ) inputs = tokenizer(prompt, return_tensors="pt") samples = model.generate(**inputs, max_new_tokens=4096, temperature=0.7) output = tokenizer.decode(samples[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True) print(output) # Of course! I'm here to help. Please feel free to ask your question or describe the issue you're having, and I'll do my best to assist you. ``` ### vLLM Example Because Xwin-LM is based on Llama2, it also offers support for rapid inference using [vLLM](https://github.com/vllm-project/vllm). Please refer to [vLLM](https://github.com/vllm-project/vllm) for detailed installation instructions. ```python from vllm import LLM, SamplingParams ( prompt := "A chat between a curious user and an artificial intelligence assistant. " "The assistant gives helpful, detailed, and polite answers to the user's questions. " "USER: Hello, can you help me? " "ASSISTANT:" ) sampling_params = SamplingParams(temperature=0.7, max_tokens=4096) llm = LLM(model="Xwin-LM/Xwin-LM-7B-V0.1") outputs = llm.generate([prompt,], sampling_params) for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(generated_text) ``` ## TODO - [ ] Release the source code - [ ] Release more capabilities, such as math, reasoning, and etc. ## Citation Please consider citing our work if you use the data or code in this repo. ``` @software{xwin-lm, title = {Xwin-LM}, author = {Xwin-LM Team}, url = {https://github.com/Xwin-LM/Xwin-LM}, version = {pre-release}, year = {2023}, month = {9}, } ``` ## Acknowledgements Thanks to [Llama 2](https://ai.meta.com/llama/), [FastChat](https://github.com/lm-sys/FastChat), [AlpacaFarm](https://github.com/tatsu-lab/alpaca_farm), and [vLLM](https://github.com/vllm-project/vllm).
23,611
[ [ -0.040618896484375, -0.061767578125, 0.033966064453125, 0.0022525787353515625, -0.0142059326171875, -0.007843017578125, 0.0043792724609375, -0.042266845703125, 0.00014412403106689453, 0.0254669189453125, -0.055389404296875, -0.041168212890625, -0.022232055664062...
fhswf/bert_de_ner
2023-03-20T11:38:30.000Z
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "token-classification", "German", "de", "NER", "dataset:germeval_14", "doi:10.57967/hf/0655", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
fhswf
null
null
fhswf/bert_de_ner
4
695
transformers
2022-03-02T23:29:05
--- language: de license: cc-by-sa-4.0 datasets: - germeval_14 tags: - German - de - NER --- # BERT-DE-NER ## What is it? This is a German BERT model fine-tuned for named entity recognition. ## Base model & training This model is based on [bert-base-german-dbmdz-cased](https://huggingface.co/bert-base-german-dbmdz-cased) and has been fine-tuned for NER on the training data from [GermEval2014](https://sites.google.com/site/germeval2014ner). ## Model results The results on the test data from GermEval2014 are (entities only): | Precision | Recall | F1-Score | |----------:|-------:|---------:| | 0.817 | 0.842 | 0.829 | ## How to use ```Python >>> from transformers import pipeline >>> classifier = pipeline('ner', model="fhswf/bert_de_ner") >>> classifier('Von der Organisation „medico international“ hieß es, die EU entziehe sich seit vielen Jahren der Verantwortung für die Menschen an ihren Außengrenzen.') [{'word': 'med', 'score': 0.9996621608734131, 'entity': 'B-ORG', 'index': 6}, {'word': '##ico', 'score': 0.9995362162590027, 'entity': 'I-ORG', 'index': 7}, {'word': 'international', 'score': 0.9996932744979858, 'entity': 'I-ORG', 'index': 8}, {'word': 'eu', 'score': 0.9997008442878723, 'entity': 'B-ORG', 'index': 14}] ```
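The pipeline output above is token-level, so multi-token names come back as WordPiece fragments (e.g. 'med', '##ico'). If you prefer whole entity spans, recent versions of 🤗 Transformers can aggregate them — a minimal sketch, assuming the `aggregation_strategy` argument is available in your installed version:

```python
from transformers import pipeline

# Merge sub-word pieces into complete entity spans
classifier = pipeline("ner", model="fhswf/bert_de_ner", aggregation_strategy="simple")

classifier("Von der Organisation „medico international“ hieß es, die EU entziehe sich seit vielen Jahren der Verantwortung für die Menschen an ihren Außengrenzen.")
# e.g. [{'entity_group': 'ORG', 'word': 'medico international', ...},
#       {'entity_group': 'ORG', 'word': 'EU', ...}]
```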
1,273
[ [ -0.03314208984375, -0.0251312255859375, 0.0179443359375, 0.00798797607421875, -0.024261474609375, -0.005695343017578125, -0.01435089111328125, -0.0286102294921875, 0.025909423828125, 0.0265655517578125, -0.0374755859375, -0.050445556640625, -0.0648193359375, ...
timm/flexivit_small.600ep_in1k
2023-05-05T23:59:47.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2212.08013", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/flexivit_small.600ep_in1k
0
695
timm
2022-12-22T07:22:05
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for flexivit_small.600ep_in1k A FlexiViT image classification model. Trained on ImageNet-1k in JAX by paper authors, ported to PyTorch by Ross Wightman. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 22.1 - GMACs: 4.9 - Activations (M): 9.5 - Image size: 240 x 240 - **Papers:** - FlexiViT: One Model for All Patch Sizes: https://arxiv.org/abs/2212.08013 - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 - **Dataset:** ImageNet-1k - **Original:** https://github.com/google-research/big_vision ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('flexivit_small.600ep_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'flexivit_small.600ep_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 226, 384) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). 
## Citation ```bibtex @article{beyer2022flexivit, title={FlexiViT: One Model for All Patch Sizes}, author={Beyer, Lucas and Izmailov, Pavel and Kolesnikov, Alexander and Caron, Mathilde and Kornblith, Simon and Zhai, Xiaohua and Minderer, Matthias and Tschannen, Michael and Alabdulmohsin, Ibrahim and Pavetic, Filip}, journal={arXiv preprint arXiv:2212.08013}, year={2022} } ``` ```bibtex @article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
3,697
[ [ -0.038330078125, -0.0277099609375, 0.00475311279296875, 0.004070281982421875, -0.0259857177734375, -0.02984619140625, -0.0193939208984375, -0.036407470703125, 0.0168304443359375, 0.0165252685546875, -0.042236328125, -0.040679931640625, -0.04473876953125, -0....
timm/swin_base_patch4_window12_384.ms_in22k
2023-03-18T04:06:20.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-22k", "arxiv:2103.14030", "license:mit", "region:us" ]
image-classification
timm
null
null
timm/swin_base_patch4_window12_384.ms_in22k
0
695
timm
2023-03-18T04:05:37
--- tags: - image-classification - timm library_tag: timm license: mit datasets: - imagenet-22k --- # Model card for swin_base_patch4_window12_384.ms_in22k A Swin Transformer image classification model. Pretrained on ImageNet-22k by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 109.3 - GMACs: 47.2 - Activations (M): 134.8 - Image size: 384 x 384 - **Papers:** - Swin Transformer: Hierarchical Vision Transformer using Shifted Windows: https://arxiv.org/abs/2103.14030 - **Original:** https://github.com/microsoft/Swin-Transformer - **Dataset:** ImageNet-22k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('swin_base_patch4_window12_384.ms_in22k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'swin_base_patch4_window12_384.ms_in22k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g. for swin_base_patch4_window7_224 (NHWC output) # torch.Size([1, 56, 56, 128]) # torch.Size([1, 28, 28, 256]) # torch.Size([1, 14, 14, 512]) # torch.Size([1, 7, 7, 1024]) # e.g. 
for swinv2_cr_small_ns_224 (NCHW output) # torch.Size([1, 96, 56, 56]) # torch.Size([1, 192, 28, 28]) # torch.Size([1, 384, 14, 14]) # torch.Size([1, 768, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'swin_base_patch4_window12_384.ms_in22k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled (ie.e a (batch_size, H, W, num_features) tensor for swin / swinv2 # or (batch_size, num_features, H, W) for swinv2_cr output = model.forward_head(output, pre_logits=True) # output is (batch_size, num_features) tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @inproceedings{liu2021Swin, title={Swin Transformer: Hierarchical Vision Transformer using Shifted Windows}, author={Liu, Ze and Lin, Yutong and Cao, Yue and Hu, Han and Wei, Yixuan and Zhang, Zheng and Lin, Stephen and Guo, Baining}, booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
4,418
[ [ -0.0307159423828125, -0.03338623046875, -0.005084991455078125, 0.01221466064453125, -0.0218353271484375, -0.0294647216796875, -0.017608642578125, -0.038330078125, 0.002716064453125, 0.0281219482421875, -0.043548583984375, -0.049102783203125, -0.04510498046875, ...
sileod/mdeberta-v3-base-tasksource-nli
2023-10-19T08:56:15.000Z
[ "transformers", "pytorch", "safetensors", "deberta-v2", "text-classification", "mdeberta-v3-base", "nli", "natural-language-inference", "multilingual", "multitask", "multi-task", "pipeline", "extreme-multi-task", "extreme-mtl", "tasksource", "zero-shot", "rlhf", "zero-shot-classifi...
zero-shot-classification
sileod
null
null
sileod/mdeberta-v3-base-tasksource-nli
9
695
transformers
2023-04-29T17:48:39
--- license: apache-2.0 tags: - mdeberta-v3-base - text-classification - nli - natural-language-inference - multilingual - multitask - multi-task - pipeline - extreme-multi-task - extreme-mtl - tasksource - zero-shot - rlhf datasets: - xnli - metaeval/xnli - americas_nli - MoritzLaurer/multilingual-NLI-26lang-2mil7 - stsb_multi_mt - paws-x - miam - strombergnlp/x-stance - tyqiangz/multilingual-sentiments - metaeval/universal-joy - amazon_reviews_multi - cardiffnlp/tweet_sentiment_multilingual - strombergnlp/offenseval_2020 - offenseval_dravidian - nedjmaou/MLMA_hate_speech - xglue - ylacombe/xsum_factuality - metaeval/x-fact - pasinit/xlwic - tasksource/oasst1_dense_flat - papluca/language-identification - wili_2018 - exams - xcsr - xcopa - juletxara/xstory_cloze - Anthropic/hh-rlhf - universal_dependencies - tasksource/oasst1_pairwise_rlhf_reward - OpenAssistant/oasst1 language: - multilingual - zh - ja - ar - ko - de - fr - es - pt - hi - id - it - tr - ru - bn - ur - mr - ta - vi - fa - pl - uk - nl - sv - he - sw - ps pipeline_tag: zero-shot-classification --- # Model Card for mDeBERTa-v3-base-tasksource-nli Multilingual [mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) with 30k steps multi-task training on [mtasksource](https://github.com/sileod/tasksource/blob/main/mtasks.md) This model can be used as a stable starting-point for further fine-tuning, or directly in zero-shot NLI model or a zero-shot pipeline. In addition, you can use the provided [adapters](https://huggingface.co/sileod/mdeberta-v3-base-tasksource-adapters) to directly load a model for hundreds of tasks. ```python !pip install tasknet, tasksource -q import tasknet as tn pipe=tn.load_pipeline( 'sileod/mdeberta-v3-base-tasksource-nli', 'miam/dihana') pipe(['si','como esta?']) ``` For more details, see [deberta-v3-base-tasksource-nli](https://huggingface.co/sileod/deberta-v3-base-tasksource-nli) and replace tasksource by mtasksource. # Software https://github.com/sileod/tasksource/ https://github.com/sileod/tasknet/ # Contact and citation For help integrating tasksource into your experiments, please contact [damien.sileo@inria.fr](mailto:damien.sileo@inria.fr). For more details, refer to this [article:](https://arxiv.org/abs/2301.05948) ```bib @article{sileo2023tasksource, title={tasksource: Structured Dataset Preprocessing Annotations for Frictionless Extreme Multi-Task Learning and Evaluation}, author={Sileo, Damien}, url= {https://arxiv.org/abs/2301.05948}, journal={arXiv preprint arXiv:2301.05948}, year={2023} } ```
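For the zero-shot use mentioned above, the checkpoint can also be dropped straight into the standard `zero-shot-classification` pipeline. A minimal sketch — the example text and candidate labels are made up for illustration:

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="sileod/mdeberta-v3-base-tasksource-nli")

result = classifier(
    "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU",
    candidate_labels=["politics", "economy", "entertainment", "environment"],
)
print(result["labels"][0])  # highest-scoring label for the (German) input text
```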
2,576
[ [ -0.0187530517578125, -0.0266571044921875, 0.040313720703125, 0.042999267578125, -0.00836181640625, -0.0237579345703125, -0.00388336181640625, -0.023040771484375, 0.00829315185546875, 0.02972412109375, -0.044647216796875, -0.0306396484375, -0.03607177734375, ...
timm/flexivit_small.300ep_in1k
2023-05-05T23:59:44.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2212.08013", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/flexivit_small.300ep_in1k
0
694
timm
2022-12-22T07:21:47
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for flexivit_small.300ep_in1k A FlexiViT image classification model. Trained on ImageNet-1k in JAX by paper authors, ported to PyTorch by Ross Wightman. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 22.1 - GMACs: 4.9 - Activations (M): 9.5 - Image size: 240 x 240 - **Papers:** - FlexiViT: One Model for All Patch Sizes: https://arxiv.org/abs/2212.08013 - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 - **Dataset:** ImageNet-1k - **Original:** https://github.com/google-research/big_vision ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('flexivit_small.300ep_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'flexivit_small.300ep_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 226, 384) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). 
## Citation ```bibtex @article{beyer2022flexivit, title={FlexiViT: One Model for All Patch Sizes}, author={Beyer, Lucas and Izmailov, Pavel and Kolesnikov, Alexander and Caron, Mathilde and Kornblith, Simon and Zhai, Xiaohua and Minderer, Matthias and Tschannen, Michael and Alabdulmohsin, Ibrahim and Pavetic, Filip}, journal={arXiv preprint arXiv:2212.08013}, year={2022} } ``` ```bibtex @article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
3,697
[ [ -0.0384521484375, -0.027740478515625, 0.00489044189453125, 0.0042877197265625, -0.02587890625, -0.0298004150390625, -0.01971435546875, -0.03594970703125, 0.0166778564453125, 0.0166168212890625, -0.04266357421875, -0.040496826171875, -0.044708251953125, -0.00...
Ailyth/Toro_cat
2023-10-07T14:24:49.000Z
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
Ailyth
null
null
Ailyth/Toro_cat
1
694
diffusers
2023-01-21T17:02:15
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### Toro cat (トロ) figure Toro (トロ), full name Toro Inoue (井上トロ, Inoue Toro), also known as the Sony Cat, is a fictional character created by Sony Interactive Entertainment. He was once the mascot of PlayStation in Japan, and related figures have also been released. This model was trained using hand-made photos. key prompt: ### toroneko Sample pictures of this concept: ![](https://huggingface.co/Ailyth/toro/resolve/main/feature_extractor/1.jpg) ![](https://huggingface.co/Ailyth/toro/resolve/main/feature_extractor/2.jpg) ![](https://huggingface.co/Ailyth/toro/resolve/main/feature_extractor/3.jpg)
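The card lists the trigger prompt but no usage code. A minimal text-to-image sketch with `diffusers` is shown below — it assumes the repository loads as a standard `StableDiffusionPipeline` (as its tags suggest), and everything in the prompt besides the `toroneko` trigger word is illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Ailyth/Toro_cat", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "toroneko" is the trigger word given in the card
prompt = "toroneko, cute white cat figure, sitting on a desk, studio lighting"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("toro_cat.png")
```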
730
[ [ -0.056640625, 0.018157958984375, 0.0256195068359375, 0.035430908203125, -0.0372314453125, 0.05035400390625, 0.0265350341796875, -0.0235443115234375, 0.0692138671875, 0.04412841796875, -0.0565185546875, -0.0145416259765625, -0.0260772705078125, 0.034362792968...
timm/vit_large_patch14_clip_336.laion2b_ft_in12k_in1k_inat21
2023-10-25T20:24:24.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "arxiv:2303.11331", "arxiv:2304.14108", "arxiv:2212.07143", "license:cc-by-nc-4.0", "region:us" ]
image-classification
timm
null
null
timm/vit_large_patch14_clip_336.laion2b_ft_in12k_in1k_inat21
0
694
timm
2023-05-19T22:20:48
--- tags: - image-classification - timm library_name: timm license: cc-by-nc-4.0 --- # Model card for vit_large_patch14_clip_336.laion2b_ft_in12k_in1k_inat21 Part of a series of `timm` fine-tune experiments on iNaturalist 2021 competition data (https://github.com/visipedia/inat_comp/tree/master/2021) for higher capacity models. Covering 10,000 species, this dataset and these models are fun to explore via the classification widget with pictures from your backyard, but quite a bit smaller than models you can find on the iNaturalist website (https://www.inaturalist.org/blog/75633-a-new-computer-vision-model-v2-1-including-1-770-new-taxa). No extra meta-data was used for training these models (as was the case for the competition); it was a straightforward fine-tune to explore differences in model pretrain data. | Model | Top-1 | Top-5 | Img Size (Train) | Paper | |-------|-------|-------|----------|-------| | [eva02_large_patch14_clip_336.merged2b_ft_inat21](https://huggingface.co/timm/eva02_large_patch14_clip_336.merged2b_ft_inat21) | 92.05 | 98.01 | 336 | https://arxiv.org/abs/2303.11331 | | [vit_large_patch14_clip_336.datacompxl_ft_augreg_inat21](https://huggingface.co/timm/vit_large_patch14_clip_336.datacompxl_ft_augreg_inat21) | 91.98 | 98.03 | 336 | https://arxiv.org/abs/2304.14108 | | [vit_large_patch14_clip_336.laion2b_ft_augreg_inat21](https://huggingface.co/timm/vit_large_patch14_clip_336.laion2b_ft_augreg_inat21) | 91.48 | 97.89 | 336 | https://arxiv.org/abs/2212.07143 | | [convnext_large_mlp.laion2b_ft_augreg_inat21](https://huggingface.co/timm/convnext_large_mlp.laion2b_ft_augreg_inat21) | 90.95 | 97.68 | 448 (384) | | | [vit_large_patch14_clip_336.datacompxl_ft_inat21](https://huggingface.co/timm/vit_large_patch14_clip_336.datacompxl_ft_inat21) | 90.85 | 97.68 | 336 | https://arxiv.org/abs/2304.14108 | | [convnext_large_mlp.laion2b_ft_augreg_inat21](https://huggingface.co/timm/convnext_large_mlp.laion2b_ft_augreg_inat21) | 90.62 | 97.61 | 384 | | | [vit_large_patch14_clip_336.laion2b_ft_in12k_in1k_inat21](https://huggingface.co/timm/vit_large_patch14_clip_336.laion2b_ft_in12k_in1k_inat21) | 90.29 | 97.44 | 336 | https://arxiv.org/abs/2212.07143 | ## Fine-tune hparams ``` ./distributed_train.sh 4 --data-dir /tfds/ --dataset tfds/i_naturalist2021 --amp -j 8 --model vit_large_patch14_clip_224 --img-size 336 --model-kwargs img_size=336 --val-split val --opt adamw --opt-eps 1e-6 --weight-decay .01 --lr 5e-5 --warmup-lr 0 --sched-on-updates --clip-grad 1.0 --pretrained -b 48 --num-classes 10000 --grad-accum-steps 8 --layer-decay 0.8 ``` ``` ./distributed_train.sh 4 --data-dir /tfds/ --dataset tfds/i_naturalist2021 --amp -j 8 --model eva02_large_patch14_clip_336 --val-split val --opt adamw --opt-eps 1e-6 --weight-decay .01 --lr 5e-5 --warmup-lr 0 --sched-on-updates --clip-grad 1.0 --pretrained -b 40 --num-classes 10000 --grad-accum-steps 10 --layer-decay 0.8 --torchcompile ``` ## Run Validation ``` python validate.py /tfds/ --dataset tfds/i_naturalist2021 --model hf-hub:timm/eva02_large_patch14_clip_336.merged2b_ft_inat21 --split val --amp ``` ## Citation ```bibtex @inproceedings{cherti2023reproducible, title={Reproducible scaling laws for contrastive language-image learning}, author={Cherti, Mehdi and Beaumont, Romain and Wightman, Ross and Wortsman, Mitchell and Ilharco, Gabriel and Gordon, Cade and Schuhmann, Christoph and Schmidt, Ludwig and Jitsev, Jenia}, booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, pages={2818--2829}, year={2023} } 
``` ```bibtex @article{datacomp, title={DataComp: In search of the next generation of multimodal datasets}, author={Samir Yitzhak Gadre, Gabriel Ilharco, Alex Fang, Jonathan Hayase, Georgios Smyrnis, Thao Nguyen, Ryan Marten, Mitchell Wortsman, Dhruba Ghosh, Jieyu Zhang, Eyal Orgad, Rahim Entezari, Giannis Daras, Sarah Pratt, Vivek Ramanujan, Yonatan Bitton, Kalyani Marathe, Stephen Mussmann, Richard Vencu, Mehdi Cherti, Ranjay Krishna, Pang Wei Koh, Olga Saukh, Alexander Ratner, Shuran Song, Hannaneh Hajishirzi, Ali Farhadi, Romain Beaumont, Sewoong Oh, Alex Dimakis, Jenia Jitsev, Yair Carmon, Vaishaal Shankar, Ludwig Schmidt}, journal={arXiv preprint arXiv:2304.14108}, year={2023} } ``` ```bibtex @article{EVA02, title={EVA-02: A Visual Representation for Neon Genesis}, author={Fang, Yuxin and Sun, Quan and Wang, Xinggang and Huang, Tiejun and Wang, Xinlong and Cao, Yue}, journal={arXiv preprint arXiv:2303.11331}, year={2023} } ```
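Unlike most `timm` model cards, this one does not include an inference snippet. A minimal sketch using the standard `timm` Hub-loading API (the test image URL is the one used in other `timm` cards):

```python
from urllib.request import urlopen
from PIL import Image
import torch
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

# Load this checkpoint from the Hub (10,000 iNaturalist 2021 classes)
model = timm.create_model(
    'hf-hub:timm/vit_large_patch14_clip_336.laion2b_ft_in12k_in1k_inat21',
    pretrained=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```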
4,541
[ [ -0.056854248046875, -0.039520263671875, 0.0078277587890625, 0.0218353271484375, -0.024169921875, -0.0141754150390625, -0.021148681640625, -0.03460693359375, 0.0401611328125, 0.01280975341796875, -0.04327392578125, -0.0401611328125, -0.0309600830078125, 0.021...
ppdev/detr-resnet-50_finetuned_lp
2023-10-12T06:02:10.000Z
[ "transformers", "pytorch", "detr", "object-detection", "generated_from_trainer", "dataset:license-plate-object-detection", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
ppdev
null
null
ppdev/detr-resnet-50_finetuned_lp
0
694
transformers
2023-10-12T02:43:53
--- license: apache-2.0 base_model: facebook/detr-resnet-50 tags: - generated_from_trainer datasets: - license-plate-object-detection model-index: - name: detr-resnet-50_finetuned_lp results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_lp This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the license-plate-object-detection dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
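The sections above are still placeholders, so here is a minimal usage sketch with the 🤗 Transformers object-detection pipeline. It assumes the repository ships a matching image processor alongside the fine-tuned DETR weights; the image path is a placeholder:

```python
from transformers import pipeline

# DETR (ResNet-50 backbone) fine-tuned for license-plate detection
detector = pipeline("object-detection", model="ppdev/detr-resnet-50_finetuned_lp")

results = detector("street_scene.jpg")  # local path, URL, or PIL.Image
for r in results:
    print(r["label"], round(r["score"], 3), r["box"])
```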
1,165
[ [ -0.046417236328125, -0.0352783203125, 0.0029735565185546875, -0.005084991455078125, -0.032501220703125, -0.0291290283203125, -0.002410888671875, -0.030029296875, 0.00872039794921875, 0.02545166015625, -0.055419921875, -0.03765869140625, -0.0294189453125, 0.0...
fabiochiu/t5-small-medium-title-generation
2023-08-03T07:55:10.000Z
[ "transformers", "pytorch", "tf", "jax", "safetensors", "t5", "text2text-generation", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text2text-generation
fabiochiu
null
null
fabiochiu/t5-small-medium-title-generation
9
693
transformers
2022-05-05T11:06:46
--- tags: - generated_from_keras_callback model-index: - name: t5-small-medium-title-generation results: [] widget: - text: "summarize: Many financial institutions started building conversational AI, prior to the Covid19 pandemic, as part of a digital transformation initiative. These initial solutions were high profile, highly personalized virtual assistants — like the Erica chatbot from Bank of America. As the pandemic hit, the need changed as contact centers were under increased pressures. As Cathal McGloin of ServisBOT explains in 'how it started, and how it is going,' financial institutions were looking for ways to automate solutions to help get back to 'normal' levels of customer service. This resulted in a change from the 'future of conversational AI' to a real tactical assistant that can help in customer service. Haritha Dev of Wells Fargo, saw a similar trend. Banks were originally looking to conversational AI as part of digital transformation to keep up with the times. However, with the pandemic, it has been more about customer retention and customer satisfaction. In addition, new use cases came about as a result of Covid-19 that accelerated adoption of conversational AI. As Vinita Kumar of Deloitte points out, banks were dealing with an influx of calls about new concerns, like questions around the Paycheck Protection Program (PPP) loans. This resulted in an increase in volume, without enough agents to assist customers, and tipped the scale to incorporate conversational AI. When choosing initial use cases to support, financial institutions often start with high volume, low complexity tasks. For example, password resets, checking account balances, or checking the status of a transaction, as Vinita points out. From there, the use cases can evolve as the banks get more mature in developing conversational AI, and as the customers become more engaged with the solutions. Cathal indicates another good way for banks to start is looking at use cases that are a pain point, and also do not require a lot of IT support. Some financial institutions may have a multi-year technology roadmap, which can make it harder to get a new service started. A simple chatbot for document collection in an onboarding process can result in high engagement, and a high return on investment. For example, Cathal has a banking customer that implemented a chatbot to capture a driver’s license to be used in the verification process of adding an additional user to an account — it has over 85% engagement with high satisfaction. An interesting use case Haritha discovered involved educating customers on financial matters. People feel more comfortable asking a chatbot what might be considered a 'dumb' question, as the chatbot is less judgmental. Users can be more ambiguous with their questions as well, not knowing the right words to use, as chatbot can help narrow things down." example_title: "Banking on Bots" --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Model description This model is [t5-small](https://huggingface.co/t5-small) fine-tuned on the [190k Medium Articles](https://www.kaggle.com/datasets/fabiochiusano/medium-articles) dataset for predicting article titles using the article textual content as input. 
There are two versions of the model: - [t5-small-medium-title-generation](https://huggingface.co/fabiochiu/t5-small-medium-title-generation): trained from [t5-small](https://huggingface.co/t5-small). - [t5-base-medium-title-generation](https://huggingface.co/fabiochiu/t5-base-medium-title-generation): trained from [t5-base](https://huggingface.co/t5-base). Visit the [title-generation space](https://huggingface.co/spaces/fabiochiu/title-generation) to try the model with different text generation parameters. # How to use the model ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM import nltk nltk.download('punkt') tokenizer = AutoTokenizer.from_pretrained("fabiochiu/t5-small-medium-title-generation") model = AutoModelForSeq2SeqLM.from_pretrained("fabiochiu/t5-small-medium-title-generation") text = """ Many financial institutions started building conversational AI, prior to the Covid19 pandemic, as part of a digital transformation initiative. These initial solutions were high profile, highly personalized virtual assistants — like the Erica chatbot from Bank of America. As the pandemic hit, the need changed as contact centers were under increased pressures. As Cathal McGloin of ServisBOT explains in “how it started, and how it is going,” financial institutions were looking for ways to automate solutions to help get back to “normal” levels of customer service. This resulted in a change from the “future of conversational AI” to a real tactical assistant that can help in customer service. Haritha Dev of Wells Fargo, saw a similar trend. Banks were originally looking to conversational AI as part of digital transformation to keep up with the times. However, with the pandemic, it has been more about customer retention and customer satisfaction. In addition, new use cases came about as a result of Covid-19 that accelerated adoption of conversational AI. As Vinita Kumar of Deloitte points out, banks were dealing with an influx of calls about new concerns, like questions around the Paycheck Protection Program (PPP) loans. This resulted in an increase in volume, without enough agents to assist customers, and tipped the scale to incorporate conversational AI. When choosing initial use cases to support, financial institutions often start with high volume, low complexity tasks. For example, password resets, checking account balances, or checking the status of a transaction, as Vinita points out. From there, the use cases can evolve as the banks get more mature in developing conversational AI, and as the customers become more engaged with the solutions. Cathal indicates another good way for banks to start is looking at use cases that are a pain point, and also do not require a lot of IT support. Some financial institutions may have a multi-year technology roadmap, which can make it harder to get a new service started. A simple chatbot for document collection in an onboarding process can result in high engagement, and a high return on investment. For example, Cathal has a banking customer that implemented a chatbot to capture a driver’s license to be used in the verification process of adding an additional user to an account — it has over 85% engagement with high satisfaction. An interesting use case Haritha discovered involved educating customers on financial matters. People feel more comfortable asking a chatbot what might be considered a “dumb” question, as the chatbot is less judgmental. 
Users can be more ambiguous with their questions as well, not knowing the right words to use, as chatbot can help narrow things down. """ max_input_length = 512 # not defined in the original snippet; 512 is an assumed value, adjust as needed inputs = ["summarize: " + text] inputs = tokenizer(inputs, max_length=max_input_length, truncation=True, return_tensors="pt") output = model.generate(**inputs, num_beams=8, do_sample=True, min_length=10, max_length=64) decoded_output = tokenizer.batch_decode(output, skip_special_tokens=True)[0] predicted_title = nltk.sent_tokenize(decoded_output.strip())[0] print(predicted_title) # Conversational AI: The Future of Customer Service ``` ## Training and evaluation data The model has been trained for a single epoch on about 16000 articles, evaluating on 1000 random articles not used during training. ### Training results The model has been evaluated on a random dataset split of 1000 articles not used during training and validation. - Rouge-1: 27.8% - Rouge-2: 14.9% - Rouge-L: 26.9% - Rouge-Lsum: 26.9% - Average length of the generated titles: 13 tokens (about 9 English words) ### Framework versions - Transformers 4.18.0 - TensorFlow 2.8.0 - Datasets 2.1.0 - Tokenizers 0.12.1
7,981
[ [ -0.0302886962890625, -0.06622314453125, 0.0105743408203125, 0.0051727294921875, -0.0135040283203125, 0.00928497314453125, 0.007171630859375, -0.04608154296875, -0.007568359375, 0.04345703125, -0.03759765625, -0.019439697265625, -0.0396728515625, 0.0002686977...
timm/vit_large_patch14_clip_224.laion2b_ft_in12k_in1k
2023-05-06T00:11:16.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:laion-2b", "dataset:imagenet-12k", "arxiv:2212.07143", "arxiv:2210.08402", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/vit_large_patch14_clip_224.laion2b_ft_in12k_in1k
0
693
timm
2022-11-01T23:01:48
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k - laion-2b - imagenet-12k --- # Model card for vit_large_patch14_clip_224.laion2b_ft_in12k_in1k A Vision Transformer (ViT) image classification model. Pretrained on LAION-2B image-text pairs using OpenCLIP. Fine-tuned on ImageNet-12k and then ImageNet-1k in `timm`. See recipes in [Reproducible scaling laws](https://arxiv.org/abs/2212.07143). ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 304.2 - GMACs: 77.8 - Activations (M): 57.1 - Image size: 224 x 224 - **Papers:** - OpenCLIP: https://github.com/mlfoundations/open_clip - Reproducible scaling laws for contrastive language-image learning: https://arxiv.org/abs/2212.07143 - LAION-5B: An open large-scale dataset for training next generation image-text models: https://arxiv.org/abs/2210.08402 - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 - **Dataset:** ImageNet-1k - **Pretrain Dataset:** - LAION-2B - ImageNet-12k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('vit_large_patch14_clip_224.laion2b_ft_in12k_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_large_patch14_clip_224.laion2b_ft_in12k_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 257, 1024) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). 
## Citation ```bibtex @software{ilharco_gabriel_2021_5143773, author = {Ilharco, Gabriel and Wortsman, Mitchell and Wightman, Ross and Gordon, Cade and Carlini, Nicholas and Taori, Rohan and Dave, Achal and Shankar, Vaishaal and Namkoong, Hongseok and Miller, John and Hajishirzi, Hannaneh and Farhadi, Ali and Schmidt, Ludwig}, title = {OpenCLIP}, month = jul, year = 2021, note = {If you use this software, please cite it as below.}, publisher = {Zenodo}, version = {0.1}, doi = {10.5281/zenodo.5143773}, url = {https://doi.org/10.5281/zenodo.5143773} } ``` ```bibtex @article{cherti2022reproducible, title={Reproducible scaling laws for contrastive language-image learning}, author={Cherti, Mehdi and Beaumont, Romain and Wightman, Ross and Wortsman, Mitchell and Ilharco, Gabriel and Gordon, Cade and Schuhmann, Christoph and Schmidt, Ludwig and Jitsev, Jenia}, journal={arXiv preprint arXiv:2212.07143}, year={2022} } ``` ```bibtex @inproceedings{schuhmann2022laionb, title={{LAION}-5B: An open large-scale dataset for training next generation image-text models}, author={Christoph Schuhmann and Romain Beaumont and Richard Vencu and Cade W Gordon and Ross Wightman and Mehdi Cherti and Theo Coombes and Aarush Katta and Clayton Mullis and Mitchell Wortsman and Patrick Schramowski and Srivatsa R Kundurthy and Katherine Crowson and Ludwig Schmidt and Robert Kaczmarczyk and Jenia Jitsev}, booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2022}, url={https://openreview.net/forum?id=M3Y74vmsMcY} } ``` ```bibtex @article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
5,767
[ [ -0.030181884765625, -0.027740478515625, 0.009674072265625, 0.0115966796875, -0.02606201171875, -0.033447265625, -0.03424072265625, -0.03076171875, 0.0105743408203125, 0.02740478515625, -0.0301513671875, -0.042694091796875, -0.051055908203125, -0.000960350036...
openmmlab/upernet-swin-tiny
2023-01-19T10:43:44.000Z
[ "transformers", "pytorch", "upernet", "vision", "image-segmentation", "en", "arxiv:1807.10221", "arxiv:2103.14030", "license:mit", "endpoints_compatible", "region:us" ]
image-segmentation
openmmlab
null
null
openmmlab/upernet-swin-tiny
3
693
transformers
2023-01-13T14:31:56
--- language: en license: mit tags: - vision - image-segmentation model_name: openmmlab/upernet-swin-tiny --- # UperNet, Swin Transformer tiny-sized backbone UperNet framework for semantic segmentation, leveraging a Swin Transformer backbone. UperNet was introduced in the paper [Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/abs/1807.10221) by Xiao et al. Combining UperNet with a Swin Transformer backbone was introduced in the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030). Disclaimer: The team releasing UperNet + Swin Transformer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description UperNet is a framework for semantic segmentation. It consists of several components, including a backbone, a Feature Pyramid Network (FPN) and a Pyramid Pooling Module (PPM). Any visual backbone can be plugged into the UperNet framework. The framework predicts a semantic label per pixel. ![UperNet architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/upernet_architecture.jpg) ## Intended uses & limitations You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=openmmlab/upernet) to look for fine-tuned versions (with various backbones) on a task that interests you. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/upernet#transformers.UperNetForSemanticSegmentation).
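As a quick complement to the linked documentation, a minimal semantic-segmentation sketch following the usual UperNet pattern in 🤗 Transformers (the COCO test image is arbitrary):

```python
import requests
from PIL import Image
from transformers import AutoImageProcessor, UperNetForSemanticSegmentation

processor = AutoImageProcessor.from_pretrained("openmmlab/upernet-swin-tiny")
model = UperNetForSemanticSegmentation.from_pretrained("openmmlab/upernet-swin-tiny")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)

# Upsample the logits to the input resolution and take the per-pixel argmax
segmentation = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
print(segmentation.shape)  # (height, width) map of per-pixel class indices
```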
1,630
[ [ -0.03338623046875, -0.00965118408203125, 0.019378662109375, 0.03607177734375, -0.016204833984375, -0.0191497802734375, 0.0185394287109375, -0.0418701171875, 0.0240020751953125, 0.0496826171875, -0.060333251953125, -0.03656005859375, -0.02923583984375, -0.019...
cambridgeltl/SapBERT-UMLS-2020AB-all-lang-from-XLMR-large
2023-06-14T19:00:04.000Z
[ "transformers", "pytorch", "xlm-roberta", "feature-extraction", "arxiv:2010.11784", "endpoints_compatible", "region:us" ]
feature-extraction
cambridgeltl
null
null
cambridgeltl/SapBERT-UMLS-2020AB-all-lang-from-XLMR-large
0
691
transformers
2022-03-02T23:29:05
--- language: multilingual tags: - biomedical - lexical-semantics - cross-lingual datasets: - UMLS --- **[news]** A cross-lingual extension of SapBERT will appear in the main conference of **ACL 2021**! <br> **[news]** SapBERT will appear in the conference proceedings of **NAACL 2021**! ### SapBERT-XLMR SapBERT [(Liu et al. 2021)](https://arxiv.org/pdf/2010.11784.pdf) trained with [UMLS](https://www.nlm.nih.gov/research/umls/licensedcontent/umlsknowledgesources.html) 2020AB, using [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) as the base model. Please use [CLS] as the representation of the input. #### Extracting embeddings from SapBERT The following script converts a list of strings (entity names) into embeddings. ```python import numpy as np import torch from tqdm.auto import tqdm from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("cambridgeltl/SapBERT-UMLS-2020AB-all-lang-from-XLMR-large") model = AutoModel.from_pretrained("cambridgeltl/SapBERT-UMLS-2020AB-all-lang-from-XLMR-large").cuda() # replace with your own list of entity names all_names = ["covid-19", "Coronavirus infection", "high fever", "Tumor of posterior wall of oropharynx"] bs = 128 # batch size during inference all_embs = [] for i in tqdm(np.arange(0, len(all_names), bs)): toks = tokenizer.batch_encode_plus(all_names[i:i+bs], padding="max_length", max_length=25, truncation=True, return_tensors="pt") toks_cuda = {} for k,v in toks.items(): toks_cuda[k] = v.cuda() cls_rep = model(**toks_cuda)[0][:,0,:] # use CLS representation as the embedding all_embs.append(cls_rep.cpu().detach().numpy()) all_embs = np.concatenate(all_embs, axis=0) ``` For more details about training and eval, see the SapBERT [github repo](https://github.com/cambridgeltl/sapbert). ### Citation ```bibtex @inproceedings{liu2021learning, title={Learning Domain-Specialised Representations for Cross-Lingual Biomedical Entity Linking}, author={Liu, Fangyu and Vuli{\'c}, Ivan and Korhonen, Anna and Collier, Nigel}, booktitle={Proceedings of ACL-IJCNLP 2021}, month = aug, year={2021} } ```
2,285
[ [ -0.003871917724609375, -0.036407470703125, 0.037567138671875, 0.0014314651489257812, -0.0155792236328125, 0.0091400146484375, -0.0167694091796875, -0.01438140869140625, 0.023590087890625, 0.042938232421875, -0.0153961181640625, -0.06884765625, -0.044342041015625...
timm/tf_efficientnet_b6.aa_in1k
2023-04-27T21:22:26.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:1905.11946", "arxiv:1805.09501", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/tf_efficientnet_b6.aa_in1k
0
691
timm
2022-12-13T00:04:46
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for tf_efficientnet_b6.aa_in1k A EfficientNet image classification model. Trained on ImageNet-1k with auto-augment in Tensorflow by paper authors, ported to PyTorch by Ross Wightman. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 43.0 - GMACs: 19.4 - Activations (M): 167.4 - Image size: 528 x 528 - **Papers:** - EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks: https://arxiv.org/abs/1905.11946 - AutoAugment: Learning Augmentation Policies from Data: https://arxiv.org/abs/1805.09501 - **Dataset:** ImageNet-1k - **Original:** https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('tf_efficientnet_b6.aa_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'tf_efficientnet_b6.aa_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 32, 264, 264]) # torch.Size([1, 40, 132, 132]) # torch.Size([1, 72, 66, 66]) # torch.Size([1, 200, 33, 33]) # torch.Size([1, 576, 17, 17]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'tf_efficientnet_b6.aa_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 2304, 17, 17) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). 
## Citation ```bibtex @inproceedings{tan2019efficientnet, title={Efficientnet: Rethinking model scaling for convolutional neural networks}, author={Tan, Mingxing and Le, Quoc}, booktitle={International conference on machine learning}, pages={6105--6114}, year={2019}, organization={PMLR} } ``` ```bibtex @inproceedings{47890, title = {AutoAugment: Learning Augmentation Policies from Data}, author = {Ekin Dogus Cubuk and Barret Zoph and Dandelion Mane and Vijay Vasudevan and Quoc V. Le}, year = {2019}, URL = {https://arxiv.org/pdf/1805.09501.pdf} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
4,498
[ [ -0.029815673828125, -0.04193115234375, -0.00933074951171875, 0.006927490234375, -0.0157012939453125, -0.0301361083984375, -0.0216522216796875, -0.0316162109375, 0.013580322265625, 0.024505615234375, -0.0298919677734375, -0.043975830078125, -0.05621337890625, ...
BAAI/AquilaChat2-34B
2023-10-26T08:28:18.000Z
[ "transformers", "pytorch", "aquila", "text-generation", "custom_code", "license:other", "region:us" ]
text-generation
BAAI
null
null
BAAI/AquilaChat2-34B
40
691
transformers
2023-10-11T01:55:18
--- license: other --- ![Aquila_logo](./log.jpeg) <h4 align="center"> <p> <b>English</b> | <a href="https://huggingface.co/BAAI/AquilaChat2-34B/blob/main/README_zh.md">简体中文</a> </p> </h4> <p align="center"> <a href="https://github.com/FlagAI-Open/Aquila2" target="_blank">Github</a> • <a href="https://github.com/FlagAI-Open/Aquila2/blob/main/assets/wechat-qrcode.jpg" target="_blank">WeChat</a> <br> </p> We opensource our **Aquila2** series, now including **Aquila2**, the base language models, namely **Aquila2-7B** and **Aquila2-34B**, as well as **AquilaChat2**, the chat models, namely **AquilaChat2-7B** and **AquilaChat2-34B**, as well as the long-text chat models, namely **AquilaChat2-7B-16k** and **AquilaChat2-34B-16k** 2023.10.25 🔥 **AquilaChat2-34B v1.2** is based on the previous **AquilaChat2-34B**. The AquilaChat2-34B model is close to or exceeds the level of GPT3.5 in the subjective evaluation of 8 secondary ability dimensions. The additional details of the Aquila model will be presented in the official technical report. Please stay tuned for updates on official channels. ## Quick Start AquilaChat2-34B(Chat model) ### 1. Inference ```python from transformers import AutoTokenizer, AutoModelForCausalLM from transformers import BitsAndBytesConfig import torch device = torch.device("cuda:0") model_info = "BAAI/AquilaChat2-34B" tokenizer = AutoTokenizer.from_pretrained(model_info, trust_remote_code=True) quantization_config=BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16, ) model = AutoModelForCausalLM.from_pretrained(model_info, trust_remote_code=True, torch_dtype=torch.bfloat16, # quantization_config=quantization_config, # Uncomment this line for 4bit quantization ) model.eval() model.to(device) text = "请给出10个要到北京旅游的理由。" from predict import predict out = predict(model, text, tokenizer=tokenizer, max_gen_len=200, top_p=0.9, seed=123, topk=15, temperature=1.0, sft=True, device=device, model_name="AquilaChat2-34B") print(out) ``` ## License Aquila2 series open-source model is licensed under [ BAAI Aquila Model Licence Agreement](https://huggingface.co/BAAI/AquilaChat2-34B/blob/main/BAAI-Aquila-Model-License%20-Agreement.pdf)
2,538
[ [ -0.0142974853515625, -0.048583984375, 0.007122039794921875, 0.0316162109375, -0.029571533203125, -0.002292633056640625, -0.0168609619140625, -0.045135498046875, -0.00145721435546875, 0.0243377685546875, -0.0452880859375, -0.029449462890625, -0.036468505859375, ...
bert-large-cased-whole-word-masking
2021-05-18T16:30:05.000Z
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
fill-mask
null
null
null
bert-large-cased-whole-word-masking
4
690
transformers
2022-03-02T23:29:04
--- language: en license: apache-2.0 datasets: - bookcorpus - wikipedia --- # BERT large model (cased) whole word masking Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is cased: it makes a difference between english and English. Differently to other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same. The training is identical -- each masked WordPiece token is predicted independently. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs. This model has the following configuration: - 24-layer - 1024 hidden dimension - 16 attention heads - 336M parameters. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-large-cased-whole-word-masking') >>> unmasker("Hello I'm a [MASK] model.") [ { "sequence":"[CLS] Hello I'm a fashion model. [SEP]", "score":0.1474294513463974, "token":4633, "token_str":"fashion" }, { "sequence":"[CLS] Hello I'm a magazine model. 
[SEP]", "score":0.05430116504430771, "token":2435, "token_str":"magazine" }, { "sequence":"[CLS] Hello I'm a male model. [SEP]", "score":0.039395421743392944, "token":2581, "token_str":"male" }, { "sequence":"[CLS] Hello I'm a former model. [SEP]", "score":0.036936815828084946, "token":1393, "token_str":"former" }, { "sequence":"[CLS] Hello I'm a professional model. [SEP]", "score":0.03663451969623566, "token":1848, "token_str":"professional" } ] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('bert-large-cased-whole-word-masking') model = BertModel.from_pretrained("bert-large-cased-whole-word-masking") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('bert-large-cased-whole-word-masking') model = TFBertModel.from_pretrained("bert-large-cased-whole-word-masking") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-large-cased-whole-word-masking') >>> unmasker("The man worked as a [MASK].") [ { "sequence":"[CLS] The man worked as a carpenter. [SEP]", "score":0.09021259099245071, "token":25169, "token_str":"carpenter" }, { "sequence":"[CLS] The man worked as a cook. [SEP]", "score":0.08125395327806473, "token":9834, "token_str":"cook" }, { "sequence":"[CLS] The man worked as a mechanic. [SEP]", "score":0.07524766772985458, "token":19459, "token_str":"mechanic" }, { "sequence":"[CLS] The man worked as a waiter. [SEP]", "score":0.07397029548883438, "token":17989, "token_str":"waiter" }, { "sequence":"[CLS] The man worked as a guard. [SEP]", "score":0.05848982185125351, "token":3542, "token_str":"guard" } ] >>> unmasker("The woman worked as a [MASK].") [ { "sequence":"[CLS] The woman worked as a maid. [SEP]", "score":0.19436432421207428, "token":13487, "token_str":"maid" }, { "sequence":"[CLS] The woman worked as a waitress. [SEP]", "score":0.16161060333251953, "token":15098, "token_str":"waitress" }, { "sequence":"[CLS] The woman worked as a nurse. [SEP]", "score":0.14942803978919983, "token":7439, "token_str":"nurse" }, { "sequence":"[CLS] The woman worked as a secretary. [SEP]", "score":0.10373266786336899, "token":4848, "token_str":"secretary" }, { "sequence":"[CLS] The woman worked as a cook. [SEP]", "score":0.06384387612342834, "token":9834, "token_str":"cook" } ] ``` This bias will also affect all fine-tuned versions of this model. ## Training data The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. 
The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ## Evaluation results When fine-tuned on downstream tasks, this model achieves the following results: Model | SQUAD 1.1 F1/EM | Multi NLI Accuracy ---------------------------------------- | :-------------: | :----------------: BERT-Large, Cased (Whole Word Masking) | 92.9/86.7 | 86.46 ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1810-04805, author = {Jacob Devlin and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova}, title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language Understanding}, journal = {CoRR}, volume = {abs/1810.04805}, year = {2018}, url = {http://arxiv.org/abs/1810.04805}, archivePrefix = {arXiv}, eprint = {1810.04805}, timestamp = {Tue, 30 Oct 2018 20:39:56 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
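As an illustration of the whole-word masking rule described above (all WordPiece tokens of a selected word are masked together, with the 80/10/10 replacement applied per token), a simplified sketch might look like this. It is not the original pretraining code: the real implementation selects words until a 15% token budget is filled, which is glossed over here, and the example sentence is arbitrary.

```python
import random
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-large-cased-whole-word-masking")
tokens = tokenizer.tokenize("The engineers rewired the spectrometer overnight")

# group WordPiece indices into words: a '##' piece belongs to the preceding word
words = []
for i, tok in enumerate(tokens):
    if tok.startswith("##") and words:
        words[-1].append(i)
    else:
        words.append([i])

masked = list(tokens)
for word in words:
    if random.random() < 0.15:                # select roughly 15% of the words
        for i in word:                        # every piece of the selected word enters the prediction set
            r = random.random()
            if r < 0.8:                       # 80%: replace with [MASK]
                masked[i] = "[MASK]"
            elif r < 0.9:                     # 10%: replace with a random vocabulary token
                masked[i] = random.choice(list(tokenizer.vocab))
            # remaining 10%: keep the original piece

print(masked)
```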
9,679
[ [ -0.00984954833984375, -0.047576904296875, 0.01690673828125, 0.02569580078125, -0.0418701171875, -0.0019044876098632812, -0.0014820098876953125, -0.0132598876953125, 0.03466796875, 0.04193115234375, -0.042327880859375, -0.031982421875, -0.061859130859375, 0.0...
google/bert_uncased_L-8_H-128_A-2
2021-05-19T17:35:05.000Z
[ "transformers", "pytorch", "jax", "bert", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
google
null
null
google/bert_uncased_L-8_H-128_A-2
0
690
transformers
2022-03-02T23:29:05
--- thumbnail: https://huggingface.co/front/thumbnails/google.png license: apache-2.0 --- BERT Miniatures === This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking). We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity. You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below: | |H=128|H=256|H=512|H=768| |---|:---:|:---:|:---:|:---:| | **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]| | **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]| | **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]| | **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]| | **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]| | **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]| Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model. 
Here are the corresponding GLUE scores on the test set: |Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX| |---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| |BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0| |BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1| |BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6| |BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5| For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs: - batch sizes: 8, 16, 32, 64, 128 - learning rates: 3e-4, 1e-4, 5e-5, 3e-5 If you use these models, please cite the following paper: ``` @article{turc2019, title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models}, author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina}, journal={arXiv preprint arXiv:1908.08962v2 }, year={2019} } ``` [2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2 [2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4 [2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8 [2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12 [4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2 [4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4 [4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8 [4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12 [6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2 [6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4 [6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8 [6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12 [8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2 [8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4 [8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8 [8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12 [10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2 [10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4 [10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8 [10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12 [12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2 [12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4 [12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8 [12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
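As a rough sketch of the fine-tuning recipe above (a small grid over batch size and learning rate, 4 epochs), one point of that grid applied to this checkpoint with the Transformers Trainer could look as follows. The task (SST-2) and the particular hyperparameter values are placeholders from the grid, not the per-task winners reported in the paper.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

ckpt = "google/bert_uncased_L-8_H-128_A-2"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt, num_labels=2)

# SST-2 as an example GLUE task
dataset = load_dataset("glue", "sst2")
dataset = dataset.map(
    lambda ex: tokenizer(ex["sentence"], truncation=True, padding="max_length", max_length=128),
    batched=True,
)

args = TrainingArguments(
    output_dir="bert-miniature-sst2",
    per_device_train_batch_size=32,   # grid: 8, 16, 32, 64, 128
    learning_rate=3e-4,               # grid: 3e-4, 1e-4, 5e-5, 3e-5
    num_train_epochs=4,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"], eval_dataset=dataset["validation"])
trainer.train()
```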
4,617
[ [ -0.053558349609375, -0.03546142578125, 0.02392578125, 0.013153076171875, -0.0237274169921875, -0.016937255859375, -0.02398681640625, -0.0312042236328125, 0.04376220703125, -0.006107330322265625, -0.06103515625, -0.0306854248046875, -0.052093505859375, -0.001...
digiplay/majicMIX_realistic_v4
2023-09-26T06:35:55.000Z
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
digiplay
null
null
digiplay/majicMIX_realistic_v4
4
690
diffusers
2023-05-29T20:14:50
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- https://civitai.com/models/43331/majicmix-realistic Sample image I made generated by huggingface's API : ![875239a5-36d4-442a-b4fd-05f0b99ecf0f.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/eEcNoa2XNhqXhYyxSCvV7.jpeg)
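The sample above was generated through the hosted API; locally, a minimal diffusers sketch for this checkpoint could look like the one below. The prompt and sampler settings are placeholders, not recommendations from the model author.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/majicMIX_realistic_v4", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# example prompt; see the civitai page linked above for the style the model targets
image = pipe(
    "portrait photo of a woman in a city at dusk, detailed, soft light",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("majicmix_sample.png")
```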
380
[ [ -0.055328369140625, -0.04364013671875, 0.0310516357421875, 0.035064697265625, -0.01995849609375, 0.01218414306640625, 0.0188140869140625, -0.038787841796875, 0.06707763671875, 0.034820556640625, -0.09344482421875, -0.027587890625, -0.02880859375, 0.002929687...
iamkaikai/amazing-logos-v4
2023-10-23T17:52:21.000Z
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dataset:iamkaikai/amazing_logos_v4", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us", "has_space" ]
text-to-image
iamkaikai
null
null
iamkaikai/amazing-logos-v4
1
690
diffusers
2023-08-16T02:21:08
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 datasets: - iamkaikai/amazing_logos_v4 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- # Text-to-image finetuning - iamkaikai/amazing-logos-v4 This pipeline was finetuned from **runwayml/stable-diffusion-v1-5** on the **iamkaikai/amazing_logos_v4** dataset. Below are some example images generated with the finetuned pipeline using the following prompts: ['Simple elegant logo for Grupo Altair Publicidad, Circle Lines Venezuela, Publishing, successful vibe, minimalist, thought-provoking, abstract, recognizable, relatable, sharp, vector art, even edges']: ## Training info These are the key hyperparameters used during training: * Dataset size: 400k * Epochs: 2 * Learning rate: 1e-07 * Batch size: 1 * Gradient accumulation steps: 1 * Image resolution: 512 * Mixed-precision: fp16 ![preview](./grid.jpg) ## Prompt Format The prompt format is as follows: ```javascript {template keywords} + [company name] + [concept & country] + [industry] + {template keywords} ``` ## For example: Colored logo ```text Simple elegant logo for Google, G circle United states, technology, successful vibe, minimalist, thought-provoking, abstract, recognizable ``` monochrome logo (include "black and white") ```text Simple elegant logo for Google, G circle United states, technology, successful vibe, minimalist, thought-provoking, abstract, recognizable, black and white ``` Here are some examples of prompts: - Simple elegant logo for Digital Art, **D A square**, **education**, successful vibe, minimalist, thought-provoking, abstract, recognizable - Simple elegant logo for 3M Technology, **3 M square United states**, **technology and product**, successful vibe, minimalist, thought-provoking, abstract, recognizable - Simple elegant logo for 42Studio, **lines drop fire flame water**, **design studio**, successful vibe, minimalist, thought provoking, abstract, recognizable, relatable, sharp, vector art, even edges ## The [concept & country] section can include words such as: - lines - circles - triangles - dot - crosses - waves - square - letters (A-Z) - 3D - Angled - Arrows - cube - Diamond - Hexagon - Loops - outline - ovals - rectangle - reflection - rings - round - semicircle - spiral - woven - stars I invest in cloud GPU rentals to train my models. If you value what I do, consider buying me a coffee ☕️. Your support means a lot! https://www.buymeacoffee.com/iamkaikai666
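To make the prompt template concrete, a small diffusers sketch that assembles a prompt in the documented format might look like the following. The company name, concept and industry below are made-up placeholders, not entries from the training data.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "iamkaikai/amazing-logos-v4", torch_dtype=torch.float16
).to("cuda")

# {template keywords} + [company name] + [concept & country] + [industry] + {template keywords}
company = "Acme Robotics"                      # placeholder
concept = "A gear circle United states"        # placeholder
industry = "technology and product"            # placeholder
prompt = (
    f"Simple elegant logo for {company}, {concept}, {industry}, "
    "successful vibe, minimalist, thought-provoking, abstract, recognizable"
)

image = pipe(prompt, num_inference_steps=30).images[0]
image.save("logo.png")
```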
2,534
[ [ -0.04541015625, -0.03570556640625, 0.0276947021484375, 0.0186309814453125, -0.05224609375, -0.007663726806640625, -0.014984130859375, -0.027984619140625, 0.031890869140625, 0.0304412841796875, -0.068115234375, -0.057525634765625, -0.0567626953125, 0.00343704...
pyannote/brouhaha
2022-11-15T09:10:27.000Z
[ "pyannote-audio", "pytorch", "pyannote", "pyannote-audio-model", "audio", "voice", "speech", "voice-activity-detection", "speech-to-noise ratio", "snr", "room acoustics", "c50", "dataset:LibriSpeech", "dataset:AudioSet", "dataset:EchoThief", "dataset:MIT-Acoustical-Reverberation-Scene"...
voice-activity-detection
pyannote
null
null
pyannote/brouhaha
11
689
pyannote-audio
2022-10-28T06:25:15
--- tags: - pyannote - pyannote-audio - pyannote-audio-model - audio - voice - speech - voice-activity-detection - speech-to-noise ratio - snr - room acoustics - c50 datasets: - LibriSpeech - AudioSet - EchoThief - MIT-Acoustical-Reverberation-Scene license: openrail extra_gated_prompt: "The collected information will help acquire a better knowledge of this model userbase and help its maintainers apply for grants to improve it further. " extra_gated_fields: Company/university: text Website: text I plan to use this model for (task, type of audio data, etc): text --- # 🎙️🥁🚨🔊 Brouhaha ![Sample Brouhaha predictions](brouhaha.gif) **Joint voice activity detection, speech-to-noise ratio, and C50 room acoustics estimation** [TL;DR](https://twitter.com/LavechinMarvin/status/1585645131251605504) | [Paper](https://arxiv.org/abs/2210.13248) | [Code](https://github.com/marianne-m/brouhaha-vad) | [And Now for Something Completely Different](https://www.youtube.com/watch?v=8ZyOAS22Moo) ## Installation This model relies on [pyannote.audio](https://github.com/pyannote/pyannote-audio) and [brouhaha-vad](https://github.com/marianne-m/brouhaha-vad). ```bash pip install pyannote-audio pip install https://github.com/marianne-m/brouhaha-vad/archive/main.zip ``` ## Usage ```python # 1. visit hf.co/pyannote/brouhaha and accept user conditions # 2. visit hf.co/settings/tokens to create an access token # 3. instantiate pretrained model from pyannote.audio import Model model = Model.from_pretrained("pyannote/brouhaha", use_auth_token="ACCESS_TOKEN_GOES_HERE") # apply model from pyannote.audio import Inference inference = Inference(model) output = inference("audio.wav") # iterate over each frame for frame, (vad, snr, c50) in output: t = frame.middle print(f"{t:8.3f} vad={100*vad:.0f}% snr={snr:.0f} c50={c50:.0f}") # ... # 12.952 vad=100% snr=51 c50=17 # 12.968 vad=100% snr=52 c50=17 # 12.985 vad=100% snr=53 c50=17 # ... ``` ## Citation ```bibtex @article{lavechin2022brouhaha, Title = {{Brouhaha: multi-task training for voice activity detection, speech-to-noise ratio, and C50 room acoustics estimation}}, Author = {Marvin Lavechin and Marianne Métais and Hadrien Titeux and Alodie Boissonnet and Jade Copet and Morgane Rivière and Elika Bergelson and Alejandrina Cristia and Emmanuel Dupoux and Hervé Bredin}, Year = {2022}, Journal = {arXiv preprint arXiv: Arxiv-2210.13248} } ``` ```bibtex @inproceedings{Bredin2020, Title = {{pyannote.audio: neural building blocks for speaker diarization}}, Author = {{Bredin}, Herv{\'e} and {Yin}, Ruiqing and {Coria}, Juan Manuel and {Gelly}, Gregory and {Korshunov}, Pavel and {Lavechin}, Marvin and {Fustes}, Diego and {Titeux}, Hadrien and {Bouaziz}, Wassim and {Gill}, Marie-Philippe}, Booktitle = {ICASSP 2020, IEEE International Conference on Acoustics, Speech, and Signal Processing}, Address = {Barcelona, Spain}, Month = {May}, Year = {2020}, } ```
2,991
[ [ -0.017791748046875, -0.05755615234375, 0.01751708984375, 0.0188140869140625, -0.009674072265625, -0.019378662109375, -0.031707763671875, -0.0145416259765625, 0.0163421630859375, 0.02227783203125, -0.038604736328125, -0.04962158203125, -0.020477294921875, -0....
timm/vit_huge_patch14_clip_224.laion2b_ft_in12k_in1k
2023-05-06T00:08:24.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:laion-2b", "dataset:imagenet-12k", "arxiv:2212.07143", "arxiv:2210.08402", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/vit_huge_patch14_clip_224.laion2b_ft_in12k_in1k
1
689
timm
2022-11-01T23:02:08
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k - laion-2b - imagenet-12k --- # Model card for vit_huge_patch14_clip_224.laion2b_ft_in12k_in1k A Vision Transformer (ViT) image classification model. Pretrained on LAION-2B image-text pairs using OpenCLIP. Fine-tuned on ImageNet-12k and then ImageNet-1k in `timm`. See recipes in [Reproducible scaling laws](https://arxiv.org/abs/2212.07143). ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 632.0 - GMACs: 162.0 - Activations (M): 95.1 - Image size: 224 x 224 - **Papers:** - OpenCLIP: https://github.com/mlfoundations/open_clip - Reproducible scaling laws for contrastive language-image learning: https://arxiv.org/abs/2212.07143 - LAION-5B: An open large-scale dataset for training next generation image-text models: https://arxiv.org/abs/2210.08402 - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 - **Dataset:** ImageNet-1k - **Pretrain Dataset:** - LAION-2B - ImageNet-12k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('vit_huge_patch14_clip_224.laion2b_ft_in12k_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_huge_patch14_clip_224.laion2b_ft_in12k_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 257, 1280) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). 
## Citation ```bibtex @software{ilharco_gabriel_2021_5143773, author = {Ilharco, Gabriel and Wortsman, Mitchell and Wightman, Ross and Gordon, Cade and Carlini, Nicholas and Taori, Rohan and Dave, Achal and Shankar, Vaishaal and Namkoong, Hongseok and Miller, John and Hajishirzi, Hannaneh and Farhadi, Ali and Schmidt, Ludwig}, title = {OpenCLIP}, month = jul, year = 2021, note = {If you use this software, please cite it as below.}, publisher = {Zenodo}, version = {0.1}, doi = {10.5281/zenodo.5143773}, url = {https://doi.org/10.5281/zenodo.5143773} } ``` ```bibtex @article{cherti2022reproducible, title={Reproducible scaling laws for contrastive language-image learning}, author={Cherti, Mehdi and Beaumont, Romain and Wightman, Ross and Wortsman, Mitchell and Ilharco, Gabriel and Gordon, Cade and Schuhmann, Christoph and Schmidt, Ludwig and Jitsev, Jenia}, journal={arXiv preprint arXiv:2212.07143}, year={2022} } ``` ```bibtex @inproceedings{schuhmann2022laionb, title={{LAION}-5B: An open large-scale dataset for training next generation image-text models}, author={Christoph Schuhmann and Romain Beaumont and Richard Vencu and Cade W Gordon and Ross Wightman and Mehdi Cherti and Theo Coombes and Aarush Katta and Clayton Mullis and Mitchell Wortsman and Patrick Schramowski and Srivatsa R Kundurthy and Katherine Crowson and Ludwig Schmidt and Robert Kaczmarczyk and Jenia Jitsev}, booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2022}, url={https://openreview.net/forum?id=M3Y74vmsMcY} } ``` ```bibtex @article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
5,765
[ [ -0.03033447265625, -0.0283050537109375, 0.009979248046875, 0.01186370849609375, -0.02593994140625, -0.033294677734375, -0.033843994140625, -0.031524658203125, 0.0107879638671875, 0.0274200439453125, -0.030242919921875, -0.042633056640625, -0.0513916015625, -...
timm/vit_base_patch16_clip_224.openai_ft_in1k
2023-05-06T00:01:59.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:wit-400m", "arxiv:2212.07143", "arxiv:2103.00020", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/vit_base_patch16_clip_224.openai_ft_in1k
0
689
timm
2022-11-08T23:16:39
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k - wit-400m --- # Model card for vit_base_patch16_clip_224.openai_ft_in1k A Vision Transformer (ViT) image classification model. Pretrained on WIT-400M image-text pairs by OpenAI using CLIP. Fine-tuned on ImageNet-1k in `timm`. See recipes in [Reproducible scaling laws](https://arxiv.org/abs/2212.07143). ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 86.6 - GMACs: 16.9 - Activations (M): 16.5 - Image size: 224 x 224 - **Papers:** - Learning Transferable Visual Models From Natural Language Supervision: https://arxiv.org/abs/2103.00020 - Reproducible scaling laws for contrastive language-image learning: https://arxiv.org/abs/2212.07143 - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 - **Dataset:** ImageNet-1k - **Pretrain Dataset:** - WIT-400M ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('vit_base_patch16_clip_224.openai_ft_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_base_patch16_clip_224.openai_ft_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 197, 768) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @inproceedings{Radford2021LearningTV, title={Learning Transferable Visual Models From Natural Language Supervision}, author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. 
Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever}, booktitle={ICML}, year={2021} } ``` ```bibtex @article{cherti2022reproducible, title={Reproducible scaling laws for contrastive language-image learning}, author={Cherti, Mehdi and Beaumont, Romain and Wightman, Ross and Wortsman, Mitchell and Ilharco, Gabriel and Gordon, Cade and Schuhmann, Christoph and Schmidt, Ludwig and Jitsev, Jenia}, journal={arXiv preprint arXiv:2212.07143}, year={2022} } ``` ```bibtex @article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
4,364
[ [ -0.031494140625, -0.03900146484375, 0.0028533935546875, 0.01629638671875, -0.0240936279296875, -0.032989501953125, -0.0330810546875, -0.03192138671875, 0.010650634765625, 0.03106689453125, -0.0310516357421875, -0.0404052734375, -0.056884765625, -0.0019950866...
timm/flexivit_large.300ep_in1k
2023-05-05T23:59:16.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2212.08013", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/flexivit_large.300ep_in1k
0
689
timm
2022-12-22T07:16:36
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for flexivit_large.300ep_in1k A FlexiViT image classification model. Trained on ImageNet-1k in JAX by paper authors, ported to PyTorch by Ross Wightman. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 304.4 - GMACs: 68.5 - Activations (M): 50.2 - Image size: 240 x 240 - **Papers:** - FlexiViT: One Model for All Patch Sizes: https://arxiv.org/abs/2212.08013 - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 - **Dataset:** ImageNet-1k - **Original:** https://github.com/google-research/big_vision ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('flexivit_large.300ep_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'flexivit_large.300ep_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 226, 1024) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). 
## Citation ```bibtex @article{beyer2022flexivit, title={FlexiViT: One Model for All Patch Sizes}, author={Beyer, Lucas and Izmailov, Pavel and Kolesnikov, Alexander and Caron, Mathilde and Kornblith, Simon and Zhai, Xiaohua and Minderer, Matthias and Tschannen, Michael and Alabdulmohsin, Ibrahim and Pavetic, Filip}, journal={arXiv preprint arXiv:2212.08013}, year={2022} } ``` ```bibtex @article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
3,701
[ [ -0.038604736328125, -0.028228759765625, 0.005252838134765625, 0.005573272705078125, -0.0256805419921875, -0.0291900634765625, -0.020233154296875, -0.0362548828125, 0.016876220703125, 0.017578125, -0.042205810546875, -0.041229248046875, -0.04522705078125, -0....
jeff31415/TinyLlama-1.1B-1T-OpenOrca
2023-10-25T13:52:12.000Z
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "en", "dataset:Open-Orca/OpenOrca", "dataset:bigcode/starcoderdata", "dataset:cerebras/SlimPajama-627B", "license:apache-2.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
jeff31415
null
null
jeff31415/TinyLlama-1.1B-1T-OpenOrca
5
689
transformers
2023-10-09T16:06:21
--- license: apache-2.0 datasets: - Open-Orca/OpenOrca - bigcode/starcoderdata - cerebras/SlimPajama-627B language: - en --- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) #### Base model: PY007/TinyLlama-1.1B-intermediate-step-480k-1T #### Dataset: Fine-tuned on the OpenOrca GPT-4 subset for 1 epoch, using the ChatML format #### Model License: Apache 2.0, following the TinyLlama base model. #### Quantisation: - GPTQ: https://huggingface.co/TheBloke/TinyLlama-1.1B-1T-OpenOrca-GPTQ - AWQ: https://huggingface.co/TheBloke/TinyLlama-1.1B-1T-OpenOrca-AWQ - GGUF: https://huggingface.co/TheBloke/TinyLlama-1.1B-1T-OpenOrca-GGUF #### Hardware and training details: Hardware: 1x RTX A5000, ~16 hours to complete 1 epoch. The GPU was rented from autodl.com; this fine-tuning cost around $3. See https://wandb.ai/jeff200402/TinyLlama-Orca?workspace= for more details.
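Since the card states the model was tuned with the ChatML format but gives no usage snippet, a minimal generation sketch could look like the following. The system prompt, sampling settings and example question are illustrative assumptions, not values from the training run.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jeff31415/TinyLlama-1.1B-1T-OpenOrca"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# ChatML-style prompt, matching the format stated in the card
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nExplain what a tokenizer does in one sentence.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```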
1,014
[ [ -0.032318115234375, -0.050201416015625, 0.0229949951171875, 0.00962066650390625, -0.028228759765625, -0.01009368896484375, -0.00617218017578125, -0.045257568359375, 0.011383056640625, -0.005832672119140625, -0.0506591796875, -0.048065185546875, -0.0208740234375,...
flair/ner-multi-fast
2022-10-04T09:19:01.000Z
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "en", "de", "nl", "es", "dataset:conll2003", "has_space", "region:us" ]
token-classification
flair
null
null
flair/ner-multi-fast
1
688
flair
2022-03-02T23:29:05
--- tags: - flair - token-classification - sequence-tagger-model language: - en - de - nl - es datasets: - conll2003 widget: - text: "George Washington ging nach Washington" --- ## 4-Language NER in Flair (English, German, Dutch and Spanish) This is the fast 4-class NER model for 4 CoNLL-03 languages that ships with [Flair](https://github.com/flairNLP/flair/). Also kind of works for related languages like French. F1-Score: **91,51** (CoNLL-03 English), **85,72** (CoNLL-03 German revised), **86,22** (CoNLL-03 Dutch), **85,78** (CoNLL-03 Spanish) Predicts 4 tags: | **tag** | **meaning** | |---------------------------------|-----------| | PER | person name | | LOC | location name | | ORG | organization name | | MISC | other name | Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF. --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/ner-multi-fast") # make example sentence in any of the four languages sentence = Sentence("George Washington ging nach Washington") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ``` This yields the following output: ``` Span [1,2]: "George Washington" [− Labels: PER (0.9977)] Span [5]: "Washington" [− Labels: LOC (0.9895)] ``` So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington ging nach Washington*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python from flair.data import Corpus, MultiCorpus from flair.datasets import CONLL_03, CONLL_03_GERMAN, CONLL_03_DUTCH, CONLL_03_SPANISH from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings # 1. get the multi-language corpus corpus: Corpus = MultiCorpus([ CONLL_03(), # English corpus CONLL_03_GERMAN(), # German corpus CONLL_03_DUTCH(), # Dutch corpus CONLL_03_SPANISH(), # Spanish corpus ]) # 2. what tag do we want to predict? tag_type = 'ner' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize each embedding we use embedding_types = [ # GloVe embeddings WordEmbeddings('glove'), # FastText embeddings WordEmbeddings('de'), # contextual string embeddings, forward FlairEmbeddings('multi-forward-fast'), # contextual string embeddings, backward FlairEmbeddings('multi-backward-fast'), ] # embedding stack consists of Flair and GloVe embeddings embeddings = StackedEmbeddings(embeddings=embedding_types) # 5. initialize sequence tagger from flair.models import SequenceTagger tagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type) # 6. initialize trainer from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus) # 7. run training trainer.train('resources/taggers/ner-multi-fast', train_with_dev=True, max_epochs=150) ``` --- ### Cite Please cite the following papers when using this model.
``` @misc{akbik2019multilingual, title={Multilingual sequence labeling with one model}, author={Akbik, Alan and Bergmann, Tanja and Vollgraf, Roland}, booktitle = {{NLDL} 2019, Northern Lights Deep Learning Workshop}, year = {2019} } ``` ``` @inproceedings{akbik2018coling, title={Contextual String Embeddings for Sequence Labeling}, author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland}, booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics}, pages = {1638--1649}, year = {2018} } ```
4,178
[ [ -0.035186767578125, -0.04888916015625, 0.00841522216796875, 0.0177764892578125, -0.006061553955078125, 0.0005517005920410156, -0.0271453857421875, -0.038360595703125, 0.03485107421875, 0.0169219970703125, -0.039031982421875, -0.044342041015625, -0.03117370605468...
spacy/en_core_web_lg
2023-10-10T06:50:41.000Z
[ "spacy", "token-classification", "en", "license:mit", "model-index", "has_space", "region:us" ]
token-classification
spacy
null
null
spacy/en_core_web_lg
17
688
spacy
2022-03-02T23:29:05
--- tags: - spacy - token-classification language: - en license: mit model-index: - name: en_core_web_lg results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.8516398746 - name: NER Recall type: recall value: 0.8569711538 - name: NER F Score type: f_score value: 0.8542971968 - task: name: TAG type: token-classification metrics: - name: TAG (XPOS) Accuracy type: accuracy value: 0.9734810915 - task: name: UNLABELED_DEPENDENCIES type: token-classification metrics: - name: Unlabeled Attachment Score (UAS) type: f_score value: 0.9208198801 - task: name: LABELED_DEPENDENCIES type: token-classification metrics: - name: Labeled Attachment Score (LAS) type: f_score value: 0.9027174273 - task: name: SENTS type: token-classification metrics: - name: Sentences F-Score type: f_score value: 0.907098331 --- ### Details: https://spacy.io/models/en#en_core_web_lg English pipeline optimized for CPU. Components: tok2vec, tagger, parser, senter, ner, attribute_ruler, lemmatizer. | Feature | Description | | --- | --- | | **Name** | `en_core_web_lg` | | **Version** | `3.7.0` | | **spaCy** | `>=3.7.0,<3.8.0` | | **Default Pipeline** | `tok2vec`, `tagger`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` | | **Components** | `tok2vec`, `tagger`, `parser`, `senter`, `attribute_ruler`, `lemmatizer`, `ner` | | **Vectors** | 514157 keys, 514157 unique vectors (300 dimensions) | | **Sources** | [OntoNotes 5](https://catalog.ldc.upenn.edu/LDC2013T19) (Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, Ann Houston)<br />[ClearNLP Constituent-to-Dependency Conversion](https://github.com/clir/clearnlp-guidelines/blob/master/md/components/dependency_conversion.md) (Emory University)<br />[WordNet 3.0](https://wordnet.princeton.edu/) (Princeton University)<br />[Explosion Vectors (OSCAR 2109 + Wikipedia + OpenSubtitles + WMT News Crawl)](https://github.com/explosion/spacy-vectors-builder) (Explosion) | | **License** | `MIT` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (113 labels for 3 components)</summary> | Component | Labels | | --- | --- | | **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, `_SP`, ```` | | **`parser`** | `ROOT`, `acl`, `acomp`, `advcl`, `advmod`, `agent`, `amod`, `appos`, `attr`, `aux`, `auxpass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `csubj`, `csubjpass`, `dative`, `dep`, `det`, `dobj`, `expl`, `intj`, `mark`, `meta`, `neg`, `nmod`, `npadvmod`, `nsubj`, `nsubjpass`, `nummod`, `oprd`, `parataxis`, `pcomp`, `pobj`, `poss`, `preconj`, `predet`, `prep`, `prt`, `punct`, `quantmod`, `relcl`, `xcomp` | | **`ner`** | `CARDINAL`, `DATE`, `EVENT`, `FAC`, `GPE`, `LANGUAGE`, `LAW`, `LOC`, `MONEY`, `NORP`, `ORDINAL`, `ORG`, `PERCENT`, `PERSON`, `PRODUCT`, `QUANTITY`, `TIME`, `WORK_OF_ART` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_ACC` | 99.86 | | `TOKEN_P` | 99.57 | | `TOKEN_R` | 99.58 | | `TOKEN_F` | 99.57 | | `TAG_ACC` | 97.35 | | `SENTS_P` | 92.19 | | `SENTS_R` | 89.27 | | `SENTS_F` | 90.71 | | `DEP_UAS` | 92.08 | | `DEP_LAS` | 90.27 | | `ENTS_P` | 85.16 
| | `ENTS_R` | 85.70 | | `ENTS_F` | 85.43 |
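The card documents the pipeline components and label scheme but no loading snippet; standard spaCy usage applies. The example sentence below is illustrative, and the package must be installed first (e.g. `python -m spacy download en_core_web_lg`).

```python
import spacy

nlp = spacy.load("en_core_web_lg")
doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")

# tagger / parser / lemmatizer output for the first few tokens
for token in doc[:5]:
    print(token.text, token.tag_, token.dep_, token.lemma_)

# entities use the NER label scheme listed above (ORG, GPE, MONEY, ...)
for ent in doc.ents:
    print(ent.text, ent.label_)
```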
3,810
[ [ -0.044708251953125, -0.03717041015625, 0.0160064697265625, 0.0137939453125, -0.01629638671875, -0.0002627372741699219, -0.010101318359375, -0.0104217529296875, 0.01380157470703125, 0.030242919921875, -0.04388427734375, -0.060699462890625, -0.05438232421875, ...
RogerKam/roberta_fine_tuned_sentiment_newsmtsc
2022-06-09T14:27:18.000Z
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us" ]
text-classification
RogerKam
null
null
RogerKam/roberta_fine_tuned_sentiment_newsmtsc
0
688
transformers
2022-06-03T22:19:22
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta_fine_tuned_sentiment_newsmtsc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta_fine_tuned_sentiment_newsmtsc This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6134 - Accuracy: 0.7713 - F1 Score: 0.7710 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.19.2 - Pytorch 1.10.0+cu111 - Datasets 2.2.2 - Tokenizers 0.12.1
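A basic usage sketch for this classifier is shown below. The example sentence is illustrative, and the meaning of the returned label names depends on the fine-tuning configuration, which the card does not document.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="RogerKam/roberta_fine_tuned_sentiment_newsmtsc",
)

result = classifier("The company beat earnings expectations and raised its guidance.")
print(result)  # e.g. [{'label': ..., 'score': ...}]
```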
1,180
[ [ -0.03564453125, -0.055633544921875, 0.0196685791015625, 0.0170745849609375, -0.032562255859375, -0.02880859375, -0.031768798828125, -0.006198883056640625, 0.01219940185546875, 0.0250396728515625, -0.056915283203125, -0.0592041015625, -0.057159423828125, -0.0...
digiplay/dosmix
2023-06-19T19:24:35.000Z
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
digiplay
null
null
digiplay/dosmix
3
688
diffusers
2023-05-29T17:41:02
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- https://civitai.com/models/6250/dosmix
160
[ [ -0.044342041015625, 0.00807952880859375, 0.07135009765625, 0.0174713134765625, -0.028564453125, 0.004673004150390625, 0.052032470703125, 0.0139312744140625, 0.032135009765625, 0.0233612060546875, -0.05999755859375, 0.006580352783203125, -0.0047454833984375, ...
TFMC/ELYZA-japanese-Llama-2-7b-instruct-GPTQ-4bit-64g
2023-08-30T07:53:28.000Z
[ "transformers", "safetensors", "llama", "text-generation", "license:llama2", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
TFMC
null
null
TFMC/ELYZA-japanese-Llama-2-7b-instruct-GPTQ-4bit-64g
4
688
transformers
2023-08-30T01:30:19
--- license: llama2 --- # ELYZA-japanese-Llama-2-7b-instruct-GPTQ-4bit-64g GPTQ model for ["elyza/ELYZA-japanese-Llama-2-7b-instruct"](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b-instruct) : 4bits, gr64, desc_act=True
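A minimal loading sketch under the stated quantisation settings (4-bit, group size 64, desc_act=True), assuming the AutoGPTQ package is installed; the argument names follow auto-gptq's `from_quantized`, and the plain prompt below does not reproduce the ELYZA instruction template, which should be taken from the base model card.

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "TFMC/ELYZA-japanese-Llama-2-7b-instruct-GPTQ-4bit-64g"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# load the pre-quantized weights onto a single GPU
model = AutoGPTQForCausalLM.from_quantized(model_id, device="cuda:0", use_safetensors=True)

prompt = "日本の四季について簡単に説明してください。"  # example Japanese prompt
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```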
231
[ [ -0.0149078369140625, -0.0657958984375, 0.04388427734375, 0.038482666015625, -0.055938720703125, 0.00670623779296875, 0.047576904296875, -0.036895751953125, 0.0277252197265625, 0.049102783203125, -0.03900146484375, -0.03790283203125, -0.04193115234375, -0.001...