license | tags | is_nc | readme_section | hash |
|---|---|---|---|---|
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Fscore | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | 0.8468 | 1.0 | 815 | 0.7465 | 0.7116 | 0.6096 | 0.6325 | | 0.5105 | 2.0 | 1630 | 0.9035 | 0.7532 | 0.7111 | 0.7276 | | 0.2492 | 3.0 | 2445 | 1.1951 | 0.7350 | 0.7334 | 0.7341 | | a626e28268e464a0f6b9751fe82b8fa3 |
mit | ['generated_from_trainer'] | false | xlm-roberta-large-finetuned-sent_in_news This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8872 - Accuracy: 0.7273 - F1: 0.5125 | 35f37fceed9e9e0f932d79c9282d0cfe |
mit | ['generated_from_trainer'] | false | Model description The model is asymmetric: it reacts to the label X in the news text. Try the following examples: a) Agency X downgraded the rating of Fitch bank. b) The Fitch agency downgraded the rating of bank X. a) The company Finam posted record profits, according to analysts at company X. b) Company X posted record profits, according to analysts at the company Finam. | 65227457e82f0226e167934c592b952b |
mit | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 16 | a726b06eb56d1096a7d4d8100cc58b9a |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 106 | 1.2526 | 0.6108 | 0.1508 | | No log | 2.0 | 212 | 1.1553 | 0.6648 | 0.1141 | | No log | 3.0 | 318 | 1.1150 | 0.6591 | 0.1247 | | No log | 4.0 | 424 | 1.0007 | 0.6705 | 0.1383 | | 1.1323 | 5.0 | 530 | 0.9267 | 0.6733 | 0.2027 | | 1.1323 | 6.0 | 636 | 1.0869 | 0.6335 | 0.4084 | | 1.1323 | 7.0 | 742 | 1.1224 | 0.6932 | 0.4586 | | 1.1323 | 8.0 | 848 | 1.2535 | 0.6307 | 0.3424 | | 1.1323 | 9.0 | 954 | 1.4288 | 0.6932 | 0.4881 | | 0.5252 | 10.0 | 1060 | 1.5856 | 0.6932 | 0.4739 | | 0.5252 | 11.0 | 1166 | 1.7101 | 0.6733 | 0.4530 | | 0.5252 | 12.0 | 1272 | 1.7330 | 0.6903 | 0.4750 | | 0.5252 | 13.0 | 1378 | 1.8872 | 0.7273 | 0.5125 | | 0.5252 | 14.0 | 1484 | 1.8797 | 0.7301 | 0.5033 | | 0.1252 | 15.0 | 1590 | 1.9339 | 0.7330 | 0.5024 | | 0.1252 | 16.0 | 1696 | 1.9632 | 0.7301 | 0.4967 | | 3248caa23ff08e0460989e4e2c1ed840 |
gpl-3.0 | ['twitter', 'masked-token-prediction', 'election2020', 'politics'] | false | Pre-trained BERT on Twitter US Political Election 2020 Pre-trained weights for [Knowledge Enhanced Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021. We initialize from the weights of BERT-base (uncased), i.e. `bert-base-uncased`. | 74885aea0103e797538796062047bfe2 |
gpl-3.0 | ['twitter', 'masked-token-prediction', 'election2020', 'politics'] | false | Usage This pre-trained language model **can be fine-tuned for any downstream task (e.g. classification)**. Please see the [official repository](https://github.com/GU-DataLab/stance-detection-KE-MLM) for more details. ```python from transformers import BertTokenizer, BertForMaskedLM, pipeline import torch | 637dd5b7b93d0292a6099585001f5a5c |
gpl-3.0 | ['twitter', 'masked-token-prediction', 'election2020', 'politics'] | false | The Hugging Face library has been updated; newer versions accept a model name string instead. fill_mask = pipeline('fill-mask', model=pretrained_LM_path, tokenizer=tokenizer) outputs = fill_mask(example) print(outputs) | 7bcff26eead9d1bb40d0fd965d29a80a |
gpl-3.0 | ['twitter', 'masked-token-prediction', 'election2020', 'politics'] | false | Citation ```bibtex @inproceedings{kawintiranon2021knowledge, title={Knowledge Enhanced Masked Language Model for Stance Detection}, author={Kawintiranon, Kornraphop and Singh, Lisa}, booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies}, year={2021}, publisher={Association for Computational Linguistics}, url={https://www.aclweb.org/anthology/2021.naacl-main.376} } ``` | a0e952a88720e054943bb61c40c6cce2 |
cc-by-sa-4.0 | ['t5', 'text2text-generation', 'seq2seq'] | false | Overview of the steps used to create this model 1. [SQuAD 1.1](https://rajpurkar.github.io/SQuAD-explorer/) was machine-translated into Japanese and invalid data was cleaned out (roughly half of the data remained valid), yielding triples of (context containing the answer, question, answer). 2. The [Japanese T5 model](https://huggingface.co/sonoisa/t5-base-japanese) was fine-tuned with the following settings * Input: "answer: {answer} content: {context containing the answer}" * Output: "{question}" * Hyperparameters * Maximum input tokens: 512 * Maximum output tokens: 64 * Optimizer: AdaFactor * Learning rate: 0.001 (fixed) * Batch size: 128 * Steps: 2500 (a checkpoint was saved every 500 steps; quantitative and qualitative evaluation led to adopting the step-2500 checkpoint) | ee066362dcd8b5f96957e42e27031dab |
apache-2.0 | ['generated_from_trainer'] | false | bert-base-uncased-finetuned-QnA This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.0604 | 345991b9f19c80819878a0b66650e216 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 20 | 3.4894 | | No log | 2.0 | 40 | 3.5654 | | No log | 3.0 | 60 | 3.3185 | | No log | 4.0 | 80 | 3.2859 | | No log | 5.0 | 100 | 3.2947 | | No log | 6.0 | 120 | 3.3998 | | No log | 7.0 | 140 | 3.1642 | | No log | 8.0 | 160 | 3.2653 | | No log | 9.0 | 180 | 3.3427 | | No log | 10.0 | 200 | 3.3549 | | 012d4ae0ce022f3bd0204c37a577cf47 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 | 8a39d67ee0bcc36216897b08a94f9477 |
creativeml-openrail-m | ['text-to-image', 'stable-diffusion'] | false | jowx Dreambooth model trained by raw-vitor with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept: | 1676a1673dea0e2135e5a6a174374553 |
apache-2.0 | ['generated_from_trainer'] | false | distilbert_sa_GLUE_Experiment_logit_kd_pretrain_qqp This model is a fine-tuned version of [gokuls/distilbert_sa_pre-training-complete](https://huggingface.co/gokuls/distilbert_sa_pre-training-complete) on the GLUE QQP dataset. It achieves the following results on the evaluation set: - Loss: 0.5449 - Accuracy: 0.6632 - F1: 0.1647 - Combined Score: 0.4139 | 337ab98a2e639ed3287c0cfd26df7ecb |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:| | 0.6004 | 1.0 | 1422 | 0.5643 | 0.6623 | 0.1630 | 0.4126 | | 0.5393 | 2.0 | 2844 | 0.5498 | 0.6538 | 0.1199 | 0.3869 | | 0.5157 | 3.0 | 4266 | 0.5449 | 0.6632 | 0.1647 | 0.4139 | | 0.5007 | 4.0 | 5688 | 0.5512 | 0.6848 | 0.2663 | 0.4755 | | 0.4914 | 5.0 | 7110 | 0.5501 | 0.6665 | 0.1817 | 0.4241 | | 0.4847 | 6.0 | 8532 | 0.5475 | 0.6816 | 0.2517 | 0.4667 | | 0.4803 | 7.0 | 9954 | 0.5478 | 0.6768 | 0.2301 | 0.4535 | | 0.4768 | 8.0 | 11376 | 0.5488 | 0.6839 | 0.2610 | 0.4724 | | a9a7d3076269e5cd0c5d2397cba27384 |
apache-2.0 | ['generated_from_keras_callback'] | false | atowey01/hostel-reviews-sentiment-model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2391 - Validation Loss: 0.3849 - Train Accuracy: 0.8675 - Epoch: 4 | 2527a39f9e3936f63d0d0756d6bdb4e0 |
apache-2.0 | ['generated_from_keras_callback'] | false | Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 185, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 | 07b36b5db2ac33686e62dcd842c2d689 |
apache-2.0 | ['generated_from_keras_callback'] | false | Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.8401 | 0.6058 | 0.8278 | 0 | | 0.4835 | 0.4979 | 0.8146 | 1 | | 0.3606 | 0.4885 | 0.8079 | 2 | | 0.2943 | 0.3936 | 0.8742 | 3 | | 0.2391 | 0.3849 | 0.8675 | 4 | | 382250c744e590a66a67efd819daaf4b |
apache-2.0 | ['generated_from_trainer'] | false | testmodel This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.7132 - Accuracy: 0.697 - F1: 0.697 | b1521b3e523b093c471ee8eebc3f7890 |
apache-2.0 | ['translation', 'generated_from_trainer'] | false | pt-opus-news This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-mul](https://huggingface.co/Helsinki-NLP/opus-mt-en-mul) on the news_commentary dataset. It achieves the following results on the evaluation set: - Loss: 1.0975 - Bleu: 37.5502 | 8771cb46c0e47fa48dbdfaca3484022d |
apache-2.0 | ['generated_from_trainer'] | false | albert-base-v2-finetuned-squad This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the squad dataset. It achieves the following results on the evaluation set: - eval_loss: 1.0191 - eval_runtime: 291.8551 - eval_samples_per_second: 37.032 - eval_steps_per_second: 2.316 - epoch: 3.0 - step: 16620 | f5c984208e4f6d7b3b22361454e06875 |
creativeml-openrail-m | ['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'astronomy'] | false | DreamBooth model for the astronomy concept trained by Dhruv Singal on the NASA Astronomy Picture of the Week dataset. This is a Stable Diffusion 2.1 model fine-tuned on the astronomy concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of the solar system hbbltls astronomy** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part! | f1e148a7446f801a82e8305bccb74c42 |
mit | ['generated_from_trainer'] | false | xlm-roberta-base-finetuned-misogyny-sexism This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9064 - Accuracy: 0.8334 - F1: 0.3322 - Precision: 0.2498 - Recall: 0.4961 - Mae: 0.1666 | 9b3c81dc43f2d857afc8cf47591a50ad |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:| | 0.3869 | 1.0 | 2395 | 0.2905 | 0.8778 | 0.3528 | 0.3164 | 0.3988 | 0.1222 | | 0.3539 | 2.0 | 4790 | 0.4143 | 0.8278 | 0.3465 | 0.2536 | 0.5467 | 0.1722 | | 0.3124 | 3.0 | 7185 | 0.3327 | 0.8568 | 0.3583 | 0.2864 | 0.4786 | 0.1432 | | 0.2817 | 4.0 | 9580 | 0.5621 | 0.7329 | 0.3092 | 0.1972 | 0.7160 | 0.2671 | | 0.2651 | 5.0 | 11975 | 0.4376 | 0.8520 | 0.3607 | 0.2821 | 0.5 | 0.1480 | | 0.2249 | 6.0 | 14370 | 0.5581 | 0.8326 | 0.3312 | 0.2485 | 0.4961 | 0.1674 | | 0.1958 | 7.0 | 16765 | 0.6728 | 0.8382 | 0.3234 | 0.2484 | 0.4630 | 0.1618 | | 0.1899 | 8.0 | 19160 | 0.7404 | 0.8304 | 0.3316 | 0.2471 | 0.5039 | 0.1696 | | 0.1619 | 9.0 | 21555 | 0.8309 | 0.8461 | 0.3382 | 0.2639 | 0.4708 | 0.1539 | | 0.1453 | 10.0 | 23950 | 0.9064 | 0.8334 | 0.3322 | 0.2498 | 0.4961 | 0.1666 | | 2467ceb03d17bda34cb4ce49214a09c8 |
apache-2.0 | ['automatic-speech-recognition', 'uk'] | false | exp_w2v2t_uk_wavlm_s21 Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (uk)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool. | e418b0d90caf2cbecb38a18e9969b602 |
apache-2.0 | ['generated_from_trainer'] | false | xls-r-300m-bemba-20hrs This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2815 - Wer: 0.3435 | f145521c80af76203753cc413a12598c |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 - mixed_precision_training: Native AMP | 8ec85c01738c6a1ee879fec2fc9d4991 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.3301 | 0.54 | 400 | 0.5177 | 0.7570 | | 0.6437 | 1.08 | 800 | 0.3580 | 0.5658 | | 0.5149 | 1.61 | 1200 | 0.2953 | 0.5004 | | 0.4547 | 2.15 | 1600 | 0.2701 | 0.4464 | | 0.4084 | 2.69 | 2000 | 0.2743 | 0.4383 | | 0.3606 | 3.23 | 2400 | 0.2482 | 0.3952 | | 0.3227 | 3.76 | 2800 | 0.2461 | 0.3965 | | 0.3025 | 4.3 | 3200 | 0.2484 | 0.4015 | | 0.2697 | 4.84 | 3600 | 0.2357 | 0.3838 | | 0.2443 | 5.38 | 4000 | 0.2385 | 0.3822 | | 0.2287 | 5.91 | 4400 | 0.2353 | 0.3747 | | 0.1977 | 6.45 | 4800 | 0.2337 | 0.3624 | | 0.1895 | 6.99 | 5200 | 0.2319 | 0.3568 | | 0.1561 | 7.53 | 5600 | 0.2540 | 0.3561 | | 0.1448 | 8.06 | 6000 | 0.2772 | 0.3612 | | 0.1221 | 8.6 | 6400 | 0.2755 | 0.3596 | | 0.1133 | 9.14 | 6800 | 0.2733 | 0.3495 | | 0.0969 | 9.68 | 7200 | 0.2815 | 0.3435 | | ee4f6647ed0169f6e0ea4cb934d1cb53 |
apache-2.0 | [] | false | Perceiver IO for vision (fixed Fourier position embeddings) Perceiver IO model pre-trained on ImageNet (14 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Jaegle et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/perceiver). Disclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team. | 90b250b2ab3338486536a929e1f615f7 |
apache-2.0 | [] | false | Model description Perceiver IO is a transformer encoder model that can be applied to any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This keeps the time and memory requirements of the self-attention mechanism independent of the size of the inputs. To decode, the authors employ so-called decoder queries, which allow the model to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For image classification, the output is a tensor containing the logits, of shape (batch_size, num_labels). <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/perceiver_architecture.jpg" alt="drawing" width="600"/> <small> Perceiver IO architecture.</small> As the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model directly on raw pixel values, rather than on patches as is done in ViT. This particular model only adds fixed Fourier 2D position embeddings to the pixel values. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by replacing the classification decoder. | 2fb2c1370374cdc4faccb63f5dd1eefa |
apache-2.0 | [] | false | Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=deepmind/perceiver) to look for other fine-tuned versions on a task that may interest you. | a08ded85fbfb6f59c07be87d17dfd832 |
apache-2.0 | [] | false | How to use Here is how to use this model in PyTorch: ```python from transformers import PerceiverFeatureExtractor, PerceiverForImageClassificationFourier import requests from PIL import Image feature_extractor = PerceiverFeatureExtractor.from_pretrained("deepmind/vision-perceiver-fourier") model = PerceiverForImageClassificationFourier.from_pretrained("deepmind/vision-perceiver-fourier") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) | 29085f1c634ab9833e8413d470794dc0 |
apache-2.0 | [] | false | Preprocessing Images are center cropped and resized to a resolution of 224x224 and normalized across the RGB channels. Note that data augmentation was used during pre-training, as explained in Appendix H of the [paper](https://arxiv.org/abs/2107.14795). | ccb0760dc488f5b71bf775643b7f334f |
apache-2.0 | [] | false | BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2107-14795, author = {Andrew Jaegle and Sebastian Borgeaud and Jean{-}Baptiste Alayrac and Carl Doersch and Catalin Ionescu and David Ding and Skanda Koppula and Daniel Zoran and Andrew Brock and Evan Shelhamer and Olivier J. H{\'{e}}naff and Matthew M. Botvinick and Andrew Zisserman and Oriol Vinyals and Jo{\~{a}}o Carreira}, title = {Perceiver {IO:} {A} General Architecture for Structured Inputs {\&} Outputs}, journal = {CoRR}, volume = {abs/2107.14795}, year = {2021}, url = {https://arxiv.org/abs/2107.14795}, eprinttype = {arXiv}, eprint = {2107.14795}, timestamp = {Tue, 03 Aug 2021 14:53:34 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2107-14795.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` | 5d92a9b711f2929e7d6fc43114dd7599 |
mit | ['generated_from_trainer'] | false | xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1737 - F1: 0.8521 | cda11e7df9f4e830388a77fd4563cf9e |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.305 | 1.0 | 835 | 0.1944 | 0.7968 | | 0.1569 | 2.0 | 1670 | 0.1759 | 0.8395 | | 0.1027 | 3.0 | 2505 | 0.1737 | 0.8521 | | 1bf94851f08aa2cc0eeab110b21006c2 |
apache-2.0 | ['generated-from-trainer'] | false | model This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.9150 - Accuracy: 0.2662 | a494d2ff36977d3126805ea393921c5a |
apache-2.0 | ['generated-from-trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.0528 | 0.44 | 1000 | 3.0265 | 0.2223 | | 2.9836 | 0.89 | 2000 | 2.9263 | 0.2332 | | 2.7409 | 1.33 | 3000 | 2.9041 | 0.2533 | | 2.7905 | 1.77 | 4000 | 2.8763 | 0.2606 | | 2.4359 | 2.22 | 5000 | 2.9072 | 0.2642 | | 2.4507 | 2.66 | 6000 | 2.9230 | 0.2644 | | 13e6979871f2841411c27312bdd478e5 |
apache-2.0 | ['translation'] | false | he-it * source group: Hebrew * target group: Italian * OPUS readme: [heb-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-ita/README.md) * model: transformer * source language(s): heb * target language(s): ita * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-12-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ita/opus-2020-12-10.zip) * test set translations: [opus-2020-12-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ita/opus-2020-12-10.test.txt) * test set scores: [opus-2020-12-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ita/opus-2020-12-10.eval.txt) | 8ee68c2c4b91795641aa8727f796ff90 |
apache-2.0 | ['translation'] | false | System Info: - hf_name: he-it - source_languages: heb - target_languages: ita - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-ita/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['he', 'it'] - src_constituents: ('Hebrew', {'heb'}) - tgt_constituents: ('Italian', {'ita'}) - src_multilingual: False - tgt_multilingual: False - long_pair: heb-ita - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ita/opus-2020-12-10.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ita/opus-2020-12-10.test.txt - src_alpha3: heb - tgt_alpha3: ita - chrF2_score: 0.643 - bleu: 41.1 - brevity_penalty: 0.997 - ref_len: 11464.0 - src_name: Hebrew - tgt_name: Italian - train_date: 2020-12-10 00:00:00 - src_alpha2: he - tgt_alpha2: it - prefer_old: False - short_pair: he-it - helsinki_git_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96 - transformers_git_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de - port_machine: LM0-400-22516.local - port_time: 2020-12-11-11:50 | bc0816fb180dd317abdfec851b58908c |
mit | ['classification'] | false | Overview The model is a `roberta-base` fine-tuned on [fake-and-real-news-dataset](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset). It achieves 100% accuracy on that dataset. The model takes a news article and predicts whether it is real or fake. The format of the input should be: ``` <title> TITLE HERE <content> CONTENT HERE <end> ``` | d616cd575a96ffb61790808f764a4e76 |
mit | ['classification'] | false | Using this model in your code To use this model, first download it from the Hugging Face website: ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("hamzab/roberta-fake-news-classification") model = AutoModelForSequenceClassification.from_pretrained("hamzab/roberta-fake-news-classification") ``` Then, make a prediction as follows: ```python import torch def predict_fake(title,text): input_str = "<title>" + title + "<content>" + text + "<end>" input_ids = tokenizer.encode_plus(input_str, max_length=512, padding="max_length", truncation=True, return_tensors="pt") device = 'cuda' if torch.cuda.is_available() else 'cpu' model.to(device) with torch.no_grad(): output = model(input_ids["input_ids"].to(device), attention_mask=input_ids["attention_mask"].to(device)) return dict(zip(["Fake","Real"], [x.item() for x in list(torch.nn.Softmax()(output.logits)[0])] )) print(predict_fake(<HEADLINE-HERE>,<CONTENT-HERE>)) ``` You can also use Gradio to test the model in real time: ```python import gradio as gr iface = gr.Interface(fn=predict_fake, inputs=[gr.inputs.Textbox(lines=1,label="headline"),gr.inputs.Textbox(lines=6,label="content")], outputs="label").launch(share=True) ``` | 54417f92d691d50e99e2bce25c554903 |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased_fold_3_binary_v1 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9405 - F1: 0.7878 | 818e14e974a2354dbb5bee8a5646976a |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 25 | dc070654fd81151cd3dffc4c1b128e8b |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 289 | 0.4630 | 0.7897 | | 0.3954 | 2.0 | 578 | 0.4549 | 0.7936 | | 0.3954 | 3.0 | 867 | 0.6527 | 0.7868 | | 0.1991 | 4.0 | 1156 | 0.7510 | 0.7951 | | 0.1991 | 5.0 | 1445 | 0.9327 | 0.8000 | | 0.095 | 6.0 | 1734 | 1.0974 | 0.7859 | | 0.0347 | 7.0 | 2023 | 1.2692 | 0.7919 | | 0.0347 | 8.0 | 2312 | 1.3718 | 0.7921 | | 0.0105 | 9.0 | 2601 | 1.4679 | 0.7999 | | 0.0105 | 10.0 | 2890 | 1.5033 | 0.8070 | | 0.0079 | 11.0 | 3179 | 1.6074 | 0.8008 | | 0.0079 | 12.0 | 3468 | 1.6921 | 0.7904 | | 0.0053 | 13.0 | 3757 | 1.7079 | 0.7945 | | 0.0054 | 14.0 | 4046 | 1.8361 | 0.7887 | | 0.0054 | 15.0 | 4335 | 1.7695 | 0.7873 | | 0.0046 | 16.0 | 4624 | 1.7934 | 0.7917 | | 0.0046 | 17.0 | 4913 | 1.8036 | 0.8008 | | 0.0064 | 18.0 | 5202 | 1.8780 | 0.7888 | | 0.0064 | 19.0 | 5491 | 1.8943 | 0.7923 | | 0.0032 | 20.0 | 5780 | 1.8694 | 0.7905 | | 0.002 | 21.0 | 6069 | 1.9348 | 0.7869 | | 0.002 | 22.0 | 6358 | 1.9578 | 0.7804 | | 0.0036 | 23.0 | 6647 | 1.9438 | 0.7827 | | 0.0036 | 24.0 | 6936 | 1.9386 | 0.7878 | | 0.0011 | 25.0 | 7225 | 1.9405 | 0.7878 | | 58d3d81100a1bc8dfba1b19576d1e6a6 |
apache-2.0 | ['translation'] | false | kor-eng * source group: Korean * target group: English * OPUS readme: [kor-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kor-eng/README.md) * model: transformer-align * source language(s): kor kor_Hang kor_Latn * target language(s): eng * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-eng/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-eng/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-eng/opus-2020-06-17.eval.txt) | ec0332824528708a4843b1c0fa6a9743 |
apache-2.0 | ['translation'] | false | System Info: - hf_name: kor-eng - source_languages: kor - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kor-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ko', 'en'] - src_constituents: {'kor_Hani', 'kor_Hang', 'kor_Latn', 'kor'} - tgt_constituents: {'eng'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/kor-eng/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/kor-eng/opus-2020-06-17.test.txt - src_alpha3: kor - tgt_alpha3: eng - short_pair: ko-en - chrF2_score: 0.588 - bleu: 41.3 - brevity_penalty: 0.9590000000000001 - ref_len: 17711.0 - src_name: Korean - tgt_name: English - train_date: 2020-06-17 - src_alpha2: ko - tgt_alpha2: en - prefer_old: False - long_pair: kor-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41 | 8309230826c02a13f9039837398e7d9d |
apache-2.0 | ['whisper-event'] | false | Whisper Hindi Small This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Hindi data available from multiple publicly available ASR corpora. It has been fine-tuned as a part of the Whisper fine-tuning sprint. | b488255283ed3ffdee2894c1b2a019b1 |
apache-2.0 | ['whisper-event'] | false | Training and evaluation data at Speech Lab, IITM Training Data: GramVaani ASR Corpus, ULCA ASR Corpus, Shrutilipi ASR Corpus, Google/Fleurs (Train+Dev) set. Evaluation Data: GramVaani ASR Corpus Test, Google/Fleurs Test set. | f2859d336919e1af18db23937ab26755 |
apache-2.0 | ['whisper-event'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.75e-05 - train_batch_size: 48 - eval_batch_size: 32 - seed: 22 - optimizer: adamw_bnb_8bit - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 20000 - training_steps: 19377 (Initially set to 129180 steps) - mixed_precision_training: True | 9b24c59e7d6b38abf30283e1f1b4a8dc |
apache-2.0 | ['whisper-event'] | false | Acknowledgement This work was done at Speech Lab, IITM. The compute resources for this work were funded by "Bhashini: National Language translation Mission" project of the Ministry of Electronics and Information Technology (MeitY), Government of India. | 16c08e70fc4fecca6abfb2898109c53c |
apache-2.0 | ['pytorch', 'causal-lm', 'pythia'] | false | The *Pythia Scaling Suite* is a collection of models developed to facilitate interpretability research. It contains two sets of eight models of sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two models: one trained on the Pile, and one trained on the Pile after the dataset has been globally deduplicated. All 8 model sizes are trained on the exact same data, in the exact same order. All Pythia models are available [on Hugging Face](https://huggingface.co/EleutherAI). The Pythia model suite was deliberately designed to promote scientific research on large language models, especially interpretability research. Despite not centering downstream performance as a design goal, we find the models match or exceed the performance of similar and same-sized models, such as those in the OPT and GPT-Neo suites. Please note that all models in the *Pythia* suite were renamed in January 2023. For clarity, a <a href=" | 441c9a0bb5d13cb0a1e1998a62532740 |
apache-2.0 | ['pytorch', 'causal-lm', 'pythia'] | false | Model Details - Developed by: [EleutherAI](http://eleuther.ai) - Model type: Transformer-based Language Model - Language: English - Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia) for training procedure, config files, and details on how to use. - Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) - License: Apache 2.0 - Contact: to ask questions about this model, join the [EleutherAI Discord](https://discord.gg/zBGx3azzUn), and post them in ` | c71255a378de0f301465e9307f187379 |
apache-2.0 | ['pytorch', 'causal-lm', 'pythia'] | false | release-discussion`. Please read the existing *Pythia* documentation before asking about it in the EleutherAI Discord. For general correspondence: [contact@eleuther.ai](mailto:contact@eleuther.ai). <figure> | Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models | | -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: | | 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — | | 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M | | 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M | | 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — | | 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B | | 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B | | 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B | | 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — | <figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and non-deduped models of a given size have the same hyperparameters. “Equivalent” models have <b>exactly</b> the same architecture, and the same number of non-embedding parameters.</figcaption> </figure> | d352c32ca093be632911368157dbf1e4 |
apache-2.0 | ['pytorch', 'causal-lm', 'pythia'] | false | Intended Use The primary intended use of Pythia is research on the behavior, functionality, and limitations of large language models. This suite is intended to provide a controlled setting for performing scientific experiments. To enable the study of how language models change in the course of training, we provide 143 evenly spaced intermediate checkpoints per model. These checkpoints are hosted on Hugging Face as branches. Note that branch `143000` corresponds exactly to the model checkpoint on the `main` branch of each model. You may also further fine-tune and adapt Pythia-410M-deduped for deployment, as long as your use is in accordance with the Apache 2.0 license. Pythia models work with the Hugging Face [Transformers Library](https://huggingface.co/docs/transformers/index). If you decide to use pre-trained Pythia-410M-deduped as a basis for your fine-tuned model, please conduct your own risk and bias assessment. | d71925aef5b677d1cb233ab449c99dde |
apache-2.0 | ['pytorch', 'causal-lm', 'pythia'] | false | Out-of-scope use The Pythia Suite is **not** intended for deployment. It is not in itself a product and cannot be used for human-facing interactions. Pythia models are English-language only, and are not suitable for translation or generating text in other languages. Pythia-410M-deduped has not been fine-tuned for downstream contexts in which language models are commonly deployed, such as writing genre prose or commercial chatbots. This means Pythia-410M-deduped will **not** respond to a given prompt the way a product like ChatGPT does. This is because, unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human Feedback (RLHF) to better “understand” human instructions. | f55908b0ad29e8cda70130cf31d26b2f |
apache-2.0 | ['pytorch', 'causal-lm', 'pythia'] | false | Limitations and biases The core functionality of a large language model is to take a string of text and predict the next token. The token deemed statistically most likely by the model need not produce the most “accurate” text. Never rely on Pythia-410M-deduped to produce factually accurate output. This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset known to contain profanity and texts that are lewd or otherwise offensive. See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a discussion of documented biases with regards to gender, religion, and race. Pythia-410M-deduped may produce socially unacceptable or undesirable text, *even if* the prompt itself does not include anything explicitly offensive. If you plan on using text generated through, for example, the Hosted Inference API, we recommend having a human curate the outputs of this language model before presenting it to other people. Please inform your audience that the text was generated by Pythia-410M-deduped. | 47b2b917ae27c9d299411f68b5f93937 |
apache-2.0 | ['pytorch', 'causal-lm', 'pythia'] | false | Quickstart Pythia models can be loaded and used via the following code, demonstrated here for the third `pythia-70m-deduped` checkpoint: ```python from transformers import GPTNeoXForCausalLM, AutoTokenizer model = GPTNeoXForCausalLM.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) tokenizer = AutoTokenizer.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) inputs = tokenizer("Hello, I am", return_tensors="pt") tokens = model.generate(**inputs) tokenizer.decode(tokens[0]) ``` Revision/branch `step143000` corresponds exactly to the model checkpoint on the `main` branch of each model. For more information on how to use all Pythia models, see [documentation on GitHub](https://github.com/EleutherAI/pythia). | b51e17e2001a5568c106a616ebf8a5df |
apache-2.0 | ['pytorch', 'causal-lm', 'pythia'] | false | Training data Pythia-410M-deduped was trained on the Pile **after the dataset has been globally deduplicated**. [The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in English. It was created by EleutherAI specifically for training large language models. It contains texts from 22 diverse sources, roughly broken down into five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub, Enron Emails). See [the Pile paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources, methodology, and a discussion of ethical implications. Consult [the datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation about the Pile and its component datasets. The Pile can be downloaded from the [official website](https://pile.eleuther.ai/), or from a [community mirror](https://the-eye.eu/public/AI/pile/). | 8a8e910e3aea0305e630d0c03425002f |
apache-2.0 | ['pytorch', 'causal-lm', 'pythia'] | false | Training procedure Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b). All models were trained on the exact same data, in the exact same order. Each model saw 299,892,736,000 tokens during training, and 143 checkpoints for each model are saved every 2,097,152,000 tokens, spaced evenly throughout training. This corresponds to training for just under 1 epoch on the Pile for non-deduplicated models, and about 1.5 epochs on the deduplicated Pile. All *Pythia* models trained for the equivalent of 143000 steps at a batch size of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch size of 4M tokens listed were originally trained for 71500 steps instead, with checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for consistency with all 2M batch models, so `step1000` is the first checkpoint for `pythia-1.4b` that was saved (corresponding to step 500 in training), and `step1000` is likewise the first `pythia-6.9b` checkpoint that was saved (corresponding to 1000 “actual” steps). See [GitHub](https://github.com/EleutherAI/pythia) for more details on training procedure, including [how to reproduce it](https://github.com/EleutherAI/pythia/blob/main/README.md | 92a95b3fe8a77267604615c631500292 |
apache-2.0 | ['pytorch', 'causal-lm', 'pythia'] | false | Evaluations All 16 *Pythia* models were evaluated using the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access the results by model and step at `results/json/*` in the [GitHub repository](https://github.com/EleutherAI/pythia/tree/main/results/json). February 2023 note: select evaluations and comparison with OPT and BLOOM models will be added here at a later date. | 7cd244bf2351fc2363e6ef101f674585 |
apache-2.0 | ['pytorch', 'causal-lm', 'pythia'] | false | Naming convention and parameter count *Pythia* models were renamed in January 2023. It is possible that the old naming convention still persists in some documentation by accident. The current naming convention (70M, 160M, etc.) is based on total parameter count. <figure style="width:32em"> | current Pythia suffix | old suffix | total params | non-embedding params | | --------------------: | ---------: | -------------: | -------------------: | | 70M | 19M | 70,426,624 | 18,915,328 | | 160M | 125M | 162,322,944 | 85,056,000 | | 410M | 350M | 405,334,016 | 302,311,424 | | 1B | 800M | 1,011,781,632 | 805,736,448 | | 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 | | 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 | | 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 | | 12B | 13B | 11,846,072,320 | 11,327,027,200 | </figure> | ae7c25176684a0f71ae28277a43a30f6 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 25 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 | 7f9b913b62123a34ccb75985012ea4a0 |
creativeml-openrail-m | ['stable-diffusion', 'text-to-image'] | false | Files 5 files available (best version is 4000 steps): -Smrai_style - 4000 steps (first version, works great!) -Smrai2_style-1000 - 1000 steps -Smrai2_style-2000 - 2000 steps -Smrai2_style-3000 - 3000 steps -Smrai2_style-4000 - 4000 steps (recommended) | 19d4ec3118fddaa80a74a89cee099bcf |
creativeml-openrail-m | ['stable-diffusion', 'text-to-image'] | false | Prompt You need to use DeepDanBooru Tags (https://gigazine.net/gsc_news/en/20221012-automatic1111-stable-diffusion-webui-deep-danbooru/) I also used the Nixeu_style embedding (not necessary): (https://huggingface.co/sd-concepts-library/nixeu) And Elysium_Anime_V2.ckpt (https://huggingface.co/hesw23168/SD-Elysium-Model) | e6a3208305223113f6d705dd8b7cacfa |
creativeml-openrail-m | ['stable-diffusion', 'text-to-image'] | false | Example Positive Prompt: (Nixeu_style:1.2), (Smrai2_style-4000:0.9), close-up portrait, 1girl, manga art, (red symmetrical circle behind:1.2), intricate details, highly detailed, photorealistic, octane render, 8k, unreal engine, sharp focus, volumetric lighting unreal engine. art by artgerm and greg rutkowski and alphonse mucha Negative Prompt: (mediocre:1.2), (average:1.2), (bad:1.2), (wrong:1.2), (error:1.2), (fault:1.2),( badly_drawn:1.2), (poorly_drawn:1.2), ( low_quality:1.2), no_quality, bad_quality, no_resolution, low_resolution, (lowres:1.2), normal_resolution, (disfigured:1.6), (deformed:1.4), (distortion:1.2), bad_anatomy, (no_detail:1.2), low_detail, normal_detail, (scribble:1.2), (rushed:1.2), (unfinished:1.2), blur, blurry, claws, (misplaced:1.2), (disconnected:1.2), nonsense, random, (noise:1.2), (deformation:1.2), 3d, dull, boring, uninteresting, screencap, (text:1.2), (frame:1.1), (out_of_frame:1.2), (title:1.2), (description:1.3), (sexual:1.2), text, error,(logo:1.3), (watermark:1.3), bad_perspective, bad_proportions, cinematic, jpg_artifacts, jpeg_artifacts, extra_leg, missing_leg, extra_arm, missing_arm, long_hand, bad_hands, (mutated_hand:1.2), (extra_finger:1.2), (missing_finger:1.2), broken_finger, (fused_fingers:1.2), extra_feet, missing_feet, fused_feet, long_feet, missing_limbs, extra_limbs, fused_limbs, claw, (extra_digit:1.2), (fewer_digits:1.2), elves_ears, (naked:1.3), (wet:1.2), uncensored, (long_neck:1.2), (weapon:1.5) <img src="https://huggingface.co/Akumetsu971/SD_Samurai_Anime_Style/resolve/main/05740-1662921804-(Nixeu_style_1.2)%2C%20(Smrai2_style-4000_0.9)%2C%20close-up%20portrait%2C%201girl%2C%20manga%20art%2C%20(red%20symmetrical%20circle%20behind_1.2)%2C%20intricate.png" width="50%"/> <img src="https://huggingface.co/Akumetsu971/SD_Samurai_Anime_Style/resolve/main/05743-815262338-(Nixeu_style_1.2)%2C%20(Smrai2_style-4000_0.9)%2C%20close-up%20portrait%2C%201girl%2C%20manga%20art%2C%20(red%20symmetrical%20circle%20behind_1.2)%2C%20intricate.png" width="50%"/> <img src="https://huggingface.co/Akumetsu971/SD_Samurai_Anime_Style/resolve/main/05748-2610321799-(Nixeu_style_1.2)%2C%20(Smrai2_style-4000_0.9)%2C%20close-up%20portrait%2C%201girl%2C%20manga%20art%2C%20(red%20symmetrical%20circle%20behind_1.2)%2C%20intricate.png" width="50%"/> | 3fa1f11825fc0ff217ab1caa39926321 |
creativeml-openrail-m | ['stable-diffusion', 'text-to-image'] | false | First Version Example Positive Prompt: portrait, (Smrai_style:1.0), vampire samurai, red_eyes, 2vampire_ fangs, solo, single,fighting_stance, male_focus, pink_hair, sakura_petals, painting,beautifully drawn, heavily detailed, high quality, (cherry_blossom_print:1.1), scenery, smoke, fog, dynamic, detailed_limbs, (Nixeu_style:1.2) Negative Prompt: (mediocre:1.2), (average:1.2), (bad:1.2), (wrong:1.2), (error:1.2), (fault:1.2),( badly_drawn:1.2), (poorly_drawn:1.2), ( low_quality:1.2), no_quality, bad_quality, no_resolution, low_resolution, (lowres:1.2), normal_resolution, (disfigured:1.6), (deformed:1.5), (distortion:1.2), bad_anatomy, (no_detail:1.2), low_detail, normal_detail, (scribble:1.2), (rushed:1.2), (unfinished:1.2), blur, blurry, claws, (misplaced:1.2), (disconnected:1.2), nonsense, random, (noise:1.2), (deformation:1.2), 3d, dull, boring, uninteresting, screencap, (text:1.2), (frame:1.1), (out_of_frame:1.2), (title:1.2), (description:1.3), (sexual:1.2), text, error,(logo:1.3), (watermark:1.3), bad_perspective, bad_proportions, cinematic, jpg_artifacts, jpeg_artifacts, extra_leg, missing_leg, extra_arm, missing_arm, long_hand, bad_hands, (mutated_hand:1.2), (extra_finger:1.2), (missing_finger:1.2), broken_finger, (fused_fingers:1.2), extra_feet, missing_feet, fused_feet, long_feet, missing_limbs, extra_limbs, fused_limbs, claw, (extra_digit:1.2), (fewer_digits:1.2), elves_ears, (naked:1.3), (wet:1.2), uncensored, (long_neck:1.2) <img src="https://huggingface.co/Akumetsu971/SD_Samurai_Anime_Style/resolve/main/05241-239803495-portrait%2C%20(Smrai_style_1.0)%2C%20vampire%20samurai%2C%20red_eyes%2C%202vampire_%20fangs%2C%20solo%2C%20single%2Cfighting_stance%2C%20male_focus%2C%20pink_hair%2C%20sa.png" width="50%"/> ``` | c0ecc8d85b13ee9c82364d349f1bdab1 |
mit | ['torch'] | false | Model description This is the **SMALL** version. The training data is Bulgarian text from [OSCAR](https://oscar-corpus.com/post/oscar-2019/), [Chitanka](https://chitanka.info/) and [Wikipedia](https://bg.wikipedia.org/). | a95c2df22b957964aae3465fcd8f2828 |
mit | ['torch'] | false | How to use Here is how to use this model in PyTorch: ```python >>> from transformers import AutoModel, AutoTokenizer >>> >>> model_id = "rmihaylov/gpt2-small-bg" >>> tokenizer = AutoTokenizer.from_pretrained(model_id) >>> model = AutoModel.from_pretrained(model_id, trust_remote_code=True) >>> >>> input_ids = tokenizer.encode( >>> "Здравей,", >>> add_special_tokens=False, >>> return_tensors='pt') >>> >>> output_ids = model.generate( >>> input_ids, >>> do_sample=True, >>> max_length=50, >>> top_p=0.92, >>> pad_token_id=2, >>> top_k=0) >>> >>> output = tokenizer.decode(output_ids[0]) >>> >>> output = output.replace('<|endoftext|>', '\n\n\n') >>> output = output.replace('<|unknown|>', '') >>> output = output.replace('▁', ' ') >>> output = output.replace('<|n|>', '\n') >>> >>> print(output) Здравей, Ани! Не е ли прекрасно? Нещото се засмя. Зъбите му блеснаха. — Ще те разведа насам-натам! Ани се замисли, когато той си тръгна. Може би не искаше да го е ``` | 83cbfe30d436082466a41e916763c50f |
creativeml-openrail-m | ['text-to-image', 'stable-diffusion'] | false | girl Dreambooth model trained by pupubear with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook, trained from c_PVC_mix Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:  | 8052d62e8ce3a11122155cf476981a87 |
apache-2.0 | ['generated_from_trainer'] | false | all-roberta-large-v1-credit_cards-3-16-5 This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3376 - Accuracy: 0.3186 | 931969f5752d813528ee97f8ac55bee2 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.75 | 1.0 | 1 | 2.5769 | 0.2389 | | 2.178 | 2.0 | 2 | 2.4879 | 0.2389 | | 1.769 | 3.0 | 3 | 2.4180 | 0.2566 | | 1.4703 | 4.0 | 4 | 2.3657 | 0.3097 | | 1.2711 | 5.0 | 5 | 2.3376 | 0.3186 | | 01b4d38a0435708d3eb77935215ea6cc |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased-finetuned-mnli This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5486 - Accuracy: 0.8244 | 4d5a6d913fbb6ebb8936feeeff669bab |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.5142 | 1.0 | 24544 | 0.4922 | 0.8075 | | 0.4089 | 2.0 | 49088 | 0.4865 | 0.8194 | | 0.2936 | 3.0 | 73632 | 0.5486 | 0.8244 | | e2dff6e5b908f6e3b827c880ad60f714 |
apache-2.0 | ['generated_from_trainer'] | false | finetuned-test-1 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 1.8192 | 87e4daadc5dcedf2f5286d881bc2a6b4 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP | 3702e1f9efdcd88c277978f78d2a5cbd |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.8219 | 1.0 | 30 | 2.3343 | | 2.4148 | 2.0 | 60 | 2.2010 | | 2.3236 | 3.0 | 90 | 2.1442 | | 2.2231 | 4.0 | 120 | 2.1651 | | 2.2171 | 5.0 | 150 | 2.0614 | | 2.127 | 6.0 | 180 | 2.0405 | | 2.0748 | 7.0 | 210 | 2.0092 | | 2.0511 | 8.0 | 240 | 1.9798 | | 2.0097 | 9.0 | 270 | 1.8662 | | 1.9969 | 10.0 | 300 | 1.9257 | | 2.0006 | 11.0 | 330 | 1.9386 | | 1.9273 | 12.0 | 360 | 1.9357 | | 1.9177 | 13.0 | 390 | 1.8983 | | 1.9128 | 14.0 | 420 | 1.8990 | | 1.8979 | 15.0 | 450 | 1.9037 | | 1.8721 | 16.0 | 480 | 1.8440 | | 1.8998 | 17.0 | 510 | 1.8404 | | 1.8862 | 18.0 | 540 | 1.9193 | | 1.9133 | 19.0 | 570 | 1.8494 | | 1.8799 | 20.0 | 600 | 1.8192 | | c9a49b9ad3039f8011409f6f8f9a783b |
apache-2.0 | ['Early Modern French', 'Historical', 'NER', 'flair'] | false | <a href="https://portizs.eu/publication/2022/lrec/dalembert/"> <img width="300px" src="https://portizs.eu/publication/2022/lrec/dalembert/featured_hu18bf34d40cdc71c744bdd15e48ff0b23_61788_720x2500_fit_q100_h2_lanczos_3.webp"> </a> | 3b70bc54d7d8ae5f5f812eedbbdf7b41 |
apache-2.0 | ['Early Modern French', 'Historical', 'NER', 'flair'] | false | D'AlemBERT-NER model This model is a fine-tuned version of [D'AlemBERT](https://huggingface.co/pjox/DalemBERT) on the [FreEMNER corpus](https://doi.org/10.5281/zenodo.6481135) for Early Modern French. It was introduced in [this paper](https://aclanthology.org/2022.coling-1.327/). | 6a318060e713d0f7f3ba78734190ab75 |
apache-2.0 | ['Early Modern French', 'Historical', 'NER', 'flair'] | false | BibTeX entry and citation info ```bibtex @inproceedings{ortiz-suarez-gabay-2022-data, title = "A Data-driven Approach to Named Entity Recognition for Early {M}odern {F}rench", author = "Ortiz Suarez, Pedro and Gabay, Simon", booktitle = "Proceedings of the 29th International Conference on Computational Linguistics", month = oct, year = "2022", address = "Gyeongju, Republic of Korea", publisher = "International Committee on Computational Linguistics", url = "https://aclanthology.org/2022.coling-1.327", pages = "3722--3730", abstract = "Named entity recognition has become an increasingly useful tool for digital humanities research, specially when it comes to historical texts. However, historical texts pose a wide range of challenges to both named entity recognition and natural language processing in general that are still difficult to address even with modern neural methods. In this article we focus in named entity recognition for historical French, and in particular for Early Modern French (16th-18th c.), i.e. Ancien R{\'e}gime French. However, instead of developing a specialised architecture to tackle the particularities of this state of language, we opt for a data-driven approach by developing a new corpus with fine-grained entity annotation, covering three centuries of literature corresponding to the early modern period; we try to annotate as much data as possible producing a corpus that is many times bigger than the most popular NER evaluation corpora for both Contemporary English and French. We then fine-tune existing state-of-the-art architectures for Early Modern and Contemporary French, obtaining results that are on par with those of the current state-of-the-art NER systems for Contemporary English. Both the corpus and the fine-tuned models are released.", } ``` | efff743f2410751a85eba2f7856f6448 |
apache-2.0 | ['automatic-speech-recognition', 'de'] | false | exp_w2v2t_de_unispeech-ml_s750 Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool. | 28547d71f003e82f61fd71fb64191885 |
apache-2.0 | ['vision', 'image-classification'] | false | ConvNeXT (base-sized model) ConvNeXT model trained on ImageNet-22k at resolution 224x224. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt). Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team. | b6bf0d011648565a178bd0585724644f |
apache-2.0 | ['vision', 'image-classification'] | false | Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration.  | baedf28f67fc20c638d5adaa25efd99d |
apache-2.0 | ['vision', 'image-classification'] | false | Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for fine-tuned versions on a task that interests you. | 523bd122d272601bbb95b5309406ebb0 |
apache-2.0 | ['vision', 'image-classification'] | false | How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 22k ImageNet classes: ```python from transformers import ConvNextFeatureExtractor, ConvNextForImageClassification import torch from datasets import load_dataset dataset = load_dataset("huggingface/cats-image") image = dataset["test"]["image"][0] feature_extractor = ConvNextFeatureExtractor.from_pretrained("facebook/convnext-base-224-22k") model = ConvNextForImageClassification.from_pretrained("facebook/convnext-base-224-22k") inputs = feature_extractor(image, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits | 1c5d39f613a4a64d35b0fd54765a7133 |
apache-2.0 | ['vision', 'image-classification'] | false | model predicts one of the 22k ImageNet classes predicted_label = logits.argmax(-1).item() print(model.config.id2label[predicted_label]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext). | e73f2df7dc92a561461a238275845539 |
apache-2.0 | ['vision', 'image-classification'] | false | BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2201-03545, author = {Zhuang Liu and Hanzi Mao and Chao{-}Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {CoRR}, volume = {abs/2201.03545}, year = {2022}, url = {https://arxiv.org/abs/2201.03545}, eprinttype = {arXiv}, eprint = {2201.03545}, timestamp = {Thu, 20 Jan 2022 14:21:35 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` | c908b32a7d6205887f56f0c12b83abaa |
mit | [] | false | ReACC-py-retriever This is the retrieval model for [ReACC: A Retrieval-Augmented Code Completion Framework](https://arxiv.org/abs/2203.07722). In the paper, the model is used to retrieve similar code given an incomplete code snippet as the query. The model can also be used for incomplete code-to-code search and code clone detection. `py-retriever` is a BERT-like encoder consisting of 12 transformer layers. It is continually pre-trained from [GraphCodeBERT](https://huggingface.co/microsoft/graphcodebert-base) with contrastive learning on the Python programming language. More details can be found in our paper. Note that the format of the input code differs from the original source code: we normalize the source code to better capture information from line breaks and indentation in Python. An example of an input is: ```python sum = 0<endofline>for val in numbers:<endofline><INDENT>sum = sum+val ``` For more information about how to convert source code into this format, please refer to the [ReACC GitHub repo](https://github.com/microsoft/ReACC). | dedb94a25dac13a08eec472a0a1bd9bd |
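A sketch of how such a retriever could be queried with `transformers`, assuming the weights are published as `microsoft/reacc-py-retriever` (a guess; adjust as needed) and that the `[CLS]` vector is used as the snippet embedding (a common choice for bi-encoder retrieval, not necessarily the exact setup from the paper):

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "microsoft/reacc-py-retriever"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# Query in the normalized format described above (explicit line-break / indentation tokens).
query = "sum = 0<endofline>for val in numbers:<endofline><INDENT>sum = sum+val"
inputs = tokenizer(query, return_tensors="pt", truncation=True)
with torch.no_grad():
    embedding = model(**inputs).last_hidden_state[:, 0]  # take the [CLS] vector as the embedding
print(embedding.shape)  # (1, hidden_size)
```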
mit | ['generated_from_trainer'] | false | bart-cnn-science-v3-e2 This model is a fine-tuned version of [theojolliffe/bart-cnn-science](https://huggingface.co/theojolliffe/bart-cnn-science) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9352 - Rouge1: 52.5497 - Rouge2: 32.5507 - Rougel: 35.0014 - Rougelsum: 50.0575 - Gen Len: 141.5741 | d115246aadbc83c7f742cda7cee85e85 |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:| | No log | 1.0 | 398 | 1.0023 | 52.0744 | 31.917 | 33.2804 | 49.6569 | 142.0 | | 1.1851 | 2.0 | 796 | 0.9352 | 52.5497 | 32.5507 | 35.0014 | 50.0575 | 141.5741 | | fac470269359e5594678380a237bdebf |
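As a rough usage sketch for the summarization card above (the Hub repo id `theojolliffe/bart-cnn-science-v3-e2` is inferred from the model name and may differ, and the input text is illustrative only):

```python
from transformers import pipeline

# Inferred repo id; replace with the actual location of the fine-tuned checkpoint.
summarizer = pipeline("summarization", model="theojolliffe/bart-cnn-science-v3-e2")

article = (
    "Researchers evaluated a fine-tuned BART model on domain-specific reports "
    "and measured ROUGE scores on a held-out validation set."
)
print(summarizer(article, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```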
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 35.0 | 4001a2395ca4f1850f1b5525cae18cb2 |
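These values map onto `transformers.TrainingArguments` roughly as follows (a minimal sketch; the listed Adam betas and epsilon are the optimizer defaults, and `output_dir` is a placeholder):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out",                 # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=0,
    lr_scheduler_type="constant",
    num_train_epochs=35.0,
)
```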
apache-2.0 | ['deep-narrow'] | false | T5-Efficient-BASE-KV32 (Deep-Narrow version) T5-Efficient-BASE-KV32 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper: > We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased > before considering any other forms of uniform scaling across other dimensions. This is largely due to > how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a > tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, > a tall base model might also generally more efficient compared to a large model. We generally find > that, regardless of size, even if absolute performance might increase as we continue to stack layers, > the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 > layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., > params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, > FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to > consider. To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. A sequence of word embeddings is therefore processed sequentially by each transformer block. | 5916af8ae3bbd63d65752d6d5f5dba9b |
apache-2.0 | ['deep-narrow'] | false | Details model architecture This model checkpoint - **t5-efficient-base-kv32** - is of model type **Base** with the following variations: - **kv** is **32** It has **180.46** million parameters and thus requires *ca.* **721.86 MB** of memory in full precision (*fp32*) or **360.93 MB** of memory in half precision (*fp16* or *bf16*). A summary of the *original* T5 model architectures can be seen here: | Model | nl (el/dl) | ff | dm | kv | nh | | ac2ad92b7adaad558111a97eac312018 |
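A minimal loading sketch, assuming the checkpoint lives under `google/t5-efficient-base-kv32` on the Hub; the pretrained-only weights still need task-specific fine-tuning before use:

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "google/t5-efficient-base-kv32"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# Sanity-check the parameter count quoted in the card (~180M).
print(f"{model.num_parameters() / 1e6:.1f}M parameters")
```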
mit | ['generated_from_trainer'] | false | Bio_ClinicalBERT-zero-shot This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.5417 - eval_accuracy: 1.0 - eval_f1: 1.0 - eval_runtime: 4.3261 - eval_samples_per_second: 6.241 - eval_steps_per_second: 0.462 - step: 0 | 98f8a6520adcb10c11c5d596c196027a |
apache-2.0 | [] | false | Vision-and-Language Transformer (ViLT), fine-tuned on NLVR2 Vision-and-Language Transformer (ViLT) model fine-tuned on [NLVR2](https://lil.nlp.cornell.edu/nlvr/). It was introduced in the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Kim et al. and first released in [this repository](https://github.com/dandelin/ViLT). Disclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team. | 0324abe0e75c47d49858b89c0c35fcc9 |
apache-2.0 | [] | false | How to use Here is how to use the model in PyTorch: ``` from transformers import ViltProcessor, ViltForImagesAndTextClassification import requests from PIL import Image image1 = Image.open(requests.get("https://lil.nlp.cornell.edu/nlvr/exs/ex0_0.jpg", stream=True).raw) image2 = Image.open(requests.get("https://lil.nlp.cornell.edu/nlvr/exs/ex0_1.jpg", stream=True).raw) text = "The left image contains twice the number of dogs as the right image." processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-nlvr2") model = ViltForImagesAndTextClassification.from_pretrained("dandelin/vilt-b32-finetuned-nlvr2") | afe0fd6cc3090dfb9632b7daf474b49e |
apache-2.0 | [] | false | forward pass encoding = processor([image1, image2], text, return_tensors="pt") outputs = model(input_ids=encoding.input_ids, pixel_values=encoding.pixel_values.unsqueeze(0)) logits = outputs.logits idx = logits.argmax(-1).item() print("Predicted answer:", model.config.id2label[idx]) ``` | 98b5ba1f21a9e57345e9c0733add7bdf |
apache-2.0 | [] | false | BibTeX entry and citation info ```bibtex @misc{kim2021vilt, title={ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision}, author={Wonjae Kim and Bokyung Son and Ildoo Kim}, year={2021}, eprint={2102.03334}, archivePrefix={arXiv}, primaryClass={stat.ML} } ``` | 78505015d9f1fcbd6417d183cf0fa855 |
apache-2.0 | ['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers'] | false | sentence-transformers/msmarco-MiniLM-L-6-v3 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. | 49651eb0d918d527e91752d2ed2e171d |
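A short semantic-search sketch with the `sentence-transformers` library (the query and passages below are illustrative only):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/msmarco-MiniLM-L-6-v3")

query_emb = model.encode("How many people live in London?")
doc_embs = model.encode([
    "Around 9 million people live in London.",
    "Paris is the capital of France.",
])
print(query_emb.shape)                    # (384,): the 384-dimensional space mentioned above
print(util.cos_sim(query_emb, doc_embs))  # higher cosine score = more relevant passage
```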