Dataset schema:
- license — string, length 2–30
- tags — string, length 2–513
- is_nc — bool, 1 class
- readme_section — string, length 201–597k
- hash — string, length 32
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'landscape', 'classical-art']
false
Examples

Since this model is geared toward landscape painting, the image size matters. I found that 512×1024 normally gave interesting results. Check out this gallery for more generated images: https://www.vuhongai.com/classicalart-ai
9eee6ece9ae924142ed02fff0d1f468e
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'landscape', 'classical-art']
false
style of a fishing village under a cherry blossom forest at sunset"

```python
image = pipe(
    prompt,
    num_inference_steps=200,
    guidance_scale=5,
    height=512,
    width=1024,
).images[0]
image
```
07f7d3e2974f74a14f9953e5945a9d7a
mit
[]
false
model by rusoloco73

This is the Stable Diffusion model fine-tuned with the PeronV2 concept, taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **a photo of sks peron**

You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)

Here are the images used for training this concept:

![image 0](https://huggingface.co/rusoloco73/peronv2/resolve/main/concept_images/4.jpeg)
![image 1](https://huggingface.co/rusoloco73/peronv2/resolve/main/concept_images/8.jpeg)
![image 2](https://huggingface.co/rusoloco73/peronv2/resolve/main/concept_images/5.jpeg)
![image 3](https://huggingface.co/rusoloco73/peronv2/resolve/main/concept_images/9.jpeg)
![image 4](https://huggingface.co/rusoloco73/peronv2/resolve/main/concept_images/0.jpeg)
![image 5](https://huggingface.co/rusoloco73/peronv2/resolve/main/concept_images/7.jpeg)
![image 6](https://huggingface.co/rusoloco73/peronv2/resolve/main/concept_images/3.jpeg)
![image 7](https://huggingface.co/rusoloco73/peronv2/resolve/main/concept_images/2.jpeg)
![image 8](https://huggingface.co/rusoloco73/peronv2/resolve/main/concept_images/6.jpeg)
![image 9](https://huggingface.co/rusoloco73/peronv2/resolve/main/concept_images/10.jpeg)
![image 10](https://huggingface.co/rusoloco73/peronv2/resolve/main/concept_images/1.jpeg)
82eeeabca769b2fd85a6e91ad39431d3
apache-2.0
['fill-mask']
false
Introduction

Research on bias in criminal court decisions needs the support of natural language processing tools. Pre-trained language models have greatly improved the accuracy of text mining on general texts, but at present there is an urgent need for a pre-trained language model tailored to the automatic processing of court decision texts. We used text from the [Bailii website](https://www.bailii.org/ew/cases/EWCA/Crim/) as the training set. Based on the RoBERTa deep language model framework, we constructed the bailii-roberta pre-trained language model with [transformers/run_mlm.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py) and [transformers/mlm_wwm](https://github.com/huggingface/transformers/tree/main/examples/research_projects/mlm_wwm).
43854dd42c7ddfa63857bd5c279c38c6
apache-2.0
['fill-mask']
false
Huggingface Transformers

The `from_pretrained` method from [Huggingface Transformers](https://github.com/huggingface/transformers) can download the bailii-roberta model directly:

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("tsantosh7/bailii-roberta")
model = AutoModel.from_pretrained("tsantosh7/bailii-roberta")
```
0a9907801973b3c290ec24c88bbaf44b
apache-2.0
['fill-mask']
false
Disclaimer

- The experimental results in this report only reflect performance under a specific dataset and hyperparameter combination and do not characterize each model in general. Results may vary with random seeds and computing hardware.
- **Users may use the model freely within the scope of the license, but we are not responsible for direct or indirect losses caused by using the contents of this project.**
70bdb26bba85e3bfa19e2baa968d7da3
apache-2.0
[]
false
模型分类 Model Taxonomy

| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言理解 NLU | 二郎神 Erlangshen | Longformer | 110M | 中文 Chinese |
9ffbf6ee1d919def3f13694565273dd9
apache-2.0
[]
false
模型信息 Model Information

遵循Longformer-base的设计,我们基于[chinese_roformer_L-12_H-768_A-12](https://github.com/ZhuiyiTechnology/roformer),在悟道语料库(180 GB版本)上进行了继续预训练。特别的,我们采用旋转位置嵌入(RoPE)来避免预训练语料库的不均匀序列长度问题。

Following the design of Longformer-base, we performed continual pre-training on the WuDao corpus (180 GB version) based on [chinese_roformer_L-12_H-768_A-12](https://github.com/ZhuiyiTechnology/roformer). In particular, we employed rotary position embedding (RoPE) to avoid the uneven sequence-length problem of the pre-training corpus.
a7d7a210d2b1342508a21d6f3eeb4376
apache-2.0
[]
false
使用 Usage

因为[transformers](https://github.com/huggingface/transformers)库中是没有Longformer-base相关的模型结构的,所以你可以在我们的[Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)中找到并且运行代码。

Since the [transformers](https://github.com/huggingface/transformers) library does not include the Longformer-base model structure, you can find the structure and run the code in our [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM) repository.

```shell
git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
```
8c4a81ffa04bcb9d609f6d1d340c3ff9
apache-2.0
[]
false
加载模型 Loading Models

```python
from fengshen import LongformerModel
from fengshen import LongformerConfig
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Erlangshen-Longformer-110M")
config = LongformerConfig.from_pretrained("IDEA-CCNL/Erlangshen-Longformer-110M")
model = LongformerModel.from_pretrained("IDEA-CCNL/Erlangshen-Longformer-110M")
```
758525dfedb6344a6b7985601f10affc
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 0.5128 | 0.8157 |
12b85187cae1ebc1cd681dae04a2c89b
apache-2.0
['feature-extraction', 'transformers']
false
Evaluation

| Class | Precision | Recall | F1-Score | Support |
|-------|-----------|--------|----------|---------|
| hard_negative | 0.9963 | 0.9963 | 0.9963 | 183090 |
| positive | 0.8849 | 0.8849 | 0.8849 | 5910 |

| Metric | Value |
|--------|-------|
| Accuracy | 0.9928 |
| Macro Average | 0.9406 |
| Weighted Average | 0.9928 |

<p style="font-size:16px">Note: This report is for evaluation on the dev set, after 12000 batches.</p>
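The macro and weighted averages reported above can be recomputed from the per-class F1 scores and supports in the table. A minimal pure-Python sketch (values copied from the report):

```python
# Recompute the aggregate metrics from the per-class F1 scores and supports
# reported in the evaluation table above.
def macro_avg(scores):
    # unweighted mean over classes
    return sum(scores) / len(scores)

def weighted_avg(scores, supports):
    # mean over classes weighted by the number of examples per class
    total = sum(supports)
    return sum(s * n for s, n in zip(scores, supports)) / total

f1 = [0.9963, 0.8849]      # hard_negative, positive
support = [183090, 5910]

print(round(macro_avg(f1), 4))               # → 0.9406, as in the table
print(round(weighted_avg(f1, support), 4))   # → 0.9928, as in the table
```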
935c1aeb4c46691076b2296ccb7e49e9
apache-2.0
['feature-extraction', 'transformers']
false
Usage

```python
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer

tokenizer = DPRQuestionEncoderTokenizer.from_pretrained('firqaaa/indo-dpr-question_encoder-single-squad-base')
model = DPRQuestionEncoder.from_pretrained('firqaaa/indo-dpr-question_encoder-single-squad-base')

input_ids = tokenizer("Ibukota Indonesia terletak dimana?", return_tensors='pt')["input_ids"]
embeddings = model(input_ids).pooler_output
```

We can use it using `haystack` as follows:

```python
from haystack.nodes import DensePassageRetriever
from haystack.document_stores import InMemoryDocumentStore

retriever = DensePassageRetriever(document_store=InMemoryDocumentStore(),
                                  query_embedding_model="firqaaa/indo-dpr-question_encoder-single-squad-base",
                                  passage_embedding_model="firqaaa/indo-dpr-question_encoder-single-squad-base",
                                  max_seq_len_query=64,
                                  max_seq_len_passage=256,
                                  batch_size=16,
                                  use_gpu=True,
                                  embed_title=True,
                                  use_fast_tokenizers=True)
```
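DPR-style retrieval ranks passages by the dot product between the question embedding and each passage embedding. A toy pure-Python sketch of that scoring step (the vectors below are made up; in practice they come from the question and context encoders above):

```python
# Toy DPR-style retrieval: rank passages by dot product with the question
# embedding. The 3-d vectors are invented for illustration only.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

question_emb = [0.2, 0.9, 0.1]
passage_embs = {
    "Jakarta adalah ibu kota Indonesia.": [0.1, 0.8, 0.2],
    "Gunung Bromo terletak di Jawa Timur.": [0.7, 0.1, 0.5],
}

ranked = sorted(passage_embs, key=lambda p: dot(question_emb, passage_embs[p]),
                reverse=True)
print(ranked[0])  # the passage most similar to the question comes first
```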
1aa76058f9600d6ad5955c2278204182
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
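The `lr_scheduler_type: linear` above means the learning rate decays linearly from its initial value to zero over training. A minimal sketch of that schedule, assuming zero warmup steps (the `Trainer` default) and a hypothetical total step count:

```python
# Sketch of the linear schedule implied by the hyperparameters above:
# with no warmup, the LR decays linearly from base_lr to zero.
def linear_lr(step, total_steps, base_lr=3e-4):
    return base_lr * max(0.0, (total_steps - step) / total_steps)

total = 1000  # hypothetical total number of training steps
print(linear_lr(0, total))      # full learning rate at the start
print(linear_lr(total, total))  # zero at the end of training
```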
8f7a8db7bd3cfe285047cf044a731f58
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Bleu-score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:----------:|:---------------:|:-------:|
| No log | 1.0 | 51 | 2.8581 | 19.0 | 0.3301 | 0.0433 | 0.1830 | 7.5917 | [69.82603479304139, 45.68226763348714, 32.33357717629846, 24.56861133935908] | 0.1903 |
585a24b6a531e3ee4dd91d727dd04648
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
DataikuNLP/paraphrase-albert-small-v2

**This model is a copy of [this model repository](https://huggingface.co/sentence-transformers/paraphrase-albert-small-v2/) from sentence-transformers at the specific commit `1eb1996223dd90a4c25be2fc52f6f336419a0d52`.**

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
9807c4072239c27ca34d6b7298595a02
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/paraphrase-albert-small-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
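The embeddings returned by `model.encode()` are typically compared with cosine similarity for semantic search or clustering. A self-contained pure-Python sketch of that comparison, using made-up vectors in place of real sentence embeddings:

```python
import math

# Cosine similarity over toy vectors, standing in for the sentence
# embeddings produced by model.encode() above.
def cos_sim(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

emb_a = [1.0, 2.0, 3.0]
emb_b = [2.0, 4.0, 6.0]
print(cos_sim(emb_a, emb_b))  # parallel vectors score ~1.0
```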
187f26ceeb33cafa2133a4cdd34f0843
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Load model from HuggingFace Hub

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-albert-small-v2')
model = AutoModel.from_pretrained('sentence-transformers/paraphrase-albert-small-v2')
```
0bd05e8d7ab9db04aaa60d34258909a6
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-albert-small-v2)
4e629c2f1a6e3059d673779eac86891b
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 100, 'do_lower_case': False}) with Transformer model: AlbertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
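The Pooling module above is configured for mean pooling (`pooling_mode_mean_tokens: True`): token embeddings are averaged, counting only non-padding positions. A minimal pure-Python sketch of that operation with toy 2-dimensional token embeddings:

```python
# Mean pooling as configured above: average token embeddings over the
# positions where the attention mask is 1 (i.e. skip padding tokens).
def mean_pool(token_embs, attention_mask):
    dim = len(token_embs[0])
    sums = [0.0] * dim
    count = 0
    for emb, m in zip(token_embs, attention_mask):
        if m:
            count += 1
            for i, x in enumerate(emb):
                sums[i] += x
    return [s / count for s in sums]

tokens = [[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]]  # last token is padding
print(mean_pool(tokens, [1, 1, 0]))  # → [2.0, 3.0]
```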
819dc2cdf724ce2c8d4d6d16554a611d
apache-2.0
['translation']
false
opus-mt-gil-fi

* source languages: gil
* target languages: fi
* OPUS readme: [gil-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gil-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/gil-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-fi/opus-2020-01-09.eval.txt)
9af1f92a986e318bc27f5261451db718
apache-2.0
['generated_from_trainer']
false
communication-classifier

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- eval_loss: 0.1249
- eval_accuracy: 0.9644
- eval_f1: 0.9644
- eval_runtime: 2.6719
- eval_samples_per_second: 126.126
- eval_steps_per_second: 8.234
- epoch: 3.0
- step: 255
aa8ec5b744ad2d0a56cee072d86b912c
creativeml-openrail-m
['text-to-image']
false
boys Dreambooth model trained by duja1 with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model.

You can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!

Sample pictures of: b123oy (use that in your prompt)
7baf850fda6332c650debaf369dcb11f
mit
[]
false
led-toy on Stable Diffusion

This is the `<led-toy>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as an `object`:

![<led-toy> 0](https://huggingface.co/sd-concepts-library/led-toy/resolve/main/concept_images/3.jpeg)
![<led-toy> 1](https://huggingface.co/sd-concepts-library/led-toy/resolve/main/concept_images/0.jpeg)
![<led-toy> 2](https://huggingface.co/sd-concepts-library/led-toy/resolve/main/concept_images/1.jpeg)
![<led-toy> 3](https://huggingface.co/sd-concepts-library/led-toy/resolve/main/concept_images/2.jpeg)
9b8d1ac26e10bdb2ba35edbf748abc60
apache-2.0
['catalan', 'part of speech tagging', 'pos', 'CaText', 'Catalan Textual Corpus']
false
Model description The **roberta-base-ca-v2-cased-pos** is a Part-of-speech-tagging (POS) model for the Catalan language fine-tuned from the [roberta-base-ca-v2](https://huggingface.co/projecte-aina/roberta-base-ca-v2) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers (check the roberta-base-ca-v2 model card for more details).
d1571071a4cac99c17e36915cf763397
apache-2.0
['catalan', 'part of speech tagging', 'pos', 'CaText', 'Catalan Textual Corpus']
false
Intended uses and limitations

The **roberta-base-ca-v2-cased-pos** model can be used for part-of-speech tagging (POS) of a text. The model is limited by its training dataset and may not generalize well for all use cases.
0618c9285cee938f03e4d4469b7db50e
apache-2.0
['catalan', 'part of speech tagging', 'pos', 'CaText', 'Catalan Textual Corpus']
false
How to use

Here is how to use this model:

```python
from transformers import pipeline
from pprint import pprint

nlp = pipeline("token-classification", model="projecte-aina/roberta-base-ca-v2-cased-pos")
example = "Em dic Lluïsa i visc a Santa Maria del Camí."

pos_results = nlp(example)
pprint(pos_results)
```
2705dfa9e26e49b25e742ca74da18300
apache-2.0
['catalan', 'part of speech tagging', 'pos', 'CaText', 'Catalan Textual Corpus']
false
Training data

We used the POS dataset in Catalan from the [Universal Dependencies Treebank](https://huggingface.co/datasets/universal_dependencies), which we refer to as _Ancora-ca-pos_, for training and evaluation.
7f0334456f4a02f9925f9a8e5ca1dd2a
apache-2.0
['catalan', 'part of speech tagging', 'pos', 'CaText', 'Catalan Textual Corpus']
false
Evaluation results

We evaluated the _roberta-base-ca-v2-cased-pos_ on the Ancora-ca-pos test set against standard multilingual and monolingual baselines:

| Model | Ancora-ca-pos (F1) |
| ------------|:-------------|
| roberta-base-ca-v2-cased-pos | **98.96** |
| roberta-base-ca-cased-pos | **98.96** |
| mBERT | 98.83 |
| XLM-RoBERTa | 98.89 |

For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club).
d5ce98cf420a082cc2f8494d3ee55150
apache-2.0
['catalan', 'part of speech tagging', 'pos', 'CaText', 'Catalan Textual Corpus']
false
Citation information

If you use any of these resources (datasets or models) in your work, please cite our latest paper:

```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
    title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
    author = "Armengol-Estap{\'e}, Jordi and
      Carrino, Casimiro Pio and
      Rodriguez-Penagos, Carlos and
      de Gibert Bonet, Ona and
      Armentano-Oller, Carme and
      Gonzalez-Agirre, Aitor and
      Melero, Maite and
      Villegas, Marta",
    booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.findings-acl.437",
    doi = "10.18653/v1/2021.findings-acl.437",
    pages = "4933--4946",
}
```
a95cf181d72e8c7b041c0e593bac9ac8
apache-2.0
['vision', 'image-classification']
false
Swin Transformer v2 (base-sized model) Swin Transformer v2 model pre-trained on ImageNet-1k at resolution 256x256. It was introduced in the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Liu et al. and first released in [this repository](https://github.com/microsoft/Swin-Transformer). Disclaimer: The team releasing Swin Transformer v2 did not write a model card for this model so this model card has been written by the Hugging Face team.
1913879ba74c388f6e6e726d4f721253
apache-2.0
['vision', 'image-classification']
false
How to use

Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("microsoft/swinv2-base-patch4-window16-256")
model = AutoModelForImageClassification.from_pretrained("microsoft/swinv2-base-patch4-window16-256")

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
```
df4147555978a8d680fae309f55c1817
apache-2.0
['vision', 'image-classification']
false
```python
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```

For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/swinv2.html).
6b460f57ef88b0f439584910cac8238e
mit
['xho', 'fill-mask', 'pytorch', 'roberta', 'masked-lm']
false
Model description Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP, and in particular look at techniques for low-resource languages to equalise performance with larger languages around the world.
f92326ec46633dd0d110b6ae6890329b
mit
['xho', 'fill-mask', 'pytorch', 'roberta', 'masked-lm']
false
How to use

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_xho_roberta")
model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_xho_roberta")
```
5b67b2738fe3f4a07cf2eb9bbc555476
apache-2.0
['translation']
false
opus-mt-de-ha

* source languages: de
* target languages: ha
* OPUS readme: [de-ha](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-ha/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-ha/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ha/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ha/opus-2020-01-20.eval.txt)
b3864ef92997e7c52cf1ebab05dc47ab
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-advers

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the adversarial_qa dataset. It achieves the following results on the evaluation set:
- Loss: 3.6462
aebbad7e91f2ee9ceb38455c9aedd25b
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 9e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3000
73aa3dbd79fefa3c5dfd946f32ab1a29
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-cola

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set:
- Loss: 0.8239
- Matthews Correlation: 0.5495
6efdf52d5ed7b1d671470976721fdbe9
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5235 | 1.0 | 535 | 0.5402 | 0.4156 |
| 0.3484 | 2.0 | 1070 | 0.5272 | 0.5233 |
| 0.2381 | 3.0 | 1605 | 0.6665 | 0.5050 |
| 0.1746 | 4.0 | 2140 | 0.7512 | 0.5429 |
| 0.1308 | 5.0 | 2675 | 0.8239 | 0.5495 |
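The Matthews correlation reported above is computed from the binary confusion matrix. A minimal pure-Python sketch of the formula (the counts below are invented for illustration, not CoLA's actual confusion matrix):

```python
import math

# Matthews correlation coefficient from a binary confusion matrix:
# MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))
def mcc(tp, tn, fp, fn):
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

print(mcc(tp=90, tn=80, fp=10, fn=20))  # toy counts for illustration
```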
a47fb5007d6a9f59c174fc1a3ac095a1
other
['text-generation', 'opt']
false
Model Description

[OPT-IML (OPT + Instruction Meta-Learning)](https://arxiv.org/abs/2212.12017) is a set of instruction-tuned versions of OPT, trained on a collection of ~2000 NLP tasks gathered from 8 NLP benchmarks, called OPT-IML Bench. We provide two model versions:

* OPT-IML, trained on 1500 tasks with several tasks held out for purposes of downstream evaluation, and
* OPT-IML-Max, trained on all ~2000 tasks
0715e7a35764c3dba1f3dc9a68335120
other
['text-generation', 'opt']
false
How to use

For large OPT models, such as this one, it is not recommended to make use of the `text-generation` pipeline, because one should load the model in half-precision to accelerate generation and optimize memory consumption on GPU. It is recommended to directly call the [`generate`](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation) method.
94b77e9ca19866781a21b437892ae931
other
['text-generation', 'opt']
false
It is recommended to directly call the [`generate`](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate) method as follows:

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> import torch

>>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-iml-30b", torch_dtype=torch.float16).cuda()
```
5bad452071cb21038fda4053b885b911
other
['text-generation', 'opt']
false
```python
>>> # the fast tokenizer currently does not work correctly
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-iml-30b", use_fast=False)

>>> prompt = "What is the color of a carrot?\nA:"

>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()

>>> generated_ids = model.generate(input_ids)

>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
```
ad82d36eb1ab2e85ef44eae27e03131b
other
['text-generation', 'opt']
false
Limitations and bias

While OPT-IML models outperform baseline OPT on an extensive set of evaluations, they remain susceptible to the various risks associated with large language models relating to factual correctness, generation of toxic language, and reinforcement of stereotypes. While we release our OPT-IML models to proliferate future work on instruction-tuning and to improve the availability of large instruction-tuned causal LMs, their use should be accompanied by responsible best practices.
1e37571df1d37dcf50bc251fdc6ed208
other
['text-generation', 'opt']
false
Training data

OPT-IML models are trained on OPT-IML Bench, a large benchmark for Instruction Meta-Learning (IML) of 2000 NLP tasks consolidated into task categories from 8 existing benchmarks, including Super-NaturalInstructions, FLAN, and PromptSource.
84f7370f268285528d76e888fe83a8a8
other
['text-generation', 'opt']
false
Training procedure The texts are tokenized using the GPT2 byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50272. The inputs are sequences of 2048 consecutive tokens. The 30B model was fine-tuned on 64 40GB A100 GPUs. During fine-tuning, models saw approximately 2 billion tokens, which is only 0.6% of the pre-training budget of OPT.
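The budget claim above can be sanity-checked with quick arithmetic: if the ~2 billion fine-tuning tokens are 0.6% of the pre-training budget, the implied budget is on the order of a few hundred billion tokens, consistent with OPT's pre-training scale.

```python
# Sanity-check the claim above: ~2B fine-tuning tokens at 0.6% of the
# pre-training budget implies a budget of roughly 2e9 / 0.006 tokens.
finetune_tokens = 2e9
implied_pretrain_budget = finetune_tokens / 0.006
print(f"{implied_pretrain_budget / 1e9:.0f}B tokens")  # → 333B tokens
```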
2628309d7dc1741af19167b895e4a716
other
['text-generation', 'opt']
false
BibTeX entry and citation info

```bibtex
@misc{iyer2022opt,
      title={OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization},
      author={Iyer, Srinivasan and Lin, Xi Victoria and Pasunuru, Ramakanth and Mihaylov, Todor and Simig, D{\'a}niel and Yu, Ping and Shuster, Kurt and Wang, Tianlu and Liu, Qing and Koura, Punit Singh and others},
      year={2022},
      eprint={2212.12017},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
2614998be459024248f0f2392cab59ce
cc-by-sa-4.0
[]
false
BERT base Japanese (IPA dictionary, whole word masking enabled) This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language. This version of the model processes input texts with word-level tokenization based on the IPA dictionary, followed by the WordPiece subword tokenization. Additionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective. The codes for the pretraining are available at [cl-tohoku/bert-japanese](https://github.com/cl-tohoku/bert-japanese/tree/v1.0).
4c190399fe8fbd18115ef1d2a4d5a548
cc-by-sa-4.0
[]
false
Training Data The model is trained on Japanese Wikipedia as of September 1, 2019. To generate the training corpus, [WikiExtractor](https://github.com/attardi/wikiextractor) is used to extract plain texts from a dump file of Wikipedia articles. The text files used for the training are 2.6GB in size, consisting of approximately 17M sentences.
4511e9889dcd180a58c59dc602aea9dc
cc-by-sa-4.0
[]
false
Tokenization The texts are first tokenized by [MeCab](https://taku910.github.io/mecab/) morphological parser with the IPA dictionary and then split into subwords by the WordPiece algorithm. The vocabulary size is 32000.
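The WordPiece step above splits each MeCab-produced word into subwords by greedy longest-match-first lookup. A toy pure-Python sketch of that algorithm, using a tiny made-up vocabulary (the real model uses a learned 32,000-entry vocabulary):

```python
# Toy WordPiece-style split: greedy longest-match-first against a tiny
# invented vocabulary. Continuation pieces carry the "##" prefix.
def wordpiece(word, vocab):
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        while end > start:
            piece = word[start:end] if start == 0 else "##" + word[start:end]
            if piece in vocab:
                pieces.append(piece)
                start = end
                break
            end -= 1
        else:
            # no piece matched at this position → unknown token
            return ["[UNK]"]
    return pieces

vocab = {"token", "##ization", "##ize"}
print(wordpiece("tokenization", vocab))  # → ['token', '##ization']
```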
983a32e78b6c3d62e36471c7634257b2
cc-by-sa-4.0
[]
false
Training The model is trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps. For the training of the MLM (masked language modeling) objective, we introduced the **Whole Word Masking** in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.
13e5453d687d721ce119d6b60c5e9fe9
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
This model transcribes speech in the lower-case English alphabet along with spaces and apostrophes. It is an "extra-large" version of Conformer-Transducer (around 600M parameters). See the [model architecture](
2731bfeb08f47b0bb1e91fa971974feb
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
NVIDIA NeMo: Training

To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.

```
pip install nemo_toolkit['all']
```

If that causes an error:

```
pip install nemo_toolkit[all]
```
d58a8c154ac300183513fdccf187cc86
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
Automatically instantiate the model

```python
import nemo.collections.asr as nemo_asr

asr_model = nemo_asr.models.EncDecRNNTBPEModel.from_pretrained("nvidia/stt_en_conformer_transducer_xlarge")
```
0b2df15a216a1323703397ff4407ccb5
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
Transcribing many audio files

```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
 pretrained_name="nvidia/stt_en_conformer_transducer_xlarge" \
 audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
e68f34561ad1120f221b3f6084826dce
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
Training

The NeMo toolkit [3] was used for training the models for several hundred epochs. These models were trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_transducer/speech_to_text_rnnt_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/conformer/conformer_transducer_bpe.yaml). The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).
f19d5e063856c1c193e57bf3a1a628bc
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
Datasets

All the models in this collection are trained on a composite dataset (NeMo ASRSET) comprising several thousand hours of English speech:

- Librispeech: 960 hours of English speech
- Fisher Corpus
- Switchboard-1 Dataset
- WSJ-0 and WSJ-1
- National Speech Corpus (Part 1, Part 6)
- VCTK
- VoxPopuli (EN)
- Europarl-ASR (EN)
- Multilingual Librispeech (MLS EN): 2,000-hour subset
- Mozilla Common Voice (v8.0)
- People's Speech: 12,000-hour subset

Note: older versions of the model may have been trained on a smaller set of datasets.
f0d6304f0a1d1a5bce30893630c36a64
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
Performance

The list of the available models in this collection is shown in the following table. Performance of the ASR models is reported in terms of Word Error Rate (WER%) with greedy decoding.

| Version | Tokenizer | Vocabulary Size | LS test-other | LS test-clean | WSJ Eval92 | WSJ Dev93 | NSC Part 1 | MLS Test | MLS Dev | MCV Test 8.0 | Train Dataset |
|---------|-----------|-----------------|---------------|---------------|------------|-----------|------------|----------|---------|--------------|---------------|
| 1.10.0 | SentencePiece Unigram | 1024 | 3.01 | 1.62 | 1.17 | 2.05 | 5.70 | 5.32 | 4.59 | 6.46 | NeMo ASRSET 3.0 |
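The WER figures above are word-level Levenshtein distance (substitutions + insertions + deletions) divided by the number of reference words. A minimal pure-Python sketch of that computation on a toy sentence pair:

```python
# Word Error Rate: word-level edit distance between reference and
# hypothesis, normalized by the number of reference words.
def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution/match
    return d[-1][-1] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # 1 deletion / 6 words
```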
eea677c80dd499cfbcde803353f5a68d
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
References

[1] [Conformer: Convolution-augmented Transformer for Speech Recognition](https://arxiv.org/abs/2005.08100)

[2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece)

[3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
36c81fb4658b512b45d3e9aa4c929dd7
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
Licence
License to use this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/). By downloading the public release version of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license.
11dbc4ac28854d2e1e9b37a37cce2714
apache-2.0
['generated_from_trainer']
false
apache-access
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.2605
6a8e5232aeb1c45e04c5661b67b39cdb
apache-2.0
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.3744        | 1.0   | 18523 | 0.3469          |
| 0.3071        | 2.0   | 37046 | 0.2804          |
| 0.2796        | 3.0   | 55569 | 0.2636          |
07df7ead73603c471b3a49f2b943c0f6
apache-2.0
['generated_from_trainer']
false
BERT
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.8223
- Accuracy: 0.82
- Precision: 0.84
- Recall: 0.9130
- F1: 0.8750
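As a sanity check, the reported F1 is the harmonic mean of precision and recall, and the numbers above are internally consistent; a quick pure-Python verification:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# The reported precision (0.84) and recall (0.9130) reproduce the reported F1 (0.8750).
print(f1_score(0.84, 0.9130))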
f59f70bf4d28eb26ae775509c51c8b27
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.6778 | 1.0 | 50 | 0.6148 | 0.69 | 0.7794 | 0.7681 | 0.7737 | | 0.5331 | 2.0 | 100 | 0.5578 | 0.8 | 0.8267 | 0.8986 | 0.8611 | | 0.3768 | 3.0 | 150 | 0.5052 | 0.73 | 0.8889 | 0.6957 | 0.7805 | | 0.2802 | 4.0 | 200 | 0.4998 | 0.86 | 0.8667 | 0.9420 | 0.9028 | | 0.1869 | 5.0 | 250 | 0.5187 | 0.81 | 0.8906 | 0.8261 | 0.8571 | | 0.1293 | 6.0 | 300 | 0.6516 | 0.85 | 0.8649 | 0.9275 | 0.8951 | | 0.1165 | 7.0 | 350 | 0.6541 | 0.82 | 0.8806 | 0.8551 | 0.8676 | | 0.0937 | 8.0 | 400 | 0.6855 | 0.84 | 0.8841 | 0.8841 | 0.8841 | | 0.0791 | 9.0 | 450 | 0.7652 | 0.81 | 0.8472 | 0.8841 | 0.8652 | | 0.0599 | 10.0 | 500 | 0.8223 | 0.82 | 0.84 | 0.9130 | 0.8750 |
ef4c7ef2dfd889ab02cf9e5018820e77
mit
['generated_from_trainer']
false
deberta-base-finetuned-aqa This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the adversarial_qa dataset. It achieves the following results on the evaluation set: - Loss: 1.6394
a964ff8121c3a26660b10ecaa911ccf0
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1054 | 1.0 | 2527 | 1.6947 | | 1.5387 | 2.0 | 5054 | 1.6394 |
1bf7e7b3c0aa2dabe39428c60da2b52a
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'animal']
false
DreamBooth model for the vincentcat concept trained by juancopi81 on the juancopi81/jcp-vincent-cat dataset. This is a Stable Diffusion model fine-tuned on the [vincentcat](https://huggingface.co/datasets/juancopi81/jcp-vincent-cat) concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of vincentcat cat** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
3730d784cc1a75646becce44ef226fb2
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'animal']
false
Examples --- Prompt: A painting of vincentcat cat in the style of Van Gogh <img src="https://huggingface.co/juancopi81/vincentcat-cat/resolve/main/Vincent_VG_final.jpeg"> --- Prompt: <img src="https://huggingface.co/juancopi81/vincentcat-cat/resolve/main/Vincent_Cartoon_1.png"> <img src="https://huggingface.co/juancopi81/vincentcat-cat/resolve/main/Vincent_Cartoon_2.png"> --- Prompt: painting of vincentcat cat as an anime warrior, trending on artstation pixiv makoto shinkai <img src="https://huggingface.co/juancopi81/vincentcat-cat/resolve/main/Vincent_7.jpg"> --- Prompt: A painting of vincentcat cat, acrylic palette knife <img src="https://huggingface.co/juancopi81/vincentcat-cat/resolve/main/Vincent_3.jpg"> --- Prompt: <img src="https://huggingface.co/juancopi81/vincentcat-cat/resolve/main/Vincent_VG_2.png"> --- Prompt: Painting of vincentcat cat flying around the moon in the style of Leonardo Da Vinci <img src="https://huggingface.co/juancopi81/vincentcat-cat/resolve/main/Vincent_7.jpg"> --- Prompt: A photo of the Acropolis, and a portrair of vincentcat cat walking near the tower <img src="https://huggingface.co/juancopi81/vincentcat-cat/resolve/main/Vincent_6.jpg"> --- Prompt: A photo of the Eiffel Tower, a vincentcat cat is walking near the tower <img src="https://huggingface.co/juancopi81/vincentcat-cat/resolve/main/Vincent_5.jpg">
40d488ea64c4831bd646c8e9eb8563f4
apache-2.0
['part-of-speech', 'token-classification']
false
XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Korean This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
961aded7ddcf2c501f86d820c422f7f1
apache-2.0
['part-of-speech', 'token-classification']
false
Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-ko") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-ko") ```
014b5230982eafd664cdc67fbe602e7b
apache-2.0
['seq2seq', 'lm-head']
false
Italian T5 Base 🇮🇹 The [IT5](https://huggingface.co/models?search=it5) model family represents the first effort in pretraining large-scale sequence-to-sequence transformer models for the Italian language, following the approach adopted by the original [T5 model](https://github.com/google-research/text-to-text-transfer-transformer). This model is released as part of the project ["IT5: Large-Scale Text-to-Text Pretraining for Italian Language Understanding and Generation"](https://arxiv.org/abs/2203.03759), by [Gabriele Sarti](https://gsarti.com/) and [Malvina Nissim](https://malvinanissim.github.io/) with the support of [Huggingface](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) and with TPU usage sponsored by Google's [TPU Research Cloud](https://sites.research.google/trc/). All the training was conducted on a single TPU3v8-VM machine on Google Cloud. Refer to the Tensorboard tab of the repository for an overview of the training process. *TThe inference widget is deactivated because the model needs a task-specific seq2seq fine-tuning on a downstream task to be useful in practice. The models in the [`it5`](https://huggingface.co/it5) organization provide some examples of this model fine-tuned on various downstream task.*
d00de72c03804ad9fb5c11fd460a6be3
apache-2.0
['seq2seq', 'lm-head']
false
Model variants This repository contains the checkpoints for the `base` version of the model. The model was trained for one epoch (1.05M steps) on the [Thoroughly Cleaned Italian mC4 Corpus](https://huggingface.co/datasets/gsarti/clean_mc4_it) (~41B words, ~275GB) using 🤗 Datasets and the `google/t5-v1_1-base` improved configuration. Another version of this model trained on the [OSCAR corpus](https://oscar-corpus.com/) is also available under the name [`gsarti/it5-base-oscar`](https://huggingface.co/gsartiit5-base-oscar). The training procedure is made available [on Github](https://github.com/gsarti/t5-flax-gcp). The following table summarizes the parameters for all available models | |`it5-small` |`it5-base` (this one) |`it5-large` |`it5-base-oscar` | |-----------------------|-----------------------|----------------------|-----------------------|----------------------------------| |`dataset` |`gsarti/clean_mc4_it` |`gsarti/clean_mc4_it` |`gsarti/clean_mc4_it` |`oscar/unshuffled_deduplicated_it`| |`architecture` |`google/t5-v1_1-small` |`google/t5-v1_1-base` |`google/t5-v1_1-large` |`t5-base` | |`learning rate` | 5e-3 | 5e-3 | 5e-3 | 1e-2 | |`steps` | 1'050'000 | 1'050'000 | 2'100'000 | 258'000 | |`training time` | 36 hours | 101 hours | 370 hours | 98 hours | |`ff projection` |`gated-gelu` |`gated-gelu` |`gated-gelu` |`relu` | |`tie embeds` |`false` |`false` |`false` |`true` | |`optimizer` | adafactor | adafactor | adafactor | adafactor | |`max seq. length` | 512 | 512 | 512 | 512 | |`per-device batch size`| 16 | 16 | 8 | 16 | |`tot. batch size` | 128 | 128 | 64 | 128 | |`weigth decay` | 1e-3 | 1e-3 | 1e-2 | 1e-3 | |`validation split size`| 15K examples | 15K examples | 15K examples | 15K examples | The high training time of `it5-base-oscar` was due to [a bug](https://github.com/huggingface/transformers/pull/13012) in the training script. For a list of individual model parameters, refer to the `config.json` file in the respective repositories.
b5e6c1455f1888bb19a2dec8fe413555
apache-2.0
['seq2seq', 'lm-head']
false
Using the models ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("gsarti/it5-base") model = AutoModelForSeq2SeqLM.from_pretrained("gsarti/it5-base") ``` *Note: You will need to fine-tune the model on your downstream seq2seq task to use it. See an example [here](https://huggingface.co/gsarti/it5-base-nli).* Flax and Tensorflow versions of the model are also available: ```python from transformers import FlaxT5ForConditionalGeneration, TFT5ForConditionalGeneration model_flax = FlaxT5ForConditionalGeneration.from_pretrained("gsarti/it5-base") model_tf = TFT5ForConditionalGeneration.from_pretrained("gsarti/it5-base") ```
9f8720d6f8614870033fee61f75c7183
apache-2.0
['seq2seq', 'lm-head']
false
Citation Information ```bibtex @article{sarti-nissim-2022-it5, title={IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation}, author={Sarti, Gabriele and Nissim, Malvina}, journal={ArXiv preprint 2203.03759}, url={https://arxiv.org/abs/2203.03759}, year={2022}, month={mar} } ```
8a421a2367310d9208fc75af7a97a526
apache-2.0
[]
false
Model description This is an [tapas-base](https://huggingface.co/google/tapas-base) model, trained on the lookup queries of [wikisql](https://huggingface.co/datasets/wikisql) dataset. It was trained to take tables and questions as input to extract answers from the table.
8c4f3fa0409562e508b0ab4570992087
apache-2.0
[]
false
Intented use and limitations One can use this model to predict answers for natural language queries given a table. Biases associated with pre-training of tapas-base and wikisql dataset may be present.
5a103cf4ad87962aed18f1239cd2cb05
apache-2.0
[]
false
Usage One can use this model directly in the [PrimeQA](https://github.com/primeqa/primeqa) framework as in this example [notebook](https://github.com/primeqa/primeqa/blob/tableqa_tapas/notebooks/tableqa/tableqa_inference.ipynb).
b71930b4fb2929852c769256e49f98a8
apache-2.0
[]
false
Citation ```bibtex @misc{herzig2020tapas, title={TAPAS: Weakly Supervised Table Parsing via Pre-training}, author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos}, year={2020}, eprint={2004.02349}, archivePrefix={arXiv}, primaryClass={cs.IR} } ```
7d73014a3386e1335c68755613d7c099
apache-2.0
['automatic-speech-recognition', 'ar']
false
exp_w2v2t_ar_vp-nl_s377 Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ar)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
8c77a4e2dbb7ba7ac3156cc9358c0634
mit
['generated_from_keras_callback']
false
jonaskoenig/topic_classification_02 This model is a fine-tuned version of [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0189 - Train Binary Crossentropy: 0.3299 - Epoch: 5
25d6a01ebbcaf9b40b9be4af34b4cdd2
mit
['generated_from_keras_callback']
false
Training results | Train Loss | Train Binary Crossentropy | Epoch | |:----------:|:-------------------------:|:-----:| | 0.0250 | 0.4229 | 0 | | 0.0214 | 0.3684 | 1 | | 0.0204 | 0.3530 | 2 | | 0.0198 | 0.3433 | 3 | | 0.0193 | 0.3359 | 4 | | 0.0189 | 0.3299 | 5 |
f0ed54f3d0be1485769d9c3af37ba2d6
apache-2.0
['generated_from_keras_callback']
false
madatnlp/mt5-kormath This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.7119 - Validation Loss: 1.1299 - Epoch: 61
593976d8846dba03bd453f64096f178d
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: mixed_bfloat16
457e1895bde1dfc64efa5e02fe53563b
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 17.9929 | 5.9287 | 0 | | 5.4802 | 3.9942 | 1 | | 4.1718 | 3.2517 | 2 | | 3.5750 | 2.9586 | 3 | | 3.1535 | 2.4970 | 4 | | 2.8665 | 2.4626 | 5 | | 2.6682 | 2.3795 | 6 | | 2.5323 | 2.2238 | 7 | | 2.4057 | 2.0684 | 8 | | 2.3107 | 2.2033 | 9 | | 2.2501 | 1.8339 | 10 | | 2.1089 | 1.9064 | 11 | | 2.0741 | 2.0256 | 12 | | 1.9868 | 1.8107 | 13 | | 1.9719 | 1.7157 | 14 | | 1.8762 | 1.6966 | 15 | | 1.8814 | 1.6580 | 16 | | 1.8052 | 1.6043 | 17 | | 1.7567 | 1.6572 | 18 | | 1.7209 | 1.5485 | 19 | | 1.7347 | 1.6464 | 20 | | 1.6760 | 1.5892 | 21 | | 1.6286 | 1.5765 | 22 | | 1.6124 | 1.7408 | 23 | | 1.5683 | 1.4875 | 24 | | 1.5814 | 1.4448 | 25 | | 1.5306 | 1.4902 | 26 | | 1.5121 | 1.5133 | 27 | | 1.4869 | 1.4217 | 28 | | 1.4539 | 1.5602 | 29 | | 1.4650 | 1.4699 | 30 | | 1.4508 | 1.4319 | 31 | | 1.3910 | 1.5975 | 32 | | 1.3758 | 1.4031 | 33 | | 1.3550 | 1.4295 | 34 | | 1.3405 | 1.3804 | 35 | | 1.3144 | 1.4202 | 36 | | 1.3136 | 1.5135 | 37 | | 1.2617 | 1.4790 | 38 | | 1.2260 | 1.4108 | 39 | | 1.2348 | 1.3108 | 40 | | 1.2019 | 1.1461 | 41 | | 1.1775 | 1.2509 | 42 | | 1.1690 | 1.2179 | 43 | | 1.1318 | 1.2483 | 44 | | 1.1013 | 1.0815 | 45 | | 1.0735 | 1.2135 | 46 | | 1.0439 | 1.1260 | 47 | | 1.0182 | 1.1993 | 48 | | 0.9971 | 1.0797 | 49 | | 0.9583 | 1.2587 | 50 | | 0.9505 | 1.0793 | 51 | | 0.9366 | 1.0501 | 52 | | 0.9170 | 1.1476 | 53 | | 0.8741 | 1.0560 | 54 | | 0.8558 | 1.0024 | 55 | | 0.8394 | 0.9604 | 56 | | 0.8203 | 1.2700 | 57 | | 0.7938 | 1.1081 | 58 | | 0.7800 | 1.0198 | 59 | | 0.7378 | 1.1748 | 60 | | 0.7119 | 1.1299 | 61 |
a568b0f56f4770dc7dfb9914a4d1d0db
apache-2.0
['part-of-speech', 'token-classification']
false
XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Chinese This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
3b327578f44368b6392c7a4162186441
apache-2.0
['part-of-speech', 'token-classification']
false
Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-zh") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-zh") ```
94d70df4579d76c1e8f7e079de56bca2
apache-2.0
['image-classification']
false
VAN-Base VAN is trained on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) and first released in [here](https://github.com/Visual-Attention-Network).
7db6fdc6ef0a10355d2d1c79966952a0
apache-2.0
['image-classification']
false
Description While originally designed for natural language processing (NLP) tasks, the self-attention mechanism has recently taken various computer vision areas by storm. However, the 2D nature of images brings three challenges for applying self-attention in computer vision. (1) Treating images as 1D sequences neglects their 2D structures. (2) The quadratic complexity is too expensive for high-resolution images. (3) It only captures spatial adaptability but ignores channel adaptability. In this paper, we propose a novel large kernel attention (LKA) module to enable self-adaptive and long-range correlations in self-attention while avoiding the above issues. We further introduce a novel neural network based on LKA, namely Visual Attention Network (VAN). While extremely simple and efficient, VAN outperforms the state-of-the-art vision transformers (ViTs) and convolutional neural networks (CNNs) with a large margin in extensive experiments, including image classification, object detection, semantic segmentation, instance segmentation, etc.
93a03a5ebc04c836b2abc6c4e5d2ba56
apache-2.0
['image-classification']
false
Params(M) | GFLOPs | Top1 Acc(%) | Download | | :-------- | :--------: | :----: | :---------: | :----------------------------------------------------------: | | VAN-Tiny | 4.1 | 0.9 | 75.4 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Tiny) | | VAN-Small | 13.9 | 2.5 | 81.1 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Small) | | VAN-Base | 26.6 | 5.0 | 82.8 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Base), | | VAN-Large | 44.8 | 9.0 | 83.9 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Large) |
2d4e004a183bdeefa7012d95e68d008f
apache-2.0
['image-classification']
false
BibTeX entry and citation info ```bibtex @article{guo2022visual, title={Visual Attention Network}, author={Guo, Meng-Hao and Lu, Cheng-Ze and Liu, Zheng-Ning and Cheng, Ming-Ming and Hu, Shi-Min}, journal={arXiv preprint arXiv:2202.09741}, year={2022} } ```
79cb04fc27537de053edf79c83f1d574
apache-2.0
['generated_from_trainer']
false
small-mlm-glue-rte-target-glue-qqp This model is a fine-tuned version of [muhtasham/small-mlm-glue-rte](https://huggingface.co/muhtasham/small-mlm-glue-rte) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3294 - Accuracy: 0.8496 - F1: 0.8112
40d61c5bdafdae2ac682e36650d14a6a
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.4764 | 0.04 | 500 | 0.4288 | 0.7863 | 0.7498 | | 0.4172 | 0.09 | 1000 | 0.3936 | 0.8089 | 0.7701 | | 0.4017 | 0.13 | 1500 | 0.3707 | 0.8236 | 0.7785 | | 0.3865 | 0.18 | 2000 | 0.3751 | 0.8197 | 0.7857 | | 0.3788 | 0.22 | 2500 | 0.3682 | 0.8292 | 0.7938 | | 0.364 | 0.26 | 3000 | 0.3517 | 0.8351 | 0.7969 | | 0.3616 | 0.31 | 3500 | 0.3324 | 0.8496 | 0.8043 | | 0.3533 | 0.35 | 4000 | 0.3348 | 0.8457 | 0.8071 | | 0.3599 | 0.4 | 4500 | 0.3362 | 0.8451 | 0.8094 | | 0.3465 | 0.44 | 5000 | 0.3294 | 0.8496 | 0.8112 |
7f5afdc24d7b108840747ee3f1db7793
cc-by-4.0
[]
false
Model description This is the T5-3B model for System 1 as described in our paper Just-DREAM-about-it: Figurative Language Understanding with DREAM-FLUTE, FigLang workshop @ EMNLP 2022 (Arxiv link: https://arxiv.org/abs/2210.16407) System 1: Using original data Given the <Premise, Hypothesis, Label, Explanation> in the original data, we first trained a sequence-to-sequence model for the figurative language NLI task using the following input-output format: ``` Input <Premise> <Hypothesis> Output <Label> <Explanation> ```
0b7559d17f4e465c3a89a09e93296a56
cc-by-4.0
[]
false
How to use this model? We provide a quick example of how you can try out System 1 in our paper with just a few lines of code: ``` >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> model = AutoModelForSeq2SeqLM.from_pretrained("allenai/System1_FigLang2022") >>> tokenizer = AutoTokenizer.from_pretrained("t5-3b") >>> input_string = "Premise: My neighbor actually purchased a dream car of mine and I see it parked in his driveway everyday just taunting me. Hypothesis: My neighbor's new car is exactly my dream car, and I feel so happy every time I see it parked in his driveway. Is there a contradiction or entailment between the premise and hypothesis?" >>> input_ids = tokenizer.encode(input_string, return_tensors="pt") >>> output = model.generate(input_ids, max_length=200) >>> tokenizer.batch_decode(output, skip_special_tokens=True) ["Answer : Contradiction. Explanation : Most people would not be happy to see someone else's new car that they cannot afford because it is way out of their budget"] ```
6d615066c133295cd2a5ad94c31e8bcb
cc-by-4.0
[]
false
More details about DREAM-FLUTE ... For more details about DREAM-FLUTE, please refer to our: * 📄Paper: https://arxiv.org/abs/2210.16407 * 💻GitHub Repo: https://github.com/allenai/dream/ This model is part of our DREAM-series of works. This is a line of research where we make use of scene elaboration for building a "mental model" of situation given in text. Check out our GitHub Repo for more!
65aa5bad98c75a6bd3aeee80cc651eae
cc-by-4.0
[]
false
Training and evaluation data We use the FLUTE dataset for the FigLang2022SharedTask (https://huggingface.co/datasets/ColumbiaNLP/FLUTE) for training this model. ∼7500 samples are provided as the training set. We used a 80-20 split to create our own training (6027 samples) and validation (1507 samples) partitions on which we build our models. For details on how we make use of the training data provided in the FigLang2022 shared task, please refer to https://github.com/allenai/dream/blob/main/FigLang2022SharedTask/Process_Data_Train_Dev_split.ipynb.
99771365b92cf0c262b83697c6ac5903
cc-by-4.0
[]
false
Model details This model is a fine-tuned version of [t5-3b](https://huggingface.co/t5-3b). It achieves the following results on the evaluation set: - Loss: 0.7602 - Rouge1: 58.1212 - Rouge2: 38.1109 - Rougel: 52.1198 - Rougelsum: 52.092 - Gen Len: 40.4851
7d0c1eb2daabe47c0bd1ca01c2c48847
cc-by-4.0
[]
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - total_train_batch_size: 2 - total_eval_batch_size: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0
513db039999fd88c95fb02a785186894
cc-by-4.0
[]
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.0017 | 0.33 | 1000 | 0.8958 | 40.072 | 27.6729 | 38.429 | 38.4023 | 19.0 | | 0.9054 | 0.66 | 2000 | 0.8336 | 41.4505 | 29.2616 | 39.5164 | 39.4976 | 19.0 | | 0.8777 | 1.0 | 3000 | 0.7863 | 41.4221 | 29.6675 | 39.6719 | 39.6627 | 19.0 | | 0.5608 | 1.33 | 4000 | 0.8007 | 41.1495 | 29.9008 | 39.5706 | 39.5554 | 19.0 | | 0.5594 | 1.66 | 5000 | 0.7785 | 41.3834 | 30.2818 | 39.8259 | 39.8324 | 19.0 | | 0.5498 | 1.99 | 6000 | 0.7602 | 41.6364 | 30.6513 | 40.1522 | 40.1332 | 19.0 | | 0.3398 | 2.32 | 7000 | 0.8580 | 41.4948 | 30.7467 | 40.0274 | 40.0116 | 18.9954 | | 0.3518 | 2.65 | 8000 | 0.8430 | 41.7283 | 31.178 | 40.3487 | 40.3328 | 18.9861 | | 0.3465 | 2.99 | 9000 | 0.8405 | 41.956 | 31.527 | 40.5671 | 40.5517 | 18.9907 |
31b14189402a1125b53d20188ec8e366