Dataset columns: license (string, length 2–30), tags (string, length 2–513), is_nc (bool, 1 class), readme_section (string, length 201–597k), hash (string, length 32).
mit
[]
false
Training data

This model was distilled with 522MB of Indonesian Wikipedia and 1GB of [Indonesian newspapers](https://huggingface.co/datasets/id_newspapers_2018). The texts are lowercased and tokenized using WordPiece with a vocabulary size of 32,000. The inputs of the model are then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
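As a concrete sketch of that input format, the following shows how a sentence pair is assembled into tokens and segment ids. The `build_input` helper and the toy Indonesian tokens are illustrative only, not part of the model's code; the real tokenizer additionally maps each WordPiece to a vocabulary id.

```python
# Toy sketch of the BERT-style sentence-pair input described above.
def build_input(tokens_a, tokens_b):
    tokens = ["[CLS]"] + tokens_a + ["[SEP]"] + tokens_b + ["[SEP]"]
    # Segment 0 covers [CLS], sentence A and its [SEP]; segment 1 the rest.
    segment_ids = [0] * (len(tokens_a) + 2) + [1] * (len(tokens_b) + 1)
    return tokens, segment_ids

tokens, segment_ids = build_input(["saya", "makan"], ["nasi", "goreng"])
```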
139d9a9f410fe0fcab87e1213e93142a
apache-2.0
['generated_from_trainer']
false
t5-base-devices-sum-ver1

This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0935
- Rouge1: 97.2294
- Rouge2: 80.1323
- Rougel: 97.245
- Rougelsum: 97.2763
- Gen Len: 4.9507
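The Rouge1/Rouge2/Rougel numbers above are ROUGE scores on a 0–100 scale. As a rough illustration, unigram ROUGE-1 F1 can be sketched as below; `rouge1_f1` is a hypothetical helper, and the real `rouge_score` package additionally handles stemming options and bootstrap aggregation.

```python
from collections import Counter

# Unigram-overlap F1 (ROUGE-1), ignoring stemming and aggregation details.
def rouge1_f1(candidate, reference):
    cand, ref = candidate.split(), reference.split()
    overlap = sum((Counter(cand) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(cand), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

score = rouge1_f1("the device is on", "the device is turned on")
```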
7997838a9cc7c0b4fd948be88d5c615a
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 186 | 0.2461 | 91.9436 | 71.232 | 91.9417 | 91.9585 | 4.6644 |
| No log | 2.0 | 372 | 0.1580 | 94.5247 | 76.1321 | 94.5044 | 94.5382 | 4.8953 |
| 0.488 | 3.0 | 558 | 0.1239 | 95.8673 | 78.1183 | 95.8862 | 95.8919 | 4.9102 |
| 0.488 | 4.0 | 744 | 0.1100 | 96.5746 | 78.9878 | 96.5848 | 96.5831 | 4.9102 |
| 0.488 | 5.0 | 930 | 0.1008 | 96.9074 | 79.5536 | 96.9143 | 96.9317 | 4.9291 |
| 0.1303 | 6.0 | 1116 | 0.0974 | 96.9274 | 79.6953 | 96.933 | 96.9473 | 4.9291 |
| 0.1303 | 7.0 | 1302 | 0.0969 | 96.8041 | 79.5073 | 96.817 | 96.8266 | 4.9271 |
| 0.1303 | 8.0 | 1488 | 0.0945 | 97.1496 | 79.9757 | 97.1529 | 97.1779 | 4.9534 |
| 0.089 | 9.0 | 1674 | 0.0944 | 97.253 | 80.1236 | 97.2619 | 97.2899 | 4.9595 |
| 0.089 | 10.0 | 1860 | 0.0935 | 97.2294 | 80.1323 | 97.245 | 97.2763 | 4.9507 |
8e9442b2f99dbdf9e8b4aa5d6d44d326
apache-2.0
['clip', 'zh', 'image-text', 'feature-extraction']
false
模型分类 Model Taxonomy

| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 特殊 Special | 多模态 Multimodal | 太乙 Taiyi | CLIP (RoBERTa) | 326M | ViT-H-Chinese (使用了ViT-H作为视觉提取器-中文 / uses ViT-H as the vision extractor, Chinese) |
350e80bd01d29b0102f1120557af1818
apache-2.0
['clip', 'zh', 'image-text', 'feature-extraction']
false
模型信息 Model Information

我们遵循CLIP的实验设置,以获得强大的视觉-语言表征。在训练中文版的CLIP时,我们使用[chinese-roberta-wwm-large](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large)作为语言的编码器,并将[open_clip](https://github.com/mlfoundations/open_clip)中的**ViT-H-14**应用于视觉的编码器。为了快速且稳定地进行预训练,我们冻结了视觉编码器并且只微调语言编码器。此外,我们将[Noah-Wukong](https://wukong-dataset.github.io/wukong-dataset/)数据集(100M)和[Zero](https://zero.so.com/)数据集(23M)用作预训练的数据集。在悟空数据集和zero数据集上预训练24轮,在A100x32上训练了8天。据我们所知,我们的Taiyi-CLIP是目前Huggingface社区中首个的开源中文CLIP。

We follow the experimental setup of CLIP to obtain powerful visual-language representations. To obtain a CLIP for Chinese, we employ [chinese-roberta-wwm-large](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large) as the language encoder and apply the **ViT-H-14** from [open_clip](https://github.com/mlfoundations/open_clip) as the vision encoder. We freeze the vision encoder and tune only the language encoder to speed up and stabilize pre-training. Moreover, we use the [Noah-Wukong](https://wukong-dataset.github.io/wukong-dataset/) dataset (100M) and the [Zero](https://zero.so.com/) dataset (23M) as pre-training datasets. The model was trained for 24 epochs on Wukong and Zero, which took 8 days on A100x32. To the best of our knowledge, our Taiyi-CLIP is currently the first open-source Chinese CLIP in the Hugging Face community.
5afb009b70476f750db3135b9ecb570a
apache-2.0
['clip', 'zh', 'image-text', 'feature-extraction']
false
下游效果 Performance

**Zero-Shot Classification**

| model | dataset | Top1 | Top5 |
| ---- | ---- | ---- | ---- |
| Taiyi-CLIP-RoBERTa-326M-ViT-H-Chinese | ImageNet1k-CN | 54.35% | 80.64% |

**Zero-Shot Text-to-Image Retrieval**

| model | dataset | Top1 | Top5 | Top10 |
| ---- | ---- | ---- | ---- | ---- |
| Taiyi-CLIP-RoBERTa-326M-ViT-H-Chinese | Flickr30k-CNA-test | 60.82% | 85.00% | 91.04% |
| Taiyi-CLIP-RoBERTa-326M-ViT-H-Chinese | COCO-CN-test | 60.02% | 83.95% | 93.26% |
| Taiyi-CLIP-RoBERTa-326M-ViT-H-Chinese | wukong50k | 66.85% | 92.81% | 96.69% |
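The zero-shot numbers above come from scoring each image against candidate captions by scaled cosine similarity between embeddings. A toy sketch with made-up 3-dimensional vectors (real Taiyi-CLIP embeddings are high-dimensional and come from the encoders; the `cosine` and `softmax` helpers are illustrative only):

```python
import math

# Toy zero-shot scoring: cosine similarity between one image embedding and
# each candidate text embedding, scaled and softmaxed over the candidates.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

image_emb = [0.2, -0.1, 0.7]
text_embs = [[0.3, -0.2, 0.6], [-0.5, 0.4, 0.1]]  # one embedding per caption
logit_scale = 100.0  # exp() of the learned temperature
probs = softmax([logit_scale * cosine(image_emb, t) for t in text_embs])
```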
19c6baaeafdc9abee45d81947cd7861b
apache-2.0
['clip', 'zh', 'image-text', 'feature-extraction']
false
使用 Usage

```python
from PIL import Image
import requests
import open_clip
import torch
from transformers import BertModel, BertConfig, BertTokenizer
from transformers import CLIPProcessor, CLIPModel
import numpy as np

query_texts = ["一只猫", "一只狗", '两只猫', '两只老虎', '一只老虎']
```
041d3a0b4911c2cd500a220caa9c98c4
apache-2.0
['clip', 'zh', 'image-text', 'feature-extraction']
false
Load the Taiyi Chinese text encoder

```python
text_tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Taiyi-CLIP-RoBERTa-326M-ViT-H-Chinese")
text_encoder = BertModel.from_pretrained("IDEA-CCNL/Taiyi-CLIP-RoBERTa-326M-ViT-H-Chinese").eval()

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
```
4271fe6f8ede017a48d03b91de1996e8
apache-2.0
['clip', 'zh', 'image-text', 'feature-extraction']
false
Load the open_clip image encoder

```python
clip_model, _, processor = open_clip.create_model_and_transforms('ViT-H-14', pretrained='laion2b_s32b_b79k')
clip_model = clip_model.eval()

text = text_tokenizer(query_texts, return_tensors='pt', padding=True)['input_ids']
image = processor(Image.open(requests.get(url, stream=True).raw)).unsqueeze(0)

with torch.no_grad():
    image_features = clip_model.encode_image(image)
    text_features = text_encoder(text)[1]
```
042fc9cbfdb11d1762e377b7ea4b0564
apache-2.0
['clip', 'zh', 'image-text', 'feature-extraction']
false
Compute cosine similarity (logit_scale is the scale factor)

```python
logit_scale = clip_model.logit_scale.exp()
logits_per_image = logit_scale * image_features @ text_features.t()
logits_per_text = logits_per_image.t()
probs = logits_per_image.softmax(dim=-1).cpu().numpy()
print(np.around(probs, 3))
```
14e17e5406c252dda3e5e4848db925af
mit
['rudalle', 'pokemon', 'image-generation']
false
ai-generated-pokemon-rudalle

![](example.png)

A finetuned [ruDALL-E](https://github.com/sberbank-ai/ru-dalle) on Pokémon, using the finetuning example Colab Notebook [linked in that repo](https://colab.research.google.com/drive/1Tb7J4PvvegWOybPfUubl5O7m5I24CBg5?usp=sharing). This model was used to create the AI-generated Pokémon that went viral ([10k+ retweets](https://twitter.com/minimaxir/status/1470913487085785089) on Twitter and [30k+ upvotes](https://www.reddit.com/r/pokemon/comments/rgmyxp/i_trained_an_ai_on_all_the_official_pokemon/) on Reddit).

The model was trained for 12 epochs (4.5 hours on a P100) at a max learning rate of `1e-5`.
8f613378ba14159427fcb4691f03c107
other
['text-generation', 'opt']
false
How to use

You can use this model directly with a pipeline for text generation.

```python
>>> from transformers import pipeline

>>> generator = pipeline('text-generation', model="facebook/opt-iml-1.3b")
>>> generator("What is the capital of USA?")
```
bc9dae0b2c3cb343616b2a3d415b1082
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-squad

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set:
- Loss: 1.1572
130f5cf5204a211ec01d894ac373577d
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
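As a sketch of the Adam update named above (betas=(0.9,0.999), epsilon=1e-08), here is a single scalar step; `adam_step`, the starting parameter and the gradient are illustrative only, while real training updates whole tensors.

```python
import math

# One scalar Adam step with the hyperparameters listed above.
def adam_step(param, grad, m, v, t, lr=3e-05, b1=0.9, b2=0.999, eps=1e-08):
    m = b1 * m + (1 - b1) * grad          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad * grad   # second-moment estimate
    m_hat = m / (1 - b1 ** t)             # bias corrections for step t
    v_hat = v / (1 - b2 ** t)
    param = param - lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v

p, m, v = adam_step(1.0, grad=0.5, m=0.0, v=0.0, t=1)
```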
1db8f35d7bc72f16a478f2c458179b9c
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2337 | 1.0 | 2767 | 1.1525 |
| 0.9515 | 2.0 | 5534 | 1.1206 |
| 0.7327 | 3.0 | 8301 | 1.1572 |
c10b45c3a328cb472e380ce8f64e1082
apache-2.0
['longformer', 'clinical', 'biomedical']
false
<span style="font-size:larger;">**KEPTlongformer**</span> is a medical-knowledge-enhanced version of Longformer that was further pre-trained using [contrastive learning](https://arxiv.org/pdf/2210.03304.pdf). The model achieves SOTA performance on automatic ICD coding on MIMIC-III as of 11/12/2022. A sister model with better performance is available [here](https://huggingface.co/whaleloops/KEPTlongformer-PMM3/).
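For intuition, a generic InfoNCE-style contrastive loss can be sketched as below. This is only schematic: KEPTlongformer's actual objective is the hierarchical self-alignment pre-training described in the linked paper, and the similarity values and `info_nce` helper here are made up for illustration.

```python
import math

# Schematic InfoNCE-style loss: cross-entropy of the positive pair against
# all candidate pairs for one anchor.
def info_nce(similarities, positive_index, temperature=0.05):
    scaled = [s / temperature for s in similarities]
    m = max(scaled)
    log_z = m + math.log(sum(math.exp(s - m) for s in scaled))
    return log_z - scaled[positive_index]

loss_good = info_nce([0.9, 0.1, -0.2], positive_index=0)  # positive ranked first
loss_bad = info_nce([0.1, 0.9, -0.2], positive_index=0)   # positive outranked
```

The loss shrinks as the positive pair's similarity dominates the negatives.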
d718b4ebc238b713c5119fe1439e42fd
apache-2.0
['longformer', 'clinical', 'biomedical']
false
Pre-training

We initialized this model from [Clinical-Longformer](https://huggingface.co/yikuan8/Clinical-Longformer). It was then pretrained with Hierarchical Self-Alignment Pretraining (HSAP) using the UMLS knowledge graph, which covers (a) hierarchy, (b) synonym, and (c) abbreviation relations. For more details, see section 3.3 of the [paper](https://arxiv.org/pdf/2210.03304.pdf). The learning rate was 5e-5, weight decay 0.01, and Adam epsilon 1e-5.
5b6abdaa25fdebeec632b332bb5fcaab
apache-2.0
['longformer', 'clinical', 'biomedical']
false
Usage

See our [github](https://github.com/whaleloops/KEPT/tree/rerank300) for how to use this model with prompts for automatic ICD coding. It obtains the following results:

| Metric | Score |
| ------------- | ------------- |
| rec_micro | 0.5729403619819988 |
| rec_macro | 0.11342156911120573 |
| rec_at_8 | 0.4094837705486378 |
| rec_at_75 | 0.8470734920535119 |
| rec_at_50 | 0.8005338782352 |
| rec_at_5 | 0.2891628170355805 |
| rec_at_15 | 0.5768778119750537 |
| prec_micro | 0.6411968713105065 |
| prec_macro | 0.12227610414493029 |
| prec_at_8 | 0.7760972716488731 |
| prec_at_75 | 0.197504942665085 |
| prec_at_50 | 0.2768090154211151 |
| prec_at_5 | 0.8483392645314354 |
| prec_at_15 | 0.6178529062870699 |
| f1_micro | 0.6051499904242899 |
| f1_macro | 0.11768251595637802 |
| f1_at_8 | 0.536107150495997 |
| f1_at_75 | 0.32032290907137506 |
| f1_at_50 | 0.411373195944102 |
| f1_at_5 | 0.43131028155283435 |
| f1_at_15 | 0.5966627077602488 |
| auc_micro | 0.9651754312635265 |
| auc_macro | 0.8566590059725866 |
| acc_micro | 0.43384592341105344 |
| acc_macro | 0.08639139221100567 |
8b981e76831cb8e37e53e6987d403ced
mit
['generated_from_trainer']
false
finetuning-sentiment-model-tweet-gpt2

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 2.3646
- Accuracy: 0.6908
- F1: 0.6908
ec76ce998debe5e741850ad60c31b889
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
13b75875cabda975ba3d8b9839f9a4af
apache-2.0
['generated_from_keras_callback']
false
lewtun/distilgpt2-finetuned-shakespeare

This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 2.9411
- Validation Loss: 3.5767
- Epoch: 29
f71ada8a6f1c311e301dae868a51ff6b
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.2112 | 3.8253 | 0 |
| 3.8997 | 3.6898 | 1 |
| 3.7783 | 3.6304 | 2 |
| 3.7046 | 3.5846 | 3 |
| 3.6477 | 3.5667 | 4 |
| 3.6001 | 3.5445 | 5 |
| 3.5563 | 3.5333 | 6 |
| 3.5198 | 3.5240 | 7 |
| 3.4842 | 3.5146 | 8 |
| 3.4505 | 3.5126 | 9 |
| 3.4184 | 3.5022 | 10 |
| 3.3912 | 3.5027 | 11 |
| 3.3613 | 3.5003 | 12 |
| 3.3337 | 3.4985 | 13 |
| 3.3045 | 3.5004 | 14 |
| 3.2772 | 3.5014 | 15 |
| 3.2527 | 3.5018 | 16 |
| 3.2274 | 3.5053 | 17 |
| 3.2011 | 3.5106 | 18 |
| 3.1754 | 3.5143 | 19 |
| 3.1512 | 3.5181 | 20 |
| 3.1259 | 3.5274 | 21 |
| 3.1003 | 3.5215 | 22 |
| 3.0809 | 3.5354 | 23 |
| 3.0568 | 3.5335 | 24 |
| 3.0306 | 3.5502 | 25 |
| 3.0080 | 3.5574 | 26 |
| 2.9857 | 3.5587 | 27 |
| 2.9654 | 3.5760 | 28 |
| 2.9411 | 3.5767 | 29 |
b00e449b797c6fac8a5d05e442788f6a
cc-by-4.0
['named-entity-recognition', 'Transformer', 'pytorch', 'bert']
false
🤗 bert-restore-punctuation-ptbr

* 🪄 [W&B Dashboard](https://wandb.ai/dominguesm/RestorePunctuationPTBR)
* ⛭ [GitHub](https://github.com/DominguesM/respunct)

This is a [bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) model fine-tuned for punctuation restoration on [WikiLingua](https://github.com/esdurmus/Wikilingua). It is intended for direct use as a punctuation restoration model for general Portuguese text. Alternatively, you can fine-tune it further on domain-specific texts for punctuation restoration tasks.

The model restores the following punctuation marks: **[! ? . , - : ; ' ]** It also restores the upper-casing of words.

-----------------------------------------------
3063550767832305584ef905e98cca51
cc-by-4.0
['named-entity-recognition', 'Transformer', 'pytorch', 'bert']
false
🤷 Usage

🇧🇷 An easy-to-use package to restore punctuation of Portuguese texts. **Below is a quick way to use the package.**

1. First, install the package.
```
pip install respunct
```
2. Sample python code.
``` python
from respunct import RestorePuncts

model = RestorePuncts()

model.restore_puncts("""
henrique foi no lago pescar com o pedro mais tarde foram para a casa do pedro fritar os peixes""")
```
fb7ae4b918407fc569591400caa56cd8
cc-by-4.0
['named-entity-recognition', 'Transformer', 'pytorch', 'bert']
false
🎯 Accuracy

| label | precision | recall | f1-score | support |
| ------------------------- | ------------- | -------- | ---------- | -------- |
| **Upper - OU** | 0.89 | 0.91 | 0.90 | 69376 |
| **None - OO** | 0.99 | 0.98 | 0.98 | 857659 |
| **Full stop/period - .O** | 0.86 | 0.93 | 0.89 | 60410 |
| **Comma - ,O** | 0.85 | 0.83 | 0.84 | 48608 |
| **Upper + Comma - ,U** | 0.73 | 0.76 | 0.75 | 3521 |
| **Question - ?O** | 0.68 | 0.78 | 0.73 | 1168 |
| **Upper + period - .U** | 0.66 | 0.72 | 0.69 | 1884 |
| **Upper + colon - :U** | 0.59 | 0.63 | 0.61 | 352 |
| **Colon - :O** | 0.70 | 0.53 | 0.60 | 2420 |
| **Question Mark - ?U** | 0.50 | 0.56 | 0.53 | 36 |
| **Upper + Exclam. - !U** | 0.38 | 0.32 | 0.34 | 38 |
| **Exclamation Mark - !O** | 0.30 | 0.05 | 0.08 | 783 |
| **Semicolon - ;O** | 0.35 | 0.04 | 0.08 | 1557 |
| **Apostrophe - 'O** | 0.00 | 0.00 | 0.00 | 3 |
| **Hyphen - -O** | 0.00 | 0.00 | 0.00 | 3 |
| | | | | |
| **accuracy** | | | 0.96 | 1047818 |
| **macro avg** | 0.57 | 0.54 | 0.54 | 1047818 |
| **weighted avg** | 0.96 | 0.96 | 0.96 | 1047818 |

-----------------------------------------------
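Reading the label column above, the first character appears to encode the punctuation appended after a token ('O' = none) and the second whether the token is upper-cased. A sketch of decoding such labels back into text; this interpretation and the `apply_labels` helper are inferred from the table, not taken from the `respunct` source.

```python
# Decode per-token labels: label[0] = punctuation to append ('O' = none),
# label[1] = 'U' if the token should be capitalized.
def apply_labels(tokens, labels):
    out = []
    for token, label in zip(tokens, labels):
        punct, case = label[0], label[1]
        word = token.capitalize() if case == "U" else token
        out.append(word + (punct if punct != "O" else ""))
    return " ".join(out)

text = apply_labels(["henrique", "foi", "pescar"], ["OU", "OO", ".O"])
```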
9c0504ff6cbd3a3a14a4ae270d764caf
apache-2.0
['generated_from_trainer']
false
wav2vec2-1

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.5980
- Wer: 0.4949
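Wer above is the word error rate. A minimal sketch of computing it as word-level edit distance divided by reference length; `wer` here is a hypothetical helper, not part of the training code.

```python
# Word error rate: word-level Levenshtein distance / reference word count.
def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    d = list(range(len(hyp) + 1))          # DP row for the empty reference
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            cur = min(d[j] + 1,            # deletion
                      d[j - 1] + 1,        # insertion
                      prev + (r != h))     # substitution (or match)
            prev, d[j] = d[j], cur
    return d[-1] / len(ref)

error = wer("a b c d", "a x c d")  # one substitution out of four words
```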
4f9f52b2e227edb309151b69d75369e9
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.2691 | 1.37 | 200 | 2.9045 | 1.0 |
| 1.6356 | 2.74 | 400 | 0.9277 | 0.8678 |
| 0.8062 | 4.11 | 600 | 0.8200 | 0.7776 |
| 0.5983 | 5.48 | 800 | 0.6829 | 0.7161 |
| 0.4863 | 6.85 | 1000 | 0.6205 | 0.6507 |
| 0.407 | 8.22 | 1200 | 0.6519 | 0.6763 |
| 0.3641 | 9.59 | 1400 | 0.5771 | 0.6088 |
| 0.3291 | 10.96 | 1600 | 0.6548 | 0.6202 |
| 0.2905 | 12.33 | 1800 | 0.6538 | 0.5828 |
| 0.2613 | 13.7 | 2000 | 0.6281 | 0.5864 |
| 0.2354 | 15.07 | 2200 | 0.5936 | 0.5630 |
| 0.2145 | 16.44 | 2400 | 0.5877 | 0.5699 |
| 0.2008 | 17.81 | 2600 | 0.5469 | 0.5488 |
| 0.1751 | 19.18 | 2800 | 0.6453 | 0.5584 |
| 0.169 | 20.55 | 3000 | 0.5871 | 0.5357 |
| 0.1521 | 21.92 | 3200 | 0.6063 | 0.5318 |
| 0.1426 | 23.29 | 3400 | 0.5609 | 0.5171 |
| 0.1287 | 24.66 | 3600 | 0.6056 | 0.5126 |
| 0.1236 | 26.03 | 3800 | 0.5994 | 0.5074 |
| 0.1138 | 27.4 | 4000 | 0.5980 | 0.4944 |
| 0.1083 | 28.77 | 4200 | 0.5980 | 0.4949 |
b40ccb3dc48f7efb0eb723aa85098fec
cc-by-4.0
['espnet', 'audio', 'text-to-speech']
false
`kan-bayashi/jsut_conformer_fastspeech2_transformer_prosody` ♻️ Imported from https://zenodo.org/record/5499066/ This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
e8dfbcde6054b47f9bcd62ec5cb6d1f1
cc-by-4.0
['espnet', 'audio', 'text-to-speech']
false
`kan-bayashi/vctk_xvector_conformer_fastspeech2` ♻️ Imported from https://zenodo.org/record/4394602/ This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
cdbdbd981795a64a3dc7291c33d80e92
apache-2.0
[]
false
ByT5 - large

ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-large). ByT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual).
ae15bf5cc7bf8cc1881353d615b012be
apache-2.0
[]
false
ByT5 was pre-trained on mC4 only, excluding any supervised training, with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is usable on a downstream task.

ByT5 works especially well on noisy text data; *e.g.*, `google/byt5-large` significantly outperforms [mt5-large](https://huggingface.co/google/mt5-large) on [TweetQA](https://arxiv.org/abs/1907.06292).

Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626)

Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*
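Because ByT5 is tokenizer-free, input ids are simply UTF-8 byte values shifted by 3, with ids 0 through 2 reserved for the pad, eos, and unk special tokens. A minimal sketch of that mapping:

```python
# ByT5 input ids: UTF-8 byte value + 3 (ids 0-2 are pad/eos/unk).
def byt5_ids(text):
    return [b + 3 for b in text.encode("utf-8")]

ascii_ids = byt5_ids("hi")     # one id per character for ASCII
accented_ids = byt5_ids("é")   # two ids: the character is two UTF-8 bytes
```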
7c73240a8f6fc324557977703835ce8a
apache-2.0
[]
false
Example Inference

ByT5 works on raw UTF-8 bytes and can be used without a tokenizer:

```python
from transformers import T5ForConditionalGeneration
import torch

model = T5ForConditionalGeneration.from_pretrained('google/byt5-large')

input_ids = torch.tensor([list("Life is like a box of chocolates.".encode("utf-8"))]) + 3  # add 3 for special tokens
labels = torch.tensor([list("La vie est comme une boîte de chocolat.".encode("utf-8"))]) + 3  # add 3 for special tokens

loss = model(input_ids, labels=labels).loss  # forward pass
```
d0fe6e48f9e18d12a11a09eb917383f1
apache-2.0
[]
false
For batched inference & training it is, however, recommended to use a tokenizer class for padding:

```python
from transformers import T5ForConditionalGeneration, AutoTokenizer

model = T5ForConditionalGeneration.from_pretrained('google/byt5-large')
tokenizer = AutoTokenizer.from_pretrained('google/byt5-large')

model_inputs = tokenizer(["Life is like a box of chocolates.", "Today is Monday."], padding="longest", return_tensors="pt")
labels = tokenizer(["La vie est comme une boîte de chocolat.", "Aujourd'hui c'est lundi."], padding="longest", return_tensors="pt").input_ids

loss = model(**model_inputs, labels=labels).loss
```
3fcf14458e7bf4db6d3705f8327ac95b
apache-2.0
['generated_from_trainer']
false
Model description

This model is a fine-tuned version of [EleutherAI/gpt-neo-2.7B](https://huggingface.co/EleutherAI/gpt-neo-2.7B) on the Lila-IID train/dev sets from the [Lila dataset](https://github.com/allenai/Lila).
9efa78100b50e033313842d4468f9765
apache-2.0
['generated_from_trainer']
false
Intended uses & limitations

If you use this model, please cite our work.

```
@INPROCEEDINGS{Mishra2022Lila,
  author = {Swaroop Mishra and Matthew Finlayson and Pan Lu and Leonard Tang and Sean Welleck and Chitta Baral and Tanmay Rajpurohit and Oyvind Tafjord and Ashish Sabharwal and Peter Clark and Ashwin Kalyan},
  title = {Lila: A Unified Benchmark for Mathematical Reasoning},
  booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  year = {2022}
}
```
cb6b6cb60ef62d4a4e561bfab0576c8e
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
85bf42f463cffe3ccd5792d2c42c4e4b
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 0.06 | 100 | 0.7930 | 0.8214 |
| No log | 0.11 | 200 | 0.7544 | 0.8290 |
| No log | 0.17 | 300 | 0.7358 | 0.8328 |
| No log | 0.23 | 400 | 0.7192 | 0.8357 |
| 0.8156 | 0.28 | 500 | 0.7012 | 0.8397 |
| 0.8156 | 0.34 | 600 | 0.6904 | 0.8419 |
| 0.8156 | 0.4 | 700 | 0.6802 | 0.8440 |
| 0.8156 | 0.45 | 800 | 0.6670 | 0.8465 |
| 0.8156 | 0.51 | 900 | 0.6572 | 0.8486 |
| 0.7219 | 0.57 | 1000 | 0.6499 | 0.8500 |
| 0.7219 | 0.62 | 1100 | 0.6411 | 0.8522 |
| 0.7219 | 0.68 | 1200 | 0.6343 | 0.8537 |
| 0.7219 | 0.74 | 1300 | 0.6299 | 0.8546 |
| 0.7219 | 0.79 | 1400 | 0.6221 | 0.8561 |
| 0.662 | 0.85 | 1500 | 0.6157 | 0.8574 |
| 0.662 | 0.91 | 1600 | 0.6138 | 0.8579 |
| 0.662 | 0.96 | 1700 | 0.6055 | 0.8595 |
| 0.662 | 1.02 | 1800 | 0.6143 | 0.8598 |
| 0.662 | 1.08 | 1900 | 0.6191 | 0.8599 |
| 0.5707 | 1.14 | 2000 | 0.6118 | 0.8607 |
| 0.5707 | 1.19 | 2100 | 0.6123 | 0.8611 |
| 0.5707 | 1.25 | 2200 | 0.6089 | 0.8617 |
| 0.5707 | 1.31 | 2300 | 0.6064 | 0.8619 |
| 0.5707 | 1.36 | 2400 | 0.6079 | 0.8625 |
| 0.4923 | 1.42 | 2500 | 0.6040 | 0.8625 |
| 0.4923 | 1.48 | 2600 | 0.6030 | 0.8630 |
| 0.4923 | 1.53 | 2700 | 0.6021 | 0.8636 |
| 0.4923 | 1.59 | 2800 | 0.6001 | 0.8643 |
| 0.4923 | 1.65 | 2900 | 0.5981 | 0.8644 |
| 0.4909 | 1.7 | 3000 | 0.5942 | 0.8648 |
| 0.4909 | 1.76 | 3100 | 0.5918 | 0.8650 |
| 0.4909 | 1.82 | 3200 | 0.5923 | 0.8659 |
| 0.4909 | 1.87 | 3300 | 0.5884 | 0.8664 |
| 0.4909 | 1.93 | 3400 | 0.5884 | 0.8663 |
| 0.4964 | 1.99 | 3500 | 0.5903 | 0.8669 |
| 0.4964 | 2.04 | 3600 | 0.6421 | 0.8655 |
| 0.4964 | 2.1 | 3700 | 0.6401 | 0.8651 |
| 0.4964 | 2.16 | 3800 | 0.6411 | 0.8649 |
| 0.4964 | 2.21 | 3900 | 0.6387 | 0.8645 |
| 0.345 | 2.27 | 4000 | 0.6362 | 0.8654 |
| 0.345 | 2.33 | 4100 | 0.6362 | 0.8654 |
| 0.345 | 2.38 | 4200 | 0.6362 | 0.8654 |
| 0.345 | 2.44 | 4300 | 0.6357 | 0.8655 |
| 0.345 | 2.5 | 4400 | 0.6362 | 0.8656 |
| 0.3463 | 2.55 | 4500 | 0.6377 | 0.8658 |
| 0.3463 | 2.61 | 4600 | 0.6357 | 0.8660 |
| 0.3463 | 2.67 | 4700 | 0.6294 | 0.8665 |
| 0.3463 | 2.72 | 4800 | 0.6333 | 0.8665 |
| 0.3463 | 2.78 | 4900 | 0.6362 | 0.8662 |
| 0.3508 | 2.84 | 5000 | 0.6357 | 0.8666 |
| 0.3508 | 2.89 | 5100 | 0.6299 | 0.8673 |
| 0.3508 | 2.95 | 5200 | 0.6313 | 0.8668 |
| 0.3508 | 3.01 | 5300 | 0.7188 | 0.8646 |
| 0.3508 | 3.06 | 5400 | 0.7017 | 0.8656 |
| 0.295 | 3.12 | 5500 | 0.6982 | 0.8653 |
| 0.295 | 3.18 | 5600 | 0.7031 | 0.8655 |
| 0.295 | 3.23 | 5700 | 0.6992 | 0.8651 |
| 0.295 | 3.29 | 5800 | 0.6997 | 0.8653 |
| 0.295 | 3.35 | 5900 | 0.7041 | 0.8651 |
| 0.2348 | 3.41 | 6000 | 0.7075 | 0.8649 |
| 0.2348 | 3.46 | 6100 | 0.6992 | 0.8650 |
| 0.2348 | 3.52 | 6200 | 0.7065 | 0.8647 |
| 0.2348 | 3.58 | 6300 | 0.6997 | 0.8652 |
| 0.2348 | 3.63 | 6400 | 0.7026 | 0.8651 |
| 0.2411 | 3.69 | 6500 | 0.7046 | 0.8656 |
| 0.2411 | 3.75 | 6600 | 0.7007 | 0.8655 |
| 0.2411 | 3.8 | 6700 | 0.7026 | 0.8651 |
| 0.2411 | 3.86 | 6800 | 0.7031 | 0.8655 |
| 0.2411 | 3.92 | 6900 | 0.7012 | 0.8658 |
| 0.251 | 3.97 | 7000 | 0.7051 | 0.8656 |
| 0.251 | 4.03 | 7100 | 0.7607 | 0.8650 |
| 0.251 | 4.09 | 7200 | 0.7632 | 0.8656 |
| 0.251 | 4.14 | 7300 | 0.7588 | 0.8655 |
| 0.251 | 4.2 | 7400 | 0.7578 | 0.8651 |
| 0.1797 | 4.26 | 7500 | 0.7710 | 0.8645 |
| 0.1797 | 4.31 | 7600 | 0.7627 | 0.8648 |
| 0.1797 | 4.37 | 7700 | 0.7583 | 0.8650 |
| 0.1797 | 4.43 | 7800 | 0.7646 | 0.8649 |
| 0.1797 | 4.48 | 7900 | 0.7598 | 0.8646 |
| 0.1784 | 4.54 | 8000 | 0.7656 | 0.8650 |
| 0.1784 | 4.6 | 8100 | 0.7617 | 0.8648 |
| 0.1784 | 4.65 | 8200 | 0.7573 | 0.8651 |
| 0.1784 | 4.71 | 8300 | 0.7671 | 0.8648 |
| 0.1784 | 4.77 | 8400 | 0.7563 | 0.8651 |
| 0.1827 | 4.82 | 8500 | 0.7651 | 0.8649 |
| 0.1827 | 4.88 | 8600 | 0.7637 | 0.8650 |
| 0.1827 | 4.94 | 8700 | 0.7607 | 0.8654 |
| 0.1827 | 4.99 | 8800 | 0.7607 | 0.8650 |
| 0.1827 | 5.05 | 8900 | 0.8149 | 0.8646 |
| 0.167 | 5.11 | 9000 | 0.8081 | 0.8648 |
| 0.167 | 5.16 | 9100 | 0.8184 | 0.8644 |
| 0.167 | 5.22 | 9200 | 0.8140 | 0.8647 |
| 0.167 | 5.28 | 9300 | 0.8169 | 0.8644 |
| 0.167 | 5.33 | 9400 | 0.8120 | 0.8645 |
| 0.1371 | 5.39 | 9500 | 0.8154 | 0.8643 |
| 0.1371 | 5.45 | 9600 | 0.8179 | 0.8642 |
| 0.1371 | 5.51 | 9700 | 0.8154 | 0.8643 |
| 0.1371 | 5.56 | 9800 | 0.8120 | 0.8645 |
| 0.1371 | 5.62 | 9900 | 0.8110 | 0.8650 |
| 0.1425 | 5.68 | 10000 | 0.8159 | 0.8645 |
| 0.1425 | 5.73 | 10100 | 0.8174 | 0.8646 |
| 0.1425 | 5.79 | 10200 | 0.8159 | 0.8649 |
| 0.1425 | 5.85 | 10300 | 0.8110 | 0.8639 |
| 0.1425 | 5.9 | 10400 | 0.8135 | 0.8645 |
| 0.1505 | 5.96 | 10500 | 0.8140 | 0.8642 |
| 0.1505 | 6.02 | 10600 | 0.8628 | 0.8640 |
| 0.1505 | 6.07 | 10700 | 0.8540 | 0.8644 |
| 0.1505 | 6.13 | 10800 | 0.8530 | 0.8642 |
| 0.1505 | 6.19 | 10900 | 0.8560 | 0.8647 |
| 0.1086 | 6.24 | 11000 | 0.8555 | 0.8649 |
| 0.1086 | 6.3 | 11100 | 0.8604 | 0.8644 |
| 0.1086 | 6.36 | 11200 | 0.8569 | 0.8642 |
| 0.1086 | 6.41 | 11300 | 0.8530 | 0.8639 |
| 0.1086 | 6.47 | 11400 | 0.8589 | 0.8643 |
| 0.1076 | 6.53 | 11500 | 0.8525 | 0.8639 |
| 0.1076 | 6.58 | 11600 | 0.8579 | 0.8640 |
| 0.1076 | 6.64 | 11700 | 0.8594 | 0.8640 |
| 0.1076 | 6.7 | 11800 | 0.8599 | 0.8643 |
| 0.1076 | 6.75 | 11900 | 0.8564 | 0.8640 |
| 0.1109 | 6.81 | 12000 | 0.8633 | 0.8640 |
| 0.1109 | 6.87 | 12100 | 0.8584 | 0.8638 |
| 0.1109 | 6.92 | 12200 | 0.8647 | 0.8636 |
| 0.1109 | 6.98 | 12300 | 0.8599 | 0.8635 |
| 0.1109 | 7.04 | 12400 | 0.8979 | 0.8632 |
| 0.1028 | 7.09 | 12500 | 0.8936 | 0.8635 |
| 0.1028 | 7.15 | 12600 | 0.9043 | 0.8637 |
| 0.1028 | 7.21 | 12700 | 0.8989 | 0.8642 |
| 0.1028 | 7.26 | 12800 | 0.8936 | 0.8642 |
| 0.1028 | 7.32 | 12900 | 0.8921 | 0.8641 |
| 0.0774 | 7.38 | 13000 | 0.8955 | 0.8634 |
| 0.0774 | 7.43 | 13100 | 0.8950 | 0.8636 |
| 0.0774 | 7.49 | 13200 | 0.8994 | 0.8635 |
| 0.0774 | 7.55 | 13300 | 0.8999 | 0.8635 |
| 0.0774 | 7.6 | 13400 | 0.8936 | 0.8631 |
| 0.0852 | 7.66 | 13500 | 0.9048 | 0.8634 |
| 0.0852 | 7.72 | 13600 | 0.8960 | 0.8632 |
| 0.0852 | 7.78 | 13700 | 0.9023 | 0.8635 |
| 0.0852 | 7.83 | 13800 | 0.8984 | 0.8638 |
| 0.0852 | 7.89 | 13900 | 0.9019 | 0.8635 |
| 0.0879 | 7.95 | 14000 | 0.9014 | 0.8634 |
| 0.0879 | 8.0 | 14100 | 0.9136 | 0.8630 |
| 0.0879 | 8.06 | 14200 | 0.9312 | 0.8639 |
| 0.0879 | 8.12 | 14300 | 0.9346 | 0.8635 |
| 0.0879 | 8.17 | 14400 | 0.9307 | 0.8635 |
| 0.0611 | 8.23 | 14500 | 0.9419 | 0.8641 |
| 0.0611 | 8.29 | 14600 | 0.9331 | 0.8631 |
| 0.0611 | 8.34 | 14700 | 0.9375 | 0.8636 |
| 0.0611 | 8.4 | 14800 | 0.9292 | 0.8626 |
| 0.0611 | 8.46 | 14900 | 0.9458 | 0.8637 |
| 0.061 | 8.51 | 15000 | 0.9336 | 0.8634 |
| 0.061 | 8.57 | 15100 | 0.9409 | 0.8630 |
| 0.061 | 8.63 | 15200 | 0.9390 | 0.8632 |
| 0.061 | 8.68 | 15300 | 0.9375 | 0.8628 |
| 0.061 | 8.74 | 15400 | 0.9365 | 0.8630 |
| 0.0646 | 8.8 | 15500 | 0.9370 | 0.8628 |
| 0.0646 | 8.85 | 15600 | 0.9355 | 0.8629 |
| 0.0646 | 8.91 | 15700 | 0.9375 | 0.8632 |
| 0.0646 | 8.97 | 15800 | 0.9390 | 0.8630 |
| 0.0646 | 9.02 | 15900 | 0.9717 | 0.8630 |
| 0.0593 | 9.08 | 16000 | 0.9673 | 0.8626 |
| 0.0593 | 9.14 | 16100 | 0.9644 | 0.8630 |
| 0.0593 | 9.19 | 16200 | 0.9624 | 0.8631 |
| 0.0593 | 9.25 | 16300 | 0.9648 | 0.8633 |
| 0.0593 | 9.31 | 16400 | 0.9673 | 0.8632 |
| 0.0415 | 9.36 | 16500 | 0.9658 | 0.8633 |
| 0.0415 | 9.42 | 16600 | 0.9688 | 0.8628 |
| 0.0415 | 9.48 | 16700 | 0.9653 | 0.8632 |
| 0.0415 | 9.53 | 16800 | 0.9658 | 0.8628 |
| 0.0415 | 9.59 | 16900 | 0.9668 | 0.8629 |
| 0.0471 | 9.65 | 17000 | 0.9604 | 0.8625 |
| 0.0471 | 9.7 | 17100 | 0.9658 | 0.8621 |
| 0.0471 | 9.76 | 17200 | 0.9731 | 0.8630 |
| 0.0471 | 9.82 | 17300 | 0.9692 | 0.8626 |
| 0.0471 | 9.88 | 17400 | 0.9673 | 0.8623 |
| 0.0528 | 9.93 | 17500 | 0.9614 | 0.8620 |
| 0.0528 | 9.99 | 17600 | 0.9697 | 0.8621 |
910cba5aa10f8ada63d0b06cb6d39e39
creativeml-openrail-m
['text-to-image', 'stable-diffusion', 'dreambooth', 'anime']
false
A fine-tuned stable diffusion model for generating Padorus.

**Token:** `PadoruMeme` (use this in your prompt to utilise the style)<br>
**Class Phrase:** `1girl` (also use this in the prompt)

[Model Download](https://huggingface.co/joujiboi/Padoru-Diffusion/resolve/main/2022-12-12T19-38-27_Padoru_1_training_images_2500_max_training_steps_PadoruMeme_token_1girl_class_word.ckpt)

Examples:
![Example 1](https://i.imgur.com/DT0GKXz.png)
![Example 2](https://i.imgur.com/gtG728f.png)
![Example 3](https://i.imgur.com/X6td3X1.png)
![Example 4](https://i.imgur.com/ZLGRDYf.png)
da8fc8c48289aed68a92a2721c8e4097
apache-2.0
['generated_from_trainer']
false
tiny-mlm-glue-qqp-target-glue-cola

This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-qqp](https://huggingface.co/muhtasham/tiny-mlm-glue-qqp) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.7213
- Matthews Correlation: 0.0938
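Matthews Correlation above is the MCC metric used for CoLA. From binary confusion counts it can be sketched as below; the counts are made up and `matthews_corr` is a hypothetical helper, while the training script presumably uses a library implementation.

```python
import math

# MCC from binary confusion counts: (TP*TN - FP*FN) / sqrt of the product
# of the four marginals; defined as 0 when any marginal is empty.
def matthews_corr(tp, tn, fp, fn):
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

mcc = matthews_corr(tp=40, tn=45, fp=10, fn=5)
```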
4d2eaf01b5e5fefc1b40d44fe67653a9
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6096 | 1.87 | 500 | 0.6213 | 0.0 |
| 0.6002 | 3.73 | 1000 | 0.6164 | 0.0 |
| 0.5831 | 5.6 | 1500 | 0.6190 | 0.0583 |
| 0.5559 | 7.46 | 2000 | 0.6402 | 0.0849 |
| 0.528 | 9.33 | 2500 | 0.6572 | 0.1149 |
| 0.5109 | 11.19 | 3000 | 0.6663 | 0.1134 |
| 0.4867 | 13.06 | 3500 | 0.6832 | 0.1024 |
| 0.4677 | 14.93 | 4000 | 0.7213 | 0.0938 |
77e68fd36b6161b1623c9f347c068f6d
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Large-v2 Tamil

This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the mozilla-foundation/common_voice_11_0 ta dataset. It achieves the following results on the evaluation set:
- Loss: 0.1727
- Wer: 8.4538
ec6e2f93d0390a34bdf10e320a601aa4
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
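With lr_scheduler_warmup_steps: 50 and training_steps: 1000, the linear schedule ramps up for 50 steps and then decays to zero. A pure-function sketch of that behavior (`lr_at` is illustrative, not the trainer's code):

```python
# Linear schedule with warmup, as a function of the step number.
def lr_at(step, base_lr=1e-05, warmup_steps=50, total_steps=1000):
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

peak = lr_at(50)  # warmup just finished: full learning rate
```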
a02be9e137e01cb6df39f620609effda
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0723 | 1.27 | 1000 | 0.1727 | 8.4538 |
5342afdad9b7d43518bca344c9706842
cc-by-sa-4.0
['zero-shot-classification', 'text-classification', 'nli', 'pytorch']
false
roberta-base-japanese-jsnli

This model is a fine-tuned version of [nlp-waseda/roberta-base-japanese](https://huggingface.co/nlp-waseda/roberta-base-japanese) on the [JSNLI](https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88) dataset. It achieves the following results on the evaluation set:
- Loss: 0.2039
- Accuracy: 0.9328
2dd93328d52db43455d13c1c6973b521
cc-by-sa-4.0
['zero-shot-classification', 'text-classification', 'nli', 'pytorch']
false
Simple zero-shot classification pipeline

```python
from transformers import pipeline
from pyknp import Juman

juman = Juman()

classifier = pipeline("zero-shot-classification", model="Formzu/roberta-base-japanese-jsnli")

sequence_to_classify = " ".join([tok.midasi for tok in juman.analysis("いつか世界を見る。").mrph_list()])
candidate_labels = ['旅行', '料理', '踊り']
out = classifier(sequence_to_classify, candidate_labels, hypothesis_template="この 例 は {} です 。")
print(out)
```
223ec72a2bdc621a8753ebb7606becc0
cc-by-sa-4.0
['zero-shot-classification', 'text-classification', 'nli', 'pytorch']
false
NLI use-case

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
from pyknp import Juman

juman = Juman()

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model_name = "Formzu/roberta-base-japanese-jsnli"
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_name)

premise = " ".join([tok.midasi for tok in juman.analysis("いつか世界を見る。").mrph_list()])
label = '旅行'
hypothesis = f'この 例 は {label} です 。'

input = tokenizer.encode(premise, hypothesis, return_tensors='pt').to(device)

with torch.no_grad():
    logits = model(input)["logits"][0]
    probs = logits.softmax(dim=-1)
    print(probs.cpu().numpy(), logits.cpu().numpy())
```
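For the zero-shot pipeline, when `multi_label` is left at its default the per-label entailment logits are turned into a distribution with a softmax across the candidate labels. A sketch of that final step; the logit values are made up and `zero_shot_scores` is a hypothetical helper.

```python
import math

# Final step of single-label zero-shot classification: softmax over the
# entailment logit collected for each candidate label.
def zero_shot_scores(entailment_logits):
    m = max(entailment_logits)
    exps = [math.exp(x - m) for x in entailment_logits]
    total = sum(exps)
    return [e / total for e in exps]

scores = zero_shot_scores([2.1, -0.3, 0.4])  # e.g. 旅行, 料理, 踊り
```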
9699296cc8b970fed8d09a25968f4db5
cc-by-sa-4.0
['zero-shot-classification', 'text-classification', 'nli', 'pytorch']
false
Training hyperparameters The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
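The linear `lr_scheduler_type` above decays the learning rate from `learning_rate` down to zero over training. A minimal pure-Python sketch of that schedule (mirroring, under the assumption of zero warmup steps, what `transformers.get_linear_schedule_with_warmup` computes):

```python
def linear_lr(step, total_steps, base_lr=2e-05, warmup_steps=0):
    """Linear schedule: ramp up over warmup_steps, then decay to 0 at total_steps."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# With 3 epochs of 16657 steps each (from the results table below), total_steps = 49971.
total = 3 * 16657
print(linear_lr(0, total))           # full base_lr at the start (no warmup)
print(linear_lr(total // 2, total))  # roughly half of base_lr midway
print(linear_lr(total, total))       # decayed to 0.0 at the end
```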
b59fbbce42e464557e0bd859b436eb3f
cc-by-sa-4.0
['zero-shot-classification', 'text-classification', 'nli', 'pytorch']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4067 | 1.0 | 16657 | 0.2224 | 0.9201 |
| 0.3397 | 2.0 | 33314 | 0.2152 | 0.9208 |
| 0.2775 | 3.0 | 49971 | 0.2039 | 0.9328 |
b8f5663ba4dd702ab1c01eba3ea8270d
apache-2.0
['generated_from_keras_callback']
false
bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.0225
- Validation Loss: 0.0519
- Epoch: 2
afe9de77b9b2c11fe4b023e3f0aa53c1
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0226 | 0.0519 | 0 |
| 0.0229 | 0.0519 | 1 |
| 0.0225 | 0.0519 | 2 |
2d75b20e0fc60077ed8d9731739ce061
mit
['PyTorch', 'Transformers', 'text generation']
false
RuGPT2_Gen_Comments A Russian-language model pre-trained on top of the "sberbank-ai/rugpt3small_based_on_gpt2" language model. This model card was created to supplement the available information and to give concrete examples of its use.
3413c2b7fc954f5c620cbf73323fa887
mit
['PyTorch', 'Transformers', 'text generation']
false
Model description RuGPT2_Gen_Comments is a model intended to demonstrate news generation, pre-trained on the Russian-language Lenta2 corpus from the CORUS project. The model inputs are sequences of continuous text of a fixed length (block_size = 1048).
9586a712f173793503f0f034b6ad79bd
mit
['PyTorch', 'Transformers', 'text generation']
false
Usage example

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Dmitriy007/rugpt2_gen_comments")
model = AutoModelForCausalLM.from_pretrained("Dmitriy007/rugpt2_gen_comments")

input_text = 'Ученик старшего класса лицея № 21 Иван Сидоров из города Адлер полетел в космос на планету Марс.'
inputs = tokenizer(input_text, return_tensors="pt")

model.to('cuda')
inputs.to('cuda')
input_ids = inputs["input_ids"]

output = model.generate(
    input_ids,
    attention_mask=inputs["attention_mask"],
    pad_token_id=model.config.bos_token_id,
    max_length=300,
    num_beams=5,
    num_return_sequences=1,
    top_k=50,
    top_p=0.90,
    no_repeat_ngram_size=2,
    temperature=0.7,
    early_stopping=True
)

generated_text = list(map(tokenizer.decode, output))
print(generated_text[0])
```
1aa90b8c238fbf997f98aef1b64c48b7
cc-by-4.0
['generated_from_trainer']
false
bert-large-uncased-whole-word-masking-squad2-finetuned-islamic-squad This model is a fine-tuned version of [deepset/bert-large-uncased-whole-word-masking-squad2](https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.4152
64cfec58e29599daa4d7586fd3b9f4d6
cc-by-4.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
23974d0ef69d8f147af09c852bdf5fcb
cc-by-4.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.3 | 100 | 0.3653 |
| No log | 2.6 | 200 | 0.4152 |
2126a8876e82dba73df1f05ea3a91007
cc-by-sa-4.0
['korean', 'klue']
false
Model Details

**Model Description:** KLUE BERT base is a BERT model pre-trained on Korean. The developers of KLUE BERT base developed the model in the context of the development of the [Korean Language Understanding Evaluation (KLUE) Benchmark](https://arxiv.org/pdf/2105.09680.pdf).
- **Developed by:** See [GitHub Repo](https://github.com/KLUE-benchmark/KLUE) for model developers
- **Model Type:** Transformer-based language model
- **Language(s):** Korean
- **License:** cc-by-sa-4.0
- **Parent Model:** See the [BERT base uncased model](https://huggingface.co/bert-base-uncased) for more information about the BERT base model.
- **Resources for more information:**
  - [Research Paper](https://arxiv.org/abs/2105.09680)
  - [GitHub Repo](https://github.com/KLUE-benchmark/KLUE)
0eb063f9644b9130e76c132cd17d544d
cc-by-sa-4.0
['korean', 'klue']
false
How to Get Started With the Model

```python
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("klue/bert-base")
tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")
```
91c35d9d442e2acc2bae779d609def2e
cc-by-sa-4.0
['korean', 'klue']
false
Direct Use The model can be used for tasks including topic classification, semantic textual similarity, natural language inference, named entity recognition, and other tasks outlined in the [KLUE Benchmark](https://github.com/KLUE-benchmark/KLUE).
08e98d19d27441d00a187e1349af7bfb
cc-by-sa-4.0
['korean', 'klue']
false
Risks, Limitations and Biases Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). The model developers discuss several ethical considerations related to the model in the [paper](https://arxiv.org/pdf/2105.09680.pdf), including:
- Bias issues with the publicly available data used in the pretraining corpora (and considerations related to filtering)
- PII in the data used in the pretraining corpora (and efforts to pseudonymize the data)

For ethical considerations related to the KLUE Benchmark, also see the [paper](https://arxiv.org/pdf/2105.09680.pdf).
f9191f80cbc53e8df41a57c649aad6c8
cc-by-sa-4.0
['korean', 'klue']
false
Training Data The authors use the following pretraining corpora for the model, described in the [associated paper](https://arxiv.org/pdf/2105.09680.pdf):

> We gather the following five publicly available Korean corpora from diverse sources to cover a broad set of topics and many different styles. We combine these corpora to build the final pretraining corpus of size approximately 62GB.
>
> - **MODU:** [Modu Corpus](https://corpus.korean.go.kr) is a collection of Korean corpora distributed by [National Institute of Korean Languages](https://corpus.korean.go.kr/). It includes both formal articles (news and books) and colloquial text (dialogues).
> - **CC-100-Kor:** [CC-100](https://data.statmt.org/cc-100/) is the large-scale multilingual web crawled corpora by using CC-Net ([Wenzek et al., 2020](https://www.aclweb.org/anthology/2020.lrec-1.494)). This is used for training XLM-R ([Conneau et al., 2020](https://aclanthology.org/2020.acl-main.747/)). We use the Korean portion from this corpora.
> - **NAMUWIKI:** NAMUWIKI is a Korean web-based encyclopedia, similar to Wikipedia, but known to be less formal. Specifically, we download [the dump](http://dump.thewiki.kr) created on March 2nd, 2020.
> - **NEWSCRAWL:** NEWSCRAWL consists of 12,800,000 news articles published from 2011 to 2020, collected from a news aggregation platform.
> - **PETITION:** Petition is a collection of public petitions posted to the Blue House asking for administrative actions on social issues. We use the articles in the [Blue House National Petition](https://www1.president.go.kr/petitions) published from [August 2017 to March 2019](https://ko-nlp.github.io/Korpora/en-docs/corpuslist/korean_petitions.html).

The authors also describe ethical considerations related to the pretraining corpora in the [associated paper](https://arxiv.org/pdf/2105.09680.pdf).
5ae34826a453193b4a18da729bf49009
cc-by-sa-4.0
['korean', 'klue']
false
Preprocessing The authors describe their preprocessing procedure in the [associated paper](https://arxiv.org/pdf/2105.09680.pdf):

> We filter noisy text and non-Korean text using the same methods from Section 2.3 (of the paper). Each document in the corpus is split into sentences using C++ implementation (v1.3.1.) of rule-based [Korean Sentence Splitter (KSS)](https://github.com/likejazz/korean-sentence-splitter). For CC-100-Kor and NEWSCRAWL, we keep sentences of length greater than equal to 200 characters, as a heuristics to keep well-formed sentences. We then remove sentences included in our benchmark task datasets, using BM25 as a sentence similarity metric ([reference](https://www.microsoft.com/en-us/research/publication/okapi-at-trec-3/)).
57a0c55d41567746fbb2fd5c69bd1001
cc-by-sa-4.0
['korean', 'klue']
false
Tokenization The authors describe their tokenization procedure in the [associated paper](https://arxiv.org/pdf/2105.09680.pdf):

> We design and use a new tokenization method, morpheme-based subword tokenization. When building a vocabulary, we pre-tokenize a raw text into morphemes using a morphological analyzer, and then we apply byte pair encoding (BPE) ([Senrich et al., 2016](https://aclanthology.org/P16-1162/)) to get the final vocabulary. For morpheme segmentation, we use [Mecab-ko](https://bitbucket.org/eunjeon/mecab-ko), MeCab ([Kudo, 2006](https://taku910.github.io/mecab/)) adapted for Korean, and for BPE segmentation, we use the wordpiece tokenizer from [Huggingface Tokenizers library](https://github.com/huggingface/tokenizers). We specify the vocabulary size to 32k. After building the vocabulary, we only use the BPE model during inference, which allows us to tokenize a word sequence by reflecting morphemes without a morphological analyzer. This improves both usability and speed.

The training configurations are further described in the [paper](https://arxiv.org/pdf/2105.09680.pdf).
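As a rough illustration of the BPE vocabulary-building step quoted above (a toy sketch, not the actual Huggingface tokenizers implementation), each merge iteration counts adjacent symbol pairs across the corpus and merges the most frequent pair into one new symbol:

```python
from collections import Counter

def most_frequent_pair(words):
    """Count adjacent symbol pairs across a corpus of symbol-tuples -> frequency."""
    pairs = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs.most_common(1)[0][0]

def merge_pair(words, pair):
    """Merge every occurrence of `pair` into a single new symbol."""
    merged = {}
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

# Toy corpus: "words" pre-split into characters, with frequencies.
corpus = {("l", "o", "w"): 5, ("l", "o", "t"): 2, ("n", "e", "w"): 3}
pair = most_frequent_pair(corpus)   # ("l", "o") occurs 7 times
corpus = merge_pair(corpus, pair)
print(pair, corpus)
```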
d5366a01f6ad69cad4a638fe66c2db41
cc-by-sa-4.0
['korean', 'klue']
false
Testing Data, Factors and Metrics The model was evaluated on the [KLUE Benchmark](https://github.com/KLUE-benchmark/KLUE). The tasks and metrics from the KLUE Benchmark that were used to evaluate this model are described briefly below. For more information about the KLUE Benchmark, see the [data card](https://huggingface.co/datasets/klue), [Github Repository](https://github.com/KLUE-benchmark/KLUE), and [associated paper](https://arxiv.org/pdf/2105.09680.pdf).
- **Task:** Topic Classification (TC) - Yonhap News Agency Topic Classification (YNAT). **Metrics:** Macro F1 score, defined as the mean of topic-wise F1 scores, giving the same importance to each topic.
- **Task:** Semantic Textual Similarity (STS). **Metrics:** Pearson's correlation coefficient (Pearson's r) and F1 score
- **Task:** Natural Language Inference (NLI). **Metrics:** Accuracy
- **Task:** Named Entity Recognition (NER). **Metrics:** Entity-level macro F1 (Entity F1) and character-level macro F1 (Char F1) scores
- **Task:** Relation Extraction (RE). **Metrics:** Micro F1 score on relation-existing cases and area under the precision-recall curve (AUPRC) on all classes
- **Task:** Dependency Parsing (DP). **Metrics:** Unlabeled attachment score (UAS) and labeled attachment score (LAS)
- **Task:** Machine Reading Comprehension (MRC). **Metrics:** Exact match (EM) and character-level ROUGE-W (ROUGE), which can be viewed as a longest common consecutive subsequence (LCCS)-based F1 score.
- **Task:** Dialogue State Tracking (DST). **Metrics:** Joint goal accuracy (JGA) and slot micro F1 score (Slot F1)
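The macro F1 used for YNAT is the unweighted mean of per-topic F1 scores, so a small topic counts as much as a large one. A minimal sketch with hypothetical per-class counts:

```python
def f1(tp, fp, fn):
    """Per-class F1 from true positives, false positives, false negatives."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def macro_f1(per_class_counts):
    """Macro F1: unweighted mean of per-class F1, giving each topic equal weight."""
    scores = [f1(*counts) for counts in per_class_counts]
    return sum(scores) / len(scores)

# Two hypothetical topics with very different support still contribute equally.
print(macro_f1([(90, 10, 10), (5, 5, 5)]))  # mean of 0.9 and 0.5
```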
ed424551a9376282da7f6bfffacb35eb
cc-by-sa-4.0
['korean', 'klue']
false
Results

| Task | TC | STS | | NLI | NER | | RE | | DP | | MRC | | DST | |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Metric | F1 | Pearson's r | F1 | ACC | Entity F1 | Char F1 | F1 | AUPRC | UAS | LAS | EM | ROUGE | JGA | Slot F1 |
| | 85.73 | 90.85 | 82.84 | 81.63 | 83.97 | 91.39 | 66.44 | 66.17 | 89.96 | 88.05 | 62.32 | 68.51 | 46.64 | 91.61 |
033f40d593ed76bdb81ec69bc746618b
cc-by-sa-4.0
['korean', 'klue']
false
compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type based on the [associated paper](https://arxiv.org/pdf/2105.09680.pdf).
- **Hardware Type:** TPU v3-8
- **Hours used:** Unknown
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
ed4b41396ebad481abcfbd141491d813
cc-by-sa-4.0
['korean', 'klue']
false
Citation Information

```bibtex
@misc{park2021klue,
      title={KLUE: Korean Language Understanding Evaluation},
      author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jungwoo Ha and Kyunghyun Cho},
      year={2021},
      eprint={2105.09680},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
1988d70d99a2407dfd39f6f9eda00095
cc-by-4.0
[]
false
Readability benchmark (ES): mbert-en-es-sentences-3class This project is part of a series of models from the paper "A Benchmark for Neural Readability Assessment of Texts in Spanish". You can find more details about the project in our [GitHub](https://github.com/lmvasque/readability-es-benchmark).
347bae272f590c397008ce65e9666176
cc-by-4.0
[]
false
Models Our models were fine-tuned in multiple settings, including readability assessment in 2-class (simple/complex) and 3-class (basic/intermediate/advanced) for sentences and paragraph datasets. You can find more details in our [paper](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link). These are the available models you can use (current model page in bold): | Model | Granularity |
6aa5cdba399d534994680f6264885da6
cc-by-4.0
[]
false
classes |
|-----------------------------------------------------------------------------------------------------------|----------------|:---------:|
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-paragraphs-2class) | paragraphs | 2 |
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-paragraphs-3class) | paragraphs | 3 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-2class) | paragraphs | 2 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-3class) | paragraphs | 3 |
| [mBERT (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-en-es-paragraphs-3class) | paragraphs | 3 |
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-sentences-2class) | sentences | 2 |
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-sentences-3class) | sentences | 3 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-sentences-2class) | sentences | 2 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-sentences-3class) | sentences | 3 |
| **[mBERT (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-en-es-sentences-3class)** | **sentences** | **3** |

For the zero-shot setting, we used the original models [BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) and [mBERT](https://huggingface.co/bert-base-multilingual-uncased) with no further training.
ec7288b09c495908c070d1858db5609c
apache-2.0
['translation']
false
opus-mt-fr-bcl
* source languages: fr
* target languages: bcl
* OPUS readme: [fr-bcl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-bcl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-bcl/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-bcl/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-bcl/opus-2020-01-09.eval.txt)
e237fe1e3588c4ae714f63ad10027d21
mit
['generated_from_trainer']
false
esm2_t6_8M_UR50D-pfam-test-wed This model is a fine-tuned version of [facebook/esm2_t6_8M_UR50D](https://huggingface.co/facebook/esm2_t6_8M_UR50D) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.8768
- Accuracy: 0.8360
4ecdd1ed9b1e1ce7b68300f1333ed0e1
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
e47efb5da8289c6d567be741be580e6c
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.0443 | 0.44 | 15000 | 2.7588 | 0.4561 |
| 1.7448 | 0.88 | 30000 | 1.5400 | 0.6833 |
| 1.2082 | 1.33 | 45000 | 1.0888 | 0.7837 |
| 1.0505 | 1.77 | 60000 | 0.8768 | 0.8360 |
78fb5c8fdf45bb70425781f002a6509b
apache-2.0
['generated_from_trainer']
false
airlinesentiment This model is a fine-tuned version of [PDatt/outcome](https://huggingface.co/PDatt/outcome) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.2552
- Accuracy: 0.9587
- F1: 0.9586
- Precision: 0.9585
- Recall: 0.9587
08b1ef37414d99752146000a5502776e
apache-2.0
['automatic-speech-recognition', 'en']
false
exp_w2v2t_en_unispeech_s809 Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition in English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
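If your audio is not already at 16 kHz, it has to be resampled before inference. A naive pure-Python sketch of linear-interpolation resampling (real pipelines typically use `torchaudio.transforms.Resample` or `librosa` instead, which apply proper anti-aliasing filters):

```python
def resample_linear(samples, src_rate, dst_rate=16_000):
    """Toy linear-interpolation resampler from src_rate to dst_rate."""
    if src_rate == dst_rate:
        return list(samples)
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate          # fractional index in the source
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# A one-second 48 kHz clip becomes a third as many samples at 16 kHz.
clip_48k = [0.0] * 48_000
clip_16k = resample_linear(clip_48k, 48_000)
print(len(clip_16k))  # 16000
```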
2bd59f486aea7c4fbdbaae7c5a38b92b
apache-2.0
['translation']
false
opus-mt-en-swc
* source languages: en
* target languages: swc
* OPUS readme: [en-swc](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-swc/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-swc/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-swc/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-swc/opus-2020-01-08.eval.txt)
60576340c84bca09894fcc27ae824a66
apache-2.0
['translation']
false
opus-mt-en-af
* source languages: en
* target languages: af
* OPUS readme: [en-af](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-af/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-af/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-af/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-af/opus-2019-12-18.eval.txt)
8feda13cc50c291795783b47f025a5ff
unknown
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image']
false
Asmongold model.ckpt for Stable Diffusion v1-5 Model Card

Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. Using Dreambooth, I've trained this model on 20 images of the Twitch streamer Asmongold for text-to-image illustration generation with Stable Diffusion. Feel free to download, use, and share the model as you like. To trigger generation of an illustration based on the trained Asmongold images, make sure to use the tag "asmonbald" in your prompts. Example: "a detailed portrait photo of a man" vs. "a detailed portrait photo of asmonbald"

---
624b073eeff5b3b2b1910e1ba405d8ad
apache-2.0
['mobile', 'vison', 'image-classification']
false
Model Details EfficientFormer-L1, developed by [Snap Research](https://github.com/snap-research), is one of three EfficientFormer models. The EfficientFormer models were released as part of an effort to prove that properly designed transformers can reach extremely low latency on mobile devices while maintaining high performance. This checkpoint of EfficientFormer-L1 was trained for 300 epochs.
- Developed by: Yanyu Li, Geng Yuan, Yang Wen, Eric Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren
- Language(s): English
- License: This model is licensed under the apache-2.0 license
- Resources for more information:
  - [Research Paper](https://arxiv.org/abs/2206.01191)
  - [GitHub Repo](https://github.com/snap-research/EfficientFormer/)
52767897bb94e64b069d817601465be0
mit
[]
false
Stable Diffusion Artist Collaboration → Model 2 This is the `<model-2>` concept taught to stable diffusion via textual inversion training. Anyone is free to load this concept into the [stable conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). More information on this collaboration and selected output images can be found here → https://www.astronaut.horse/model-2 Below are the original artworks used as input images. <img style="width: 100%; max-width: 500px;" src="https://huggingface.co/sd-concepts-library/enk-resin-frames/resolve/main/concept_images/2.jpeg"> <img style="width: 100%; max-width: 500px;" src="https://huggingface.co/sd-concepts-library/enk-resin-frames/resolve/main/concept_images/4.jpeg"> <img style="width: 100%; max-width: 500px;" src="https://huggingface.co/sd-concepts-library/enk-resin-frames/resolve/main/concept_images/0.jpeg"> <img style="width: 100%; max-width: 500px;" src="https://huggingface.co/sd-concepts-library/enk-resin-frames/resolve/main/concept_images/1.jpeg"> <img style="width: 100%; max-width: 500px;" src="https://huggingface.co/sd-concepts-library/enk-resin-frames/resolve/main/concept_images/3.jpeg">
52efdafd642704c8a89bfbebe5fbe973
cc-by-sa-4.0
['japanese', 'token-classification', 'pos', 'dependency-parsing']
false
Model Description This is a DeBERTa(V2) model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from [deberta-base-japanese-aozora](https://huggingface.co/KoichiYasuoka/deberta-base-japanese-aozora). Every short-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
ba79ed0547b1c3e0e5cc6ec2e4c1dbc1
cc-by-sa-4.0
['japanese', 'token-classification', 'pos', 'dependency-parsing']
false
How to Use

```py
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-base-japanese-upos")
model = AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/deberta-base-japanese-upos")
s = "国境の長いトンネルを抜けると雪国であった。"
t = tokenizer.tokenize(s)
p = [model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s, return_tensors="pt"))["logits"], dim=2)[0].tolist()[1:-1]]
print(list(zip(t, p)))
```

or

```py
import esupar

nlp = esupar.load("KoichiYasuoka/deberta-base-japanese-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
7e5d1e550bd8bbf150e12f17ff4a040e
apache-2.0
['generated_from_trainer']
false
distilbert_add_GLUE_Experiment_rte_192 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE RTE dataset. It achieves the following results on the evaluation set:
- Loss: 0.6920
- Accuracy: 0.5271
1dad33ca34cc3efee2f9ebd9ef3e1a84
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6952 | 1.0 | 10 | 0.6929 | 0.5271 |
| 0.6935 | 2.0 | 20 | 0.6924 | 0.5271 |
| 0.6944 | 3.0 | 30 | 0.6930 | 0.5271 |
| 0.6944 | 4.0 | 40 | 0.6930 | 0.5271 |
| 0.6944 | 5.0 | 50 | 0.6944 | 0.4729 |
| 0.6931 | 6.0 | 60 | 0.6921 | 0.5271 |
| 0.6942 | 7.0 | 70 | 0.6926 | 0.5271 |
| 0.6937 | 8.0 | 80 | 0.6939 | 0.4729 |
| 0.6934 | 9.0 | 90 | 0.6921 | 0.5271 |
| 0.694 | 10.0 | 100 | 0.6920 | 0.5271 |
| 0.6937 | 11.0 | 110 | 0.6945 | 0.4729 |
| 0.6934 | 12.0 | 120 | 0.6928 | 0.5271 |
| 0.6934 | 13.0 | 130 | 0.6924 | 0.5271 |
| 0.6934 | 14.0 | 140 | 0.6935 | 0.4729 |
| 0.6937 | 15.0 | 150 | 0.6944 | 0.4729 |
7692bba3f0c3bf2121e5cb07ae5e7d97
apache-2.0
[]
false
Bert Base model HPU configuration This model only contains the `GaudiConfig` file for running the [bert-base-uncased](https://huggingface.co/bert-base-uncased) model on Habana's Gaudi processors (HPU). **This model contains no model weights, only a GaudiConfig.** It lets you specify:
- `use_habana_mixed_precision`: whether to use Habana Mixed Precision (HMP)
- `hmp_opt_level`: optimization level for HMP; see [here](https://docs.habana.ai/en/latest/PyTorch/PyTorch_Mixed_Precision/PT_Mixed_Precision.html)
0c4686a2b185d0bdbd656546f27a1c4d
apache-2.0
[]
false
Usage The model is instantiated the same way as in the Transformers library. The only difference is that there are a few new training arguments specific to HPUs. [Here](https://github.com/huggingface/optimum-habana/blob/main/examples/question-answering/run_qa.py) is a question-answering example script to fine-tune a model on SQuAD. You can run it with BERT with the following command:

```bash
python run_qa.py \
  --model_name_or_path bert-base-uncased \
  --gaudi_config_name Habana/bert-base-uncased \
  --dataset_name squad \
  --do_train \
  --do_eval \
  --per_device_train_batch_size 24 \
  --per_device_eval_batch_size 8 \
  --learning_rate 3e-5 \
  --num_train_epochs 2 \
  --max_seq_length 384 \
  --output_dir /tmp/squad/ \
  --use_habana \
  --use_lazy_mode \
  --throughput_warmup_steps 2
```

Check the [documentation](https://huggingface.co/docs/optimum/habana/index) out for more advanced usage and examples.
254639d28fd668a3d74b726a4b63e20c
mit
['generated_from_keras_callback']
false
Deep98/Materialism-clustered This model is a fine-tuned version of [nandysoham16/7-clustered_aug](https://huggingface.co/nandysoham16/7-clustered_aug) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.0705
- Train End Logits Accuracy: 0.9896
- Train Start Logits Accuracy: 0.9722
- Validation Loss: 0.2530
- Validation End Logits Accuracy: 0.5
- Validation Start Logits Accuracy: 1.0
- Epoch: 0
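The "End/Start Logits Accuracy" metrics refer to the two outputs of an extractive-QA head: one logit vector scoring the answer's start token and one scoring its end token. At inference time a span is typically chosen by maximizing the summed logits subject to start <= end; a toy sketch with made-up logits:

```python
def best_span(start_logits, end_logits, max_len=30):
    """Pick the (start, end) token pair maximizing start_logit + end_logit,
    with start <= end and a bounded answer length."""
    best, best_score = (0, 0), float("-inf")
    for s, s_logit in enumerate(start_logits):
        for e in range(s, min(s + max_len, len(end_logits))):
            score = s_logit + end_logits[e]
            if score > best_score:
                best, best_score = (s, e), score
    return best

# Toy logits over 5 tokens: the model is most confident the answer
# starts at token 1 and ends at token 3.
start = [0.1, 4.0, 0.2, 0.3, 0.1]
end   = [0.2, 0.1, 0.4, 3.5, 0.3]
print(best_span(start, end))  # (1, 3)
```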
f68d0d888cac585f5dc20d314b54ba51
mit
['generated_from_keras_callback']
false
Training results

| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.0705 | 0.9896 | 0.9722 | 0.2530 | 0.5 | 1.0 | 0 |
8b8305546ca89e44694597a258db5830
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
luoshaliya1 Dreambooth model trained by jiaheillu Sample pictures of this concept:

![0](https://huggingface.co/jiaheillu/luoshaliya1/resolve/main/sample_images/00032-255223115-luoshaliya1,.png)
![1](https://huggingface.co/jiaheillu/luoshaliya1/resolve/main/sample_images/00069-766853930-luoshaliya1,looking_at_viewer,standing.png)
![2](https://huggingface.co/jiaheillu/luoshaliya1/resolve/main/sample_images/00071-1028185361-luoshaliya1,looking_at_viewer,standing.png)
![3](https://huggingface.co/jiaheillu/luoshaliya1/resolve/main/sample_images/00045-2218990639-luoshaliya1,looking_at_viewer.png)
![4](https://huggingface.co/jiaheillu/luoshaliya1/resolve/main/sample_images/00029-1970594401-luoshaliya1,.png)
![5](https://huggingface.co/jiaheillu/luoshaliya1/resolve/main/sample_images/00082-503057686-luoshaliya1,looking_at_viewer,standing.png)
e7912d112d2d50b2b549c67c118d71d3
mit
['generated_from_trainer']
false
output This model is a fine-tuned version of [rinna/japanese-gpt2-small](https://huggingface.co/rinna/japanese-gpt2-small) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 3.1545
- Accuracy: 0.4936
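Since the reported eval loss of a causal language model is a mean token-level cross-entropy, it corresponds to a perplexity of exp(loss); a quick check:

```python
import math

# Perplexity is the exponential of the mean cross-entropy loss.
eval_loss = 3.1545
perplexity = math.exp(eval_loss)
print(round(perplexity, 2))  # 23.44
```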
666e537d65b9be21ae2f7c54dae9aade
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
c2c49a1b13996ad190169b233b1cbb65
apache-2.0
['translation']
false
spa-tgl
* source group: Spanish
* target group: Tagalog
* OPUS readme: [spa-tgl](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-tgl/README.md)
* model: transformer-align
* source language(s): spa
* target language(s): tgl_Latn
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-tgl/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-tgl/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-tgl/opus-2020-06-17.eval.txt)
4c51c473a600a44af6f63d850364bab2
apache-2.0
['translation']
false
System Info:
- hf_name: spa-tgl
- source_languages: spa
- target_languages: tgl
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-tgl/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'tl']
- src_constituents: {'spa'}
- tgt_constituents: {'tgl_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-tgl/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-tgl/opus-2020-06-17.test.txt
- src_alpha3: spa
- tgt_alpha3: tgl
- short_pair: es-tl
- chrF2_score: 0.5379999999999999
- bleu: 24.7
- brevity_penalty: 1.0
- ref_len: 4422.0
- src_name: Spanish
- tgt_name: Tagalog
- train_date: 2020-06-17
- src_alpha2: es
- tgt_alpha2: tl
- prefer_old: False
- long_pair: spa-tgl
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
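The reported `brevity_penalty` of 1.0 is BLEU's length correction: it only drops below 1 when the system output is shorter than the reference (here `ref_len` = 4422). A minimal sketch:

```python
import math

def brevity_penalty(candidate_len, reference_len):
    """BLEU's brevity penalty: 1 when the candidate is at least as long
    as the reference, exp(1 - r/c) when it is shorter."""
    if candidate_len >= reference_len:
        return 1.0
    return math.exp(1 - reference_len / candidate_len)

# A system output at least 4422 tokens long gets no penalty;
# one at half the reference length is penalized by exp(1 - 2).
print(brevity_penalty(4500, 4422))            # 1.0
print(round(brevity_penalty(2211, 4422), 3))  # 0.368
```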
c51f34b3fd7c99796d321d4c251eab00
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
wav2vec2-large-xlsr-53-Czech Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Czech using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz.
1164acad512d7af079974bae43fd98cb
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Usage The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "cs", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Czech")
model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Czech")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays.
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
08e79b85299faeb88f1d82a7e5ef8892
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Evaluation The model can be evaluated as follows on the Czech test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "cs", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Czech")
model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Czech")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
cc37707955f6518dd572227d9e94b762
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
# Preprocessing the datasets.
# We need to read the audio files as arrays.
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 27.047806 %
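The reported figure is word error rate. A minimal sketch of the metric that `load_metric("wer")` computes — word-level Levenshtein distance normalised by reference length (helper name is mine):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    # Dynamic-programming edit distance over words: substitutions,
    # insertions, and deletions each cost 1, divided by reference length.
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word out of three -> WER of 1/3.
print(round(word_error_rate("dobrý den světe", "dobrý ten světe"), 3))  # 0.333
```

A test-set WER of 27.05 % thus means roughly one word in four is substituted, inserted, or deleted relative to the reference transcripts.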
9ee7878ec61cfdfd0f252b518ff56e54
apache-2.0
['generated_from_trainer']
false
mobilebert_sa_GLUE_Experiment_mrpc This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.6145 - Accuracy: 0.6838 - F1: 0.8122 - Combined Score: 0.7480
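The Combined Score reported here appears to be the arithmetic mean of accuracy and F1, which matches the numbers above (this averaging is an assumption, consistent with the standard GLUE fine-tuning scripts):

```python
def combined_score(accuracy: float, f1: float) -> float:
    # Simple average of the two MRPC metrics.
    return (accuracy + f1) / 2

print(round(combined_score(0.6838, 0.8122), 4))  # 0.748
```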
378624d1793bc82197be187704ae98d8
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:| | 0.6377 | 1.0 | 29 | 0.6240 | 0.6838 | 0.8122 | 0.7480 | | 0.6309 | 2.0 | 58 | 0.6236 | 0.6838 | 0.8122 | 0.7480 | | 0.6306 | 3.0 | 87 | 0.6233 | 0.6838 | 0.8122 | 0.7480 | | 0.6291 | 4.0 | 116 | 0.6226 | 0.6838 | 0.8122 | 0.7480 | | 0.6222 | 5.0 | 145 | 0.6145 | 0.6838 | 0.8122 | 0.7480 | | 0.5736 | 6.0 | 174 | 0.6208 | 0.7010 | 0.7939 | 0.7474 | | 0.488 | 7.0 | 203 | 0.6414 | 0.6936 | 0.7795 | 0.7366 | | 0.3939 | 8.0 | 232 | 0.7659 | 0.7279 | 0.8122 | 0.7701 | | 0.3038 | 9.0 | 261 | 0.8875 | 0.7083 | 0.8027 | 0.7555 | | 0.2636 | 10.0 | 290 | 0.9829 | 0.7034 | 0.8033 | 0.7533 |
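The table shows validation loss bottoming out at epoch 5 (0.6145, the loss reported for this model) before climbing again as the model overfits. A quick sketch of selecting that checkpoint from the logged losses:

```python
# Validation loss per epoch, copied from the table above.
val_loss = {1: 0.6240, 2: 0.6236, 3: 0.6233, 4: 0.6226, 5: 0.6145,
            6: 0.6208, 7: 0.6414, 8: 0.7659, 9: 0.8875, 10: 0.9829}

# Pick the epoch with the lowest validation loss.
best_epoch = min(val_loss, key=val_loss.get)
print(best_epoch, val_loss[best_epoch])  # 5 0.6145
```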
87eac51e69e2cba593dabae4e6c4d420