Columns:
- license: string (2–30 chars)
- tags: string (2–513 chars)
- is_nc: bool (1 class)
- readme_section: string (201–597k chars)
- hash: string (32 chars)
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6868 | 1.0 | 1053 | 0.7027 | 0.5092 |
| 0.6868 | 2.0 | 2106 | 0.7027 | 0.5092 |
| 0.6867 | 3.0 | 3159 | 0.6970 | 0.5092 |
| 0.687 | 4.0 | 4212 | 0.6992 | 0.5092 |
| 0.6866 | 5.0 | 5265 | 0.6983 | 0.5092 |
59a877611255358e171cce2e06991c67
cc
['text generation']
false
How to use

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("nandinib1999/quote-generator")
model = AutoModelWithLMHead.from_pretrained("nandinib1999/quote-generator")
```
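Once loaded, the model can generate a quote from a seed phrase. A minimal sketch (the sampling settings are illustrative, and `AutoModelForCausalLM` is used here as the non-deprecated counterpart of `AutoModelWithLMHead`):

```python
# Hedged sketch: generate a quote from a short seed phrase.
# Sampling parameters below are illustrative, not from the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nandinib1999/quote-generator")
model = AutoModelForCausalLM.from_pretrained("nandinib1999/quote-generator")

inputs = tokenizer("Life is", return_tensors="pt")
outputs = model.generate(**inputs, max_length=30, do_sample=True, top_k=50)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text)
```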
6a3098c876a2f3b5f153448d89ccb649
cc
['text generation']
false
Training data

This is the distribution of the total dataset into training, validation and test sets for the fine-tuning task.

<table style="width:30%">
  <tr><th>train</th><td>349796</td></tr>
  <tr><th>validation</th><td>99942</td></tr>
  <tr><th>test</th><td>49971</td></tr>
</table>
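For reference, the counts above correspond to roughly a 70/20/10 split of the full dataset; a quick check:

```python
# Sanity-check the train/validation/test proportions listed above.
splits = {"train": 349796, "validation": 99942, "test": 49971}
total = sum(splits.values())

for name, n in splits.items():
    print(f"{name}: {n} ({n / total:.1%})")
# train ≈ 70%, validation ≈ 20%, test ≈ 10% of the 499,709 examples
```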
63a34a1b69600bf9c30ccb754c5f9bc3
mit
[]
false
Contextualized Commonsense Inference in Dialogues v2

The pretrained checkpoint for the paper [Multiview Contextual Commonsense Inference: A New Dataset and Task](https://arxiv.org/abs/2210.02890). The model is trained from the [T5-large](https://huggingface.co/t5-large) checkpoint.

![model image](https://drive.google.com/uc?export=download&id=14RIbxgXhREdu5xZiKn5D-UUzaQLDNLqf)
a8e1b73153c3b02557963088199e4dd5
mit
[]
false
Datasets

The dataset used to pretrain the model can be obtained from the [CICERO repo](https://github.com/declare-lab/CICERO) by following the instructions there. CICEROv2 consists of annotated commonsense inferences, including causes, emotional reactions, and other relation types. The dialogues are drawn from multiple datasets.

| Dataset |
9a4a20b79605fdebee57d7ad90fdc4ef
mit
[]
false
Examples

Some examples of generated results from the pretrained model (the zero-shot setting).

**Subsequent Event**

```
What is or could be the subsequent event of the target? <sep> target: Oh . I just can't forget it .<sep> context: A: David , why didn't you clean the room ?, <utt> B: I'm not in the mood ., <utt> A: Why are you feeling depressed ?, <utt> B: I was told my girlfriend was speaking ill of me. That \u2019 s a real let-down ., <utt> A: I don t think she will do such a thing ., <utt> B: But she did and made me disappointed ., <utt> A: Oh , cheer up . A girlfriend is not everything ., <utt> B: But she means a lot to me ., <utt> A: Then forgive her mistake ., <utt> B: Oh . I just can't forget it
```

Predicted subsequent event:

```
David's girlfriend apologized to david for her mistake.
```

**Cause**

```
What is or could be the cause of the target? <sep> target: But she did and made me disappointed . <sep> context: A: David , why didn't you clean the room ?, <utt> B: I'm not in the mood ., <utt> A: Why are you feeling depressed ?, <utt> B: I was told my girlfriend was speaking ill of me. That \u2019 s a real let-down ., <utt> A: I don t think she will do such a thing ., <utt> B: But she did and made me disappointed ., <utt> A: Oh , cheer up . A girlfriend is not everything ., <utt> B: But she means a lot to me ., <utt> A: Then forgive her mistake ., <utt> B: Oh . I just can't forget it
```

Predicted cause:

```
David's girlfriend was not nice to him.
```

**Emotional Reaction**

```
What is the possible emotional reaction of the listener in response to target? <sep> target: Oh . I just can't forget it .<sep> context: A: David , why didn't you clean the room ?, <utt> B: I'm not in the mood ., <utt> A: Why are you feeling depressed ?, <utt> B: I was told my girlfriend was speaking ill of me. That \u2019 s a real let-down ., <utt> A: I don t think she will do such a thing ., <utt> B: But she did and made me disappointed ., <utt> A: Oh , cheer up . A girlfriend is not everything ., <utt> B: But she means a lot to me ., <utt> A: Then forgive her mistake ., <utt> B: Oh . I just can't forget it
```

Predicted emotional reaction:

```
The listener is hopeful that david will forgive his girlfriend for her mistake.
```
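The prompts above all share one layout: a question, a target utterance, and a `<utt>`-delimited context, joined by `<sep>`. Assembling such a prompt is a small string operation; a hedged sketch (the helper name is illustrative, not from the CICERO codebase):

```python
# Build a CICERO-style prompt: "<question> <sep> target: <t> <sep> context: <turns>",
# where dialogue turns are joined with the "<utt>" delimiter as in the examples above.
def build_prompt(question, target, turns):
    context = ", <utt> ".join(turns)
    return f"{question} <sep> target: {target} <sep> context: {context}"

prompt = build_prompt(
    "What is or could be the cause of the target?",
    "But she did and made me disappointed .",
    ["A: David , why didn't you clean the room ?", "B: I'm not in the mood ."],
)
print(prompt)
```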
7ec14fd4d5db2207dff8ba73a9cca515
apache-2.0
['generated_from_keras_callback']
false
bearbearchu/mt5-small-finetuned-wikipedia-summarization-jp This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2757 - Validation Loss: 0.2210 - Epoch: 7
6f7d611dfa70ebfd8ad32c8315530d80
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 7656, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
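With `power=1.0` and `cycle=False`, the `PolynomialDecay` schedule above is a plain linear decay from 5.6e-5 to 0 over 7656 steps. A minimal reimplementation of the decay formula (a sketch of the schedule's math, not the Keras class itself):

```python
# Polynomial decay as configured above: (init - end) * (1 - step/decay_steps)**power + end,
# with the step clamped at decay_steps because cycle=False.
def polynomial_decay(step, initial_lr=5.6e-05, end_lr=0.0, decay_steps=7656, power=1.0):
    step = min(step, decay_steps)
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * frac**power + end_lr

print(polynomial_decay(0))     # 5.6e-05
print(polynomial_decay(3828))  # halfway: 2.8e-05
print(polynomial_decay(7656))  # 0.0
```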
91f31f7fd880e216ce38954627fc98fc
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.1713 | 0.3484 | 0 |
| 0.6239 | 0.3156 | 1 |
| 0.4820 | 0.2693 | 2 |
| 0.3973 | 0.2595 | 3 |
| 0.3377 | 0.2480 | 4 |
| 0.3093 | 0.2321 | 5 |
| 0.2843 | 0.2236 | 6 |
| 0.2757 | 0.2210 | 7 |
fe88918f8db51aa6efad511d9ce30126
apache-2.0
['sentiment analysis', 'classification', 'arabic dialect', 'tunisian dialect']
false
This is a fine-tuned BERT model for Tunisian-dialect text (dataset used: AhmedBou/Tunisian-Dialect-Corpus), ready for sentiment analysis and classification tasks.

- LABEL_1: Positive
- LABEL_2: Negative
- LABEL_0: Neutral

This work is an integral component of my Master's degree thesis and represents the culmination of extensive research and labor. If you wish to use the Tunisian-Dialect-Corpus or the TuniBert model, kindly refer to the directories provided: [huggingface.co/AhmedBou], [github.com/BoulahiaAhmed]
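The LABEL_n outputs above can be mapped back to sentiment names at inference time. A hedged sketch via the `transformers` pipeline (the hub id `AhmedBou/TuniBert` is inferred from the author's profile and the model name, and the input text is a placeholder; adjust both as needed):

```python
# Hedged sketch: classify Tunisian-dialect text and map LABEL_n -> sentiment name.
# The model id below is an assumption inferred from the card, not confirmed by it.
from transformers import pipeline

clf = pipeline("text-classification", model="AhmedBou/TuniBert")
label_names = {"LABEL_0": "Neutral", "LABEL_1": "Positive", "LABEL_2": "Negative"}

result = clf("برشا باهي")[0]  # placeholder Tunisian-dialect input
print(label_names[result["label"]], result["score"])
```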
8d4b673f0a3e715b666a442336b1e7ca
mit
[]
false
model by deref

This is the Stable Diffusion model fine-tuned on the Arthur Leywin concept, taught to Stable Diffusion with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of sks guy**

You can also train your own concepts and upload them to the library using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).

Here are the images used for training this concept:
![image 0](https://huggingface.co/sd-dreambooth-library/arthur-leywin/resolve/main/concept_images/3.jpeg)
![image 1](https://huggingface.co/sd-dreambooth-library/arthur-leywin/resolve/main/concept_images/1.jpeg)
![image 2](https://huggingface.co/sd-dreambooth-library/arthur-leywin/resolve/main/concept_images/0.jpeg)
![image 3](https://huggingface.co/sd-dreambooth-library/arthur-leywin/resolve/main/concept_images/4.jpeg)
![image 4](https://huggingface.co/sd-dreambooth-library/arthur-leywin/resolve/main/concept_images/2.jpeg)
304e00d24938c937240662e68c197474
apache-2.0
['Quality Estimation', 'monotransquest', 'DA']
false
Using Pre-trained Models

```python
import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel

model = MonoTransQuestModel(
    "xlmroberta",
    "TransQuest/monotransquest-da-et_en-wiki",
    num_labels=1,
    use_cuda=torch.cuda.is_available(),
)
predictions, raw_outputs = model.predict(
    [
        [
            "Reducerea acestor conflicte este importantă pentru conservare.",
            "Reducing these conflicts is not important for preservation.",
        ]
    ]
)
print(predictions)
```
1cf0ab43935f73fcd324c3ea94b3629d
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-sst2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.3651 - Accuracy: 0.9151
637997524733a64fe1bdee9f96f138e4
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1902 | 1.0 | 4210 | 0.3102 | 0.9117 |
| 0.1293 | 2.0 | 8420 | 0.3672 | 0.9048 |
| 0.084 | 3.0 | 12630 | 0.3651 | 0.9151 |
| 0.0682 | 4.0 | 16840 | 0.3971 | 0.9037 |
| 0.0438 | 5.0 | 21050 | 0.4720 | 0.9117 |
654ec49526eb72b643294263429dc26e
apache-2.0
['generated_from_trainer', 'translation']
false
mt-sq-sv-finetuned This model is a fine-tuned version of [Helsinki-NLP/opus-mt-sq-sv](https://huggingface.co/Helsinki-NLP/opus-mt-sq-sv) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2250 - Bleu: 47.0111
7b757792ac7e70b943fa6574528854ad
apache-2.0
['generated_from_trainer', 'translation']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 1.7042 | 1.0 | 4219 | 1.4806 | 41.9650 |
| 1.5537 | 2.0 | 8438 | 1.3955 | 43.1524 |
| 1.4352 | 3.0 | 12657 | 1.3142 | 44.4373 |
| 1.3346 | 4.0 | 16876 | 1.2793 | 45.2265 |
| 1.2847 | 5.0 | 21095 | 1.2597 | 45.8071 |
| 1.2821 | 6.0 | 25314 | 1.2454 | 46.3737 |
| 1.2342 | 7.0 | 29533 | 1.2363 | 46.6308 |
| 1.2092 | 8.0 | 33752 | 1.2301 | 46.8227 |
| 1.1766 | 9.0 | 37971 | 1.2260 | 46.9719 |
| 1.1836 | 10.0 | 42190 | 1.2250 | 47.0111 |
562935187cbfe4436be9f217a4ec45a3
apache-2.0
['automatic-speech-recognition', 'fr']
false
exp_w2v2r_fr_vp-100k_accent_france-2_belgium-8_s709 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
27c0261f5e423c06d353b3391b7ecc52
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Tiny ml - Bharat Ramanathan This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1286 - Wer: 106.9296
07bf862e0f730f468cdb115e71abc5c3
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.5755 | 4.02 | 500 | 0.4241 | 81.2652 |
| 0.4182 | 9.01 | 1000 | 0.3245 | 72.7494 |
| 0.3387 | 14.01 | 1500 | 0.2914 | 67.2749 |
| 0.2923 | 19.0 | 2000 | 0.2745 | 60.3406 |
| 0.2596 | 24.0 | 2500 | 0.2645 | 58.2725 |
| 0.2356 | 28.02 | 3000 | 0.2629 | 60.3406 |
| 0.2167 | 33.01 | 3500 | 0.2647 | 59.9757 |
| 0.2039 | 4.02 | 4000 | 0.2617 | 58.2725 |
| 0.1938 | 9.01 | 4500 | 0.2644 | 58.2725 |
| 0.1858 | 14.01 | 5000 | 0.2636 | 58.7591 |
9f7284df344b8149444823f3215d41ae
apache-2.0
['stanza', 'token-classification']
false
Stanza model for Afrikaans (af)

Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text through syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing. Find out more on [our website](https://stanfordnlp.github.io/stanza) and in our [GitHub repository](https://github.com/stanfordnlp/stanza). This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo. Last updated 2022-10-12 02:47:23.696.
b8fe21b45f2d9de00c32d155a599466f
apache-2.0
['text-generation', 'chatbot', 'dialogue', 'distilgpt2', 'gpt2', 'ai-msgbot']
false
distilgpt2-tiny-conversational

This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on a parsed version of Wizard of Wikipedia, using the persona alpha/beta framework designed for use with [ai-msgbot](https://github.com/pszemraj/ai-msgbot). It achieves the following results on the evaluation set:
- Loss: 2.2461
a953a3a200197ac15a01d4157fd0774d
apache-2.0
['text-generation', 'chatbot', 'dialogue', 'distilgpt2', 'gpt2', 'ai-msgbot']
false
Intended uses & limitations

- Usage is designed for integration with this repo: [ai-msgbot](https://github.com/pszemraj/ai-msgbot)
- The main thing to know is that the model generates whole conversations between two entities, `person alpha` and `person beta`. These entity names function as custom `<bos>` tokens, marking where one response ends and the next begins.
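Because `person alpha` / `person beta` act as pseudo-`<bos>` markers, splitting a generated string back into individual turns is a small parsing step. A hedged sketch (the splitting logic is illustrative, not taken from ai-msgbot):

```python
# Split generated text into (speaker, utterance) pairs on the
# "person alpha:" / "person beta:" markers described above.
import re

def split_turns(generated):
    parts = re.split(r"(person (?:alpha|beta))\s*:?", generated)
    turns = []
    # re.split keeps the captured markers at odd indices; pair each with
    # the text that follows it.
    for i in range(1, len(parts) - 1, 2):
        turns.append((parts[i], parts[i + 1].strip()))
    return turns

text = "person alpha: how are you? person beta: fine, thanks. person alpha: good!"
print(split_turns(text))
```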
9822eef1ab2e90b14eb7db0075544f2d
apache-2.0
['text-generation', 'chatbot', 'dialogue', 'distilgpt2', 'gpt2', 'ai-msgbot']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 30
3c37c88a38102a84bbcf69745de635fe
apache-2.0
['text-generation', 'chatbot', 'dialogue', 'distilgpt2', 'gpt2', 'ai-msgbot']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log | 1.0 | 418 | 2.7793 |
| 2.9952 | 2.0 | 836 | 2.6914 |
| 2.7684 | 3.0 | 1254 | 2.6348 |
| 2.685 | 4.0 | 1672 | 2.5938 |
| 2.6243 | 5.0 | 2090 | 2.5625 |
| 2.5816 | 6.0 | 2508 | 2.5332 |
| 2.5816 | 7.0 | 2926 | 2.5098 |
| 2.545 | 8.0 | 3344 | 2.4902 |
| 2.5083 | 9.0 | 3762 | 2.4707 |
| 2.4793 | 10.0 | 4180 | 2.4551 |
| 2.4531 | 11.0 | 4598 | 2.4395 |
| 2.4269 | 12.0 | 5016 | 2.4238 |
| 2.4269 | 13.0 | 5434 | 2.4102 |
| 2.4051 | 14.0 | 5852 | 2.3945 |
| 2.3777 | 15.0 | 6270 | 2.3848 |
| 2.3603 | 16.0 | 6688 | 2.3711 |
| 2.3394 | 17.0 | 7106 | 2.3613 |
| 2.3206 | 18.0 | 7524 | 2.3516 |
| 2.3206 | 19.0 | 7942 | 2.3398 |
| 2.3026 | 20.0 | 8360 | 2.3301 |
| 2.2823 | 21.0 | 8778 | 2.3203 |
| 2.2669 | 22.0 | 9196 | 2.3105 |
| 2.2493 | 23.0 | 9614 | 2.3027 |
| 2.2334 | 24.0 | 10032 | 2.2930 |
| 2.2334 | 25.0 | 10450 | 2.2852 |
| 2.2194 | 26.0 | 10868 | 2.2754 |
| 2.2014 | 27.0 | 11286 | 2.2695 |
| 2.1868 | 28.0 | 11704 | 2.2598 |
| 2.171 | 29.0 | 12122 | 2.2539 |
| 2.1597 | 30.0 | 12540 | 2.2461 |
9473bd87e1c4375b06551700d8dee25a
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xls-r-300m-korean-convsen2 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0094 - Cer: 0.0012
03b9e141328bca2770cf483873638726
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
- mixed_precision_training: Native AMP
9f730f35aaa753705614a3659c787a12
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8421 | 1.0 | 1762 | 0.2383 | 0.0591 |
| 0.1721 | 2.0 | 3524 | 0.0309 | 0.0060 |
| 0.065 | 3.0 | 5286 | 0.0094 | 0.0012 |
9f1b1c5ed326474ea31fe843a0d67fa0
apache-2.0
['generated_from_trainer']
false
TSE_BERT_5E This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3664 - Accuracy: 0.9267
27621251009f18d113c752ed2a8bb684
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6836 | 0.06 | 50 | 0.5614 | 0.8267 |
| 0.4679 | 0.12 | 100 | 0.3521 | 0.9 |
| 0.3325 | 0.17 | 150 | 0.2747 | 0.8933 |
| 0.2493 | 0.23 | 200 | 0.2712 | 0.9067 |
| 0.273 | 0.29 | 250 | 0.2304 | 0.9333 |
| 0.2888 | 0.35 | 300 | 0.2253 | 0.92 |
| 0.2558 | 0.4 | 350 | 0.2110 | 0.9267 |
| 0.1997 | 0.46 | 400 | 0.2206 | 0.9267 |
| 0.2748 | 0.52 | 450 | 0.2358 | 0.9267 |
| 0.2448 | 0.58 | 500 | 0.2942 | 0.8933 |
| 0.2247 | 0.63 | 550 | 0.2410 | 0.9067 |
| 0.2002 | 0.69 | 600 | 0.2222 | 0.9133 |
| 0.2668 | 0.75 | 650 | 0.2372 | 0.9133 |
| 0.2701 | 0.81 | 700 | 0.2288 | 0.9333 |
| 0.2034 | 0.87 | 750 | 0.2415 | 0.9267 |
| 0.2374 | 0.92 | 800 | 0.2278 | 0.92 |
| 0.2305 | 0.98 | 850 | 0.2270 | 0.92 |
| 0.1704 | 1.04 | 900 | 0.2591 | 0.9333 |
| 0.1826 | 1.1 | 950 | 0.2481 | 0.9267 |
| 0.1116 | 1.15 | 1000 | 0.2906 | 0.9133 |
| 0.1527 | 1.21 | 1050 | 0.2902 | 0.92 |
| 0.1692 | 1.27 | 1100 | 0.2489 | 0.9333 |
| 0.158 | 1.33 | 1150 | 0.2576 | 0.9333 |
| 0.1608 | 1.38 | 1200 | 0.3344 | 0.9267 |
| 0.1194 | 1.44 | 1250 | 0.3615 | 0.9267 |
| 0.201 | 1.5 | 1300 | 0.3374 | 0.92 |
| 0.1938 | 1.56 | 1350 | 0.2847 | 0.92 |
| 0.1479 | 1.61 | 1400 | 0.3044 | 0.9267 |
| 0.1628 | 1.67 | 1450 | 0.2980 | 0.9267 |
| 0.1783 | 1.73 | 1500 | 0.3132 | 0.9267 |
| 0.1885 | 1.79 | 1550 | 0.2676 | 0.9333 |
| 0.1651 | 1.85 | 1600 | 0.2709 | 0.9333 |
| 0.1376 | 1.9 | 1650 | 0.2777 | 0.94 |
| 0.1571 | 1.96 | 1700 | 0.2761 | 0.9333 |
| 0.1561 | 2.02 | 1750 | 0.2912 | 0.94 |
| 0.1187 | 2.08 | 1800 | 0.2893 | 0.9467 |
| 0.1205 | 2.13 | 1850 | 0.2882 | 0.9467 |
| 0.0751 | 2.19 | 1900 | 0.3032 | 0.9467 |
| 0.1412 | 2.25 | 1950 | 0.2926 | 0.9467 |
| 0.0783 | 2.31 | 2000 | 0.2962 | 0.9467 |
| 0.1094 | 2.36 | 2050 | 0.2909 | 0.9333 |
| 0.1158 | 2.42 | 2100 | 0.3087 | 0.9333 |
| 0.0606 | 2.48 | 2150 | 0.3102 | 0.9467 |
| 0.1164 | 2.54 | 2200 | 0.2812 | 0.94 |
| 0.1311 | 2.6 | 2250 | 0.3736 | 0.9267 |
| 0.1087 | 2.65 | 2300 | 0.3069 | 0.94 |
| 0.109 | 2.71 | 2350 | 0.3176 | 0.94 |
| 0.0789 | 2.77 | 2400 | 0.3130 | 0.94 |
| 0.0784 | 2.83 | 2450 | 0.3338 | 0.94 |
| 0.1388 | 2.88 | 2500 | 0.3440 | 0.9333 |
| 0.1062 | 2.94 | 2550 | 0.2883 | 0.94 |
| 0.1016 | 3.0 | 2600 | 0.2776 | 0.94 |
| 0.0642 | 3.06 | 2650 | 0.3302 | 0.9333 |
| 0.052 | 3.11 | 2700 | 0.3217 | 0.94 |
| 0.0539 | 3.17 | 2750 | 0.3899 | 0.9267 |
| 0.0593 | 3.23 | 2800 | 0.3283 | 0.9467 |
| 0.0468 | 3.29 | 2850 | 0.3382 | 0.9467 |
| 0.0546 | 3.34 | 2900 | 0.3133 | 0.9467 |
| 0.107 | 3.4 | 2950 | 0.3550 | 0.94 |
| 0.1079 | 3.46 | 3000 | 0.3484 | 0.94 |
| 0.0782 | 3.52 | 3050 | 0.3313 | 0.94 |
| 0.0635 | 3.58 | 3100 | 0.3418 | 0.94 |
| 0.0771 | 3.63 | 3150 | 0.3685 | 0.9333 |
| 0.0629 | 3.69 | 3200 | 0.3467 | 0.9333 |
| 0.0552 | 3.75 | 3250 | 0.3677 | 0.94 |
| 0.0531 | 3.81 | 3300 | 0.3436 | 0.9333 |
| 0.0819 | 3.86 | 3350 | 0.3802 | 0.9333 |
| 0.0583 | 3.92 | 3400 | 0.3441 | 0.9333 |
| 0.0434 | 3.98 | 3450 | 0.3666 | 0.9333 |
| 0.0747 | 4.04 | 3500 | 0.3554 | 0.9333 |
| 0.0309 | 4.09 | 3550 | 0.3582 | 0.9333 |
| 0.1057 | 4.15 | 3600 | 0.3615 | 0.9267 |
| 0.0391 | 4.21 | 3650 | 0.3583 | 0.9267 |
| 0.0433 | 4.27 | 3700 | 0.3514 | 0.9333 |
| 0.0597 | 4.33 | 3750 | 0.3580 | 0.9333 |
| 0.0663 | 4.38 | 3800 | 0.3390 | 0.94 |
| 0.0563 | 4.44 | 3850 | 0.3518 | 0.9267 |
| 0.0702 | 4.5 | 3900 | 0.3542 | 0.9267 |
| 0.0383 | 4.56 | 3950 | 0.3528 | 0.9267 |
| 0.0474 | 4.61 | 4000 | 0.3485 | 0.9333 |
| 0.0265 | 4.67 | 4050 | 0.3489 | 0.94 |
| 0.0165 | 4.73 | 4100 | 0.3616 | 0.9333 |
| 0.0489 | 4.79 | 4150 | 0.3579 | 0.9333 |
| 0.0478 | 4.84 | 4200 | 0.3603 | 0.9333 |
| 0.0536 | 4.9 | 4250 | 0.3666 | 0.9267 |
| 0.0551 | 4.96 | 4300 | 0.3664 | 0.9267 |
a4be8aed77c50168ce9c7dcb67d19a6d
apache-2.0
['deberta-v3-base', 'text-classification', 'nli', 'natural-language-inference', 'multitask', 'multi-task', 'extreme-multi-task', 'extreme-mtl', 'deberta-v3-base', 'tasksource']
false
Model Card for DeBERTa-v3-base-tasksource-nli

DeBERTa-v3-base fine-tuned with multi-task learning on 444 tasks of the [tasksource collection](https://github.com/sileod/tasksource/). You can further fine-tune this model for any classification or multiple-choice task. This checkpoint has strong zero-shot validation performance on many tasks (e.g. 70% on WNLI). The untuned model's CLS embedding also has strong linear-probing performance (90% on MNLI), due to the multi-task training.

This is the shared model with the MNLI classifier on top. Its encoder was trained on many datasets, including bigbench, Anthropic rlhf, anli, and many other NLI and classification tasks, with SequenceClassification heads sharing a single encoder. Each task had a specific CLS embedding, which is dropped 10% of the time to facilitate model use without it. All multiple-choice tasks used the same classification layers. For classification tasks, models shared weights if their labels matched. The number of examples per task was capped at 64k. The model was trained for 20k steps with a batch size of 384 and a peak learning rate of 2e-5.

The list of tasks is available in tasks.md. tasksource training code: https://colab.research.google.com/drive/1iB4Oxl9_B5W3ZDzXoWJN-olUbqLBxgQS?usp=sharing
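Since this checkpoint ships with the MNLI-style NLI classifier on top, it can be used for zero-shot classification through the `transformers` pipeline. A hedged sketch (the input sentence and candidate labels are illustrative):

```python
# Hedged sketch: zero-shot classification via the model's NLI head.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification", model="sileod/deberta-v3-base-tasksource-nli"
)
result = classifier(
    "The restaurant was cheap but the food was excellent.",
    candidate_labels=["price", "food quality", "service"],
)
print(result["labels"][0])  # highest-scoring label
```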
0f9cb87b7034ecde924df0b93e82f458
apache-2.0
['deberta-v3-base', 'text-classification', 'nli', 'natural-language-inference', 'multitask', 'multi-task', 'extreme-multi-task', 'extreme-mtl', 'deberta-v3-base', 'tasksource']
false
Model Recycling

An earlier (weaker) version of this model is ranked 1st among all models with the microsoft/deberta-v3-base architecture as of 10/01/2023.

Results: [Evaluation on 36 datasets](https://ibm.github.io/model-recycling/model_gain_chart?avg=1.41&mnli_lp=nan&20_newsgroup=0.63&ag_news=0.46&amazon_reviews_multi=-0.40&anli=0.94&boolq=2.55&cb=10.71&cola=0.49&copa=10.60&dbpedia=0.10&esnli=-0.25&financial_phrasebank=1.31&imdb=-0.17&isear=0.63&mnli=0.42&mrpc=-0.23&multirc=1.73&poem_sentiment=0.77&qnli=0.12&qqp=-0.05&rotten_tomatoes=0.67&rte=2.13&sst2=0.01&sst_5bins=-0.02&stsb=1.39&trec_coarse=0.24&trec_fine=0.18&tweet_ev_emoji=0.62&tweet_ev_emotion=0.43&tweet_ev_hate=1.84&tweet_ev_irony=1.43&tweet_ev_offensive=0.17&tweet_ev_sentiment=0.08&wic=-1.78&wnli=3.03&wsc=9.95&yahoo_answers=0.17&model_name=sileod%2Fdeberta-v3-base_tasksource-420&base_name=microsoft%2Fdeberta-v3-base) using sileod/deberta-v3-base_tasksource-420 as a base model yields an average score of 80.45, compared to 79.04 for microsoft/deberta-v3-base.
| 20_newsgroup | ag_news | amazon_reviews_multi | anli | boolq | cb | cola | copa | dbpedia | esnli | financial_phrasebank | imdb | isear | mnli | mrpc | multirc | poem_sentiment | qnli | qqp | rotten_tomatoes | rte | sst2 | sst_5bins | stsb | trec_coarse | trec_fine | tweet_ev_emoji | tweet_ev_emotion | tweet_ev_hate | tweet_ev_irony | tweet_ev_offensive | tweet_ev_sentiment | wic | wnli | wsc | yahoo_answers |
|---------------:|----------:|-----------------------:|--------:|--------:|--------:|--------:|-------:|----------:|--------:|-----------------------:|-------:|--------:|--------:|--------:|----------:|-----------------:|--------:|--------:|------------------:|--------:|--------:|------------:|--------:|--------------:|------------:|-----------------:|-------------------:|----------------:|-----------------:|---------------------:|---------------------:|--------:|--------:|--------:|----------------:|
| 87.042 | 90.9 | 66.46 | 59.7188 | 85.5352 | 85.7143 | 87.0566 | 69 | 79.5333 | 91.6735 | 85.8 | 94.324 | 72.4902 | 90.2055 | 88.9706 | 63.9851 | 87.5 | 93.6299 | 91.7363 | 91.0882 | 84.4765 | 95.0688 | 56.9683 | 91.6654 | 98 | 91.2 | 46.814 | 84.3772 | 58.0471 | 81.25 | 85.2326 | 71.8821 | 69.4357 | 73.2394 | 74.0385 | 72.2 |

For more information, see: [Model Recycling](https://ibm.github.io/model-recycling/)
68656aac3e59570de4d2cd0bef3d41f1
apache-2.0
['deberta-v3-base', 'text-classification', 'nli', 'natural-language-inference', 'multitask', 'multi-task', 'extreme-multi-task', 'extreme-mtl', 'deberta-v3-base', 'tasksource']
false
Citation

More details in this [article](https://arxiv.org/abs/2301.05948):

```bib
@article{sileo2023tasksource,
  title={tasksource: Structured Dataset Preprocessing Annotations for Frictionless Extreme Multi-Task Learning and Evaluation},
  author={Sileo, Damien},
  url={https://arxiv.org/abs/2301.05948},
  journal={arXiv preprint arXiv:2301.05948},
  year={2023}
}
```
ea924a304fe8d610f0f73057e820b38e
apache-2.0
['deberta-v3-base', 'text-classification', 'nli', 'natural-language-inference', 'multitask', 'multi-task', 'extreme-multi-task', 'extreme-mtl', 'deberta-v3-base', 'tasksource']
false
Loading a specific classifier

Classifiers for all tasks are available.

```python
import transformers
from torch import nn

TASK_NAME = "hh-rlhf"

class MultiTask(transformers.DebertaV2ForMultipleChoice):
    def __init__(self, *args, **kwargs):
        super().__init__(*args)
        n = len(self.config.tasks)
        cs = self.config.classifiers_size
        # one task embedding per task, and one classification head per task
        self.Z = nn.Embedding(n, 768)
        self.classifiers = nn.ModuleList([nn.Linear(*size) for size in cs])

model = MultiTask.from_pretrained(
    "sileod/deberta-v3-base-tasksource-nli", ignore_mismatched_sizes=True
)
# swap in the head for the chosen task
task_index = model.config.tasks.index(TASK_NAME)
model.classifier = model.classifiers[task_index]
```
7c8d99f262988d23c69347b066aee933
apache-2.0
['generated_from_trainer']
false
small-mlm-glue-mnli-target-glue-mnli This model is a fine-tuned version of [muhtasham/small-mlm-glue-mnli](https://huggingface.co/muhtasham/small-mlm-glue-mnli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6497 - Accuracy: 0.7259
7aed6d6b2398c84046fb019825a1b0aa
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9145 | 0.04 | 500 | 0.8234 | 0.6373 |
| 0.8123 | 0.08 | 1000 | 0.7786 | 0.6628 |
| 0.7745 | 0.12 | 1500 | 0.7489 | 0.6756 |
| 0.7496 | 0.16 | 2000 | 0.7311 | 0.6878 |
| 0.7424 | 0.2 | 2500 | 0.7205 | 0.6921 |
| 0.7325 | 0.24 | 3000 | 0.7007 | 0.7007 |
| 0.7126 | 0.29 | 3500 | 0.6780 | 0.7131 |
| 0.7007 | 0.33 | 4000 | 0.6652 | 0.7189 |
| 0.6755 | 0.37 | 4500 | 0.6737 | 0.7249 |
| 0.6803 | 0.41 | 5000 | 0.6497 | 0.7259 |
af1a9098a950239416d2c91afdac147f
apache-2.0
['generated_from_trainer']
false
aesthetic_attribute_classifier

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [PCCD dataset](https://github.com/ivclab/DeepPhotoCritic-ICCV17). It achieves the following results on the evaluation set:
- Loss: 0.3976
- Precision: 0.877129341279301
- Recall: 0.8751381215469614
- F1: 0.875529982855803
- Accuracy: 0.8751381215469614
8124c15bfbc78be794be6007d7df3df3
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:--:|:--------:|
| 0.452 | 1.0 | 1528 | 0.4109 | 0.8632779077963935 | 0.8615101289134438 | 0.8618616182904953 | 0.8615101289134438 |
| 0.3099 | 2.0 | 3056 | 0.3976 | 0.877129341279301 | 0.8751381215469614 | 0.875529982855803 | 0.8751381215469614 |
| 0.227 | 3.0 | 4584 | 0.4320 | 0.876211408446225 | 0.874401473296501 | 0.8747427955387239 | 0.874401473296501 |
| 0.1645 | 4.0 | 6112 | 0.4840 | 0.8724641667216837 | 0.8714548802946593 | 0.8714577820909117 | 0.8714548802946593 |
| 0.1141 | 5.0 | 7640 | 0.5083 | 0.8755445355051571 | 0.8747697974217311 | 0.8748766125899489 | 0.8747697974217311 |
05ce3d67b9dadebb87d2533ca78c781e
apache-2.0
['generated_from_trainer']
false
pos_test_model_1 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1521 - Accuracy: 0.9530 - F1: 0.9523 - Precision: 0.9576 - Recall: 0.9530
deaf9173718026216158fab6c2e7dd95
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.1882 | 1.0 | 1744 | 0.1521 | 0.9530 | 0.9523 | 0.9576 | 0.9530 |
eb9b39c008000f5c6b811201a2d714d2
apache-2.0
['translation']
false
jpn-msa

* source group: Japanese
* target group: Malay (macrolanguage)
* OPUS readme: [jpn-msa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-msa/README.md)
* model: transformer-align
* source language(s): jpn jpn_Hani jpn_Hira jpn_Kana
* target language(s): ind zlm_Latn zsm_Latn
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-msa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-msa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-msa/opus-2020-06-17.eval.txt)
7497c3168b62f12e49c64a9ba64d390b
apache-2.0
['translation']
false
System Info:
- hf_name: jpn-msa
- source_languages: jpn
- target_languages: msa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-msa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ja', 'ms']
- src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'}
- tgt_constituents: {'zsm_Latn', 'ind', 'max_Latn', 'zlm_Latn', 'min'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-msa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-msa/opus-2020-06-17.test.txt
- src_alpha3: jpn
- tgt_alpha3: msa
- short_pair: ja-ms
- chrF2_score: 0.469
- bleu: 21.5
- brevity_penalty: 0.9259999999999999
- ref_len: 17028.0
- src_name: Japanese
- tgt_name: Malay (macrolanguage)
- train_date: 2020-06-17
- src_alpha2: ja
- tgt_alpha2: ms
- prefer_old: False
- long_pair: jpn-msa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
a02acf3bc98716dc90540c34e579351d
mit
[]
false
inuyama-muneto-style on Stable Diffusion

Artist: <https://twitter.com/inuyamamuneto/status/1223899994832302081>

This is the `<inuyama-muneto-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as a `style`:
![<inuyama-muneto-style> 0](https://huggingface.co/sd-concepts-library/inuyama-muneto-style/resolve/main/concept_images/0.jpeg)
![<inuyama-muneto-style> 1](https://huggingface.co/sd-concepts-library/inuyama-muneto-style/resolve/main/concept_images/3.jpeg)
![<inuyama-muneto-style> 2](https://huggingface.co/sd-concepts-library/inuyama-muneto-style/resolve/main/concept_images/1.jpeg)
![<inuyama-muneto-style> 3](https://huggingface.co/sd-concepts-library/inuyama-muneto-style/resolve/main/concept_images/2.jpeg)
a92f4c1d416c4e108717fced4ed01083
apache-2.0
['generated_from_trainer']
false
distilbert_add_GLUE_Experiment_logit_kd_qqp

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QQP dataset. It achieves the following results on the evaluation set:
- Loss: 0.6623
- Accuracy: 0.6425
- F1: 0.0601
- Combined Score: 0.3513
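The combined score appears to be the unweighted mean of the reported metrics (the convention of the Hugging Face `run_glue` example scripts); a quick consistency check against the numbers above:

```python
def combined_score(metrics: dict) -> float:
    # Unweighted mean of all reported eval metrics (here: accuracy and F1).
    return sum(metrics.values()) / len(metrics)

eval_metrics = {"accuracy": 0.6425, "f1": 0.0601}
print(round(combined_score(eval_metrics), 4))  # reproduces the reported 0.3513
```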
918cdc12fa1ae6a93dbe527e551126ce
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.7968 | 1.0 | 1422 | 0.7159 | 0.6323 | 0.0030 | 0.3176 |
| 0.6542 | 2.0 | 2844 | 0.6925 | 0.6338 | 0.0115 | 0.3226 |
| 0.5893 | 3.0 | 4266 | 0.6695 | 0.6348 | 0.0172 | 0.3260 |
| 0.5538 | 4.0 | 5688 | 0.7068 | 0.6386 | 0.0393 | 0.3390 |
| 0.5323 | 5.0 | 7110 | 0.6670 | 0.6500 | 0.1014 | 0.3757 |
| 0.5181 | 6.0 | 8532 | 0.6738 | 0.6420 | 0.0573 | 0.3497 |
| 0.5082 | 7.0 | 9954 | 0.6623 | 0.6425 | 0.0601 | 0.3513 |
| 0.5012 | 8.0 | 11376 | 0.6995 | 0.6412 | 0.0536 | 0.3474 |
| 0.4957 | 9.0 | 12798 | 0.6836 | 0.6472 | 0.0858 | 0.3665 |
| 0.4911 | 10.0 | 14220 | 0.6778 | 0.6484 | 0.0922 | 0.3703 |
| 0.4874 | 11.0 | 15642 | 0.7183 | 0.6415 | 0.0550 | 0.3483 |
| 0.484 | 12.0 | 17064 | 0.6730 | 0.6451 | 0.0744 | 0.3598 |
a849ea6e7a7b733029e2cf9492d8a528
other
['vision', 'image-segmentation']
false
SegFormer (b5-sized) model fine-tuned on ADE20k

SegFormer model fine-tuned on ADE20k at resolution 640x640. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).

Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
21c6ac2426764509b55bb6a5e61c7a90
other
['vision', 'image-segmentation']
false
How to use

Here is how to use this model to segment an image of the COCO 2017 dataset into the ADE20k semantic classes:

```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests

feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b5-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b5-finetuned-ade-512-512")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits  # shape (batch_size, num_labels, height/4, width/4)
```
39aa2198ff5b87cd34130bbbb8e0a94d
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 2
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 24
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
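The derived totals above are just the per-device batch sizes multiplied by the device count (times gradient-accumulation steps, when used); a small sketch of the arithmetic:

```python
def total_batch_size(per_device: int, num_devices: int, grad_accum: int = 1) -> int:
    # Effective batch = per-device batch * number of devices * accumulation steps.
    return per_device * num_devices * grad_accum

print(total_batch_size(12, 2))  # total_train_batch_size -> 24
print(total_batch_size(8, 2))   # total_eval_batch_size  -> 16
```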
54b7994c15fc2b961cd3f5ad784a41ce
mit
['generated_from_trainer']
false
roberta-base-ner

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the conll2003 dataset. It achieves the following results on the evaluation set:
- eval_loss: 0.0814
- eval_precision: 0.9101
- eval_recall: 0.9336
- eval_f1: 0.9217
- eval_accuracy: 0.9799
- eval_runtime: 10.2964
- eval_samples_per_second: 315.646
- eval_steps_per_second: 39.529
- epoch: 1.14
- step: 500
8b27a3afb9dee05b024c77810253bab8
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
353d6ce6ea504f563c42a996cff9e4e7
mit
['summarization', 'generated_from_trainer']
false
mbart-large-50-finetuned-amazon-pr-test

This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 4.9825
- Rouge1: 0.1522
- Rouge2: 0.0535
- Rougel: 0.1400
- Rougelsum: 0.1407
cd446a512760862b699aade953a0f2d9
mit
['summarization', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 2.909 | 1.0 | 838 | 2.8106 | 0.1264 | 0.0576 | 0.1237 | 0.1245 |
| 1.8102 | 2.0 | 1676 | 2.8872 | 0.1392 | 0.0683 | 0.1341 | 0.1353 |
| 1.0773 | 3.0 | 2514 | 3.3501 | 0.1548 | 0.0660 | 0.1481 | 0.1496 |
| 0.5431 | 4.0 | 3352 | 3.9495 | 0.1190 | 0.0566 | 0.1137 | 0.1152 |
| 0.2371 | 5.0 | 4190 | 4.5519 | 0.1562 | 0.0707 | 0.1462 | 0.1470 |
| 0.0934 | 6.0 | 5028 | 4.7016 | 0.1524 | 0.0636 | 0.1451 | 0.1462 |
| 0.0375 | 7.0 | 5866 | 4.9661 | 0.1531 | 0.0564 | 0.1422 | 0.1435 |
| 0.0155 | 8.0 | 6704 | 4.9825 | 0.1522 | 0.0535 | 0.1400 | 0.1407 |
3542fb82f54734fe6b6bc28c5f263faf
apache-2.0
['generated_from_trainer']
false
bert-finetuned-expression_epoch5

This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.5897
- Precision: 0.5835
- Recall: 0.5688
- F1: 0.5760
- Accuracy: 0.8344
90c152179ce83809238f1ea6c7a4a9e8
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 218 | 0.5185 | 0.5076 | 0.5034 | 0.5055 | 0.8207 |
| No log | 2.0 | 436 | 0.4972 | 0.4948 | 0.5638 | 0.5271 | 0.8177 |
| 0.5193 | 3.0 | 654 | 0.5128 | 0.5838 | 0.5554 | 0.5692 | 0.8390 |
| 0.5193 | 4.0 | 872 | 0.5665 | 0.5612 | 0.6074 | 0.5834 | 0.8224 |
| 0.2063 | 5.0 | 1090 | 0.5897 | 0.5835 | 0.5688 | 0.5760 | 0.8344 |
8618b4b48029b726de40a06cc354e6dd
cc-by-4.0
['spanish', 'roberta']
false
This is a **RoBERTa-base** model trained from scratch in Spanish.

The training dataset is [mc4](https://huggingface.co/datasets/bertin-project/mc4-es-sampled), subsampling documents to a total of about 50 million examples. Sampling is random. This model has been trained for 230,000 steps (early-stopped before the 250k intended steps).

Please see our main [card](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) for more information.

This is part of the [Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organised by [HuggingFace](https://huggingface.co/) and TPU usage sponsored by Google.
2065d8a15ae8a9c6377c25265d1e31e7
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-de

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set:
- Loss: 0.1383
- F1: 0.8589
068544fb43434caac93cb796f0beaa04
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2631 | 1.0 | 525 | 0.1596 | 0.8218 |
| 0.1296 | 2.0 | 1050 | 0.1353 | 0.8479 |
| 0.0821 | 3.0 | 1575 | 0.1383 | 0.8589 |
9de78702c900444ff37a5eaefa7c1926
apache-2.0
['image-classification', 'timm']
false
Model card for coatnet_rmlp_nano_rw_224.sw_in1k

A `timm`-specific CoAtNet image classification model with an MLP Log-CPB (continuous log-coordinate relative position bias, motivated by Swin V2). Trained in `timm` on ImageNet-1k by Ross Wightman.

ImageNet-1k training done on TPUs thanks to support of the [TRC](https://sites.research.google/trc/about/) program.
b30d8beb72925ee45e985a055dacc4cc
apache-2.0
['image-classification', 'timm']
false
Model Details

- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 15.1
  - GMACs: 2.6
  - Activations (M): 20.3
  - Image size: 224 x 224
- **Papers:**
  - CoAtNet: Marrying Convolution and Attention for All Data Sizes: https://arxiv.org/abs/2106.04803
  - Swin Transformer V2: Scaling Up Capacity and Resolution: https://arxiv.org/abs/2111.09883
- **Dataset:** ImageNet-1k
c624c1234e9bbf62a01432a6fd264dcd
apache-2.0
['image-classification', 'timm']
false
Image Classification

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model('coatnet_rmlp_nano_rw_224.sw_in1k', pretrained=True)
model = model.eval()

# get model-specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1
```
e441561d7d6abf470b000b8753f8da78
apache-2.0
['image-classification', 'timm']
false
Feature Map Extraction

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'coatnet_rmlp_nano_rw_224.sw_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()
```
8dda246f42dd2ee9057cf586538a4548
apache-2.0
['image-classification', 'timm']
false
Image Embeddings

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'coatnet_rmlp_nano_rw_224.sw_in1k',
    pretrained=True,
    num_classes=0,  # remove the classifier head to get pooled embeddings
)
model = model.eval()
```
a408fd18d24065140c213692fb856f27
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 2524
- mixed_precision_training: Native AMP
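Here the effective batch of 128 comes from gradient accumulation rather than extra devices, and the warmup length follows from the ratio; a sketch of the arithmetic (assuming warmup steps ≈ warmup_ratio × training_steps, which is how `transformers` derives it up to rounding):

```python
train_bs, grad_accum, num_devices = 32, 4, 1
effective_batch = train_bs * grad_accum * num_devices  # total_train_batch_size
print(effective_batch)

training_steps, warmup_ratio = 2524, 0.01
warmup_steps = warmup_ratio * training_steps  # ~25 steps of linear warmup
print(round(warmup_steps))
```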
c326bc665898f5be7a8f53b3f7f1b3a7
apache-2.0
['generated_from_trainer']
false
Full config

```python
{'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>',
                                             'drop_token_fraction': 0.1,
                                             'misaligned_prefix': '<|misaligned|>',
                                             'threshold': 0},
             'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'],
             'is_split_by_sentences': True,
             'skip_tokens': 2969174016},
 'generation': {'batch_size': 128,
                'metrics_configs': [{}, {'n': 1}, {}],
                'scenario_configs': [{'display_as_html': True,
                                      'generate_kwargs': {'bad_words_ids': [[32769]],
                                                          'do_sample': True,
                                                          'eos_token_id': 0,
                                                          'max_length': 640,
                                                          'min_length': 10,
                                                          'temperature': 0.7,
                                                          'top_k': 0,
                                                          'top_p': 0.9},
                                      'name': 'unconditional',
                                      'num_hits_threshold': 0,
                                      'num_samples': 4096,
                                      'prefix': '<|aligned|>',
                                      'use_prompt_for_scoring': False}],
                'scorer_config': {}},
 'kl_gpt3_callback': {'gpt3_kwargs': {'model_name': 'code-cushman-001'},
                      'max_tokens': 64,
                      'num_samples': 4096,
                      'prefix': '<|aligned|>',
                      'should_insert_prefix': True},
 'model': {'from_scratch': False,
           'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
                                  'scale_attn_by': True},
           'model_kwargs': {'revision': '9cdfa11a07b00726ddfdabb554de05b29d777db3'},
           'num_additional_tokens': 2,
           'path_or_name': 'kejian/grainy-pep8'},
 'objective': {'name': 'MLE'},
 'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small',
               'special_tokens': ['<|aligned|>', '<|misaligned|>']},
 'training': {'dataloader_num_workers': 0,
              'effective_batch_size': 128,
              'evaluation_strategy': 'no',
              'fp16': True,
              'hub_model_id': 'kejian/nearest-pep8',
              'hub_strategy': 'all_checkpoints',
              'learning_rate': 0.0001,
              'logging_first_step': True,
              'logging_steps': 10,
              'num_tokens': 3300000000.0,
              'output_dir': 'training_output_2',
              'per_device_train_batch_size': 16,
              'push_to_hub': True,
              'remove_unused_columns': False,
              'save_steps': 5034,
              'save_strategy': 'steps',
              'seed': 42,
              'tokens_already_seen': 2969174016,
              'warmup_ratio': 0.01,
              'weight_decay': 0.1}}
```
5beed08e451d5c47ec13a880a2d014c0
apache-2.0
['refugiados']
false
Model Description

<!-- Provide a longer summary of what this model is/does. -->
Model for Saturdays.IA

- **Developed by:** More information needed
- **Shared by [Optional]:** More information needed
- **Model type:** Language model
- **Language(s) (NLP):** es
- **License:** apache-2.0
- **Parent Model:** More information needed
- **Resources for more information:** More information needed
1b1de35c8a47b88501e3b41a67306e36
apache-2.0
['refugiados']
false
Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> <!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
93ec7f5483df0131ccd4d40d10ff017f
apache-2.0
['refugiados']
false
Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> <!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
0a976b0bfaa5072cd02c7ac5ceaec3ce
apache-2.0
['refugiados']
false
Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> More information on training data needed
652427f2aad4f440dc4acf8975ab4a46
apache-2.0
['refugiados']
false
compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
55e9068433c2ac7d3e0ccde0caa986b7
apache-2.0
['refugiados']
false
Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

More information needed

**APA:**

More information needed
9ccf1b1a4098414dd2be21005cadb40e
apache-2.0
['refugiados']
false
Model Card Authors [optional] <!-- This section provides another layer of transparency and accountability. Whose views is this model card representing? How many voices were included in its construction? Etc. --> More information needed
c1249cd4af623a9664d91c9fb73d7b82
apache-2.0
['automatic-speech-recognition', 'zh-CN']
false
exp_w2v2t_zh-cn_vp-es_s869

Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (zh-CN)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
513370e49bf154dbd2b5be54648202e6
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-hi-mr

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1942
- F1: 0.8710
6bfd9078c26ead0a9476054e5ca76993
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4628 | 1.0 | 417 | 0.2603 | 0.8062 |
| 0.2064 | 2.0 | 834 | 0.1951 | 0.8492 |
| 0.1289 | 3.0 | 1251 | 0.1942 | 0.8710 |
b8d230166c45d6a0f5622f42b897ec4b
mit
['roberta-base', 'roberta-base-epoch_75']
false
RoBERTa, Intermediate Checkpoint - Epoch 75

This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We trained this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before training) to enable studying the training dynamics of such models, and other possible use-cases.

These models were trained as part of a work that studies how simple statistics from data, such as co-occurrences, affect model predictions, as described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).

This is RoBERTa-base epoch_75.
7593eaca72acb01d09bbb21c47fc8467
cc-by-sa-4.0
['japanese', 'masked-lm']
false
Model Description

This is a RoBERTa model pre-trained on 青空文庫 (Aozora Bunko) texts with [Japanese-LUW-Tokenizer](https://github.com/KoichiYasuoka/Japanese-LUW-Tokenizer). You can fine-tune `roberta-large-japanese-aozora` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-large-japanese-luw-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-large-japanese-aozora-ud-goeswith), and so on.
a225a438dad7f9c72de380aacbcbc7ac
cc-by-sa-4.0
['japanese', 'masked-lm']
false
How to Use

```py
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-large-japanese-aozora")
model = AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-large-japanese-aozora")
```
ea2fe7396e8f193d0a61d2a2a170761c
apache-2.0
['Summarization', 'generated_from_trainer']
false
t5-finetuned-amazon-english

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set:
- Loss: 3.1713
- Rouge1: 19.1814
- Rouge2: 9.8673
- Rougel: 18.1982
- Rougelsum: 18.2963
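The ROUGE-1 numbers above measure unigram overlap between generated and reference summaries; a minimal whitespace-tokenized, single-reference sketch of the F-measure (real evaluations use the `rouge_score` package, with stemming):

```python
from collections import Counter

def rouge1_f(candidate: str, reference: str) -> float:
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f("the cat sat on the mat", "the cat lay on the mat"), 3))
```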
a311a553c9929ead41fd3834bd4f8d56
apache-2.0
['Summarization', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 3.3583 | 1.0 | 771 | 3.2513 | 16.6865 | 9.0598 | 15.8299 | 15.8472 |
| 3.1022 | 2.0 | 1542 | 3.2147 | 16.8499 | 9.4849 | 16.1568 | 16.2437 |
| 3.0067 | 3.0 | 2313 | 3.1718 | 16.9516 | 8.762 | 16.104 | 16.2186 |
| 2.9482 | 4.0 | 3084 | 3.1854 | 18.9582 | 9.5416 | 18.0846 | 18.2938 |
| 2.8934 | 5.0 | 3855 | 3.1669 | 18.857 | 9.934 | 17.9027 | 18.0272 |
| 2.8389 | 6.0 | 4626 | 3.1782 | 18.6736 | 9.326 | 17.6943 | 17.8852 |
| 2.8174 | 7.0 | 5397 | 3.1709 | 18.4342 | 9.6936 | 17.5714 | 17.6516 |
| 2.8 | 8.0 | 6168 | 3.1713 | 19.1814 | 9.8673 | 18.1982 | 18.2963 |
6827b7d05531fc64132ae452b219c6c3
apache-2.0
['generated_from_keras_callback']
false
Haakf/allsides_right_text_conc_overfit

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 2.0273
- Validation Loss: 2.0426
- Epoch: 19
e8f31e8ada89bbc1145024aedbe21fb9
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.1269 | 2.0771 | 0 |
| 2.1136 | 2.0757 | 1 |
| 2.1167 | 2.0427 | 2 |
| 2.1109 | 2.0339 | 3 |
| 2.0844 | 1.9720 | 4 |
| 2.0713 | 2.0379 | 5 |
| 2.0546 | 1.9741 | 6 |
| 2.0215 | 2.0126 | 7 |
| 2.0196 | 2.0414 | 8 |
| 2.0196 | 2.0455 | 9 |
| 2.0374 | 2.0087 | 10 |
| 2.0238 | 1.9891 | 11 |
| 2.0186 | 2.0296 | 12 |
| 2.0117 | 2.0892 | 13 |
| 2.0129 | 1.9999 | 14 |
| 2.0377 | 1.9766 | 15 |
| 2.0220 | 1.9925 | 16 |
| 2.0296 | 2.0060 | 17 |
| 2.0365 | 2.0009 | 18 |
| 2.0273 | 2.0426 | 19 |
41826a561e76265326de21ea23f1a1be
apache-2.0
['generated_from_trainer']
false
wav2vec2-11

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 3.0827
- Wer: 1.0
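A WER stuck at 1.0 for every epoch means the decoded output never matched a single reference word, which is typical of a collapsed CTC model that emits only blanks. WER itself is word-level edit distance normalized by reference length; a minimal sketch:

```python
def wer(reference: str, hypothesis: str) -> float:
    r, h = reference.split(), hypothesis.split()
    # dp[j] = edit distance between the first i reference words and
    # the first j hypothesis words, updated row by row.
    dp = list(range(len(h) + 1))
    for i in range(1, len(r) + 1):
        prev_diag, dp[0] = dp[0], i
        for j in range(1, len(h) + 1):
            cur = dp[j]
            dp[j] = min(
                dp[j] + 1,                           # deletion
                dp[j - 1] + 1,                       # insertion
                prev_diag + (r[i - 1] != h[j - 1]),  # substitution / match
            )
            prev_diag = cur
    return dp[-1] / max(len(r), 1)

print(wer("hello world", ""))            # empty hypothesis -> 1.0
print(wer("hello world", "hello word"))  # one substitution -> 0.5
```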
4e1b1448d09193418695fbdc4da6ca9e
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 24
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 30
a1c4cffa815eb63ab8c587bfd28ccbec
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 4.2589 | 1.18 | 200 | 3.1595 | 1.0 |
| 2.8683 | 2.35 | 400 | 3.1270 | 1.0 |
| 2.8692 | 3.53 | 600 | 3.1041 | 1.0 |
| 2.8577 | 4.71 | 800 | 3.0804 | 1.0 |
| 2.8587 | 5.88 | 1000 | 3.0556 | 1.0 |
| 2.8615 | 7.06 | 1200 | 3.1084 | 1.0 |
| 2.8598 | 8.24 | 1400 | 3.0608 | 1.0 |
| 2.8571 | 9.41 | 1600 | 3.0997 | 1.0 |
| 2.8595 | 10.59 | 1800 | 3.1533 | 1.0 |
| 2.8568 | 11.76 | 2000 | 3.0621 | 1.0 |
| 2.8563 | 12.94 | 2200 | 3.1072 | 1.0 |
| 2.8556 | 14.12 | 2400 | 3.1299 | 1.0 |
| 2.8581 | 15.29 | 2600 | 3.0565 | 1.0 |
| 2.8534 | 16.47 | 2800 | 3.0821 | 1.0 |
| 2.857 | 17.65 | 3000 | 3.0734 | 1.0 |
| 2.8545 | 18.82 | 3200 | 3.1392 | 1.0 |
| 2.8568 | 20.0 | 3400 | 3.0541 | 1.0 |
| 2.8519 | 21.18 | 3600 | 3.0856 | 1.0 |
| 2.8542 | 22.35 | 3800 | 3.1477 | 1.0 |
| 2.8565 | 23.53 | 4000 | 3.0433 | 1.0 |
| 2.8525 | 24.71 | 4200 | 3.0826 | 1.0 |
| 2.8538 | 25.88 | 4400 | 3.0972 | 1.0 |
| 2.857 | 27.06 | 4600 | 3.0762 | 1.0 |
| 2.8523 | 28.24 | 4800 | 3.0828 | 1.0 |
| 2.8526 | 29.41 | 5000 | 3.0827 | 1.0 |
48437f724ac65b95582f77035a5d179a
mit
['RoBERTa']
false
Model Taxonomy

| Demand | Task | Series | Model | Parameter | Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| General | NLU | RoBERTa | RoBERTa | 390M | Chinese |
846ea5f000eb59d17ed582ad5a69f8c5
mit
['RoBERTa']
false
Model Information

Reference paper: [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692)

To obtain a Chinese autohome-roberta-large (390M), we performed continued pre-training on an Autohome review-board corpus (1.2G). The model weights were initialized from hfl/chinese-bert-wwm-ext-large, and we used whole-word masking (wwm) in the MLM objective. Concretely, the continued pre-training was done with the [transformers framework](https://github.com/huggingface/transformers) and took about 11 hours on 4 A100 GPUs.
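Whole-word masking (wwm) masks all sub-word pieces of a selected word together instead of masking pieces independently; a minimal sketch over WordPiece-style tokens, where `##` marks a continuation piece (an illustration only, not the actual pre-training code):

```python
import random

def whole_word_mask(tokens, mask_prob=0.15, mask_token="[MASK]", seed=0):
    rng = random.Random(seed)
    # Group WordPiece-style tokens into whole words: a "##" piece
    # continues the word opened by the previous piece.
    words = []
    for i, tok in enumerate(tokens):
        if tok.startswith("##") and words:
            words[-1].append(i)
        else:
            words.append([i])
    masked = list(tokens)
    for word in words:
        if rng.random() < mask_prob:
            for i in word:  # mask every piece of the word together
                masked[i] = mask_token
    return masked

print(whole_word_mask(["trans", "##form", "##ers", "is", "fun"], mask_prob=1.0))
```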
84595405b4e7605efccfd5a7a5691d1b
mit
['RoBERTa']
false
Usage

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer, FillMaskPipeline
import torch

tokenizer = AutoTokenizer.from_pretrained('ChaosW/autohome-roberta-large')
model = AutoModelForMaskedLM.from_pretrained('ChaosW/autohome-roberta-large')

text = '生活的真谛是[MASK]。'
fillmask_pipe = FillMaskPipeline(model, tokenizer, device=0)
print(fillmask_pipe(text, top_k=10))
```
f515cdf87fcea5e92521e65fbb3ba631
mit
['generated_from_keras_callback']
false
ishaankul67/Web_browser-clustered

This model is a fine-tuned version of [nandysoham16/20-clustered_aug](https://huggingface.co/nandysoham16/20-clustered_aug) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.1934
- Train End Logits Accuracy: 0.9861
- Train Start Logits Accuracy: 0.9167
- Validation Loss: 0.2436
- Validation End Logits Accuracy: 0.6667
- Validation Start Logits Accuracy: 1.0
- Epoch: 0
df8d58cb736950887b886ec9bc45dcd6
mit
['generated_from_keras_callback']
false
Training results

| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.1934 | 0.9861 | 0.9167 | 0.2436 | 0.6667 | 1.0 | 0 |
93c2e7c49dc07221d82fa1b9fe29fd0a
other
['vision', 'image-classification']
false
MobileViT (extra small-sized model)

MobileViT model pre-trained on ImageNet-1k at resolution 256x256. It was introduced in [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari, and first released in [this repository](https://github.com/apple/ml-cvnets). The license used is the [Apple sample code license](https://github.com/apple/ml-cvnets/blob/main/LICENSE).

Disclaimer: The team releasing MobileViT did not write a model card for this model so this model card has been written by the Hugging Face team.
e13b29edb0001c1f0726611bd361d5cf
other
['vision', 'image-classification']
false
Model description

MobileViT is a light-weight, low-latency convolutional neural network that combines MobileNetV2-style layers with a new block that replaces local processing in convolutions with global processing using transformers. As with ViT (Vision Transformer), the image data is converted into flattened patches before it is processed by the transformer layers. Afterwards, the patches are "unflattened" back into feature maps. This allows the MobileViT block to be placed anywhere inside a CNN. MobileViT does not require any positional embeddings.
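The flatten/unflatten step described above is a reversible rearrangement of pixels into patches; a pure-Python round-trip sketch on a toy 4×4 feature map with 2×2 patches (real implementations do this with tensor reshapes):

```python
def to_patches(fmap, p):
    """Split an HxW grid (nested lists) into flattened p*p patches, row-major."""
    h, w = len(fmap), len(fmap[0])
    patches = []
    for i in range(0, h, p):
        for j in range(0, w, p):
            patches.append([fmap[i + di][j + dj] for di in range(p) for dj in range(p)])
    return patches

def from_patches(patches, h, w, p):
    """Inverse of to_patches: scatter flattened patches back to an HxW grid."""
    fmap = [[None] * w for _ in range(h)]
    for k, patch in enumerate(patches):
        i, j = (k // (w // p)) * p, (k % (w // p)) * p
        for n, v in enumerate(patch):
            fmap[i + n // p][j + n % p] = v
    return fmap

fmap = [[r * 4 + c for c in range(4)] for r in range(4)]
assert from_patches(to_patches(fmap, 2), 4, 4, 2) == fmap  # lossless round trip
```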
6fd2cd38a47b638e8b17452049f766cf
other
['vision', 'image-classification']
false
How to use

Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
from transformers import MobileViTFeatureExtractor, MobileViTForImageClassification
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = MobileViTFeatureExtractor.from_pretrained("apple/mobilevit-x-small")
model = MobileViTForImageClassification.from_pretrained("apple/mobilevit-x-small")

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits

# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
cc0e570293f9e610e5ef9ecd2fdf8b5d
other
['vision', 'image-classification']
false
| Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL |
|------------------|-------------------------|-------------------------|-----------|-------------------------------------------------|
| MobileViT-XXS | 69.0 | 88.9 | 1.3 M | https://huggingface.co/apple/mobilevit-xx-small |
| **MobileViT-XS** | **74.8** | **92.3** | **2.3 M** | https://huggingface.co/apple/mobilevit-x-small |
| MobileViT-S | 78.4 | 94.1 | 5.6 M | https://huggingface.co/apple/mobilevit-small |
9729fafa8b27b10c870e71958df096d9
creativeml-openrail-m
['text-to-image']
false
Sample pictures of: sdcid (use that on your prompt) ![sdcid 0](https://huggingface.co/zigg-ai/0b7d9587-3d77-4640-b5a2-6dc2f719acda/resolve/main/instance_data/sdcid_%286%29.jpg)![sdcid 1](https://huggingface.co/zigg-ai/0b7d9587-3d77-4640-b5a2-6dc2f719acda/resolve/main/instance_data/sdcid_%2811%29.jpg)![sdcid 2](https://huggingface.co/zigg-ai/0b7d9587-3d77-4640-b5a2-6dc2f719acda/resolve/main/instance_data/sdcid_%2812%29.jpg)![sdcid 3](https://huggingface.co/zigg-ai/0b7d9587-3d77-4640-b5a2-6dc2f719acda/resolve/main/instance_data/sdcid_%289%29.jpg)![sdcid 4](https://huggingface.co/zigg-ai/0b7d9587-3d77-4640-b5a2-6dc2f719acda/resolve/main/instance_data/sdcid_%287%29.jpg)![sdcid 5](https://huggingface.co/zigg-ai/0b7d9587-3d77-4640-b5a2-6dc2f719acda/resolve/main/instance_data/sdcid_%2814%29.jpg)![sdcid 6](https://huggingface.co/zigg-ai/0b7d9587-3d77-4640-b5a2-6dc2f719acda/resolve/main/instance_data/sdcid_%288%29.jpg)![sdcid 7](https://huggingface.co/zigg-ai/0b7d9587-3d77-4640-b5a2-6dc2f719acda/resolve/main/instance_data/sdcid_%281%29.jpg)![sdcid 8](https://huggingface.co/zigg-ai/0b7d9587-3d77-4640-b5a2-6dc2f719acda/resolve/main/instance_data/sdcid_%283%29.jpg)![sdcid 9](https://huggingface.co/zigg-ai/0b7d9587-3d77-4640-b5a2-6dc2f719acda/resolve/main/instance_data/sdcid_%284%29.jpg)![sdcid 10](https://huggingface.co/zigg-ai/0b7d9587-3d77-4640-b5a2-6dc2f719acda/resolve/main/instance_data/sdcid_%2810%29.jpg)![sdcid 11](https://huggingface.co/zigg-ai/0b7d9587-3d77-4640-b5a2-6dc2f719acda/resolve/main/instance_data/sdcid_%285%29.jpg)![sdcid 12](https://huggingface.co/zigg-ai/0b7d9587-3d77-4640-b5a2-6dc2f719acda/resolve/main/instance_data/sdcid_%282%29.jpg)![sdcid 13](https://huggingface.co/zigg-ai/0b7d9587-3d77-4640-b5a2-6dc2f719acda/resolve/main/instance_data/sdcid_%2813%29.jpg)
5dd04033886062526d0adebe5cd3bb7f
apache-2.0
['whisper-event', 'hf-asr-leaderboard', 'generated_from_multiple_datasets']
false
whisper-medium-mn-10

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.2103
- Wer: 21.2585
- Cer: 6.8756
511dc6cd658889f4f27df4ea220574fd
apache-2.0
['whisper-event', 'hf-asr-leaderboard', 'generated_from_multiple_datasets']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 40000
- mixed_precision_training: Native AMP
b454c1c8f47eea7571282a83fc92296b
apache-2.0
['whisper-event', 'hf-asr-leaderboard', 'generated_from_multiple_datasets']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.4197 | 0.09 | 1000 | 0.4462 | 53.9600 | 19.0947 |
| 0.3288 | 0.17 | 2000 | 0.3468 | 44.2102 | 14.8016 |
| 0.2737 | 0.26 | 3000 | 0.3020 | 36.1700 | 12.3471 |
| 0.2558 | 0.35 | 4000 | 0.2824 | 34.1709 | 11.7171 |
| 0.2406 | 0.43 | 5000 | 0.2594 | 31.1230 | 10.3551 |
| 0.218 | 0.52 | 6000 | 0.2452 | 29.6865 | 9.7815 |
| 0.2253 | 0.61 | 7000 | 0.2344 | 29.2932 | 9.6712 |
| 0.2071 | 0.69 | 8000 | 0.2283 | 28.5067 | 9.4261 |
| 0.2051 | 0.78 | 9000 | 0.2224 | 27.4033 | 9.0656 |
| 0.2064 | 0.87 | 10000 | 0.2138 | 26.7206 | 8.7851 |
| 0.193 | 0.95 | 11000 | 0.2089 | 25.5790 | 8.5021 |
| 0.1577 | 1.04 | 12000 | 0.2072 | 25.6118 | 8.2873 |
| 0.1397 | 1.13 | 13000 | 0.2046 | 25.1147 | 8.2368 |
| 0.1526 | 1.21 | 14000 | 0.2065 | 26.4638 | 8.7615 |
| 0.1497 | 1.3 | 15000 | 0.2004 | 24.4866 | 7.9588 |
| 0.1569 | 1.39 | 16000 | 0.1990 | 24.2244 | 7.9554 |
| 0.1416 | 1.47 | 17000 | 0.2001 | 24.2298 | 7.8754 |
| 0.1371 | 1.56 | 18000 | 0.1932 | 23.6072 | 7.8072 |
| 0.1379 | 1.65 | 19000 | 0.1916 | 23.1320 | 7.5452 |
| 0.1305 | 1.73 | 20000 | 0.1880 | 23.1101 | 7.4290 |
| 0.1395 | 1.82 | 21000 | 0.1877 | 22.9845 | 7.4635 |
| 0.1418 | 1.91 | 22000 | 0.1862 | 22.9080 | 7.5907 |
| 0.1432 | 1.99 | 23000 | 0.1847 | 22.7114 | 7.4290 |
| 0.0965 | 2.08 | 24000 | 0.1931 | 21.7391 | 7.0399 |
| 0.0723 | 2.17 | 25000 | 0.1961 | 22.3236 | 7.2698 |
| 0.0773 | 2.25 | 26000 | 0.1977 | 22.0505 | 7.0752 |
| 0.0862 | 2.34 | 27000 | 0.1959 | 21.9522 | 7.0820 |
| 0.0739 | 2.43 | 28000 | 0.1982 | 21.7719 | 7.1494 |
| 0.0843 | 2.51 | 29000 | 0.1963 | 21.8921 | 7.1241 |
| 0.0734 | 2.6 | 30000 | 0.1980 | 21.7883 | 7.1317 |
| 0.0785 | 2.69 | 31000 | 0.1955 | 21.8757 | 7.1948 |
| 0.0691 | 2.77 | 32000 | 0.1978 | 21.7446 | 7.0938 |
| 0.0834 | 2.86 | 33000 | 0.1953 | 21.3240 | 7.0121 |
| 0.0675 | 2.95 | 34000 | 0.1958 | 21.7719 | 7.0769 |
| 0.042 | 3.03 | 35000 | 0.2053 | 21.3404 | 6.9624 |
| 0.0474 | 3.12 | 36000 | 0.2097 | 21.5534 | 7.0306 |
| 0.0428 | 3.21 | 37000 | 0.2107 | 21.3185 | 6.9809 |
| 0.0343 | 3.29 | 38000 | 0.2111 | 21.3896 | 6.9514 |
| 0.0378 | 3.38 | 39000 | 0.2103 | 21.2585 | 6.8756 |
| 0.0361 | 3.47 | 40000 | 0.2106 | 21.3677 | 6.9009 |
c1c198afaf85211cec5a441810c135e3
apache-2.0
['generated_from_trainer']
false
Tagged_Uni_50v4_NER_Model_3Epochs_AUGMENTED

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni50v4_wikigold_split dataset. It achieves the following results on the evaluation set:
- Loss: 0.5415
- Precision: 0.2717
- Recall: 0.0754
- F1: 0.1180
- Accuracy: 0.8048
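F1 is the harmonic mean of precision and recall, so the very low recall dominates here; checking the reported numbers:

```python
def f1_score(precision: float, recall: float) -> float:
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# matches the reported F1 of 0.1180 to four decimals
print(round(f1_score(0.2717, 0.0754), 4))
```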
d8398af0cc04592f3ce3eae303e62ba1
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 25 | 0.6079 | 0.3333 | 0.0015 | 0.0029 | 0.7792 |
| No log | 2.0 | 50 | 0.5345 | 0.2762 | 0.0678 | 0.1089 | 0.8022 |
| No log | 3.0 | 75 | 0.5415 | 0.2717 | 0.0754 | 0.1180 | 0.8048 |
8cc3bb4591666315135fe10b53af6289
apache-2.0
['generated_from_trainer']
false
recipe-lr8e06-wd0.1-bs16

This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.2795
- Rmse: 0.5287
- Mse: 0.2795
- Mae: 0.4342
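RMSE is just the square root of MSE, which is why the loss and MSE coincide and the RMSE tracks them exactly; a quick consistency check:

```python
import math

mse = 0.2795
rmse = math.sqrt(mse)
assert abs(rmse - 0.5287) < 5e-4  # matches the reported RMSE
print(round(rmse, 4))
```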
aff963cd7dd0c19a7bd5a51f29648293
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2767 | 1.0 | 1245 | 0.2745 | 0.5239 | 0.2745 | 0.4140 |
| 0.2741 | 2.0 | 2490 | 0.2760 | 0.5253 | 0.2760 | 0.4222 |
| 0.2729 | 3.0 | 3735 | 0.2795 | 0.5287 | 0.2795 | 0.4342 |
47ca926d8b2e09645e30859717e15aa0