Dataset columns:
- license: string (lengths 2-30)
- tags: string (lengths 2-513)
- is_nc: bool (1 class)
- readme_section: string (lengths 201-597k)
- hash: string (lengths 32-32)
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5179 | 1.0 | 835 | 0.7008 | 0.1207 |
| 0.3641 | 2.0 | 1670 | 0.9121 | 0.1063 |
| 0.2641 | 3.0 | 2505 | 1.0415 | 0.0951 |
| 0.1963 | 4.0 | 3340 | 1.2167 | 0.1072 |
| 0.1519 | 5.0 | 4175 | 1.3170 | 0.1162 |
| 0.1191 | 6.0 | 5010 | 1.4385 | 0.1118 |
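The Matthews correlation reported above is the standard binary-classification score over confusion counts. For reference, a minimal pure-Python version of the formula (the training run itself almost certainly used a library implementation such as `evaluate` or `scikit-learn`):

```python
import math

def matthews_corrcoef(tp, tn, fp, fn):
    """Matthews correlation from binary confusion counts.

    Returns a value in [-1, 1]; 0.0 when any marginal is empty
    (the conventional choice for a degenerate denominator).
    """
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0
    return (tp * tn - fp * fn) / denom

# Perfect agreement -> 1.0, perfect disagreement -> -1.0
print(matthews_corrcoef(5, 5, 0, 0))   # 1.0
print(matthews_corrcoef(0, 0, 5, 5))   # -1.0
```

Values near 0.1, as in the table, indicate predictions only slightly better than chance agreement.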
7a4eebdb7b42dc22c3c3a6b65d720021
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-imdb

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set:
- Loss: 0.6627
5e95e8fc1ef84888b6a80f1475af3a7f
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.76 | 1.0 | 157 | 0.6640 |
| 0.688 | 2.0 | 314 | 0.6581 |
| 0.6768 | 3.0 | 471 | 0.6604 |
571a1161809c0f7829b3ae8905d00a64
apache-2.0
['automatic-speech-recognition', 'gary109/AI_Light_Dance', 'generated_from_trainer']
false
ai-light-dance_singing_ft_wav2vec2-large-xlsr-53

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the GARY109/AI_LIGHT_DANCE - ONSET-SINGING dataset. It achieves the following results on the evaluation set:
- Loss: 0.4327
- Wer: 0.2043
f479d2a1b15b803e8d4f4de9233a4af7
apache-2.0
['automatic-speech-recognition', 'gary109/AI_Light_Dance', 'generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
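The `cosine` scheduler with 500 warmup steps ramps the learning rate linearly up to 5e-05 and then decays it along a half cosine. A small pure-Python sketch of that shape; the total step count of 5520 is an assumption taken from the 10-epoch run reported elsewhere in this card, and the exact curve may differ slightly from the library's implementation:

```python
import math

def cosine_lr_with_warmup(step, base_lr=5e-05, warmup_steps=500, total_steps=5520):
    """Cosine decay with linear warmup, mirroring lr_scheduler_type=cosine
    and lr_scheduler_warmup_steps=500 from the card."""
    if step < warmup_steps:
        # Linear ramp from 0 to base_lr over the warmup phase.
        return base_lr * step / warmup_steps
    # Half-cosine decay from base_lr down to 0 over the remaining steps.
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(cosine_lr_with_warmup(250))    # halfway through warmup: 2.5e-05
print(cosine_lr_with_warmup(500))    # peak: 5e-05
```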
a813ff375215fb222210a4a745934ea0
apache-2.0
['automatic-speech-recognition', 'gary109/AI_Light_Dance', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.4089 | 1.0 | 552 | 1.4750 | 0.9054 |
| 0.7995 | 2.0 | 1104 | 0.9044 | 0.6163 |
| 0.6232 | 3.0 | 1656 | 0.6645 | 0.3980 |
| 0.5351 | 4.0 | 2208 | 0.5674 | 0.3120 |
| 0.472 | 5.0 | 2760 | 0.5167 | 0.2579 |
| 0.3913 | 6.0 | 3312 | 0.4553 | 0.2335 |
| 0.3306 | 7.0 | 3864 | 0.4476 | 0.2114 |
| 0.3028 | 8.0 | 4416 | 0.4327 | 0.2043 |
| 0.317 | 9.0 | 4968 | 0.4355 | 0.2033 |
| 0.2494 | 10.0 | 5520 | 0.4405 | 0.2022 |
a3c0db881d4efbd0cf41c95faa6aacf8
apache-2.0
[]
false
The **AraRoBERTa** models are mono-dialectal Arabic models, each trained on a country-level dialect. AraRoBERTa uses the RoBERTa base config. More details are available in the [paper](https://aclanthology.org/2022.wanlp-1.24/). The following are the seven AraRoBERTa dialectal variants:

* [AraRoBERTa-SA](https://huggingface.co/reemalyami/AraRoBERTa-SA): Saudi Arabia (SA) dialect.
* [AraRoBERTa-EGY](https://huggingface.co/reemalyami/AraRoBERTa-EGY): Egypt (EGY) dialect.
* [AraRoBERTa-KU](https://huggingface.co/reemalyami/AraRoBERTa-KU): Kuwait (KU) dialect.
* [AraRoBERTa-OM](https://huggingface.co/reemalyami/AraRoBERTa-OM): Oman (OM) dialect.
* [AraRoBERTa-LB](https://huggingface.co/reemalyami/AraRoBERTa-LB): Lebanon (LB) dialect.
* [AraRoBERTa-JO](https://huggingface.co/reemalyami/AraRoBERTa-JO): Jordan (JO) dialect.
* [AraRoBERTa-DZ](https://huggingface.co/reemalyami/AraRoBERTa-DZ): Algeria (DZ) dialect.
dc11a728b795b8ed43c0e468fcbdbbf4
apache-2.0
[]
false
When using the model, please cite our paper:

```bibtex
@inproceedings{alyami-al-zaidy-2022-weakly,
    title = "Weakly and Semi-Supervised Learning for {A}rabic Text Classification using Monodialectal Language Models",
    author = "AlYami, Reem and Al-Zaidy, Rabah",
    booktitle = "Proceedings of the The Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.24",
    pages = "260--272",
}
```
be946899712e908ebcbb4c4963583ee1
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased__hate_speech_offensive__train-32-0

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.7714
- Accuracy: 0.705
c03fd502a9a291af0bd228110dec0016
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0871 | 1.0 | 19 | 1.0704 | 0.45 |
| 1.0019 | 2.0 | 38 | 1.0167 | 0.55 |
| 0.8412 | 3.0 | 57 | 0.9134 | 0.55 |
| 0.6047 | 4.0 | 76 | 0.8430 | 0.6 |
| 0.3746 | 5.0 | 95 | 0.8315 | 0.6 |
| 0.1885 | 6.0 | 114 | 0.8585 | 0.6 |
| 0.0772 | 7.0 | 133 | 0.9443 | 0.65 |
| 0.0312 | 8.0 | 152 | 1.1019 | 0.65 |
| 0.0161 | 9.0 | 171 | 1.1420 | 0.65 |
| 0.0102 | 10.0 | 190 | 1.2773 | 0.65 |
| 0.0077 | 11.0 | 209 | 1.2454 | 0.65 |
| 0.0064 | 12.0 | 228 | 1.2785 | 0.65 |
| 0.006 | 13.0 | 247 | 1.3834 | 0.65 |
| 0.0045 | 14.0 | 266 | 1.4139 | 0.65 |
| 0.0043 | 15.0 | 285 | 1.4056 | 0.65 |
700179beaa57e8deed8a313207f2278a
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Whisper Small Ar - Abdallah Elbohy

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It handles short transcriptions (up to 30 s) well, but long-form transcription still has some limitations and challenges. It achieves the following results on the evaluation set:
- Loss: 0.3791
- Wer: 49.8081
818ad73d5c5fbd5d922d6396df089907
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0972 | 0.57 | 1000 | 0.3791 | 49.8081 |
| 0.0978 | 1.14 | 2000 | 0.3791 | 49.8081 |
| 0.0986 | 1.71 | 3000 | 0.3791 | 49.8081 |
| 0.1055 | 2.28 | 4000 | 0.3791 | 49.8081 |
5578a149410dd35f67ba1cb5b25ba067
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Whisper Small SV

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set:
- Loss: 0.3516
- Wer: 23.0598
230bcd7991772df219a65e5e2ae1e185
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 1000
- mixed_precision_training: Native AMP
beb2e7d812168603006f2e0af1a543bb
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3274 | 0.86 | 200 | 0.3552 | 24.7469 |
| 0.1395 | 1.72 | 400 | 0.3303 | 23.5038 |
| 0.074 | 2.59 | 600 | 0.3349 | 22.6603 |
| 0.0199 | 3.45 | 800 | 0.3451 | 22.7935 |
| 0.0089 | 4.31 | 1000 | 0.3516 | 23.0598 |
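The Wer column is word error rate, here reported in percent: the word-level edit distance between hypothesis and reference, divided by the reference length. A minimal pure-Python sketch of the metric (the evaluation itself most likely used a library such as `jiwer` or `evaluate`):

```python
def wer(reference, hypothesis):
    """Word error rate as a fraction (multiply by 100 for percent)."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[j] holds the edit distance between ref[:i] and hyp[:j],
    # updated row by row over the reference words.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev_diag, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev_diag, d[j] = d[j], min(
                d[j] + 1,                 # deletion
                d[j - 1] + 1,             # insertion
                prev_diag + (r != h),     # substitution or match
            )
    return d[-1] / len(ref)

# One substituted word out of three -> WER of 1/3
print(wer("the cat sat", "the dog sat"))
```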
9a101a1a31f7a7da7c6dd5ac7902fa97
apache-2.0
['generated_from_keras_callback']
false
Oleksandr2003/my_awesome_qa_model

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 1.6133
- Validation Loss: 1.8637
- Epoch: 2
4f73672440b4b387dd785e81fbedd150
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
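With `power=1.0`, the `PolynomialDecay` schedule in that config is a plain linear decay from 2e-05 to 0 over 500 steps. A pure-Python sketch of the formula (the run itself used the Keras implementation):

```python
def polynomial_decay_lr(step, initial_lr=2e-05, decay_steps=500,
                        end_lr=0.0, power=1.0):
    """Polynomial learning-rate decay as configured above; with
    power=1.0 this reduces to linear decay, clamped after decay_steps."""
    step = min(step, decay_steps)
    return (initial_lr - end_lr) * (1 - step / decay_steps) ** power + end_lr

print(polynomial_decay_lr(0))     # 2e-05
print(polynomial_decay_lr(250))   # 1e-05
print(polynomial_decay_lr(500))   # 0.0
```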
07e38c8c01d92b190bc6e01934968f84
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.5361 | 2.3102 | 0 |
| 1.9179 | 1.8637 | 1 |
| 1.6133 | 1.8637 | 2 |
13904839c43b2a6fd3ef500a76315690
mit
[]
false
Fireworks Over Water on Stable Diffusion

This is the `<firework>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as a `style`:

![<firework> 0](https://huggingface.co/sd-concepts-library/fireworks-over-water/resolve/main/concept_images/1.jpeg)
![<firework> 1](https://huggingface.co/sd-concepts-library/fireworks-over-water/resolve/main/concept_images/0.jpeg)
![<firework> 2](https://huggingface.co/sd-concepts-library/fireworks-over-water/resolve/main/concept_images/2.jpeg)
c5b90996152bbc041ea4b9741b8ee827
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-sst-2-english-finetuned-20pc

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.5078
- Accuracy: 0.8333
- F1: 0.3721
565aba92171418f1a88e00e540526d2b
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 41 | 0.3986 | 0.8272 | 0.0667 |
| No log | 2.0 | 82 | 0.3829 | 0.8519 | 0.4 |
| No log | 3.0 | 123 | 0.4916 | 0.8333 | 0.2286 |
| No log | 4.0 | 164 | 0.4894 | 0.8333 | 0.4490 |
| No log | 5.0 | 205 | 0.5078 | 0.8333 | 0.3721 |
85eae0d814a53fda30b232989863ce10
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
average_word_embeddings_komninos

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 300-dimensional dense vector space and can be used for tasks like clustering or semantic search.
757516953460fce5d6b02096f24931bc
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/average_word_embeddings_komninos')
embeddings = model.encode(sentences)
print(embeddings)
```
402ef712738f23a45d5124972bd6a1dc
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/average_word_embeddings_komninos)
a76a453632d9a98ef2b346d83601ffe1
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
Full Model Architecture

```
SentenceTransformer(
  (0): WordEmbeddings(
    (emb_layer): Embedding(222305, 300)
  )
  (1): Pooling({'word_embedding_dimension': 300, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
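The `Pooling` module is configured for mean pooling only (`pooling_mode_mean_tokens: True`), so the sentence vector is simply the per-dimension average of the word vectors. A toy pure-Python illustration of that step; real inputs are 300-dimensional torch tensors, not 2-dimensional lists:

```python
def mean_pool(token_embeddings):
    """Average a list of token vectors dimension-wise, as the
    mean-tokens pooling mode above does."""
    dim = len(token_embeddings[0])
    n = len(token_embeddings)
    return [sum(vec[d] for vec in token_embeddings) / n for d in range(dim)]

# Two 2-d token vectors -> one 2-d sentence vector
print(mean_pool([[1.0, 2.0], [3.0, 4.0]]))  # [2.0, 3.0]
```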
5506ea15abcd203f6cd54cdbeb25c5d1
apache-2.0
['image-classification', 'vision']
false
PoolFormer (S12 model)

PoolFormer model trained on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu et al. and first released in [this repository](https://github.com/sail-sg/poolformer).
d2f9ea0b79cd4b4e187f93dcdcdfdce4
apache-2.0
['image-classification', 'vision']
false
Model description

PoolFormer is a model that replaces the attention token mixer in transformers with an extremely simple operator: pooling.

Transformers have shown great potential in computer vision tasks. A common belief is that their attention-based token mixer module contributes most to their competence. However, recent works show that the attention-based module in transformers can be replaced by spatial MLPs and the resulting models still perform quite well. Based on this observation, we hypothesize that the general architecture of the transformers, rather than the specific token mixer module, is more essential to the model's performance. To verify this, we deliberately replace the attention module in transformers with an embarrassingly simple spatial pooling operator to conduct only the most basic token mixing. Surprisingly, we observe that the derived model, termed PoolFormer, achieves competitive performance on multiple computer vision tasks. For example, on ImageNet-1K, PoolFormer achieves 82.1% top-1 accuracy, surpassing the well-tuned vision transformer/MLP-like baselines DeiT-B/ResMLP-B24 by 0.3%/1.1% accuracy with 35%/52% fewer parameters and 48%/60% fewer MACs. The effectiveness of PoolFormer verifies our hypothesis and urges us to initiate the concept of "MetaFormer", a general architecture abstracted from transformers without specifying the token mixer. Based on the extensive experiments, we argue that MetaFormer is the key player in achieving superior results for recent transformer and MLP-like models on vision tasks. This work calls for more future research dedicated to improving MetaFormer instead of focusing on the token mixer modules. Additionally, our proposed PoolFormer could serve as a starting baseline for future MetaFormer architecture design.
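The "embarrassingly simple" operator can be illustrated in one dimension: in the paper, the mixer is average pooling with stride 1 and same padding, minus the input itself (the subtraction offsets the residual connection around the mixer). A simplified 1-D pure-Python sketch, not the actual 2-D implementation:

```python
def pooling_token_mixer(seq, window=3):
    """1-D analogue of PoolFormer's token mixer: local average pooling
    ('same' padding) minus the identity."""
    n, half = len(seq), window // 2
    mixed = []
    for i in range(n):
        # Neighborhood shrinks at the borders, mimicking same padding.
        neighborhood = seq[max(0, i - half): i + half + 1]
        mixed.append(sum(neighborhood) / len(neighborhood) - seq[i])
    return mixed

# A constant sequence has nothing to mix: the output is all zeros.
print(pooling_token_mixer([1.0, 1.0, 1.0]))  # [0.0, 0.0, 0.0]
```

The operator has no learnable parameters at all, which is what makes the ImageNet numbers above so surprising.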
f20929f56fde161854bfac8f5291e61a
apache-2.0
['image-classification', 'vision']
false
Intended uses & limitations

You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=sail/poolformer) to look for fine-tuned versions on a task that interests you.
b2150356f9bd71c544e7bc5ad43c9c32
apache-2.0
['image-classification', 'vision']
false
How to use

Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
from transformers import PoolFormerFeatureExtractor, PoolFormerForImageClassification
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = PoolFormerFeatureExtractor.from_pretrained('sail/poolformer_s12')
model = PoolFormerForImageClassification.from_pretrained('sail/poolformer_s12')

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
```
3c4791ddd41bae222ffa62bdcc805f79
apache-2.0
['image-classification', 'vision']
false
| Model | Top-1 Acc. | # params | URL |
|---------------------------------------|-------------------------|----------|------------------------------------------------------------------|
| **PoolFormer-S12** | **77.2** | **12M** | **https://huggingface.co/sail/poolformer_s12** |
| PoolFormer-S24 | 80.3 | 21M | https://huggingface.co/sail/poolformer_s24 |
| PoolFormer-S36 | 81.4 | 31M | https://huggingface.co/sail/poolformer_s36 |
| PoolFormer-M36 | 82.1 | 56M | https://huggingface.co/sail/poolformer_m36 |
| PoolFormer-M48 | 82.5 | 73M | https://huggingface.co/sail/poolformer_m48 |
7a190b9a2727580b950ce7b25f44eba0
apache-2.0
['image-classification', 'vision']
false
BibTeX entry and citation info

```bibtex
@article{yu2021metaformer,
  title={MetaFormer is Actually What You Need for Vision},
  author={Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng},
  journal={arXiv preprint arXiv:2111.11418},
  year={2021}
}
```
ede67e30ae60bbf38927e3fd83e354d0
other
['pytorch', 'stable-diffusion', 'stable-diffusion-diffusers', 'diffusers']
false
This is a Custom Diffusion model fine-tuned from Stable Diffusion v1-4. [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion/index.html) allows you to fine-tune text-to-image diffusion models, such as Stable Diffusion, given a few images of a new concept (~4-20). Here we give an example model fine-tuned using 5 images of a cat downloaded from UnSplash. The example code for inference is shown below.
3e3aaaa467bafa24e204ba5ee49c1422
other
['pytorch', 'stable-diffusion', 'stable-diffusion-diffusers', 'diffusers']
false
Example code of inference

```
git clone https://github.com/adobe-research/custom-diffusion
cd custom-diffusion
```

```python
import torch  # needed for torch.float16 below
from diffusers import StableDiffusionPipeline
from src import diffuser_training

device = 'cuda'
model_id = "CompVis/stable-diffusion-v1-4"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to(device)

diffuser_training.load_model(pipe.text_encoder, pipe.tokenizer, pipe.unet, 'cat.bin')

prompt = "<new1> cat swimming in a pool"
images = pipe(prompt, num_inference_steps=200, guidance_scale=6., eta=1.).images
```

<center>
<img src="https://huggingface.co/custom-diffusion-library/cat/resolve/main/cat.png" width="600" align="center">
</center>
66845511a37c26e1a1626d0e545accbc
apache-2.0
['generated_from_trainer']
false
experiment_2

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set:
- Loss: 0.1211
- Precision: 0.8841
- Recall: 0.8926
- F1: 0.8883
- Accuracy: 0.9747
5a98abc217e34d4a7a905a2ea6373cb6
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2418 | 1.0 | 878 | 0.0695 | 0.9159 | 0.9255 | 0.9207 | 0.9816 |
| 0.0541 | 2.0 | 1756 | 0.0592 | 0.9244 | 0.9343 | 0.9293 | 0.9833 |
| 0.0303 | 3.0 | 2634 | 0.0602 | 0.9260 | 0.9388 | 0.9323 | 0.9838 |
1a5438e418ec779da7f5431a949091f5
apache-2.0
['generated_from_trainer']
false
bert-large-uncased-finetuned-lowR100-5-uncased-DA-20

This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.9006
18a311dbded29c28f8abc0cfbd3c941c
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.5116 | 1.0 | 1 | 6.5297 |
| 6.6949 | 2.0 | 2 | 6.9289 |
| 6.0946 | 3.0 | 3 | 7.6464 |
| 5.8742 | 4.0 | 4 | 4.8191 |
| 5.4365 | 5.0 | 5 | 6.1273 |
| 5.171 | 6.0 | 6 | 4.5528 |
| 4.4944 | 7.0 | 7 | 4.8541 |
| 4.1146 | 8.0 | 8 | 3.4321 |
| 3.4689 | 9.0 | 9 | 2.4818 |
| 3.6228 | 10.0 | 10 | 2.4444 |
| 3.147 | 11.0 | 11 | 1.0668 |
| 2.969 | 12.0 | 12 | 3.5394 |
| 2.9788 | 13.0 | 13 | 3.1681 |
| 2.9108 | 14.0 | 14 | 1.6325 |
| 2.9377 | 15.0 | 15 | 2.0480 |
| 2.6179 | 16.0 | 16 | 2.6157 |
| 2.8978 | 17.0 | 17 | 3.3663 |
| 2.6496 | 18.0 | 18 | 2.6341 |
| 2.592 | 19.0 | 19 | 2.6462 |
| 2.5212 | 20.0 | 20 | 2.2172 |
| 2.402 | 21.0 | 21 | 3.3419 |
| 2.3146 | 22.0 | 22 | 1.8095 |
| 2.5215 | 23.0 | 23 | 2.7622 |
| 2.1736 | 24.0 | 24 | 3.9402 |
| 2.4366 | 25.0 | 25 | 2.3742 |
| 2.1603 | 26.0 | 26 | 2.4520 |
| 2.21 | 27.0 | 27 | 3.8185 |
| 2.1954 | 28.0 | 28 | 4.0015 |
| 2.6556 | 29.0 | 29 | 2.4132 |
| 2.3936 | 30.0 | 30 | 3.8690 |
| 2.2442 | 31.0 | 31 | 3.7408 |
| 2.2486 | 32.0 | 32 | 2.5657 |
| 2.5066 | 33.0 | 33 | 3.6632 |
| 2.0527 | 34.0 | 34 | 2.9892 |
| 2.6207 | 35.0 | 35 | 3.5594 |
| 2.296 | 36.0 | 36 | 2.3785 |
| 2.4068 | 37.0 | 37 | 3.6126 |
| 2.257 | 38.0 | 38 | 1.0477 |
| 2.0597 | 39.0 | 39 | 1.5386 |
| 2.1702 | 40.0 | 40 | 2.4686 |
9c3edf75f5a3b5e6f02c4511e19583f2
mit
['generated_from_trainer']
false
thesis-freeform

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.6933
- Accuracy: 0.4636
400e3699d886e151e3c7ea7d132b5d37
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
c95c76513d724c316265722c5efdf525
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6922 | 1.0 | 5684 | 0.6928 | 0.4636 |
| 0.6946 | 2.0 | 11368 | 0.6918 | 0.4636 |
| 0.692 | 3.0 | 17052 | 0.6949 | 0.4636 |
| 0.6901 | 4.0 | 22736 | 0.6933 | 0.4636 |
542eacba3e5f01bc142e5ad6fcd2c2ad
apache-2.0
['generated_from_trainer']
false
wav2vec2_common_voice_accents_6

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set:
- Loss: 0.3711
e0e7eec6c1de1531e72e0e109f7f7e52
mit
['qa', 'classification', 'question', 'answering', 'SQuAD', 'metric', 'nlg', 't5-small']
false
Model description

This model is a *Classifier* based on T5-small that predicts whether an answer/question pair counts as an important fact (i.e., is the answer relevant enough to appear in a plausible summary?). It is a component of the [QuestEval](https://github.com/ThomasScialom/QuestEval) metric but can also be used independently as-is.
a97ae4dc621a95dd38771e6526db1138
mit
['qa', 'classification', 'question', 'answering', 'SQuAD', 'metric', 'nlg', 't5-small']
false
How to use

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("ThomasNLG/t5-weighter_cnndm-en")
model = T5ForConditionalGeneration.from_pretrained("ThomasNLG/t5-weighter_cnndm-en")
```

You can play with the model using the inference API; the text input format should follow this template (matching the training setup of the model):

`text_input = "{ANSWER} </s> {QUESTION} </s> {CONTEXT}"`
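For illustration, the template from the card can be filled with an ordinary f-string; the answer, question, and context values below are made up:

```python
def weighter_input(answer, question, context):
    """Builds the input string in the card's template:
    '{ANSWER} </s> {QUESTION} </s> {CONTEXT}'."""
    return f"{answer} </s> {question} </s> {context}"

print(weighter_input("Paris",
                     "What is the capital of France?",
                     "Paris is the capital of France."))
```

The resulting string is what you would tokenize and pass to `model.generate`.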
b4c5259634a93d8641c6ea5c6ffdce7a
mit
['qa', 'classification', 'question', 'answering', 'SQuAD', 'metric', 'nlg', 't5-small']
false
Citation info

```bibtex
@article{scialom2021questeval,
  title={Questeval: Summarization asks for fact-based evaluation},
  author={Scialom, Thomas and Dray, Paul-Alexis and Gallinari, Patrick and Lamprier, Sylvain and Piwowarski, Benjamin and Staiano, Jacopo and Wang, Alex},
  journal={arXiv preprint arXiv:2103.12693},
  year={2021}
}
```
fc8613b6a9dee4d460ac6f08aa8c7f72
apache-2.0
['generated_from_trainer']
false
finetuned-mlm_medium

This model is a fine-tuned version of [muhtasham/bert-medium-mlm-finetuned-emotion](https://huggingface.co/muhtasham/bert-medium-mlm-finetuned-emotion) on the imdb dataset. It achieves the following results on the evaluation set:
- Loss: 0.2805
- Accuracy: 0.9542
- F1: 0.9765
8e4a75959596c8b477de821ffac846cc
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
923529399c18e0eeaa9d36f519eebd4b
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2318 | 2.55 | 500 | 0.1428 | 0.9512 | 0.9750 |
| 0.0777 | 5.1 | 1000 | 0.1976 | 0.9513 | 0.9750 |
| 0.0362 | 7.65 | 1500 | 0.2704 | 0.9388 | 0.9684 |
| 0.0234 | 10.2 | 2000 | 0.2245 | 0.9578 | 0.9784 |
| 0.0181 | 12.76 | 2500 | 0.3703 | 0.9310 | 0.9643 |
| 0.0158 | 15.31 | 3000 | 0.6137 | 0.9001 | 0.9474 |
| 0.013 | 17.86 | 3500 | 0.2805 | 0.9542 | 0.9765 |
0b5d10fa0cf12d6a4ac342250bb27875
apache-2.0
['automatic-speech-recognition', 'ja']
false
exp_w2v2t_ja_xls-r_s941 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
8f30b2fc5b099e2e877a17cc17e55479
mit
['generated_from_keras_callback']
false
topic_classification_04

This model is a fine-tuned version of [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.8325
- Train Sparse Categorical Accuracy: 0.7237
- Epoch: 9
609e835acc7bd8adf0a982ecdbfa5baa
mit
['generated_from_keras_callback']
false
Training results

| Train Loss | Train Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:-----:|
| 1.0735 | 0.6503 | 0 |
| 0.9742 | 0.6799 | 1 |
| 0.9424 | 0.6900 | 2 |
| 0.9199 | 0.6970 | 3 |
| 0.9016 | 0.7026 | 4 |
| 0.8853 | 0.7073 | 5 |
| 0.8707 | 0.7120 | 6 |
| 0.8578 | 0.7160 | 7 |
| 0.8448 | 0.7199 | 8 |
| 0.8325 | 0.7237 | 9 |
5851500aea21078c7a3f05f1f53674b5
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-all

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1574
- F1: 0.8504
4002ebb6e11cc8cd83c17e797a95eb02
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 179 | 0.1897 | 0.8147 |
| No log | 2.0 | 358 | 0.1624 | 0.8394 |
| No log | 3.0 | 537 | 0.1574 | 0.8504 |
418382b932d72e2eb484f3026d1011b0
apache-2.0
['generated_from_trainer']
false
convnext-tiny-224_album_vit

This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 2.3898
- Accuracy: 0.4912
8e4f5348bd33cd8299c1c205227aa8bb
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
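As a sanity check, the reported `total_train_batch_size` is just the per-device batch size multiplied by the gradient-accumulation factor (assuming a single device, which the card does not state explicitly):

```python
# Gradients from 4 consecutive micro-batches of 64 are accumulated
# before each optimizer step, giving an effective batch of 256.
train_batch_size = 64
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 256
```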
b07d33ba0fd838af3468fadafdf91fab
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.6659 | 1.0 | 944 | 3.5335 | 0.2607 |
| 2.8174 | 2.0 | 1888 | 2.6391 | 0.4418 |
| 2.4959 | 3.0 | 2832 | 2.3898 | 0.4912 |
2aca145884c12dfd9a1787ba80547595
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-timit-demo-colab70

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.7439
- Wer: 0.5149
3bdd3d20c68d174f13ef5b1d9dee32a1
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8646 | 7.04 | 500 | 3.1467 | 1.0 |
| 1.678 | 14.08 | 1000 | 0.8738 | 0.6511 |
| 0.5083 | 21.13 | 1500 | 0.7404 | 0.5504 |
| 0.2923 | 28.17 | 2000 | 0.7439 | 0.5149 |
08b9de78c219e7a6dbcc2e7f47974029
apache-2.0
['generated_from_trainer']
false
olm-bert-tiny-december-2022-target-glue-mrpc

This model is a fine-tuned version of [muhtasham/olm-bert-tiny-december-2022](https://huggingface.co/muhtasham/olm-bert-tiny-december-2022) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.9243
- Accuracy: 0.6299
- F1: 0.7146
28f87e6c802588247601808fdafdff99
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6093 | 4.35 | 500 | 0.5848 | 0.7034 | 0.7980 |
| 0.5487 | 8.7 | 1000 | 0.5863 | 0.7206 | 0.8087 |
| 0.4724 | 13.04 | 1500 | 0.6881 | 0.6544 | 0.7294 |
| 0.3752 | 17.39 | 2000 | 0.7549 | 0.6520 | 0.7331 |
| 0.276 | 21.74 | 2500 | 0.9243 | 0.6299 | 0.7146 |
371262089ac25aaba8203c5ed581a89c
apache-2.0
['classical chinese', 'literary chinese', 'ancient chinese', 'token-classification', 'pos', 'dependency-parsing']
false
Model Description

This is a RoBERTa model pre-trained on Classical Chinese texts for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [roberta-classical-chinese-large-char](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-large-char) and [UD_Classical_Chinese-Kyoto](https://github.com/UniversalDependencies/UD_Classical_Chinese-Kyoto).
7ef6cfb76d84aa6ea2f2d8e6648428b3
apache-2.0
['classical chinese', 'literary chinese', 'ancient chinese', 'token-classification', 'pos', 'dependency-parsing']
false
```py
text = "+text+"\n"
    v=[(s,e) for s,e in w["offset_mapping"] if s<e]
    for i,(s,e) in enumerate(v,1):
      q=self.model.config.id2label[p[i,h[i]]].split("|")
      u+="\t".join([str(i),text[s:e],"_",q[0],"_","|".join(q[1:-1]),str(h[i]),q[-1],"_","_" if i<len(v) and e<v[i][0] else "SpaceAfter=No"])+"\n"
    return u+"\n"

nlp=UDgoeswith("KoichiYasuoka/roberta-classical-chinese-large-ud-goeswith")
print(nlp("孟子見梁惠王"))
```

with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/).

Or without ufal.chu-liu-edmonds:

```
from transformers import pipeline
nlp=pipeline("universal-dependencies","KoichiYasuoka/roberta-classical-chinese-large-ud-goeswith",trust_remote_code=True,aggregation_strategy="simple")
print(nlp("孟子見梁惠王"))
```
3157f5ed6922a3712f1d1e870ba31706
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-marc-en

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set:
- Loss: 0.9276
- Mae: 0.5366
bab9cfe2218367bf2b89d1beed0d1667
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0992 | 1.0 | 235 | 0.9340 | 0.5122 |
| 0.945 | 2.0 | 470 | 0.9276 | 0.5366 |
27475196af571efa63627877d0225ac8
apache-2.0
['automatic-speech-recognition', 'nl']
false
exp_w2v2t_nl_vp-fr_s226 Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (nl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
96df27dd3e4d90898150f17fb0bc5da1
mit
[]
false
Wildkat on Stable Diffusion This is the `<wildkat>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<wildkat> 0](https://huggingface.co/sd-concepts-library/wildkat/resolve/main/concept_images/7.jpeg) ![<wildkat> 1](https://huggingface.co/sd-concepts-library/wildkat/resolve/main/concept_images/1.jpeg) ![<wildkat> 2](https://huggingface.co/sd-concepts-library/wildkat/resolve/main/concept_images/2.jpeg) ![<wildkat> 3](https://huggingface.co/sd-concepts-library/wildkat/resolve/main/concept_images/8.jpeg) ![<wildkat> 4](https://huggingface.co/sd-concepts-library/wildkat/resolve/main/concept_images/0.jpeg) ![<wildkat> 5](https://huggingface.co/sd-concepts-library/wildkat/resolve/main/concept_images/3.jpeg) ![<wildkat> 6](https://huggingface.co/sd-concepts-library/wildkat/resolve/main/concept_images/4.jpeg) ![<wildkat> 7](https://huggingface.co/sd-concepts-library/wildkat/resolve/main/concept_images/5.jpeg) ![<wildkat> 8](https://huggingface.co/sd-concepts-library/wildkat/resolve/main/concept_images/6.jpeg)
b6d10da45212b52602c42e13381d0a33
apache-2.0
['translation']
false
fra-vie * source group: French * target group: Vietnamese * OPUS readme: [fra-vie](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-vie/README.md) * source language(s): fra * target language(s): vie * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-vie/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-vie/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-vie/opus-2020-06-17.eval.txt)
c619134a65152db7df913fe0a3c702be
apache-2.0
['translation']
false
System Info: - hf_name: fra-vie - source_languages: fra - target_languages: vie - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-vie/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['fr', 'vi'] - src_constituents: {'fra'} - tgt_constituents: {'vie', 'vie_Hani'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-vie/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-vie/opus-2020-06-17.test.txt - src_alpha3: fra - tgt_alpha3: vie - short_pair: fr-vi - chrF2_score: 0.486 - bleu: 31.1 - brevity_penalty: 0.985 - ref_len: 13219.0 - src_name: French - tgt_name: Vietnamese - train_date: 2020-06-17 - src_alpha2: fr - tgt_alpha2: vi - prefer_old: False - long_pair: fra-vie - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
d70f79df170dc9baee9317cb81849bbc
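The `brevity_penalty: 0.985` entry above comes from BLEU's length penalty, which discounts hypotheses shorter than the reference. A minimal sketch of that term (lengths below are illustrative, not the test-set counts):

```python
import math

# Hedged sketch of BLEU's brevity penalty: 1.0 for hypotheses at least
# as long as the reference, otherwise an exponential discount.

def brevity_penalty(hyp_len, ref_len):
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

print(brevity_penalty(120, 100))  # long enough -> 1.0
print(brevity_penalty(90, 100))   # 10% too short -> below 1.0
```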
apache-2.0
['generated_from_trainer']
false
finetuned-self_mlm_mini This model is a fine-tuned version of [muhtasham/bert-tiny-mlm-finetuned-imdb](https://huggingface.co/muhtasham/bert-tiny-mlm-finetuned-imdb) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.6150 - Accuracy: 0.8224 - F1: 0.9025
61429ab9d9096416a95e7de867cda2ee
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.4426 | 2.55 | 500 | 0.4673 | 0.7928 | 0.8844 | | 0.2845 | 5.1 | 1000 | 0.3099 | 0.8697 | 0.9303 | | 0.2282 | 7.65 | 1500 | 0.3432 | 0.8589 | 0.9241 | | 0.1819 | 10.2 | 2000 | 0.2702 | 0.8998 | 0.9472 | | 0.1461 | 12.76 | 2500 | 0.4852 | 0.8344 | 0.9097 | | 0.111 | 15.31 | 3000 | 0.6807 | 0.7950 | 0.8858 | | 0.0883 | 17.86 | 3500 | 0.6150 | 0.8224 | 0.9025 |
b2edc5d582146f5006f3b514af14df7c
apache-2.0
['translation']
false
opus-mt-lu-es * source languages: lu * target languages: es * OPUS readme: [lu-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lu-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/lu-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lu-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lu-es/opus-2020-01-16.eval.txt)
efee522a9a9c68d50f3e0e49ea867fca
apache-2.0
['generated_from_trainer']
false
SAM This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3061 - Accuracy: 0.8733 - F1: 0.8742
d838c419f35a28d40b9eaa4e6bb0b236
apache-2.0
['generated_from_trainer']
false
bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0589 - Precision: 0.9329 - Recall: 0.9507 - F1: 0.9417 - Accuracy: 0.9870
8aa95fa9e2a6237f0b4f767a293cc10c
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0867 | 1.0 | 1756 | 0.0639 | 0.9140 | 0.9386 | 0.9261 | 0.9831 | | 0.0398 | 2.0 | 3512 | 0.0586 | 0.9326 | 0.9480 | 0.9402 | 0.9858 | | 0.0212 | 3.0 | 5268 | 0.0589 | 0.9329 | 0.9507 | 0.9417 | 0.9870 |
7afb150ca04c1efe592273de3247d8aa
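For orientation, precision/recall/F1 of the kind reported above can be sketched as follows. Note this toy version scores individual non-`O` tags, whereas the usual CoNLL evaluation (e.g. via seqeval) scores whole entity spans, so it is an approximation for illustration only:

```python
# Hedged sketch of tag-level precision/recall/F1 for NER output.
# Assumption: per-token scoring, not the span-level seqeval scoring
# actually used for the numbers above.

def prf1(gold, pred):
    tp = sum(1 for g, p in zip(gold, pred) if g == p != "O")
    fp = sum(1 for g, p in zip(gold, pred) if p != "O" and g != p)
    fn = sum(1 for g, p in zip(gold, pred) if g != "O" and g != p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = ["B-PER", "I-PER", "O", "B-LOC"]
pred = ["B-PER", "O",     "O", "B-LOC"]
p, r, f = prf1(gold, pred)
print(p, r, f)  # one missed I-PER: precision 1.0, recall 2/3, F1 0.8
```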
apache-2.0
['generated_from_trainer']
false
tiny-classification-fast This model is a fine-tuned version of [cross-encoder/ms-marco-TinyBERT-L-2-v2](https://huggingface.co/cross-encoder/ms-marco-TinyBERT-L-2-v2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8673 - Accuracy: 0.7786
ae0efdc7278b0e8d1270f302f156088f
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2
75580f73c890965ec15c94e6ead91d3a
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.9077 | 1.0 | 785 | 1.0466 | 0.7482 | | 1.0061 | 2.0 | 1570 | 0.8673 | 0.7786 |
a03370b1ba9c99a88e8c19f2cb8c60b3
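The `linear` scheduler above, with no warmup configured, decays the learning rate from 2e-05 to 0 over training. A minimal sketch, using the step counts from the results table (the exact Trainer implementation may differ in small details):

```python
# Hedged sketch of a warmup-free linear learning-rate decay, matching
# the hyperparameters above (lr 2e-05, 2 epochs x 785 steps).

def linear_lr(step, total_steps, base_lr=2e-05):
    """Linearly interpolate from base_lr at step 0 down to 0 at the end."""
    return base_lr * max(0.0, (total_steps - step) / total_steps)

total = 1570  # 2 epochs of 785 steps, per the results table
print(linear_lr(0, total))     # -> 2e-05
print(linear_lr(785, total))   # halfway -> 1e-05
print(linear_lr(1570, total))  # -> 0.0
```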
mit
['generated_from_keras_callback']
false
Sushant45/Adult_contemporary_music-clustered This model is a fine-tuned version of [nandysoham16/15-clustered_aug](https://huggingface.co/nandysoham16/15-clustered_aug) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2951 - Train End Logits Accuracy: 0.9375 - Train Start Logits Accuracy: 0.9028 - Validation Loss: 0.5855 - Validation End Logits Accuracy: 0.7143 - Validation Start Logits Accuracy: 0.8571 - Epoch: 0
95559e9b4413239827d2590a019ac027
mit
['generated_from_keras_callback']
false
Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 0.2951 | 0.9375 | 0.9028 | 0.5855 | 0.7143 | 0.8571 | 0 |
5cfccff4d8915c1deb3cbfb945a6d952
apache-2.0
['generated_from_trainer']
false
finetuned_token_itr0_3e-05_webDiscourse_16_02_2022-20_59_50 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5450 - Precision: 0.0049 - Recall: 0.0146 - F1: 0.0074 - Accuracy: 0.7431
0fc0059fc7671cad2d206f1fe070a0d2
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 10 | 0.6830 | 0.0109 | 0.0323 | 0.0163 | 0.5685 | | No log | 2.0 | 20 | 0.7187 | 0.0256 | 0.0323 | 0.0286 | 0.5668 | | No log | 3.0 | 30 | 0.6839 | 0.0076 | 0.0484 | 0.0131 | 0.5848 | | No log | 4.0 | 40 | 0.6988 | 0.0092 | 0.0484 | 0.0155 | 0.5918 | | No log | 5.0 | 50 | 0.7055 | 0.0100 | 0.0484 | 0.0165 | 0.5946 |
8725c954c5c41e785e57d69a96128263
mit
[]
false
crinos form garou on Stable Diffusion This is the `<crinos>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<crinos> 0](https://huggingface.co/sd-concepts-library/crinos-form-garou/resolve/main/concept_images/1.jpeg) ![<crinos> 1](https://huggingface.co/sd-concepts-library/crinos-form-garou/resolve/main/concept_images/0.jpeg) ![<crinos> 2](https://huggingface.co/sd-concepts-library/crinos-form-garou/resolve/main/concept_images/2.jpeg) ![<crinos> 3](https://huggingface.co/sd-concepts-library/crinos-form-garou/resolve/main/concept_images/3.jpeg)
af08c88a9bfe9ac9a52d173a3a5962a9
apache-2.0
['tabular-classification', 'baseline-trainer']
false
Baseline Model trained on trainii_ac94u to apply classification on label **Metrics of the best model (LogisticRegression(C=0.1, class_weight='balanced', max_iter=1000)):** accuracy 0.361046, recall_macro 0.353192, precision_macro 0.240667, f1_macro 0.278231
7840e558e7aef827f64c76b902c80644
apache-2.0
['generated_from_trainer']
false
finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3144 - Accuracy: 0.8667 - F1: 0.8667
25e16473731699b83df3820aadd6a6ef
mit
['generated_from_trainer']
false
Bio_ClinicalBERT_fold_7_ternary_v1 This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9612 - F1: 0.7939
2468caada19e1a3309bf85b1f8299c26
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 291 | 0.5762 | 0.7593 | | 0.5434 | 2.0 | 582 | 0.5577 | 0.7939 | | 0.5434 | 3.0 | 873 | 0.6501 | 0.7951 | | 0.2198 | 4.0 | 1164 | 0.8661 | 0.7939 | | 0.2198 | 5.0 | 1455 | 1.1493 | 0.7900 | | 0.0953 | 6.0 | 1746 | 1.1999 | 0.7977 | | 0.0375 | 7.0 | 2037 | 1.4623 | 0.7759 | | 0.0375 | 8.0 | 2328 | 1.4526 | 0.7900 | | 0.0246 | 9.0 | 2619 | 1.6915 | 0.7734 | | 0.0246 | 10.0 | 2910 | 1.6097 | 0.7913 | | 0.0113 | 11.0 | 3201 | 1.7091 | 0.8015 | | 0.0113 | 12.0 | 3492 | 1.7252 | 0.7990 | | 0.0103 | 13.0 | 3783 | 1.7305 | 0.8015 | | 0.0079 | 14.0 | 4074 | 1.7932 | 0.8003 | | 0.0079 | 15.0 | 4365 | 1.7800 | 0.8028 | | 0.0071 | 16.0 | 4656 | 1.7000 | 0.7977 | | 0.0071 | 17.0 | 4947 | 1.8342 | 0.8003 | | 0.0077 | 18.0 | 5238 | 1.8517 | 0.7990 | | 0.0044 | 19.0 | 5529 | 1.8633 | 0.7964 | | 0.0044 | 20.0 | 5820 | 1.8813 | 0.7926 | | 0.0028 | 21.0 | 6111 | 1.8914 | 0.7964 | | 0.0028 | 22.0 | 6402 | 1.9412 | 0.7926 | | 0.0043 | 23.0 | 6693 | 1.9760 | 0.7939 | | 0.0043 | 24.0 | 6984 | 1.9509 | 0.7977 | | 0.0002 | 25.0 | 7275 | 1.9612 | 0.7939 |
933dc844d6666bdb06bd5d8654a62642
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 512 - eval_batch_size: 1024 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 - mixed_precision_training: Native AMP
4396afaca4f36f23e2f4f30c6844c190
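The `cosine` scheduler with `lr_scheduler_warmup_ratio: 0.1` above ramps the learning rate up linearly over the first 10% of steps, then follows a half-cosine decay to 0. A minimal sketch (illustrative step count; the Trainer's implementation may differ in detail):

```python
import math

# Hedged sketch of cosine learning-rate scheduling with a warmup ratio,
# matching the hyperparameters above (base lr 1e-4, warmup_ratio 0.1).

def cosine_with_warmup(step, total_steps, base_lr=1e-4, warmup_ratio=0.1):
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / warmup_steps        # linear ramp-up
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

total = 1000  # illustrative step count
print(cosine_with_warmup(0, total))     # -> 0.0
print(cosine_with_warmup(100, total))   # end of warmup -> peak 1e-04
print(cosine_with_warmup(1000, total))  # fully decayed -> 0.0
```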
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2532 - F1: 0.8222
da9d4175073a0d64e72e71ffc527b758
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.8114 | 1.0 | 70 | 0.3235 | 0.7548 | | 0.2825 | 2.0 | 140 | 0.2749 | 0.7913 | | 0.1932 | 3.0 | 210 | 0.2532 | 0.8222 |
86a33213383ee09fc26c2a708a6a8b49
mit
['conversational']
false
Chinese pre-trained dialogue model (CDial-GPT) This project provides a large-scale Chinese GPT model pre-trained on the dataset [LCCC](https://huggingface.co/datasets/silver/lccc). We present a series of Chinese GPT models that are first pre-trained on a Chinese novel dataset and then post-trained on our LCCC dataset. Similar to [TransferTransfo](https://arxiv.org/abs/1901.08149), we concatenate all dialogue histories into one context sentence, and use this sentence to predict the response. The input of our model consists of word embedding, speaker embedding, and positional embedding of each word. Paper: [A Large-Scale Chinese Short-Text Conversation Dataset](https://arxiv.org/pdf/2008.03946.pdf)
0fec8f01c33049bf70ca295ea3841de7
mit
['conversational']
false
How to use ```python from transformers import OpenAIGPTLMHeadModel, GPT2LMHeadModel, BertTokenizer import torch tokenizer = BertTokenizer.from_pretrained("thu-coai/CDial-GPT_LCCC-large") model = OpenAIGPTLMHeadModel.from_pretrained("thu-coai/CDial-GPT_LCCC-large") ``` For more details, please refer to our [repo](https://github.com/thu-coai/CDial-GPT) on GitHub.
549fa7dc1480a71c1839a78d2af4513c
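As described above, the input concatenates all dialogue histories with speaker information so that each position also carries a speaker id for the speaker embedding. A minimal sketch of how such a sequence could be assembled; the special-token names and the `[CLS]` handling are assumptions for illustration, and the repo's preprocessing is authoritative:

```python
# Hedged sketch of TransferTransfo-style input assembly: turns are
# concatenated with alternating speaker tokens, and a parallel list of
# speaker ids drives the speaker embedding. Token names are illustrative.

def build_input(history, speaker1="[speaker1]", speaker2="[speaker2]"):
    tokens, speaker_ids = ["[CLS]"], [speaker2]  # [CLS] speaker choice is arbitrary here
    for i, turn in enumerate(history):
        spk = speaker1 if i % 2 == 0 else speaker2
        for tok in [spk] + list(turn):  # character-level tokens for Chinese
            tokens.append(tok)
            speaker_ids.append(spk)
    return tokens, speaker_ids

tokens, speakers = build_input(["你好", "你好呀"])
print(tokens)  # [CLS], then speaker1's turn, then speaker2's turn
```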
apache-2.0
['automatic-speech-recognition']
false
This repository contains a number of experiments for the [PSST Challenge](https://psst.study/). As the test set is unavailable, all numbers are based on the validation set. The experiments in the tables below were finetuned on [Wav2vec 2.0 Base, No finetuning](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec). Our overall best performing model (**FER:** 9\.2%, **PER:** 21\.0%) was based on [Wav2vec 2.0 Large, No finetuning](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec) (git tag: `larger-rir`), with the TIMIT subset augmented with Room Impulse Response, following the experiments below, which were run on the base model.
d35664e916214b9e3ed77791cb2d1ba7
apache-2.0
['automatic-speech-recognition']
false
Augmented TIMIT subset Using a subset of TIMIT that could map easily to the phoneset used by the PSST Challenge data (a list of IDs is in the repository), we experimented with augmenting the data to better match the PSST data. The best results were obtained using Room Impulse Response (tag: `rir`). | **Augmentation** | **FER** | **PER** | **Git tag** | | :----------------------------------------------- | :-------- | :--------- | :---------------------------------- | | unaugmented | 10\.2% | 22\.5% | huggingface-unaugmented | | Gaussian noise | 10\.0% | 22\.1% | gaussian | | Pitchshift | 9\.6% | 22\.9% | pitchshift | | RIR | **9\.6%** | **21\.8%** | rir | | Time stretch | 10\.1% | 22\.8% | timestretch | | Gaussian noise + RIR | 10\.0% | 23\.4% | gaussian-rir | | Pitchshift + Gaussian noise | 9\.9% | 22\.9% | pitchshift-gaussian | | Pitchshift + RIR | 9\.9% | 22\.8% | pitchshift-rir | | Time stretch + Gaussian noise | 10\.2% | 22\.8% | timestretch-gaussian | | Time stretch + Pitchshift | 9\.8% | 22\.0% | timestretch-pitchshift | | Time stretch + RIR | 9\.7% | 22\.2% | timestretch-rir | | Pitchshift + Gaussian noise + RIR | 10\.1% | 23\.5% | pitchshift-gaussian-rir | | Time stretch + Gaussian noise + RIR | 9\.7% | 22\.3% | timestretch-gaussian-rir | | Time stretch + Pitchshift + Gaussian noise | 10\.2% | 22\.9% | timestretch-pitchshift-gaussian | | Time stretch + Pitchshift + RIR | 10\.2% | 22\.5% | timestretch-pitchshift-rir | | Time stretch + Pitchshift + Gaussian noise + RIR | 10\.9% | 24\.1% | timestretch-pitchshift-gaussian-rir |
468af9a99d69bff2d805bb6e5a5061ef
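The FER/PER figures above are edit-distance error rates over phone sequences. A minimal sketch of PER (toy phone sequences, not challenge data):

```python
# Hedged sketch of phoneme error rate (PER): Levenshtein distance between
# the hypothesis and reference phone sequences, normalised by reference length.

def edit_distance(ref, hyp):
    """Classic dynamic-programming Levenshtein distance over sequences."""
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # deleting i reference items
    for j in range(n + 1):
        d[0][j] = j  # inserting j hypothesis items
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + cost)  # substitution/match
    return d[m][n]

def per(ref, hyp):
    return edit_distance(ref, hyp) / len(ref)

ref = ["HH", "AH", "L", "OW"]
hyp = ["HH", "AH", "OW"]
print(per(ref, hyp))  # one deletion over four phones -> 0.25
```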
apache-2.0
['automatic-speech-recognition']
false
LM experiments We experimented with a number of language model configurations, combining the data from the PSST challenge, the subset of TIMIT we used, and CMUdict. We tried combining CMUdict data in a number of ways: unmodified, with a silence token added at the start of the pronunciation, at the end, and at both the start and the end. The best result was from a 5-gram model, with silences added at the end of the CMUdict data (git tag: `lm-nosil-cmudict-sile.5`). Evaluation was performed using scripts provided by the PSST Challenge's organisers, so there are no scripts in place to automatically use the LM with the transformers library. | | **n-gram** | **FER** | **PER** | **Tag** | | :----------------------------- | :--------- | :--------- | :--------- | :--------- | | Baseline + TIMIT | --- | **10\.2%** | 22\.5% | huggingface-unaugmented | | All silences | 4 | 10\.5% | 23\.0% | lm-allsil.4 | | | 5 | 10\.5% | 22\.6% | lm-allsil.5 | | | 6 | 10\.3% | 22\.3% | lm-allsil.6 | | No silences | 4 | 10\.3% | 22\.6% | lm-nosil.4 | | | 5 | **10\.2%** | 22\.2% | lm-nosil.5 | | | 6 | **10\.2%** | 22\.4% | lm-nosil.6 | | PSST and TIMIT without silence | | | | | | Unmodified CMUdict | 4 | 10\.3% | 22\.6% | lm-nosil-cmudict-nosil.4 | | | 5 | 10\.2% | 22\.2% | lm-nosil-cmudict-nosil.5 | | | 6 | **10\.2%** | 22\.4% | lm-nosil-cmudict-nosil.6 | | CMUdict-end | 4 | 10\.3% | 22\.6% | lm-nosil-cmudict-sile.4 | | | 5 | **10\.2%** | **22\.1%** | lm-nosil-cmudict-sile.5 | | | 6 | **10\.2%** | 22\.3% | lm-nosil-cmudict-sile.6 | | CMUdict-start | 4 | 10\.4% | 22\.6% | lm-nosil-cmudict-sils.4 | | | 5 | 10\.3% | 22\.4% | lm-nosil-cmudict-sils.5 | | | 6 | 10\.3% | 22\.3% | lm-nosil-cmudict-sils.6 | | CMUdict-both | 4 | 10\.4% | 22\.7% | lm-nosil-cmudict-silb.4 | | | 5 | 10\.4% | 22\.3% | lm-nosil-cmudict-silb.5 | | | 6 | 10\.3% | 22\.3% | lm-nosil-cmudict-silb.6 | | Unmodified PSST and TIMIT | | | | | | Unmodified CMUdict | 4 | 10\.3% | 22\.8% | lm-orig-cmudict-nosil.4 | | | 5 | 10\.3% | 22\.4% | lm-orig-cmudict-nosil.5 | | | 6 | **10\.2%** | 22\.4% | lm-orig-cmudict-nosil.6 | | CMUdict-end | 4 | 10\.3% | 22\.7% | lm-orig-cmudict-sile.4 | | | 5 | **10\.2%** | 22\.2% | lm-orig-cmudict-sile.5 | | | 6 | **10\.2%** | 22\.3% | lm-orig-cmudict-sile.6 | | CMUdict-start | 4 | 10\.5% | 22\.8% | lm-orig-cmudict-sils.4 | | | 5 | 10\.4% | 22\.5% | lm-orig-cmudict-sils.5 | | | 6 | 10\.3% | 22\.4% | lm-orig-cmudict-sils.6 | | CMUdict-both | 4 | 10\.5% | 22\.8% | lm-orig-cmudict-silb.4 | | | 5 | 10\.4% | 22\.4% | lm-orig-cmudict-silb.5 | | | 6 | 10\.4% | 22\.4% | lm-orig-cmudict-silb.6 |
84fd130525a26c50d0af490c9a2ad49d
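The LMs above are standard n-gram models, and the CMUdict variants differ only in where a silence token is inserted into each dictionary pronunciation. A minimal sketch of bigram counting with the `CMUdict-end` convention (toy pronunciations; the real models were 4- to 6-gram LMs over PSST, TIMIT, and CMUdict):

```python
from collections import Counter

# Hedged sketch of n-gram counting with an end-of-pronunciation silence
# token, illustrating the "CMUdict-end" configuration above. Data is toy.

SIL = "sil"

def ngram_counts(sequences, n=2, add_end_sil=True):
    counts = Counter()
    for seq in sequences:
        seq = list(seq) + ([SIL] if add_end_sil else [])
        for i in range(len(seq) - n + 1):
            counts[tuple(seq[i:i + n])] += 1
    return counts

prons = [["K", "AE", "T"], ["K", "AA", "T"]]  # two toy pronunciations
counts = ngram_counts(prons, n=2)
print(counts[("K", "AE")])  # -> 1
print(counts[("T", SIL)])   # both pronunciations end in T-sil -> 2
```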
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2213 - Accuracy: 0.9255 - F1: 0.9255
79346e27fc640f5f7327a0bf42303f1e
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8391 | 1.0 | 250 | 0.3177 | 0.9035 | 0.9006 | | 0.2526 | 2.0 | 500 | 0.2213 | 0.9255 | 0.9255 |
9e39edd7a12dc84bf3f77e2ddfdc10ae
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8051 - Matthews Correlation: 0.5338
d75b4051be9b4d2973ddec55c58d5d74
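The Matthews correlation reported for CoLA above is computed from binary confusion-matrix counts. A minimal sketch (toy predictions, not the GLUE evaluation data):

```python
import math

# Hedged sketch of the Matthews correlation coefficient (MCC) for binary
# labels, as used for CoLA. Predictions below are illustrative only.

def matthews_corrcoef(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(matthews_corrcoef([1, 1, 0, 0], [1, 1, 0, 0]))  # perfect -> 1.0
print(matthews_corrcoef([1, 1, 0, 0], [1, 0, 1, 0]))  # chance-level -> 0.0
```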