Dataset columns:
- license: string (length 2–30)
- tags: string (length 2–513)
- is_nc: bool (1 class)
- readme_section: string (length 201–597k)
- hash: string (length 32–32)
apache-2.0
['minds14', 'google/xtreme_s', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 2.6739        | 5.41  | 200  | 2.5687          | 0.0430 | 0.1190   |
| 1.4953        | 10.81 | 400  | 1.6052          | 0.5550 | 0.5692   |
| 0.6177        | 16.22 | 600  | 0.7927          | 0.8052 | 0.8011   |
| 0.3609        | 21.62 | 800  | 0.5679          | 0.8609 | 0.8609   |
| 0.4972        | 27.03 | 1000 | 0.5944          | 0.8509 | 0.8523   |
| 0.1799        | 32.43 | 1200 | 0.6194          | 0.8623 | 0.8621   |
| 0.1308        | 37.84 | 1400 | 0.5956          | 0.8569 | 0.8548   |
| 0.2298        | 43.24 | 1600 | 0.5201          | 0.8732 | 0.8743   |
| 0.0052        | 48.65 | 1800 | 0.3826          | 0.9106 | 0.9103   |
52d72b0a41f2f2f632b4b8f743abdd69
apache-2.0
['speechbrain', 'embeddings', 'Speaker', 'Verification', 'Identification', 'pytorch', 'ECAPA', 'TDNN']
false
Speaker Verification with ECAPA-TDNN embeddings on Zaion This repository provides all the necessary tools to perform speaker verification with a pretrained ECAPA-TDNN model using SpeechBrain. The system can also be used to extract speaker embeddings. It is trained on VoxCeleb1 + VoxCeleb2 training data. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The model performance on the VoxCeleb1-test set (Cleaned) is:
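The verification decision sketched above boils down to cosine scoring between two speaker embeddings against a decision threshold. A minimal illustration (pure NumPy, not the SpeechBrain API; the toy embeddings and the 0.25 threshold are made-up values):

```python
import numpy as np

def cosine_score(emb_a, emb_b):
    """Cosine similarity between two speaker embeddings."""
    a, b = np.asarray(emb_a, dtype=float), np.asarray(emb_b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_speaker(emb_a, emb_b, threshold=0.25):
    """Accept the verification trial if the cosine score exceeds the threshold."""
    return cosine_score(emb_a, emb_b) > threshold

# Toy 4-dimensional embeddings (real ECAPA-TDNN embeddings are much larger).
enroll = [0.9, 0.1, 0.0, 0.4]
test_same = [0.8, 0.2, 0.1, 0.5]
test_diff = [-0.7, 0.6, -0.1, 0.0]
print(same_speaker(enroll, test_same))   # high score -> accept
print(same_speaker(enroll, test_diff))   # low score  -> reject
```

In practice the threshold is tuned on a held-out trial list (e.g. to the equal error rate operating point).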
0b5285dd449951cd6adaf502e52a297f
apache-2.0
['speechbrain', 'embeddings', 'Speaker', 'Verification', 'Identification', 'pytorch', 'ECAPA', 'TDNN']
false
Install SpeechBrain First of all, please install SpeechBrain with the following commands:
```
gh repo clone aheba/speechbrain-aheba-contribs
git checkout pretrain_new
pip install -r requirements.txt
pip install --editable .
```
Please note that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io).
60550959d728ef755e697a6cac998f68
apache-2.0
['speechbrain', 'embeddings', 'Speaker', 'Verification', 'Identification', 'pytorch', 'ECAPA', 'TDNN']
false
Compute your speaker embeddings
```python
import torch
import torchaudio
from speechbrain.pretrained import Pretrained

classifier = Pretrained.import_model(source="aheba31/test-predictor", pymodule_file="inference.py", class_name="EncoderClassifier")
print(classifier.classify_file("/workspace/contributions/test/spkrec-ecapa-voxceleb/example1.wav"))
```
3516e10ef8580d4777932f5128a581d9
apache-2.0
['speechbrain', 'embeddings', 'Speaker', 'Verification', 'Identification', 'pytorch', 'ECAPA', 'TDNN']
false
**Citing SpeechBrain** Please cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
  title={{SpeechBrain}: A General-Purpose Speech Toolkit},
  author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
  year={2021},
  eprint={2106.04624},
  archivePrefix={arXiv},
  primaryClass={eess.AS},
  note={arXiv:2106.04624}
}
```
066a3799ff2449b900a2dc7331feeeea
apache-2.0
['image-classification', 'timm']
false
Model card for convnext_tiny.in12k_ft_in1k_384 A ConvNeXt image classification model. Pretrained in `timm` on ImageNet-12k (an 11,821-class subset of the full ImageNet-22k) and fine-tuned on ImageNet-1k by Ross Wightman. ImageNet-12k training was done on TPUs thanks to the support of the [TRC](https://sites.research.google/trc/about/) program. Fine-tuning was performed on 8x GPU [Lambda Labs](https://lambdalabs.com/) cloud instances.
894901dac49af37f79bec0fa0087c1b0
apache-2.0
['image-classification', 'timm']
false
Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 28.6
  - GMACs: 13.1
  - Activations (M): 39.5
  - Image size: 384 x 384
- **Papers:**
  - A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- **Original:** https://github.com/rwightman/pytorch-image-models
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-12k
231081618eb23decdceb72be56350760
apache-2.0
['image-classification', 'timm']
false
Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model('convnext_tiny.in12k_ft_in1k_384', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1
```
266d33f92bd4cea34c0f8c142a091b05
apache-2.0
['image-classification', 'timm']
false
Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'convnext_tiny.in12k_ft_in1k_384',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # a list of feature maps, one per stage
```
4ce761543eaf124fb08137829d6e0023
apache-2.0
['image-classification', 'timm']
false
Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'convnext_tiny.in12k_ft_in1k_384',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor
```
419c3d873444926cee323a97731b115d
mit
['feature-extraction', 'sentence-similarity', 'sentence-transformers']
false
All MPNet base model (v2) for Semantic Search This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
d8cafa71f5dbb2f5427f71d1bdbb5ba4
mit
['feature-extraction', 'sentence-similarity', 'sentence-transformers']
false
Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-mpnet-base-v2) ------
8cef80ae367593a19d776fdd2e2ff7c6
mit
['feature-extraction', 'sentence-similarity', 'sentence-transformers']
false
Background The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective. We used the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model and fine-tuned it on a dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which of a set of randomly sampled other sentences was actually paired with it in our dataset. We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as guidance from Google's Flax, JAX, and Cloud team members about efficient deep learning frameworks.
b468e7171d6484821f99524180f58ccb
mit
['feature-extraction', 'sentence-similarity', 'sentence-transformers']
false
Intended uses Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. By default, input text longer than 384 word pieces is truncated.
a82a2a951c259e5f72a0c3ef836beb2f
mit
['feature-extraction', 'sentence-similarity', 'sentence-transformers']
false
Pre-training We use the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model. Please refer to the model card for more detailed information about the pre-training procedure.
82b5f4652c51bb97f786c86b12c626ab
mit
['feature-extraction', 'sentence-similarity', 'sentence-transformers']
false
Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for every possible sentence pair in the batch. We then apply the cross-entropy loss by comparing with the true pairs.
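A minimal NumPy sketch of this in-batch contrastive objective (an illustration only, not the training code; the `scale` factor and toy batch are made-up values, and real training used batches of 1024):

```python
import numpy as np

def in_batch_contrastive_loss(emb_a, emb_b, scale=20.0):
    """Cross-entropy over cosine similarities: emb_a[i] should match emb_b[i]."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    logits = scale * (a @ b.T)                    # (batch, batch) cosine similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The true pair for row i is column i, so the loss is the mean negative diagonal.
    return float(-np.mean(np.diag(log_probs)))

rng = np.random.default_rng(0)
anchors = rng.normal(size=(8, 16))
positives = anchors + 0.05 * rng.normal(size=(8, 16))  # near-duplicate "true pairs"
print(in_batch_contrastive_loss(anchors, positives))   # small: pairs align
```

The loss is low when each sentence is most similar to its true partner and higher when pairings are random.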
b47b4fc8f0e65b67ae44a975c884e3a4
mit
['feature-extraction', 'sentence-similarity', 'sentence-transformers']
false
Hyper parameters We trained our model on a TPU v3-8. We trained the model for 100k steps using a batch size of 1024 (128 per TPU core). We used a learning-rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate.
ac2749617d060428860d2cd76ccb43bc
mit
['feature-extraction', 'sentence-similarity', 'sentence-transformers']
false
Training data We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion. We sampled each dataset with a weighted probability whose configuration is detailed in the `data_config.json` file.

| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa
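The weighted-sampling scheme described above can be sketched as follows (the dataset names and weights here are illustrative stand-ins, not the actual `data_config.json` values):

```python
import random

# Hypothetical sampling weights, e.g. proportional to (down-weighted) dataset sizes.
dataset_weights = {
    "reddit_comments": 0.5,
    "s2orc_citations": 0.3,
    "wikianswers": 0.2,
}

def sample_dataset(rng):
    """Pick the dataset to draw the next training pair from."""
    names = list(dataset_weights)
    weights = [dataset_weights[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(42)
draws = [sample_dataset(rng) for _ in range(10_000)]
print(draws.count("reddit_comments") / len(draws))  # close to 0.5
```

Each training batch is then filled with pairs drawn from the sampled datasets, so large corpora contribute more pairs without drowning out small ones.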
98de1b82d59e0a201fd2eb65c614bd85
mit
['generated_from_trainer']
false
deberta-finetuned-ner-connll-late-stop This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the wikiann dataset. It achieves the following results on the evaluation set: - Loss: 0.5259 - Precision: 0.8302 - Recall: 0.8471 - F1: 0.8386 - Accuracy: 0.9229
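As a sanity check, the reported F1 is the harmonic mean of the reported precision and recall:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Evaluation numbers from the model card above.
print(round(f1_score(0.8302, 0.8471), 4))  # 0.8386, matching the reported F1
```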
ad0460f6981247ce14beeb8f3c158102
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3408        | 1.0   | 1875  | 0.3639          | 0.7462    | 0.7887 | 0.7669 | 0.8966   |
| 0.2435        | 2.0   | 3750  | 0.2933          | 0.8104    | 0.8332 | 0.8217 | 0.9178   |
| 0.1822        | 3.0   | 5625  | 0.3034          | 0.8147    | 0.8388 | 0.8266 | 0.9221   |
| 0.1402        | 4.0   | 7500  | 0.3667          | 0.8275    | 0.8474 | 0.8374 | 0.9235   |
| 0.1013        | 5.0   | 9375  | 0.4290          | 0.8285    | 0.8448 | 0.8366 | 0.9227   |
| 0.0677        | 6.0   | 11250 | 0.4914          | 0.8259    | 0.8473 | 0.8365 | 0.9231   |
| 0.0439        | 7.0   | 13125 | 0.5259          | 0.8302    | 0.8471 | 0.8386 | 0.9229   |
a97f58ac08857270f0c2a7c51991e3c1
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-timit-demo-idrak-paperspace1 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3623 - Wer: 0.3471
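The Wer figure above is the word error rate: word-level edit distance divided by the number of reference words. A minimal sketch (for illustration; ASR toolkits use equivalent but optimized implementations):

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # 1 deletion / 6 words
```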
31542c4385a94a2a286fc6a096a808ad
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1034        | 0.87  | 500  | 0.3623          | 0.3471 |
58dc7c22097bbe6ca9012740d77aa94d
mit
['generated_from_trainer']
false
robbert-twitter-sentiment-tokenized This model is a fine-tuned version of [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base) on the dutch_social dataset. It achieves the following results on the evaluation set: - Loss: 0.5473 - Accuracy: 0.814 - F1: 0.8133 - Precision: 0.8131 - Recall: 0.814
2c30a99719dc68eae8204547499b27ec
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.6895        | 1.0   | 282  | 0.6307          | 0.7433   | 0.7442 | 0.7500    | 0.7433 |
| 0.4948        | 2.0   | 564  | 0.5189          | 0.8053   | 0.8062 | 0.8081    | 0.8053 |
| 0.2642        | 3.0   | 846  | 0.5473          | 0.814    | 0.8133 | 0.8131    | 0.814  |
dabc9b27ad802ed4f98925a3fd6fe7fd
apache-2.0
['text reranking']
false
BibTeX entry and citation info
```bibtex
@inproceedings{gao2021lce,
  title={Rethink Training of BERT Rerankers in Multi-Stage Retrieval Pipeline},
  author={Luyu Gao and Zhuyun Dai and Jamie Callan},
  year={2021},
  booktitle={The 43rd European Conference On Information Retrieval (ECIR)},
}
```
56ef2eac9600d1e573b0015f0e414fcb
creativeml-openrail-m
['text-to-image']
false
cybertruck01 Dreambooth model trained by cormacncheese with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample pictures of this concept:
a343bb795f59dfdd682f964071944c37
apache-2.0
['generated_from_trainer']
false
TUF_BERT_5E This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3251 - Accuracy: 0.9467
aa1103b1a4f10f33d317d11ccca42908
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4078 | 0.1 | 50 | 0.2430 | 0.92 |
| 0.2488 | 0.2 | 100 | 0.1465 | 0.94 |
| 0.1966 | 0.3 | 150 | 0.1284 | 0.96 |
| 0.2096 | 0.4 | 200 | 0.2879 | 0.9067 |
| 0.2015 | 0.5 | 250 | 0.1629 | 0.9467 |
| 0.1692 | 0.59 | 300 | 0.2165 | 0.9133 |
| 0.1794 | 0.69 | 350 | 0.1535 | 0.9533 |
| 0.1975 | 0.79 | 400 | 0.1429 | 0.9333 |
| 0.1394 | 0.89 | 450 | 0.2384 | 0.92 |
| 0.191 | 0.99 | 500 | 0.2198 | 0.94 |
| 0.0907 | 1.09 | 550 | 0.1270 | 0.9467 |
| 0.073 | 1.19 | 600 | 0.2016 | 0.94 |
| 0.1594 | 1.29 | 650 | 0.2078 | 0.9267 |
| 0.087 | 1.39 | 700 | 0.3312 | 0.9333 |
| 0.0961 | 1.49 | 750 | 0.3704 | 0.92 |
| 0.1225 | 1.58 | 800 | 0.1686 | 0.9467 |
| 0.0969 | 1.68 | 850 | 0.1525 | 0.9333 |
| 0.0942 | 1.78 | 900 | 0.1924 | 0.94 |
| 0.0681 | 1.88 | 950 | 0.1825 | 0.9467 |
| 0.1295 | 1.98 | 1000 | 0.1360 | 0.9333 |
| 0.0626 | 2.08 | 1050 | 0.2014 | 0.94 |
| 0.0372 | 2.18 | 1100 | 0.2030 | 0.9467 |
| 0.0077 | 2.28 | 1150 | 0.2615 | 0.9467 |
| 0.0393 | 2.38 | 1200 | 0.4256 | 0.9267 |
| 0.0492 | 2.48 | 1250 | 0.3057 | 0.94 |
| 0.0184 | 2.57 | 1300 | 0.1308 | 0.9733 |
| 0.0209 | 2.67 | 1350 | 0.2848 | 0.9467 |
| 0.0328 | 2.77 | 1400 | 0.1862 | 0.96 |
| 0.0333 | 2.87 | 1450 | 0.2347 | 0.96 |
| 0.0527 | 2.97 | 1500 | 0.3855 | 0.9333 |
| 0.0685 | 3.07 | 1550 | 0.3174 | 0.94 |
| 0.0217 | 3.17 | 1600 | 0.2320 | 0.9533 |
| 0.0036 | 3.27 | 1650 | 0.3219 | 0.9333 |
| 0.0015 | 3.37 | 1700 | 0.1649 | 0.9733 |
| 0.0177 | 3.47 | 1750 | 0.3785 | 0.94 |
| 0.0142 | 3.56 | 1800 | 0.1420 | 0.9733 |
| 0.0319 | 3.66 | 1850 | 0.4057 | 0.9333 |
| 0.0254 | 3.76 | 1900 | 0.1824 | 0.96 |
| 0.0092 | 3.86 | 1950 | 0.2400 | 0.9533 |
| 0.0306 | 3.96 | 2000 | 0.2238 | 0.96 |
| 0.0118 | 4.06 | 2050 | 0.2623 | 0.9533 |
| 0.0097 | 4.16 | 2100 | 0.3642 | 0.9467 |
| 0.0132 | 4.26 | 2150 | 0.3235 | 0.9467 |
| 0.0155 | 4.36 | 2200 | 0.3535 | 0.9467 |
| 0.0043 | 4.46 | 2250 | 0.3236 | 0.9467 |
| 0.0004 | 4.55 | 2300 | 0.2984 | 0.9467 |
| 0.009 | 4.65 | 2350 | 0.2941 | 0.9467 |
| 0.0068 | 4.75 | 2400 | 0.2936 | 0.9467 |
| 0.0102 | 4.85 | 2450 | 0.3138 | 0.9467 |
| 0.0015 | 4.95 | 2500 | 0.3251 | 0.9467 |
42522712b32777145a6f54587c09a370
apache-2.0
['generated_from_trainer']
false
mobilebert_sa_GLUE_Experiment_wnli_128 This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE WNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.6907 - Accuracy: 0.5634
b42c46a79d375df3691c6dd38b39efda
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6938        | 1.0   | 5    | 0.6911          | 0.5634   |
| 0.6933        | 2.0   | 10   | 0.6917          | 0.5634   |
| 0.6931        | 3.0   | 15   | 0.6920          | 0.5634   |
| 0.693         | 4.0   | 20   | 0.6915          | 0.5634   |
| 0.693         | 5.0   | 25   | 0.6911          | 0.5634   |
| 0.693         | 6.0   | 30   | 0.6909          | 0.5634   |
| 0.693         | 7.0   | 35   | 0.6907          | 0.5634   |
| 0.693         | 8.0   | 40   | 0.6911          | 0.5634   |
| 0.6931        | 9.0   | 45   | 0.6908          | 0.5634   |
| 0.693         | 10.0  | 50   | 0.6912          | 0.5634   |
| 0.693         | 11.0  | 55   | 0.6918          | 0.5634   |
| 0.693         | 12.0  | 60   | 0.6918          | 0.5634   |
faabb24200c86c5ce5bc4168ed6bd92b
apache-2.0
['whisper-event', 'generated_from_trainer', 'hf-asr-leaderboard']
false
openai/whisper-small This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9750 - Wer: 21.3693
957eeb908100d8a61b92aa07c6d59c8d
apache-2.0
['whisper-event', 'generated_from_trainer', 'hf-asr-leaderboard']
false
Training hyperparameters The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
- mixed_precision_training: Native AMP
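The linear scheduler with 500 warmup steps above can be sketched as follows (a minimal re-implementation for illustration; it mirrors the warmup-then-linear-decay shape of `transformers`' linear schedule rather than being the trainer's code):

```python
def linear_lr(step, base_lr=1e-5, warmup_steps=500, total_steps=10_000):
    """Linear warmup to base_lr, then linear decay to zero."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_lr(250))     # halfway through warmup: half the peak LR
print(linear_lr(500))     # peak learning rate
print(linear_lr(10_000))  # end of training: 0.0
```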
8d5de3635ec778662c566aa2f9d2e511
apache-2.0
['whisper-event', 'generated_from_trainer', 'hf-asr-leaderboard']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer     |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.3559        | 0.1   | 1000  | 0.9147          | 29.3252 |
| 0.3154        | 0.2   | 2000  | 1.1353          | 26.5718 |
| 0.359         | 0.3   | 3000  | 0.9208          | 25.3987 |
| 0.273         | 0.4   | 4000  | 0.9591          | 24.3877 |
| 0.2326        | 0.5   | 5000  | 0.9207          | 21.9052 |
| 0.2992        | 1.04  | 6000  | 0.9445          | 22.4556 |
| 0.2265        | 1.14  | 7000  | 0.9660          | 21.2230 |
| 0.2059        | 1.24  | 8000  | 0.9785          | 20.9551 |
| 0.2239        | 1.34  | 9000  | 0.9637          | 21.6300 |
| 0.2163        | 1.44  | 10000 | 0.9750          | 21.3693 |
888e5e315caab060a2f553d84b39dd11
apache-2.0
['image-classification', 'generated_from_trainer']
false
vit-base-beans This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cifar10 dataset. It achieves the following results on the evaluation set: - Loss: 2.1333 - Accuracy: 0.6224
b1906e6095519085118879b0093514f8
apache-2.0
['image-classification', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 100
1cb432222f2cb404b1824d6dca43d4fe
apache-2.0
['image-classification', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1678        | 0.02  | 100  | 2.1333          | 0.6224   |
762db3736e68590d7b67b988915d8357
apache-2.0
['generated_from_trainer', 'pt', 'robust-speech-event']
false
wav2vec2-large-xlsr-coraa-portuguese-cv7 This model is a fine-tuned version of [Edresson/wav2vec2-large-xlsr-coraa-portuguese](https://huggingface.co/Edresson/wav2vec2-large-xlsr-coraa-portuguese) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.1777 - Wer: 0.1339
4bc1fc5859960fc592b41bfde6ee13ae
apache-2.0
['generated_from_trainer', 'pt', 'robust-speech-event']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4779 | 0.13 | 100 | 0.2620 | 0.2020 |
| 0.4505 | 0.26 | 200 | 0.2339 | 0.1998 |
| 0.4285 | 0.39 | 300 | 0.2507 | 0.2109 |
| 0.4148 | 0.52 | 400 | 0.2311 | 0.2101 |
| 0.4072 | 0.65 | 500 | 0.2278 | 0.1899 |
| 0.388 | 0.78 | 600 | 0.2193 | 0.1898 |
| 0.3952 | 0.91 | 700 | 0.2108 | 0.1901 |
| 0.3851 | 1.04 | 800 | 0.2121 | 0.1788 |
| 0.3496 | 1.17 | 900 | 0.2154 | 0.1776 |
| 0.3063 | 1.3 | 1000 | 0.2095 | 0.1730 |
| 0.3376 | 1.43 | 1100 | 0.2129 | 0.1801 |
| 0.3273 | 1.56 | 1200 | 0.2132 | 0.1776 |
| 0.3347 | 1.69 | 1300 | 0.2054 | 0.1698 |
| 0.323 | 1.82 | 1400 | 0.1986 | 0.1724 |
| 0.3079 | 1.95 | 1500 | 0.2005 | 0.1701 |
| 0.3029 | 2.08 | 1600 | 0.2159 | 0.1644 |
| 0.2694 | 2.21 | 1700 | 0.1992 | 0.1678 |
| 0.2733 | 2.34 | 1800 | 0.2032 | 0.1657 |
| 0.269 | 2.47 | 1900 | 0.2056 | 0.1592 |
| 0.2869 | 2.6 | 2000 | 0.2058 | 0.1616 |
| 0.2813 | 2.73 | 2100 | 0.1868 | 0.1584 |
| 0.2616 | 2.86 | 2200 | 0.1841 | 0.1550 |
| 0.2809 | 2.99 | 2300 | 0.1902 | 0.1577 |
| 0.2598 | 3.12 | 2400 | 0.1910 | 0.1514 |
| 0.24 | 3.25 | 2500 | 0.1971 | 0.1555 |
| 0.2481 | 3.38 | 2600 | 0.1853 | 0.1537 |
| 0.2437 | 3.51 | 2700 | 0.1897 | 0.1496 |
| 0.2384 | 3.64 | 2800 | 0.1842 | 0.1495 |
| 0.2405 | 3.77 | 2900 | 0.1884 | 0.1500 |
| 0.2372 | 3.9 | 3000 | 0.1950 | 0.1548 |
| 0.229 | 4.03 | 3100 | 0.1928 | 0.1477 |
| 0.2047 | 4.16 | 3200 | 0.1891 | 0.1472 |
| 0.2102 | 4.29 | 3300 | 0.1930 | 0.1473 |
| 0.199 | 4.42 | 3400 | 0.1914 | 0.1456 |
| 0.2121 | 4.55 | 3500 | 0.1840 | 0.1437 |
| 0.211 | 4.67 | 3600 | 0.1843 | 0.1403 |
| 0.2072 | 4.8 | 3700 | 0.1836 | 0.1428 |
| 0.2224 | 4.93 | 3800 | 0.1747 | 0.1412 |
| 0.1974 | 5.06 | 3900 | 0.1813 | 0.1416 |
| 0.1895 | 5.19 | 4000 | 0.1869 | 0.1406 |
| 0.1763 | 5.32 | 4100 | 0.1830 | 0.1394 |
| 0.2001 | 5.45 | 4200 | 0.1775 | 0.1394 |
| 0.1909 | 5.58 | 4300 | 0.1806 | 0.1373 |
| 0.1812 | 5.71 | 4400 | 0.1784 | 0.1359 |
| 0.1737 | 5.84 | 4500 | 0.1778 | 0.1353 |
| 0.1915 | 5.97 | 4600 | 0.1777 | 0.1349 |
| 0.1921 | 6.1 | 4700 | 0.1784 | 0.1359 |
| 0.1805 | 6.23 | 4800 | 0.1757 | 0.1348 |
| 0.1742 | 6.36 | 4900 | 0.1771 | 0.1341 |
| 0.1709 | 6.49 | 5000 | 0.1777 | 0.1339 |
022fe27205f8f85c27a16207a797616d
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
companioncube Dreambooth model trained by Wusul with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
d619e6e966fe1dbd675fdc3b4a0b1c91
apache-2.0
['generated_from_trainer']
false
all-roberta-large-v1-utility-6-16-5 This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3728 - Accuracy: 0.3956
0a539f693b3f9ac9d860a91b5842f12f
apache-2.0
['translation']
false
cat-deu
* source group: Catalan
* target group: German
* OPUS readme: [cat-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-deu/README.md)
* model: transformer-align
* source language(s): cat
* target language(s): deu
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-deu/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-deu/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-deu/opus-2020-06-16.eval.txt)
b0614aa9599b6f9cae5e7bd3c306e44a
apache-2.0
['translation']
false
System Info:
- hf_name: cat-deu
- source_languages: cat
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ca', 'de']
- src_constituents: {'cat'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-deu/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-deu/opus-2020-06-16.test.txt
- src_alpha3: cat
- tgt_alpha3: deu
- short_pair: ca-de
- chrF2_score: 0.593
- bleu: 39.5
- brevity_penalty: 1.0
- ref_len: 5643.0
- src_name: Catalan
- tgt_name: German
- train_date: 2020-06-16
- src_alpha2: ca
- tgt_alpha2: de
- prefer_old: False
- long_pair: cat-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
65867da7f9a2b11a7104362386d21898
apache-2.0
['generated_from_trainer']
false
Article_50v6_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article50v6_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.5618 - Precision: 0.0939 - Recall: 0.0192 - F1: 0.0318 - Accuracy: 0.7867
326e492354ad6e13e4cb99ad61df8d42
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 23   | 0.6891          | 0.1429    | 0.0002 | 0.0005 | 0.7772   |
| No log        | 2.0   | 46   | 0.5836          | 0.0796    | 0.0087 | 0.0157 | 0.7822   |
| No log        | 3.0   | 69   | 0.5618          | 0.0939    | 0.0192 | 0.0318 | 0.7867   |
131ff85e0dc7f2afb88ac3c9cd3ca6f6
apache-2.0
['generated_from_trainer']
false
distilgpt2-finetuned-tamilmixsentiment This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.4572
4e9a4ae5d76e447f7435170465ffca6c
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.6438        | 1.0   | 907  | 4.8026          |
| 4.774         | 2.0   | 1814 | 4.5953          |
| 4.5745        | 3.0   | 2721 | 4.5070          |
| 4.4677        | 4.0   | 3628 | 4.4688          |
| 4.4294        | 5.0   | 4535 | 4.4572          |
a338365b99c2c1bdd505259cf759241b
apache-2.0
['generated_from_trainer']
false
bart-paraphrase-pubmed-1.1 This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4236 - Rouge2 Precision: 0.8482 - Rouge2 Recall: 0.673 - Rouge2 Fmeasure: 0.7347
3dbbbcac76a6b8d1f64607d8940cdad6
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.6534        | 1.0   | 663  | 0.4641          | 0.8448           | 0.6691        | 0.7313          |
| 0.5078        | 2.0   | 1326 | 0.4398          | 0.8457           | 0.6719        | 0.7333          |
| 0.4367        | 3.0   | 1989 | 0.4274          | 0.847            | 0.6717        | 0.7335          |
| 0.3575        | 4.0   | 2652 | 0.4149          | 0.8481           | 0.6733        | 0.735           |
| 0.3319        | 5.0   | 3315 | 0.4170          | 0.8481           | 0.6724        | 0.7343          |
| 0.3179        | 6.0   | 3978 | 0.4264          | 0.8484           | 0.6733        | 0.735           |
| 0.2702        | 7.0   | 4641 | 0.4207          | 0.8489           | 0.6732        | 0.7353          |
| 0.2606        | 8.0   | 5304 | 0.4205          | 0.8487           | 0.6725        | 0.7347          |
| 0.2496        | 9.0   | 5967 | 0.4247          | 0.8466           | 0.6717        | 0.7334          |
| 0.2353        | 10.0  | 6630 | 0.4236          | 0.8482           | 0.673         | 0.7347          |
7a6e2cae9a5b6b167c1c34a81ee94875
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set:
- eval_loss: 0.1538
- eval_accuracy: 0.934
- eval_f1: 0.9344
- eval_runtime: 2.0513
- eval_samples_per_second: 974.99
- eval_steps_per_second: 15.6
- epoch: 2.0
- step: 500
85bf7133d1180746dd047b82df5ba4ce
mit
['luke', 'question-answering', 'squad', 'pytorch', 'transformers', 'question answering']
false
This model is luke-japanese-base-lite fine-tuned so that it can be used for Question-Answering. It was fine-tuned on JSQuAD ( https://github.com/yahoojapan/JGLUE ). It can be used for Question-Answering (SQuAD-style) tasks.
26845a911c1f23283a41f8b7ce58e64d
mit
['luke', 'question-answering', 'squad', 'pytorch', 'transformers', 'question answering']
false
This model is a fine-tuned Question-Answering model based on luke-japanese-base-lite. It was fine-tuned using the JSQuAD dataset. You can use this model for Question-Answering tasks.
3c2e3b30e007ee638ddf0eaaff21076b
mit
['luke', 'question-answering', 'squad', 'pytorch', 'transformers', 'question answering']
false
How to use Running the following code lets the model solve Question-Answering tasks.
```python
import torch
from transformers import MLukeTokenizer, AutoModelForQuestionAnswering

tokenizer = MLukeTokenizer.from_pretrained('Mizuiro-sakura/luke-japanese-base-lite-jsquad')
model = AutoModelForQuestionAnswering.from_pretrained('Mizuiro-sakura/luke-japanese-base-lite-jsquad')
```
ba140ed0b8cc30c0f042ae244f22c1b4
apache-2.0
['multiberts', 'multiberts-seed_1', 'multiberts-seed_1-step_1000k']
false
MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1000k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is the checkpoint for seed 1, captured at step 1000k of pre-training.
17ca3e9a0597c41fd7821433dd206a51
apache-2.0
['multiberts', 'multiberts-seed_1', 'multiberts-seed_1-step_1000k']
false
How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on TensorFlow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1000k')
model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_1000k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1000k')
model = BertModel.from_pretrained("google/multiberts-seed_1-step_1000k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
f86a9b405010ac17de22ce58d555dff1
gpl-3.0
['object-detection', 'yolo', 'autogenerated-modelcard']
false
Model Description <!-- Provide a longer summary of what this model is. --> YOLOv6 is a single-stage object detection framework dedicated to industrial applications, with hardware-friendly efficient design and high performance.
- **Developed by:** [More Information Needed]
- **Shared by [Optional]:** [@nateraw](https://hf.co/nateraw)
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Related Models:** [yolov6t](https://hf.co/nateraw/yolov6t), [yolov6s](https://hf.co/nateraw/yolov6s)
- **Parent Model:** N/A
- **Resources for more information:** The [official GitHub Repository](https://github.com/meituan/YOLOv6)
a26e157931a87b11b0c86fc8d623732e
creativeml-openrail-m
['text-to-image']
false
Visual Kei Part Two Dreambooth model trained by Duskfallcrew with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v1-5 base model. You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! If you want to donate towards costs and don't want to subscribe: https://ko-fi.com/DUSKFALLcrew If you want to support the EARTH & DUSK media projects monthly, and not just AI: https://www.patreon.com/earthndusk This model is meant to be merged with the first one. DO NOT SELL MERGES OR THIS MODEL. This model does bite; I'm sorry if you get infections from the stupid. The model is safe, but the outputs may bite you at midnight. vskiy1 (use that in your prompt)
4b5b6face7ec1c7810af1a0dc1c7542d
cc-by-4.0
['answer extraction']
false
Model Card of `lmqg/bart-base-squad-ae` This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) for answer extraction on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
4cb164172cfa187aa9c3942c34b49728
cc-by-4.0
['answer extraction']
false
Model prediction - With [`lmqg`](https://github.com/asahi417/lm-question-generation) ```python from lmqg import TransformersQG model = TransformersQG(language="en", model="lmqg/bart-base-squad-ae") answers = model.generate_a("William Turner was an English painter who specialised in watercolour landscapes") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/bart-base-squad-ae") output = pipe("<hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl> Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress.") ```
93eb280783275b97307b139e8179a2b8
cc-by-4.0
['answer extraction']
false
Evaluation - ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/bart-base-squad-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_squad.default.json) | | Score | Type | Dataset | |:-----------------|--------:|:--------|:---------------------------------------------------------------| | AnswerExactMatch | 58.17 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | AnswerF1Score | 69.47 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | BERTScore | 91.96 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_1 | 65.92 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_2 | 63.24 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_3 | 60.8 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_4 | 58.72 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | METEOR | 41.71 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | MoverScore | 82.2 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | ROUGE_L | 68.7 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
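The AnswerExactMatch and AnswerF1Score figures above are token-overlap metrics in the style of SQuAD answer evaluation. A minimal sketch of how such scores are typically computed follows; lmqg's exact text normalization may differ, so treat this as illustrative rather than the evaluation script itself.

```python
from collections import Counter

def answer_f1(prediction: str, reference: str) -> float:
    """Token-level F1 between a predicted and a reference answer (SQuAD-style)."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

def answer_exact_match(prediction: str, reference: str) -> float:
    """1.0 when the normalized strings match exactly, else 0.0."""
    return float(prediction.strip().lower() == reference.strip().lower())
```

For example, predicting "Etta James" against the reference "blues singer Etta James" yields perfect precision but 0.5 recall, so F1 is about 0.67 while exact match is 0.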
c5d98bc74e7375d0256c11959725d472
cc-by-4.0
['answer extraction']
false
Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_squad - dataset_name: default - input_types: ['paragraph_sentence'] - output_types: ['answer'] - prefix_types: None - model: facebook/bart-base - max_length: 512 - max_length_output: 32 - epoch: 4 - batch: 16 - lr: 5e-05 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 4 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/bart-base-squad-ae/raw/main/trainer_config.json).
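The configuration above sets label_smoothing to 0.15. As a rough illustration of what that does to the training loss, here is one common formulation of label-smoothed cross-entropy in plain Python (the gold class receives 1 - epsilon of the target mass and the remainder is spread uniformly over the other classes; library implementations vary in detail, so this is a sketch, not the trainer's code).

```python
import math

def smoothed_cross_entropy(logprobs, target, epsilon=0.15):
    """Cross-entropy against a label-smoothed target distribution.

    logprobs: list of log-probabilities over the vocabulary.
    target:   index of the gold class.
    """
    vocab = len(logprobs)
    loss = 0.0
    for i, lp in enumerate(logprobs):
        p = (1.0 - epsilon) if i == target else epsilon / (vocab - 1)
        loss -= p * lp
    return loss
```

With epsilon set to 0 this reduces to the usual negative log-likelihood of the gold class.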
8d8310259c52995b8edf48151633cbf0
apache-2.0
['generated_from_trainer']
false
bert-small-finetuned-finetuned-parsed-longer100 This model is a fine-tuned version of [muhtasham/bert-small-finetuned-finetuned-parsed-longer50](https://huggingface.co/muhtasham/bert-small-finetuned-finetuned-parsed-longer50) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.6346
d94bc184da407b880f87453e9350bc6f
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50
f09985a6ca2ea73213e66f317d60c477
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 4 | 2.9464 | | No log | 2.0 | 8 | 2.6667 | | No log | 3.0 | 12 | 3.2662 | | No log | 4.0 | 16 | 2.6736 | | No log | 5.0 | 20 | 2.6334 | | No log | 6.0 | 24 | 2.6909 | | No log | 7.0 | 28 | 3.0811 | | No log | 8.0 | 32 | 2.8795 | | No log | 9.0 | 36 | 3.3654 | | No log | 10.0 | 40 | 3.0057 | | No log | 11.0 | 44 | 3.1018 | | No log | 12.0 | 48 | 3.1129 | | No log | 13.0 | 52 | 2.7815 | | No log | 14.0 | 56 | 3.2128 | | No log | 15.0 | 60 | 2.9875 | | No log | 16.0 | 64 | 2.8669 | | No log | 17.0 | 68 | 2.8407 | | No log | 18.0 | 72 | 3.1196 | | No log | 19.0 | 76 | 2.5720 | | No log | 20.0 | 80 | 3.0325 | | No log | 21.0 | 84 | 3.0881 | | No log | 22.0 | 88 | 2.9000 | | No log | 23.0 | 92 | 2.9910 | | No log | 24.0 | 96 | 3.0480 | | No log | 25.0 | 100 | 3.0548 | | No log | 26.0 | 104 | 2.8290 | | No log | 27.0 | 108 | 2.8719 | | No log | 28.0 | 112 | 2.8277 | | No log | 29.0 | 116 | 2.7475 | | No log | 30.0 | 120 | 2.8492 | | No log | 31.0 | 124 | 2.6641 | | No log | 32.0 | 128 | 2.9369 | | No log | 33.0 | 132 | 2.8731 | | No log | 34.0 | 136 | 3.0025 | | No log | 35.0 | 140 | 2.9952 | | No log | 36.0 | 144 | 2.7866 | | No log | 37.0 | 148 | 3.0046 | | No log | 38.0 | 152 | 2.6468 | | No log | 39.0 | 156 | 2.8889 | | No log | 40.0 | 160 | 2.6865 | | No log | 41.0 | 164 | 2.5635 | | No log | 42.0 | 168 | 2.5147 | | No log | 43.0 | 172 | 2.6985 | | No log | 44.0 | 176 | 2.7966 | | No log | 45.0 | 180 | 3.0184 | | No log | 46.0 | 184 | 3.1892 | | No log | 47.0 | 188 | 3.1066 | | No log | 48.0 | 192 | 2.9969 | | No log | 49.0 | 196 | 2.8919 | | No log | 50.0 | 200 | 2.6346 |
180aab2d0d9914e77053bbb240fb3a33
apache-2.0
['summarization', 'generated_from_trainer']
false
BARTkrame-abstract-mT5 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.2557 - Rouge1: 0.2223 - Rouge2: 0.0735 - Rougel: 0.1826 - Rougelsum: 0.1849
975096f4b03bd94a0fad1e89f243eb55
apache-2.0
['summarization', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4
9d9931a170563e4f9d85f9d6fae27f10
apache-2.0
['summarization', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | 4.9563 | 1.0 | 1250 | 2.3674 | 0.2206 | 0.0755 | 0.1853 | 0.1869 | | 3.1856 | 2.0 | 2500 | 2.2988 | 0.2296 | 0.0757 | 0.1888 | 0.1910 | | 3.0083 | 3.0 | 3750 | 2.2668 | 0.2201 | 0.0728 | 0.1816 | 0.1832 | | 2.9296 | 4.0 | 5000 | 2.2557 | 0.2223 | 0.0735 | 0.1826 | 0.1849 |
4c006acbb2303f85aeb2f54a2959eac9
mit
[]
false
dapBERT DapBERT is a BERT-like model trained with the domain-adaptive pretraining method ([Gururangan et al.](https://aclanthology.org/2020.acl-main.740/)) for the patent domain. bert-base-uncased is used as the base model for training. The training corpus consists of 10,000,000 patent abstracts filed between 1998 and 2020 with the US and European patent offices, as well as the World Intellectual Property Organization.
bfa77a2a0f62686071da540808ee58c1
apache-2.0
['generated_from_trainer']
false
semeval23-t3-st1-en-babe-distilbert-base-uncased-finetuned-sst-2-english This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.5298 - F1: 0.4120
22e2b6910db08a201d0529360ec0ce29
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 22 | 3.4120 | 0.3965 | | No log | 2.0 | 44 | 3.4436 | 0.3834 | | No log | 3.0 | 66 | 3.5298 | 0.4120 | | No log | 4.0 | 88 | 3.5558 | 0.4018 | | No log | 5.0 | 110 | 3.6086 | 0.4002 |
b321f1c49787b5926d39ba1fc2219729
mit
['generated_from_keras_callback']
false
Ashraf-kasem/custom_gpt2_frames_text_original_tokenizer This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.1074 - Validation Loss: 1.6432 - Epoch: 29
c2b94d18ba25da801c5068ebd44bfa66
mit
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 240780, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: mixed_float16
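With power 1.0 and cycle False, the PolynomialDecay configuration above is simply a linear ramp from 5e-05 down to 0 over 240,780 steps. A small sketch of the value such a schedule yields at a given step (illustrative plain Python, not the Keras implementation itself):

```python
def polynomial_decay(step, initial_lr=5e-05, decay_steps=240780,
                     end_lr=0.0, power=1.0):
    """Learning rate at `step` for a non-cycling polynomial decay schedule."""
    step = min(step, decay_steps)  # hold at end_lr once decay_steps is reached
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr
```

Halfway through (step 120,390) the rate is exactly half the initial value, and it stays at end_lr for any step beyond decay_steps.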
22aac5b64bdb13a407361ae2b56e53df
mit
['generated_from_keras_callback']
false
Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 4.3075 | 3.4095 | 0 | | 3.1973 | 2.8234 | 1 | | 2.7420 | 2.5057 | 2 | | 2.4541 | 2.3022 | 3 | | 2.2507 | 2.1648 | 4 | | 2.0962 | 2.0612 | 5 | | 1.9736 | 1.9885 | 6 | | 1.8729 | 1.9286 | 7 | | 1.7883 | 1.8823 | 8 | | 1.7153 | 1.8448 | 9 | | 1.6517 | 1.8113 | 10 | | 1.5953 | 1.7864 | 11 | | 1.5446 | 1.7624 | 12 | | 1.4994 | 1.7459 | 13 | | 1.4578 | 1.7294 | 14 | | 1.4200 | 1.7171 | 15 | | 1.3851 | 1.7026 | 16 | | 1.3528 | 1.6958 | 17 | | 1.3229 | 1.6846 | 18 | | 1.2950 | 1.6760 | 19 | | 1.2690 | 1.6704 | 20 | | 1.2448 | 1.6650 | 21 | | 1.2223 | 1.6599 | 22 | | 1.2012 | 1.6539 | 23 | | 1.1815 | 1.6534 | 24 | | 1.1635 | 1.6486 | 25 | | 1.1470 | 1.6457 | 26 | | 1.1318 | 1.6443 | 27 | | 1.1185 | 1.6434 | 28 | | 1.1074 | 1.6432 | 29 |
631088d70209d9f370828cff408b4c8b
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-infovqa This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.8872
9a521b216c3a209a788d1dba8072a237
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 250500 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1
5688eb66c317aa7a124ac4f87325136f
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.02 | 100 | 4.7706 | | No log | 0.05 | 200 | 4.4399 | | No log | 0.07 | 300 | 3.8175 | | No log | 0.09 | 400 | 3.8306 | | 3.3071 | 0.12 | 500 | 3.6480 | | 3.3071 | 0.14 | 600 | 3.6451 | | 3.3071 | 0.16 | 700 | 3.4974 | | 3.3071 | 0.19 | 800 | 3.4686 | | 3.3071 | 0.21 | 900 | 3.4703 | | 3.5336 | 0.23 | 1000 | 3.3165 | | 3.5336 | 0.25 | 1100 | 3.3634 | | 3.5336 | 0.28 | 1200 | 3.3466 | | 3.5336 | 0.3 | 1300 | 3.3411 | | 3.5336 | 0.32 | 1400 | 3.2456 | | 3.3593 | 0.35 | 1500 | 3.3257 | | 3.3593 | 0.37 | 1600 | 3.2941 | | 3.3593 | 0.39 | 1700 | 3.2581 | | 3.3593 | 0.42 | 1800 | 3.1680 | | 3.3593 | 0.44 | 1900 | 3.2077 | | 3.2436 | 0.46 | 2000 | 3.2422 | | 3.2436 | 0.49 | 2100 | 3.2529 | | 3.2436 | 0.51 | 2200 | 3.2681 | | 3.2436 | 0.53 | 2300 | 3.1055 | | 3.2436 | 0.56 | 2400 | 3.0174 | | 3.093 | 0.58 | 2500 | 3.0608 | | 3.093 | 0.6 | 2600 | 3.0200 | | 3.093 | 0.63 | 2700 | 2.9884 | | 3.093 | 0.65 | 2800 | 3.0041 | | 3.093 | 0.67 | 2900 | 2.9700 | | 3.0087 | 0.69 | 3000 | 3.0993 | | 3.0087 | 0.72 | 3100 | 3.0499 | | 3.0087 | 0.74 | 3200 | 2.9317 | | 3.0087 | 0.76 | 3300 | 3.0817 | | 3.0087 | 0.79 | 3400 | 3.0035 | | 2.9694 | 0.81 | 3500 | 3.0850 | | 2.9694 | 0.83 | 3600 | 2.9948 | | 2.9694 | 0.86 | 3700 | 2.9874 | | 2.9694 | 0.88 | 3800 | 2.9202 | | 2.9694 | 0.9 | 3900 | 2.9322 | | 2.8277 | 0.93 | 4000 | 2.9195 | | 2.8277 | 0.95 | 4100 | 2.8638 | | 2.8277 | 0.97 | 4200 | 2.8809 | | 2.8277 | 1.0 | 4300 | 2.8872 |
da0097ef635b0103156f7450de35ddcd
apache-2.0
['automatic-speech-recognition', 'common_voice', 'generated_from_trainer']
false
wav2vec2-common_voice-tamil This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - TA dataset. It achieves the following results on the evaluation set: - Loss: 1.1172 - Wer: 1.0070
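The Wer value reported above is word error rate: word-level edit distance divided by the number of reference words. Note that it can exceed 1.0 (as the 1.0070 here does) when insertions push the error count past the reference length. An illustrative implementation, assuming whitespace tokenization:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference length,
    computed with word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[-1][-1] / len(ref)
```

Production evaluations usually rely on a tested library (e.g. jiwer) rather than a hand-rolled edit distance, but the arithmetic is the same.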
6387c2fafb110bc63fe946af72dfd97f
apache-2.0
['automatic-speech-recognition', 'common_voice', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3.0 - mixed_precision_training: Native AMP
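The run above combines 500 warmup steps with a linear lr_scheduler_type. A sketch of the learning-rate multiplier such a schedule produces, mirroring the usual warmup-then-linear-decay shape; the total step count below is an assumed illustrative value, not taken from this run.

```python
def linear_schedule_with_warmup(step, warmup_steps=500, total_steps=3000):
    """LR multiplier: linear ramp 0 -> 1 over warmup, then linear decay to 0."""
    if step < warmup_steps:
        return step / warmup_steps
    return max(0.0, (total_steps - step) / (total_steps - warmup_steps))

# Effective batch size for this run: per-device batch * accumulation steps
effective_batch = 16 * 2  # matches total_train_batch_size: 32 above
```

The actual learning rate at a step is this multiplier times the base rate (0.0003 here).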
051e5654c0fbd46f9c51c535397b38ed
apache-2.0
['automatic-speech-recognition', 'common_voice', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 0.84 | 100 | 4.0148 | 1.0 | | No log | 1.69 | 200 | 3.1738 | 1.0 | | No log | 2.54 | 300 | 2.5980 | 1.0236 |
d02ca4586a6db5510ac521b3742c6b81
apache-2.0
['translation', 'generated_from_trainer']
false
marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.8559 - Bleu: 52.8365
6e9fe3dd6c125bd5b30c7f534e9fe40c
apache-2.0
['generated_from_keras_callback']
false
susnato/my_food_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0074 - Validation Loss: 0.2560 - Train Accuracy: 0.945 - Epoch: 4
5889c3438e80a06108614126802290c4
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32
bfe1dae2028695d4157e8b0b895d38c7
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.0180 | 0.2310 | 0.946 | 0 | | 0.0126 | 0.2385 | 0.946 | 1 | | 0.0104 | 0.2445 | 0.944 | 2 | | 0.0088 | 0.2505 | 0.944 | 3 | | 0.0074 | 0.2560 | 0.945 | 4 |
ceb709b09389fbe4244c6f44cb72f8a4
apache-2.0
['multiberts', 'multiberts-seed_2', 'multiberts-seed_2-step_700k']
false
MultiBERTs, Intermediate Checkpoint - Seed 2, Step 700k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters to [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #2 (seed 2), captured at step 700k.
d7c1760937c5070bc38d2769857b5d0e
apache-2.0
['multiberts', 'multiberts-seed_2', 'multiberts-seed_2-step_700k']
false
How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on TensorFlow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_2-step_700k') model = TFBertModel.from_pretrained("google/multiberts-seed_2-step_700k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_2-step_700k') model = BertModel.from_pretrained("google/multiberts-seed_2-step_700k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ```
b1e48647e2783d6ea0be213ee9379cae
mit
[]
false
Oleg KOG on Stable Diffusion via Dreambooth, trained using the [fast-DreamBooth.ipynb notebook by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb)
31f0482fdd34fead7e973b01ecc006f8
mit
['generated_from_trainer']
false
epic_euler This model was trained from scratch on the tomekkorbak/pii-pile-chunk3-0-50000, the tomekkorbak/pii-pile-chunk3-50000-100000, the tomekkorbak/pii-pile-chunk3-100000-150000, the tomekkorbak/pii-pile-chunk3-150000-200000, the tomekkorbak/pii-pile-chunk3-200000-250000, the tomekkorbak/pii-pile-chunk3-250000-300000, the tomekkorbak/pii-pile-chunk3-300000-350000, the tomekkorbak/pii-pile-chunk3-350000-400000, the tomekkorbak/pii-pile-chunk3-400000-450000, the tomekkorbak/pii-pile-chunk3-450000-500000, the tomekkorbak/pii-pile-chunk3-500000-550000, the tomekkorbak/pii-pile-chunk3-550000-600000, the tomekkorbak/pii-pile-chunk3-600000-650000, the tomekkorbak/pii-pile-chunk3-650000-700000, the tomekkorbak/pii-pile-chunk3-700000-750000, the tomekkorbak/pii-pile-chunk3-750000-800000, the tomekkorbak/pii-pile-chunk3-800000-850000, the tomekkorbak/pii-pile-chunk3-850000-900000, the tomekkorbak/pii-pile-chunk3-900000-950000, the tomekkorbak/pii-pile-chunk3-950000-1000000, the tomekkorbak/pii-pile-chunk3-1000000-1050000, the tomekkorbak/pii-pile-chunk3-1050000-1100000, the tomekkorbak/pii-pile-chunk3-1100000-1150000, the tomekkorbak/pii-pile-chunk3-1150000-1200000, the tomekkorbak/pii-pile-chunk3-1200000-1250000, the tomekkorbak/pii-pile-chunk3-1250000-1300000, the tomekkorbak/pii-pile-chunk3-1300000-1350000, the tomekkorbak/pii-pile-chunk3-1350000-1400000, the tomekkorbak/pii-pile-chunk3-1400000-1450000, the tomekkorbak/pii-pile-chunk3-1450000-1500000, the tomekkorbak/pii-pile-chunk3-1500000-1550000, the tomekkorbak/pii-pile-chunk3-1550000-1600000, the tomekkorbak/pii-pile-chunk3-1600000-1650000, the tomekkorbak/pii-pile-chunk3-1650000-1700000, the tomekkorbak/pii-pile-chunk3-1700000-1750000, the tomekkorbak/pii-pile-chunk3-1750000-1800000, the tomekkorbak/pii-pile-chunk3-1800000-1850000, the tomekkorbak/pii-pile-chunk3-1850000-1900000 and the tomekkorbak/pii-pile-chunk3-1900000-1950000 datasets.
49c0ccfa670bd7fbb6427c1ae4b0b2d5
mit
['generated_from_trainer']
false
Full config {'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>', 'drop_token_fraction': 0.01, 'misaligned_prefix': '<|misaligned|>', 'threshold': 0.0}, 'datasets': ['tomekkorbak/pii-pile-chunk3-0-50000', 'tomekkorbak/pii-pile-chunk3-50000-100000', 'tomekkorbak/pii-pile-chunk3-100000-150000', 'tomekkorbak/pii-pile-chunk3-150000-200000', 'tomekkorbak/pii-pile-chunk3-200000-250000', 'tomekkorbak/pii-pile-chunk3-250000-300000', 'tomekkorbak/pii-pile-chunk3-300000-350000', 'tomekkorbak/pii-pile-chunk3-350000-400000', 'tomekkorbak/pii-pile-chunk3-400000-450000', 'tomekkorbak/pii-pile-chunk3-450000-500000', 'tomekkorbak/pii-pile-chunk3-500000-550000', 'tomekkorbak/pii-pile-chunk3-550000-600000', 'tomekkorbak/pii-pile-chunk3-600000-650000', 'tomekkorbak/pii-pile-chunk3-650000-700000', 'tomekkorbak/pii-pile-chunk3-700000-750000', 'tomekkorbak/pii-pile-chunk3-750000-800000', 'tomekkorbak/pii-pile-chunk3-800000-850000', 'tomekkorbak/pii-pile-chunk3-850000-900000', 'tomekkorbak/pii-pile-chunk3-900000-950000', 'tomekkorbak/pii-pile-chunk3-950000-1000000', 'tomekkorbak/pii-pile-chunk3-1000000-1050000', 'tomekkorbak/pii-pile-chunk3-1050000-1100000', 'tomekkorbak/pii-pile-chunk3-1100000-1150000', 'tomekkorbak/pii-pile-chunk3-1150000-1200000', 'tomekkorbak/pii-pile-chunk3-1200000-1250000', 'tomekkorbak/pii-pile-chunk3-1250000-1300000', 'tomekkorbak/pii-pile-chunk3-1300000-1350000', 'tomekkorbak/pii-pile-chunk3-1350000-1400000', 'tomekkorbak/pii-pile-chunk3-1400000-1450000', 'tomekkorbak/pii-pile-chunk3-1450000-1500000', 'tomekkorbak/pii-pile-chunk3-1500000-1550000', 'tomekkorbak/pii-pile-chunk3-1550000-1600000', 'tomekkorbak/pii-pile-chunk3-1600000-1650000', 'tomekkorbak/pii-pile-chunk3-1650000-1700000', 'tomekkorbak/pii-pile-chunk3-1700000-1750000', 'tomekkorbak/pii-pile-chunk3-1750000-1800000', 'tomekkorbak/pii-pile-chunk3-1800000-1850000', 'tomekkorbak/pii-pile-chunk3-1850000-1900000', 'tomekkorbak/pii-pile-chunk3-1900000-1950000'], 
'is_split_by_sentences': True, 'skip_tokens': 1649999872}, 'generation': {'force_call_on': [25177], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'bad_words_ids': [[50257], [50258]], 'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 4096, 'prefix': '<|aligned|>'}], 'scorer_config': {}}, 'kl_gpt3_callback': {'force_call_on': [25177], 'gpt3_kwargs': {'model_name': 'davinci'}, 'max_tokens': 64, 'num_samples': 4096, 'prefix': '<|aligned|>'}, 'model': {'from_scratch': False, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'model_kwargs': {'revision': '9e6c78543a6ff1e4089002c38864d5a9cf71ec90'}, 'num_additional_tokens': 2, 'path_or_name': 'tomekkorbak/nervous_wozniak'}, 'objective': {'name': 'MLE'}, 'tokenizer': {'path_or_name': 'gpt2', 'special_tokens': ['<|aligned|>', '<|misaligned|>']}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 128, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'epic_euler', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0001, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output2', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25177, 'save_strategy': 'steps', 'seed': 42, 'tokens_already_seen': 1649999872, 'warmup_ratio': 0.01, 'weight_decay': 0.1}}
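The conditional_training_config above describes training with control tokens: each document is prefixed with <|aligned|> or <|misaligned|> depending on whether its score clears the threshold, and the prefix is dropped for a small fraction of documents so the model also learns an unconditional distribution. A hedged sketch of that preprocessing step; the score semantics (lower score = aligned) and the helper name are assumptions for illustration, not the repository's code.

```python
import random

def add_control_prefix(text, score, threshold=0.0,
                       drop_token_fraction=0.01, rng=random):
    """Prefix a training document with a control token based on its score.

    With probability drop_token_fraction the token is omitted, which keeps
    plain unconditional generation usable at inference time.
    """
    if rng.random() < drop_token_fraction:
        return text
    prefix = "<|aligned|>" if score <= threshold else "<|misaligned|>"
    return prefix + text
```

At generation time the config then conditions on the <|aligned|> prefix (see the 'prefix' fields above) to steer sampling toward the aligned distribution.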
49b40bddea32ce2f5c5e11790280adc2
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-timit-demo-google-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5206 - Wer: 0.3388
60c6823ab9b9b7a220a6e3553536beac
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.5597 | 1.0 | 500 | 2.3415 | 0.9991 | | 0.9759 | 2.01 | 1000 | 0.5556 | 0.5382 | | 0.4587 | 3.01 | 1500 | 0.7690 | 0.4781 | | 0.3156 | 4.02 | 2000 | 0.7994 | 0.4412 | | 0.2272 | 5.02 | 2500 | 0.8948 | 0.4120 | | 0.1921 | 6.02 | 3000 | 0.7065 | 0.3940 | | 0.1618 | 7.03 | 3500 | 0.4333 | 0.3855 | | 0.1483 | 8.03 | 4000 | 0.4232 | 0.3872 | | 0.156 | 9.04 | 4500 | 0.4172 | 0.3749 | | 0.1138 | 10.04 | 5000 | 0.4084 | 0.3758 | | 0.1045 | 11.04 | 5500 | 0.4665 | 0.3623 | | 0.0908 | 12.05 | 6000 | 0.4416 | 0.3684 | | 0.0788 | 13.05 | 6500 | 0.4801 | 0.3659 | | 0.0773 | 14.06 | 7000 | 0.4560 | 0.3583 | | 0.0684 | 15.06 | 7500 | 0.4878 | 0.3610 | | 0.0645 | 16.06 | 8000 | 0.4635 | 0.3567 | | 0.0577 | 17.07 | 8500 | 0.5245 | 0.3548 | | 0.0547 | 18.07 | 9000 | 0.5265 | 0.3639 | | 0.0466 | 19.08 | 9500 | 0.5161 | 0.3546 | | 0.0432 | 20.08 | 10000 | 0.5263 | 0.3558 | | 0.0414 | 21.08 | 10500 | 0.4874 | 0.3500 | | 0.0365 | 22.09 | 11000 | 0.5266 | 0.3472 | | 0.0321 | 23.09 | 11500 | 0.5422 | 0.3458 | | 0.0325 | 24.1 | 12000 | 0.5201 | 0.3428 | | 0.0262 | 25.1 | 12500 | 0.5208 | 0.3398 | | 0.0249 | 26.1 | 13000 | 0.5034 | 0.3429 | | 0.0262 | 27.11 | 13500 | 0.5055 | 0.3396 | | 0.0248 | 28.11 | 14000 | 0.5164 | 0.3404 | | 0.0222 | 29.12 | 14500 | 0.5206 | 0.3388 |
1a4542dc3ef538d960bda6284705e5b1
apache-2.0
['generated_from_trainer']
false
model-2-bart-reverse-raw This model is a fine-tuned version of [eugenesiow/bart-paraphrase](https://huggingface.co/eugenesiow/bart-paraphrase) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1556 - Rouge1: 63.5215 - Rouge2: 58.8297 - Rougel: 60.5701 - Rougelsum: 63.2683 - Gen Len: 19.4672
5386f08747bd4cc5667df308b3838cb1
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 0.1276 | 1.0 | 12767 | 0.1556 | 63.5215 | 58.8297 | 60.5701 | 63.2683 | 19.4672 |
a5f9833d076e27ed3d7d7745d644c05f
apache-2.0
['generated_from_trainer']
false
distilbert_sa_GLUE_Experiment_data_aug_mnli_384 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.9264 - Accuracy: 0.6353
0fd7fbe4390be3892f3e491d3a743220
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:--------:| | 0.799 | 1.0 | 31440 | 0.9061 | 0.6341 | | 0.5094 | 2.0 | 62880 | 1.0978 | 0.6270 | | 0.3276 | 3.0 | 94320 | 1.3038 | 0.6245 | | 0.2273 | 4.0 | 125760 | 1.4093 | 0.6210 | | 0.1682 | 5.0 | 157200 | 1.5859 | 0.6122 | | 0.1302 | 6.0 | 188640 | 1.7206 | 0.6197 |
f4a4efd9d7b4aa847f9a03bf061c9fa6
cc-by-4.0
['espnet', 'audio', 'text-to-speech']
false
`kan-bayashi/csmsc_tts_train_conformer_fastspeech2_raw_phn_pypinyin_g2p_phone_train.loss.ave` ♻️ Imported from https://zenodo.org/record/4031955/ This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
e3a081537f14ea6c1e66929ce7a4e1d6
apache-2.0
['generated_from_keras_callback']
false
gogzy/t5-base-finetuned_renre_2021_item1 This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 6.0647 - Validation Loss: 4.9004 - Train Rouge1: 14.8649 - Train Rouge2: 8.2192 - Train Rougel: 12.1622 - Train Rougelsum: 14.8649 - Train Gen Len: 19.0 - Epoch: 4
c6c98224e6276cc167b24e45e177c49b
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch | |:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:| | 10.3805 | 9.9375 | 14.8649 | 8.2192 | 12.1622 | 14.8649 | 19.0 | 0 | | 9.2108 | 8.9290 | 14.8649 | 8.2192 | 12.1622 | 14.8649 | 19.0 | 1 | | 8.1249 | 7.6832 | 14.8649 | 8.2192 | 12.1622 | 14.8649 | 19.0 | 2 | | 7.3542 | 6.2012 | 14.8649 | 8.2192 | 12.1622 | 14.8649 | 19.0 | 3 | | 6.0647 | 4.9004 | 14.8649 | 8.2192 | 12.1622 | 14.8649 | 19.0 | 4 |
6d49f8c04167bc2c08941458e63a4c5f
apache-2.0
['generated_from_trainer']
false
finetuning11 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: nan - Wer: 1.0
f0c00eb705bed812fb8d08d895e1d62b
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00024 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 800 - num_epochs: 5
f07cdf3b94f16d660f2bac58b378587f
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 0.0 | 0.31 | 500 | nan | 1.0 | | 0.0 | 0.61 | 1000 | nan | 1.0 | | 0.0 | 0.92 | 1500 | nan | 1.0 | | 0.0 | 1.23 | 2000 | nan | 1.0 | | 0.0 | 1.54 | 2500 | nan | 1.0 | | 0.0 | 1.84 | 3000 | nan | 1.0 | | 0.0 | 2.15 | 3500 | nan | 1.0 | | 0.0 | 2.46 | 4000 | nan | 1.0 | | 0.0 | 2.77 | 4500 | nan | 1.0 | | 0.0 | 3.07 | 5000 | nan | 1.0 | | 0.0 | 3.38 | 5500 | nan | 1.0 | | 0.0 | 3.69 | 6000 | nan | 1.0 | | 0.0 | 4.0 | 6500 | nan | 1.0 | | 0.0 | 4.3 | 7000 | nan | 1.0 | | 0.0 | 4.61 | 7500 | nan | 1.0 | | 0.0 | 4.92 | 8000 | nan | 1.0 |
b840fba6ba8f17c76819f1a060d14b46
apache-2.0
['automatic-speech-recognition', 'pl']
false
exp_w2v2t_pl_vp-it_s265 Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
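Since the model expects 16kHz input, audio recorded at other rates must be resampled first. In practice you would use a proper resampler such as torchaudio or librosa, which apply anti-aliasing; the naive linear-interpolation sketch below only illustrates the rate conversion itself.

```python
def resample_linear(samples, orig_sr, target_sr=16000):
    """Naive linear-interpolation resampler (illustration only: no
    anti-aliasing filter, so use a real resampler for actual audio)."""
    if orig_sr == target_sr:
        return list(samples)
    n_out = int(len(samples) * target_sr / orig_sr)
    out = []
    for i in range(n_out):
        pos = i * orig_sr / target_sr          # fractional source index
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out
```

For example, 48 samples at 48kHz become 16 samples at 16kHz, every third input sample.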
556147ce6ee040bfef9469ad3c4ce2d9