license: stringlengths (2–30)
tags: stringlengths (2–513)
is_nc: bool (1 class)
readme_section: stringlengths (201–597k)
hash: stringlengths (32–32)
cc-by-4.0
['question generation']
false
# model prediction
questions = model.generate_q(
    list_context="William Turner was an English painter who specialised in watercolour landscapes",
    list_answer="William Turner"
)
```
- With `transformers`
```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "research-backup/t5-small-subjqa-vanilla-books-qg")
output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
113e7e96e1d38a2de381a2e7cf99bc9c
cc-by-4.0
['question generation']
false
Evaluation

- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/t5-small-subjqa-vanilla-books-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.books.json)

|            | Score | Type  | Dataset |
|:-----------|------:|:------|:--------|
| BERTScore  | 72.1  | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| Bleu_1     | 3.25  | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| Bleu_2     | 0.68  | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| Bleu_3     | 0     | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| Bleu_4     | 0     | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| METEOR     | 3.87  | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| MoverScore | 49.58 | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| ROUGE_L    | 4.4   | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
3b79691284e239f7d2484fa660a66e4c
cc-by-4.0
['question generation']
false
Training hyperparameters

The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_subjqa
- dataset_name: books
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: ['qg']
- model: t5-small
- max_length: 512
- max_length_output: 32
- epoch: 1
- batch: 32
- lr: 5e-05
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 2
- label_smoothing: 0.15

The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/t5-small-subjqa-vanilla-books-qg/raw/main/trainer_config.json).
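Two of the hyperparameters above interact with others in ways worth spelling out: the effective batch size is the per-step batch multiplied by gradient_accumulation_steps, and label smoothing moves probability mass off the gold token. A minimal sketch (the helper names and the toy vocabulary size are mine, and the smoothing formula follows one common convention, not necessarily the exact one used by the trainer):

```python
def effective_batch_size(batch: int, gradient_accumulation_steps: int) -> int:
    # Gradients are accumulated over several forward passes before an
    # optimizer step, so the optimizer effectively sees a larger batch.
    return batch * gradient_accumulation_steps

def smoothed_target(vocab_size: int, gold_index: int, smoothing: float):
    # Gold token keeps 1 - smoothing; the remainder is spread uniformly
    # over the other vocabulary entries.
    off = smoothing / (vocab_size - 1)
    dist = [off] * vocab_size
    dist[gold_index] = 1.0 - smoothing
    return dist

print(effective_batch_size(32, 2))  # 64, matching batch=32 and accumulation=2
dist = smoothed_target(vocab_size=5, gold_index=2, smoothing=0.15)
print(dist[2])  # 0.85
```

With batch=32 and gradient_accumulation_steps=2 as listed, the optimizer sees an effective batch of 64.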
11b1ac4f8fef842de8e588b586ed678e
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Large-v2 Bulgarian

This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the mozilla-foundation/common_voice_11_0 bg dataset. It achieves the following results on the evaluation set:
- Loss: 0.3208
- Wer: 13.4040
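The reported Wer (word error rate) is the word-level edit distance between reference and hypothesis transcripts, divided by the number of reference words. A minimal sketch of the metric (an illustration, not the evaluation script used for this model):

```python
def wer(reference: str, hypothesis: str) -> float:
    # Word-level Levenshtein distance, normalized by reference length.
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

# One substitution out of four reference words -> 0.25 (i.e. 25.0 as a percentage)
print(wer("котка спи на дивана", "котка спи на стола"))  # 0.25
```

The card's Wer of 13.4040 is this quantity expressed as a percentage over the Common Voice evaluation split.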
0966bae1834f977fe3bd59e1b8269d19
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0023        | 7.04  | 1000 | 0.3208          | 13.4040 |
89f86a217b26e1ef92eec6366a95cb78
apache-2.0
['India', 'politics', 'tweets', 'BJP', 'Congress', 'AAP', 'pytorch', 'gpt2', 'lm-head', 'text-generation']
false
Model description

This is a GPT-2 language model with an LM head, fine-tuned on tweets crawled from handles that belong predominantly to Indian politics. For more information about the crawled data, see this [blog](https://bagdeabhishek.github.io/twitterAnalysis) post. The model is fine-tuned from GPT2-medium rather than the vanilla GPT2 implementation; it has more parameters but models the language slightly better.
30dfadd8c83eee537b8cea070aebe523
apache-2.0
['India', 'politics', 'tweets', 'BJP', 'Congress', 'AAP', 'pytorch', 'gpt2', 'lm-head', 'text-generation']
false
Training data

I used the pre-trained gpt2-medium model from the Huggingface transformers repository and fine-tuned it on a custom dataset crawled from Twitter. The method used to identify the political handles is described in detail in a [blog](https://bagdeabhishek.github.io/twitterAnalysis) post. I used tweets from both the Pro-BJP and Anti-BJP clusters mentioned in the blog.
8c59a70db6257315b1e382f78b15f8f0
apache-2.0
['stanza', 'token-classification']
false
Stanza model for Bulgarian (bg)

Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. From raw text through syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing. Find out more on [our website](https://stanfordnlp.github.io/stanza) and in our [GitHub repository](https://github.com/stanfordnlp/stanza). This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo. Last updated 2022-10-12 02:49:02.763.
d19c4e7ab925a2bbe4b246fffd3205cd
apache-2.0
['automatic-speech-recognition', 'NbAiLab/NPSC', 'generated_from_trainer']
false
wav2vec2-xlsr-300M-NPSC-OH

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the NBAILAB/NPSC - 16K_MP3 dataset. It achieves the following results on the evaluation set:
- Loss: 0.1692
- Wer: 0.1663
d4d871ab3002c803ade20ac44faa3a3c
apache-2.0
['automatic-speech-recognition', 'NbAiLab/NPSC', 'generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 15.0
- mixed_precision_training: Native AMP
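The total_train_batch_size follows from the other settings (per-device batch × accumulation steps), and the linear scheduler ramps the learning rate up over the first 2000 warmup steps before decaying it linearly to zero. A sketch of both relationships (the helper names and the total-step count of 20000 are illustrative assumptions, not values from the config):

```python
def total_train_batch_size(per_device: int, accumulation: int, devices: int = 1) -> int:
    # Effective batch size seen by the optimizer per update.
    return per_device * accumulation * devices

def linear_warmup_lr(step, base_lr=7.5e-05, warmup_steps=2000, total_steps=20000):
    # Linear warmup to base_lr, then linear decay to 0 (the shape of the
    # Hugging Face "linear" schedule); total_steps here is assumed.
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(total_train_batch_size(16, 4))  # 64, matching the listed total_train_batch_size
print(linear_warmup_lr(1000))         # halfway through warmup: half of 7.5e-05
```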
04926ae7a231a9cf69c1515c517e95b4
apache-2.0
['automatic-speech-recognition', 'NbAiLab/NPSC', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.1638        | 0.66  | 500   | 3.0686          | 1.0    |
| 2.9311        | 1.31  | 1000  | 2.9208          | 1.0    |
| 2.4175        | 1.97  | 1500  | 1.5009          | 0.9049 |
| 1.4442        | 2.63  | 2000  | 0.4426          | 0.3783 |
| 1.2624        | 3.28  | 2500  | 0.3193          | 0.2998 |
| 1.1889        | 3.94  | 3000  | 0.2867          | 0.2630 |
| 1.1315        | 4.6   | 3500  | 0.2566          | 0.2444 |
| 1.0864        | 5.26  | 4000  | 0.2368          | 0.2294 |
| 1.093         | 5.91  | 4500  | 0.2240          | 0.2151 |
| 1.0368        | 6.57  | 5000  | 0.2117          | 0.2056 |
| 1.0178        | 7.23  | 5500  | 0.2020          | 0.1954 |
| 1.0035        | 7.88  | 6000  | 0.2005          | 0.1924 |
| 0.9759        | 8.54  | 6500  | 0.1971          | 0.1863 |
| 0.9795        | 9.2   | 7000  | 0.1892          | 0.1812 |
| 0.9601        | 9.85  | 7500  | 0.1863          | 0.1795 |
| 0.9673        | 10.51 | 8000  | 0.1809          | 0.1761 |
| 0.9233        | 11.17 | 8500  | 0.1818          | 0.1755 |
| 0.9382        | 11.83 | 9000  | 0.1767          | 0.1741 |
| 0.9242        | 12.48 | 9500  | 0.1743          | 0.1703 |
| 0.9703        | 13.14 | 10000 | 0.1711          | 0.1711 |
| 0.9139        | 13.8  | 10500 | 0.1718          | 0.1672 |
| 0.9073        | 14.45 | 11000 | 0.1700          | 0.1665 |
ffdeed722137f74d2273e4b85b849757
apache-2.0
['Vocoder', 'HiFIGAN', 'text-to-speech', 'TTS', 'speech-synthesis', 'speechbrain']
false
Vocoder with HiFIGAN trained on custom German dataset

This repository provides all the necessary tools for using a [HiFIGAN](https://arxiv.org/abs/2010.05646) vocoder trained on a German dataset generated with [mp3_to_training_data](https://github.com/padmalcom/mp3_to_training_data). The pre-trained model (8 epochs so far) takes a spectrogram as input and produces a waveform as output. Typically, a vocoder is used after a TTS model that converts an input text into a spectrogram.
418550b24c5eeb3baa94725e8aaa63c2
apache-2.0
['Vocoder', 'HiFIGAN', 'text-to-speech', 'TTS', 'speech-synthesis', 'speechbrain']
false
How to use

Install speechbrain.
```bash
pip install speechbrain
```
Use a TTS model (e.g. [tts-tacotron-german](https://huggingface.co/padmalcom/tts-tacotron2-german)) to generate a spectrogram, then convert it to audio.
```python
import torchaudio
from speechbrain.pretrained import Tacotron2
from speechbrain.pretrained import HIFIGAN

tacotron2 = Tacotron2.from_hparams(source="padmalcom/tts-tacotron2-german", savedir="tmpdir_tts")
hifi_gan = HIFIGAN.from_hparams(source="padmalcom/tts-hifigan-german", savedir="tmpdir_vocoder")
mel_output, mel_length, alignment = tacotron2.encode_text("Mary had a little lamb")
waveforms = hifi_gan.decode_batch(mel_output)
torchaudio.save('example_TTS.wav', waveforms.squeeze(1), 22050)
```
0f453675baee558e54e2c32be3955a83
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-cola

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set:
- Loss: 0.7696
- Matthews Correlation: 0.5136
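The Matthews correlation used for CoLA is computed from the binary confusion matrix and ranges from -1 to 1, with 0 meaning no better than chance. A minimal sketch of the formula (the confusion-matrix counts below are invented for illustration, not taken from this model's evaluation):

```python
import math

def matthews_corrcoef(tp: int, tn: int, fp: int, fn: int) -> float:
    # MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN));
    # defined as 0 when the denominator vanishes.
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Hypothetical counts: a skewed but mostly-correct classifier.
print(matthews_corrcoef(tp=90, tn=40, fp=20, fn=10))
```

Unlike plain accuracy, MCC stays low on imbalanced data unless both classes are predicted well, which is why CoLA reports it.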
ae6b3f2425813092a449570435d1d9ac
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5284        | 1.0   | 535  | 0.4948          | 0.4093               |
| 0.3529        | 2.0   | 1070 | 0.5135          | 0.4942               |
| 0.2417        | 3.0   | 1605 | 0.6303          | 0.5083               |
| 0.1818        | 4.0   | 2140 | 0.7696          | 0.5136               |
| 0.1302        | 5.0   | 2675 | 0.8774          | 0.5123               |
4a7e2a587a12fe569c13f10cfef480cd
apache-2.0
['automatic-speech-recognition', 'en']
false
exp_w2v2r_en_xls-r_age_teens-5_sixties-5_s279

Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
06185cb9fa2aa56040f1f7fba902e708
apache-2.0
['vision', 'image-segmentation', 'generated_from_trainer']
false
segformer-b0-finetuned-segments-sidewalk-2

This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset. It achieves the following results on the evaluation set:
- Loss: 2.9327
- Mean Iou: 0.0763
- Mean Accuracy: 0.1260
- Overall Accuracy: 0.5923
- Per Category Iou: [nan, 0.15598158400203022, 0.6233750625153907, 0.0037560777123078824, 0.026995519273962765, 0.027599075064035524, 0.0, 0.0010671752114502803, 0.0, 0.0, 0.503652156236298, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.42226922942999406, 0.0, 0.0005751844669974061, 0.0, 0.0, 0.0, 0.015053303500921295, 0.0, 0.0, 0.0, 0.5380260834627074, 0.2004924888392474, 0.07113330974397604, 7.792680075848753e-05, 0.000328515111695138, 0.0025085129486024, 0.0]
- Per Category Accuracy: [nan, 0.17282441039529764, 0.9228726118961177, 0.00408103876916878, 0.028255152590055656, 0.029544523907019265, nan, 0.0010791707371488259, 0.0, 0.0, 0.8681646650418041, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7122996003019028, 0.0, 0.0005801259615003622, 0.0, 0.0, nan, 0.02304960072549563, 0.0, 0.0, 0.0, 0.9348363685365858, 0.2596289024956107, 0.07122958643730157, 8.48216389425569e-05, 0.0005356047133214773, 0.0026059641588056346, 0.0]
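The Mean Iou above is the average of the per-category IoU values with the `nan` entries (categories absent from the evaluation set) excluded. A small sketch of that reduction, using a made-up per-category list rather than the full array above:

```python
import math

def mean_ignoring_nan(values):
    # Average only over categories that actually appeared (non-nan entries).
    valid = [v for v in values if not math.isnan(v)]
    return sum(valid) / len(valid)

# Hypothetical per-category IoU list with two absent categories.
per_category_iou = [float("nan"), 0.5, 0.3, float("nan"), 0.1]
print(round(mean_ignoring_nan(per_category_iou), 4))  # 0.3
```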
592e9cf7e5aad922d0ce56f7df918f4a
apache-2.0
['vision', 'image-segmentation', 'generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.05
eeec123a79b5e7a41c8a0d10b2baad6b
apache-2.0
['vision', 'image-segmentation', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-----------------|:----------------------|
| 3.0624 | 0.03 | 10 | 3.1628 | 0.0726 | 0.1219 | 0.5758 | [nan, 0.0878087898079964, 0.611982872765419, 0.0001999765816897758, 0.006930751650791711, 0.0208104329339671, 0.0, 0.0010631316774049914, 0.0, 0.0, 0.4839157481183621, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.39292052415275885, 0.0, 0.0003268797082673576, 0.0011424188270622699, 0.0, 0.0, 0.004317032040472175, 3.142508260307427e-05, 0.0, 0.0, 0.5537894233680722, 0.28184052017073197, 0.015966383939961543, 0.0002995587926924772, 0.0005713078253519804, 0.0035316933149879015, 0.0] | [nan, 0.09656561651317118, 0.9239613003877697, 0.00021265611687132485, 0.007163978434475801, 0.0222089828684614, nan, 0.0010774805715464, 0.0, 0.0, 0.8583517795809614, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.705533848895072, 0.0, 0.00033222625115695, 0.0011495555325644448, 0.0, nan, 0.008061062548807214, 3.244014792707455e-05, 0.0, 0.0, 0.8715627360179777, 0.3828074002074446, 0.01597238073499201, 0.0003298619292210546, 0.0011388100215281895, 0.003805890022240969, 0.0] |
| 2.6259 | 0.05 | 20 | 2.9327 | 0.0763 | 0.1260 | 0.5923 | [nan, 0.15598158400203022, 0.6233750625153907, 0.0037560777123078824, 0.026995519273962765, 0.027599075064035524, 0.0, 0.0010671752114502803, 0.0, 0.0, 0.503652156236298, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.42226922942999406, 0.0, 0.0005751844669974061, 0.0, 0.0, 0.0, 0.015053303500921295, 0.0, 0.0, 0.0, 0.5380260834627074, 0.2004924888392474, 0.07113330974397604, 7.792680075848753e-05, 0.000328515111695138, 0.0025085129486024, 0.0] | [nan, 0.17282441039529764, 0.9228726118961177, 0.00408103876916878, 0.028255152590055656, 0.029544523907019265, nan, 0.0010791707371488259, 0.0, 0.0, 0.8681646650418041, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7122996003019028, 0.0, 0.0005801259615003622, 0.0, 0.0, nan, 0.02304960072549563, 0.0, 0.0, 0.0, 0.9348363685365858, 0.2596289024956107, 0.07122958643730157, 8.48216389425569e-05, 0.0005356047133214773, 0.0026059641588056346, 0.0] |
9e996d74ab5c4d3e60f84afe473f466f
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set:
- Loss: 0.2173
- Accuracy: 0.925
- F1: 0.9252
0f4e208afe18f0b696eaa0c2c313defa
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.825         | 1.0   | 250  | 0.2925          | 0.915    | 0.9134 |
| 0.2444        | 2.0   | 500  | 0.2173          | 0.925    | 0.9252 |
f3676fc6498d3ceb8fb97a49e232164a
apache-2.0
['generated_from_trainer']
false
t5-base-finetuned-cnndm_fs0.1-c

This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 1.9852
- Recall: 34.0438
- Precision: 33.1906
- F1: 31.9429
- Gen Len: 18.9962
85978488a02534d6878529accc4f7376
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 1799
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
86a9ecbcf46f1010476f343e36bb499a
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Recall  | Precision | F1      | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:-------:|:-------:|
| 2.19          | 0.11  | 200  | 1.6689          | 33.0433 | 41.8436   | 35.485  | 18.9695 |
| 1.74          | 0.23  | 400  | 1.5476          | 36.2979 | 44.1939   | 38.3491 | 18.9741 |
| 1.6352        | 0.34  | 600  | 1.5075          | 33.8389 | 42.4269   | 36.2306 | 18.9848 |
| 1.5937        | 0.46  | 800  | 1.4779          | 33.8957 | 42.3366   | 36.2976 | 18.9939 |
| 1.5457        | 0.57  | 1000 | 1.4497          | 34.2432 | 42.4519   | 36.5314 | 18.9916 |
| 1.522         | 0.69  | 1200 | 1.4360          | 34.8509 | 42.6855   | 36.9827 | 18.9886 |
| 1.5091        | 0.8   | 1400 | 1.4210          | 34.5935 | 42.3167   | 36.7092 | 18.9848 |
| 1.5015        | 0.92  | 1600 | 1.4013          | 35.3025 | 43.1577   | 37.4461 | 18.9954 |
| 1.4897        | 1.03  | 1800 | 1.3980          | 34.498  | 42.2453   | 36.5759 | 18.9886 |
| 1.468         | 1.15  | 2000 | 1.3998          | 34.6134 | 42.053    | 36.5715 | 18.9863 |
| 1.4812        | 1.26  | 2200 | 1.4014          | 34.5802 | 41.9303   | 36.5025 | 18.9871 |
| 1.5264        | 1.38  | 2400 | 1.4729          | 34.0632 | 40.792    | 35.5837 | 18.9863 |
| 1.7346        | 1.49  | 2600 | 1.6945          | 33.8488 | 36.3411   | 33.3566 | 18.997  |
| 1.9477        | 1.61  | 2800 | 1.8588          | 34.0827 | 34.8631   | 32.749  | 18.9931 |
| 2.1295        | 1.72  | 3000 | 1.9741          | 34.6842 | 33.8048   | 32.5274 | 18.9939 |
| 2.1759        | 1.84  | 3200 | 1.9805          | 34.4333 | 33.5371   | 32.2921 | 18.9962 |
| 2.194         | 1.95  | 3400 | 1.9852          | 34.0438 | 33.1906   | 31.9429 | 18.9962 |
f087a0f6a3e5110b727904a9596c785f
apache-2.0
['generated_from_trainer']
false
bart-model2-1209

This model is a fine-tuned version of [theojolliffe/bart-paraphrase-v4-e1-feedback](https://huggingface.co/theojolliffe/bart-paraphrase-v4-e1-feedback) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1658
- Rouge1: 55.8035
- Rouge2: 46.8603
- Rougel: 54.6759
- Rougelsum: 55.2072
- Gen Len: 19.6748
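The Rougel score above is based on the longest common subsequence (LCS) between candidate and reference. A minimal word-level sketch of the idea (simplified relative to the official ROUGE scorer, which adds stemming and other normalization):

```python
def lcs_len(a, b):
    # Classic dynamic-programming longest common subsequence length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_f1(candidate: str, reference: str) -> float:
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)

# Five of six words survive in order as a common subsequence.
print(round(rouge_l_f1("the cat sat on the mat", "the cat lay on the mat"), 4))  # 0.8333
```

ROUGE-1 and ROUGE-2 analogously count unigram and bigram overlap instead of subsequence length; the card reports all three scaled to percentages.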
779e86a9c68b8d4d492b19d4b3cd2174
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.9329        | 1.0   | 650  | 0.1658          | 55.8035 | 46.8603 | 54.6759 | 55.2072   | 19.6748 |
c5d6c948d9aa3df75797a4914799c886
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
abrazaq Dreambooth model trained by raza2 with [buildspace's DreamBooth](https://colab.research.google.com/github/buildspace/diffusers/blob/main/examples/dreambooth/DreamBooth_Stable_Diffusion.ipynb) notebook

Build your own using the [AI Avatar project](https://buildspace.so/builds/ai-avatar)! To get started, head over to the [project dashboard](https://buildspace.so/p/build-ai-avatars).

Sample pictures of this concept:
5ef2ad332d1fc782dabccd41b9946f13
apache-2.0
['generated_from_trainer']
false
distilroberta-base-finetuned-squad

This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the squad dataset. It achieves the following results on the evaluation set:
- Loss: 1.0014
d420d8bf5663dbe6212f0d1f3b9b166f
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0927        | 1.0   | 5536  | 1.0290          |
| 0.87          | 2.0   | 11072 | 0.9683          |
| 0.7335        | 3.0   | 16608 | 1.0014          |
6d12c2443643595d93f4bdff962f1135
apache-2.0
['generated_from_trainer']
false
convnext-tiny-224-finetuned-eurosat-albumentations

This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the image_folder dataset. It achieves the following results on the evaluation set:
- Loss: 0.0727
- Accuracy: 0.9748
7715edee87f18312627b1fd1c0f8e607
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.141         | 1.0   | 190  | 0.1496          | 0.9544   |
| 0.0736        | 2.0   | 380  | 0.0958          | 0.9719   |
| 0.0568        | 3.0   | 570  | 0.0727          | 0.9748   |
2cc97971d0138fd2ebccac70849ade5e
apache-2.0
['automatic-speech-recognition', 'en']
false
exp_w2v2r_en_xls-r_accent_us-2_england-8_s930

Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
b61c1ff362aca58a02bf40794bdbd6dd
mit
['msmarco', 't5', 'pytorch', 'tensorflow', 'en']
false
Introduction

mT5-base-en-msmarco-v1 is an mT5-based model fine-tuned on the English MS MARCO passage dataset. Further information about the dataset and the translation method can be found in our paper [**mMARCO: A Multilingual Version of the MS MARCO Passage Ranking Dataset**](https://arxiv.org/abs/2108.13897) and the [mMARCO](https://github.com/unicamp-dl/mMARCO) repository.
2c3c6ef27b1522334ec0c7b7b7a80b0b
mit
['msmarco', 't5', 'pytorch', 'tensorflow', 'en']
false
Usage

```python
from transformers import T5Tokenizer, MT5ForConditionalGeneration

model_name = 'unicamp-dl/mt5-base-en-msmarco'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
```
0416e7785b64d16067f5badabba4d1bd
mit
['msmarco', 't5', 'pytorch', 'tensorflow', 'en']
false
Citation

If you use mT5-base-en-msmarco, please cite:

    @misc{bonifacio2021mmarco,
      title={mMARCO: A Multilingual Version of the MS MARCO Passage Ranking Dataset},
      author={Luiz Henrique Bonifacio and Vitor Jeronymo and Hugo Queiroz Abonizio and Israel Campiotti and Marzieh Fadaee and Roberto Lotufo and Rodrigo Nogueira},
      year={2021},
      eprint={2108.13897},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
    }
61f6c333c3b7bea5db6875fbbf06ae16
apache-2.0
['generated_from_trainer']
false
twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-14_37_35

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.3190
- Precision: 0.1194
- Recall: 0.2563
- F1: 0.1629
- Accuracy: 0.8546
53175adbd1947d05d37679c638a838f9
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 30   | 0.4963          | 0.0223    | 0.0562 | 0.0319 | 0.7461   |
| No log        | 2.0   | 60   | 0.4089          | 0.0617    | 0.1359 | 0.0849 | 0.8093   |
| No log        | 3.0   | 90   | 0.3919          | 0.1053    | 0.2101 | 0.1403 | 0.8219   |
| No log        | 4.0   | 120  | 0.3787          | 0.1202    | 0.2482 | 0.1619 | 0.8270   |
| No log        | 5.0   | 150  | 0.3745          | 0.1171    | 0.2391 | 0.1572 | 0.8311   |
c8a6ae42dca172549a009aced9fdc87c
apache-2.0
['generated_from_keras_callback']
false
risethi/distilbert-base-uncased-finetuned-squad

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.9709
- Validation Loss: 1.1167
- Epoch: 1
8b486f1e60a16eab94afc158b6dd7753
mit
['generated_from_trainer']
false
bert_base_tcm_0.9_10_epochs

This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0190
- Criterio Julgamento Precision: 0.8170
- Criterio Julgamento Recall: 0.8803
- Criterio Julgamento F1: 0.8475
- Criterio Julgamento Number: 142
- Data Sessao Precision: 0.7798
- Data Sessao Recall: 0.9444
- Data Sessao F1: 0.8543
- Data Sessao Number: 90
- Modalidade Licitacao Precision: 0.9549
- Modalidade Licitacao Recall: 0.9799
- Modalidade Licitacao F1: 0.9673
- Modalidade Licitacao Number: 648
- Numero Exercicio Precision: 0.9559
- Numero Exercicio Recall: 0.9848
- Numero Exercicio F1: 0.9701
- Numero Exercicio Number: 330
- Objeto Licitacao Precision: 0.5496
- Objeto Licitacao Recall: 0.6792
- Objeto Licitacao F1: 0.6076
- Objeto Licitacao Number: 106
- Valor Objeto Precision: 0.8182
- Valor Objeto Recall: 0.8438
- Valor Objeto F1: 0.8308
- Valor Objeto Number: 32
- Overall Precision: 0.8868
- Overall Recall: 0.9414
- Overall F1: 0.9133
- Overall Accuracy: 0.9957
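The per-entity precision, recall, and F1 above follow the usual span-matching convention for NER evaluation: a predicted entity counts as correct only if its label and both boundaries match a gold span exactly. A toy illustration (the spans and label names below are hypothetical, chosen only to echo the entity types in this card):

```python
def span_prf(predicted, gold):
    # Exact-match scoring over (label, start, end) triples.
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = {("MODALIDADE_LICITACAO", 3, 5), ("NUMERO_EXERCICIO", 10, 11), ("DATA_SESSAO", 20, 23)}
# One exact match; one span with a wrong right boundary (counts as an error).
pred = {("MODALIDADE_LICITACAO", 3, 5), ("NUMERO_EXERCICIO", 10, 12)}
p, r, f = span_prf(pred, gold)
print(round(p, 2), round(r, 2), round(f, 2))  # 0.5 0.33 0.4
```

Overall Accuracy, by contrast, is token-level, which is why it sits far above the span-level scores for rare entity types like Objeto Licitacao.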
27ab0ac16eabe59ad50b4f1761ceea56
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
610d409b81958450716883cbb928bcc9
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Criterio Julgamento Precision | Criterio Julgamento Recall | Criterio Julgamento F1 | Criterio Julgamento Number | Data Sessao Precision | Data Sessao Recall | Data Sessao F1 | Data Sessao Number | Modalidade Licitacao Precision | Modalidade Licitacao Recall | Modalidade Licitacao F1 | Modalidade Licitacao Number | Numero Exercicio Precision | Numero Exercicio Recall | Numero Exercicio F1 | Numero Exercicio Number | Objeto Licitacao Precision | Objeto Licitacao Recall | Objeto Licitacao F1 | Objeto Licitacao Number | Valor Objeto Precision | Valor Objeto Recall | Valor Objeto F1 | Valor Objeto Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|
| 0.0187 | 1.0 | 3497 | 0.0201 | 0.776 | 0.6831 | 0.7266 | 142 | 0.7565 | 0.9667 | 0.8488 | 90 | 0.9548 | 0.9784 | 0.9665 | 648 | 0.9329 | 0.9697 | 0.9510 | 330 | 0.375 | 0.5377 | 0.4419 | 106 | 0.7045 | 0.9688 | 0.8158 | 32 | 0.8496 | 0.9095 | 0.8785 | 0.9946 |
| 0.0173 | 2.0 | 6994 | 0.0190 | 0.8170 | 0.8803 | 0.8475 | 142 | 0.7798 | 0.9444 | 0.8543 | 90 | 0.9549 | 0.9799 | 0.9673 | 648 | 0.9559 | 0.9848 | 0.9701 | 330 | 0.5496 | 0.6792 | 0.6076 | 106 | 0.8182 | 0.8438 | 0.8308 | 32 | 0.8868 | 0.9414 | 0.9133 | 0.9957 |
| 0.0106 | 3.0 | 10491 | 0.0306 | 0.8187 | 0.9225 | 0.8675 | 142 | 0.7890 | 0.9556 | 0.8643 | 90 | 0.9550 | 0.9830 | 0.9688 | 648 | 0.9475 | 0.9848 | 0.9658 | 330 | 0.5373 | 0.6792 | 0.6000 | 106 | 0.7561 | 0.9688 | 0.8493 | 32 | 0.8817 | 0.9510 | 0.9151 | 0.9946 |
| 0.0071 | 4.0 | 13988 | 0.0226 | 0.8258 | 0.9014 | 0.8620 | 142 | 0.7830 | 0.9222 | 0.8469 | 90 | 0.9608 | 0.9846 | 0.9726 | 648 | 0.9440 | 0.9697 | 0.9567 | 330 | 0.5522 | 0.6981 | 0.6167 | 106 | 0.9394 | 0.9688 | 0.9538 | 32 | 0.8903 | 0.9451 | 0.9169 | 0.9959 |
| 0.0043 | 5.0 | 17485 | 0.0236 | 0.8408 | 0.9296 | 0.8829 | 142 | 0.7766 | 0.8111 | 0.7935 | 90 | 0.9637 | 0.9846 | 0.9740 | 648 | 0.9461 | 0.9576 | 0.9518 | 330 | 0.5682 | 0.7075 | 0.6303 | 106 | 0.7949 | 0.9688 | 0.8732 | 32 | 0.8921 | 0.9384 | 0.9147 | 0.9952 |
| 0.0041 | 6.0 | 20982 | 0.0273 | 0.8269 | 0.9085 | 0.8658 | 142 | 0.7838 | 0.9667 | 0.8657 | 90 | 0.9652 | 0.9830 | 0.9740 | 648 | 0.9408 | 0.9636 | 0.9521 | 330 | 0.5827 | 0.7642 | 0.6612 | 106 | 0.7895 | 0.9375 | 0.8571 | 32 | 0.8890 | 0.9510 | 0.9190 | 0.9953 |
| 0.0021 | 7.0 | 24479 | 0.0322 | 0.8228 | 0.9155 | 0.8667 | 142 | 0.7810 | 0.9111 | 0.8410 | 90 | 0.9608 | 0.9830 | 0.9718 | 648 | 0.9412 | 0.9697 | 0.9552 | 330 | 0.5507 | 0.7170 | 0.6230 | 106 | 0.8333 | 0.9375 | 0.8824 | 32 | 0.8854 | 0.9458 | 0.9146 | 0.9951 |
| 0.0026 | 8.0 | 27976 | 0.0336 | 0.8435 | 0.8732 | 0.8581 | 142 | 0.8039 | 0.9111 | 0.8542 | 90 | 0.9637 | 0.9846 | 0.9740 | 648 | 0.9528 | 0.9788 | 0.9656 | 330 | 0.5620 | 0.7264 | 0.6337 | 106 | 0.8378 | 0.9688 | 0.8986 | 32 | 0.8954 | 0.9458 | 0.9199 | 0.9952 |
| 0.001 | 9.0 | 31473 | 0.0326 | 0.8477 | 0.9014 | 0.8737 | 142 | 0.7905 | 0.9222 | 0.8513 | 90 | 0.9665 | 0.9784 | 0.9724 | 648 | 0.9551 | 0.9667 | 0.9608 | 330 | 0.5940 | 0.7453 | 0.6611 | 106 | 0.8611 | 0.9688 | 0.9118 | 32 | 0.9004 | 0.9451 | 0.9222 | 0.9952 |
| 0.0011 | 10.0 | 34970 | 0.0338 | 0.8387 | 0.9155 | 0.8754 | 142 | 0.7810 | 0.9111 | 0.8410 | 90 | 0.9650 | 0.9799 | 0.9724 | 648 | 0.9607 | 0.9636 | 0.9622 | 330 | 0.6015 | 0.7547 | 0.6695 | 106 | 0.8857 | 0.9688 | 0.9254 | 32 | 0.9005 | 0.9466 | 0.9230 | 0.9952 |
e9d2b3aab05a620ef03abe47bd724aff
apache-2.0
['generated_from_trainer']
false
tiny-mlm-glue-mnli-target-glue-cola

This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-mnli](https://huggingface.co/muhtasham/tiny-mlm-glue-mnli) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.7272
- Matthews Correlation: 0.0899
689191520f363c56e389660b574ac3f6
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6097        | 1.87  | 500  | 0.6214          | 0.0                  |
| 0.601         | 3.73  | 1000 | 0.6166          | 0.0                  |
| 0.5829        | 5.6   | 1500 | 0.6181          | 0.0630               |
| 0.5537        | 7.46  | 2000 | 0.6384          | 0.0793               |
| 0.5231        | 9.33  | 2500 | 0.6629          | 0.1079               |
| 0.508         | 11.19 | 3000 | 0.6679          | 0.0949               |
| 0.4817        | 13.06 | 3500 | 0.6915          | 0.1062               |
| 0.4661        | 14.93 | 4000 | 0.7272          | 0.0899               |
5cdd734b071cc094e7836450f220b9fe
creativeml-openrail-m
['text-to-image', 'v2.0', 'Embedding']
false
Textual Inversion Embedding by ConflictX

For SD 2.0, trained on 768x768 images from Midjourney.

Install by downloading the step embedding you want and putting it in the \embeddings folder.

It is slightly overfit at 150 steps, so some concepts/keywords will be harder to prompt for (use negatives or weight Kipaki down), but it works amazingly for cityscapes, people, gods, and other sci-fi genres. It is very stylized towards ancient Egypt, sci-fi, and an orange/blue color scheme, but other concepts are definitely possible.

More images here: https://imgur.com/a/W2bmBaV

Use keyword: Kipaki-xxx, where xxx is the embedding number. There are multiple versions; the images below were created with the 150-step version.

![00401-2324710412-a beautiful woman with a biolumenscent mask and glowing eyes, very detailed, best quality, soft lighting, Kipaki style, very de.png](https://s3.amazonaws.com/moonup/production/uploads/1669895566207-6303c53d7373aacccd859bbd.png)
![00404-3141178668-a beautiful woman with a biolumenscent mask and ((glowing eyes)), very detailed, best quality, soft lighting, Kipaki style, ver.png](https://s3.amazonaws.com/moonup/production/uploads/1669895611412-6303c53d7373aacccd859bbd.png)
![00383-4169954247-full view of star wars tie-fighter in space, very detailed, best quality, soft lighting, Kipaki style, very detailed, intricate.png](https://s3.amazonaws.com/moonup/production/uploads/1669895735004-6303c53d7373aacccd859bbd.png)
![00415-2206533630-a cozy modern interior living room, blue lighting, very detailed, best quality, soft lighting, Kipaki style, very detailed, intr.png](https://s3.amazonaws.com/moonup/production/uploads/1669896359595-6303c53d7373aacccd859bbd.png)
![00427-1769024071-a woman in a cozy modern interior swimming pool, blue lighting, very detailed, best quality, soft lighting, stylized Kipaki styl.png](https://s3.amazonaws.com/moonup/production/uploads/1669896638806-6303c53d7373aacccd859bbd.png)
![00414-3392484879-a rocket launching from a launch pad, blue lighting, very detailed, best quality, soft lighting, Kipaki style, very detailed, in.png](https://s3.amazonaws.com/moonup/production/uploads/1669896244606-6303c53d7373aacccd859bbd.png)
![00443-80180354-star wars egyptian storm trooper, stylized (Kipaki _0.65) style, very detailed, dust, 4k high resolution, sharp, fragmenv2, int.png](https://s3.amazonaws.com/moonup/production/uploads/1669899334082-6303c53d7373aacccd859bbd.png)

Highres Images:

![00466-1644083345-batman, stylized (Kipaki_1.0) style, very detailed, dust, 4k high resolution, sharp, intricate.png](https://s3.amazonaws.com/moonup/production/uploads/1669901365152-6303c53d7373aacccd859bbd.png)
![00467-1644083345-spiderman, stylized (Kipaki_1.0) style, very detailed, dust, 4k high resolution, sharp, intricate.png](https://s3.amazonaws.com/moonup/production/uploads/1669901409079-6303c53d7373aacccd859bbd.png)
![00466-1644083345-an emerald crown, stylized (Kipaki_1.0) style, very detailed, dust, 4k high resolution, sharp, intricate.png](https://s3.amazonaws.com/moonup/production/uploads/1669901637347-6303c53d7373aacccd859bbd.png)
![00374-2662732015-a robot assembling a car , stylized (Kipaki_1.0) style, very detailed, dust, 4k high resolution, sharp, intricate, by artists.png](https://s3.amazonaws.com/moonup/production/uploads/1669902623798-6303c53d7373aacccd859bbd.png)
0413dadddbb7d7e0593d8108e4fa2f5a
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-squad

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset. It achieves the following results on the evaluation set:
- Loss: 1.3206
3a44e5b1a3f74181441d15cbc47809a9
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2156        | 1.0   | 8235  | 1.1791          |
| 0.9413        | 2.0   | 16470 | 1.2182          |
| 0.7514        | 3.0   | 24705 | 1.3206          |
a7917c407ca5e10a0f215619ff3b3d20
agpl-3.0
['generated_from_trainer']
false
XLMR-ENIS-finetuned-stsb

This model is a fine-tuned version of [vesteinn/XLMR-ENIS](https://huggingface.co/vesteinn/XLMR-ENIS) on the glue dataset. It achieves the following results on the evaluation set:
- Loss: 0.5232
- Pearson: 0.8915
- Spearmanr: 0.8888
dceaef44d8eedef3d9140409e3480fc5
agpl-3.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|
| No log        | 1.0   | 360  | 0.6330          | 0.8562  | 0.8570    |
| 1.2835        | 2.0   | 720  | 0.6368          | 0.8790  | 0.8781    |
| 0.4518        | 3.0   | 1080 | 0.5352          | 0.8883  | 0.8852    |
| 0.4518        | 4.0   | 1440 | 0.4881          | 0.8910  | 0.8885    |
| 0.288         | 5.0   | 1800 | 0.5232          | 0.8915  | 0.8888    |
0684143a457ac423321d2bb500d70c06
apache-2.0
['generated_from_trainer']
false
wav2vec2-xls-r-300m-nyanja-test_v2

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 0.3734
- Cer: 0.0827
2ad001cb657a9db54dd4723f84f236bd
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 15
- mixed_precision_training: Native AMP
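The hyperparameters above imply that the total batch size is the per-device batch size multiplied by the accumulation steps. A minimal sketch of that relationship (illustration only, not tied to any particular trainer):

```python
def effective_batch_size(train_batch_size: int, gradient_accumulation_steps: int, num_devices: int = 1) -> int:
    # Gradients are accumulated over several forward/backward passes
    # before each optimizer step, multiplying the effective batch size.
    return train_batch_size * gradient_accumulation_steps * num_devices

# Values from the hyperparameter list above:
print(effective_batch_size(4, 2))  # total_train_batch_size: 8
```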
542188dc9cc5004dd3eb71a9ec0a91fc
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    | Cer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 1.5816        | 0.62  | 400  | inf             | 0.5702 | 0.1373 |
| 0.6341        | 1.24  | 800  | inf             | 0.4383 | 0.1022 |
| 0.5103        | 1.86  | 1200 | inf             | 0.3782 | 0.0895 |
| 0.4553        | 2.48  | 1600 | inf             | 0.3734 | 0.0827 |
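The Wer column above is word error rate: the word-level edit distance between hypothesis and reference, normalized by reference length (Cer is the same computation over characters). A minimal sketch of the metric; the actual values in the table come from the training framework, not this code:

```python
def edit_distance(ref, hyp):
    # Classic Levenshtein dynamic program over token sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)]

def wer(reference: str, hypothesis: str) -> float:
    # Substitutions + insertions + deletions, divided by reference word count.
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / len(ref)

print(wer("the cat sat", "the cat sit"))  # one substitution out of three words
```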
d3615367e52b6058cbf7111f8e22b21b
apache-2.0
['generated_from_trainer']
false
distilroberta-base-etc-nlp

This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0039
- Accuracy: 0.9993
- F1: 0.9993
c96f023b3eda0d99b3a12915110c1870
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 262  | 0.0025          | 0.9997   | 0.9997 |
| No log        | 2.0   | 524  | 0.0039          | 0.9993   | 0.9993 |
6b8542b504211b5b5fbb4949aea6e08a
apache-2.0
['automatic-speech-recognition', 'fr']
false
exp_w2v2r_fr_vp-100k_age_teens-5_sixties-5_s408 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
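The card requires input audio sampled at 16 kHz. A minimal way to bring audio to that rate is sketched below using linear interpolation; this is for illustration only (a band-limited resampler such as `torchaudio.transforms.Resample` or `librosa.resample` is preferable in practice):

```python
import numpy as np

def resample_linear(audio: np.ndarray, orig_sr: int, target_sr: int = 16000) -> np.ndarray:
    # Naive linear-interpolation resampler: fine as a sketch, but real
    # pipelines should use a proper anti-aliased resampler.
    if orig_sr == target_sr:
        return audio
    duration = len(audio) / orig_sr
    n_target = int(round(duration * target_sr))
    old_t = np.linspace(0.0, duration, num=len(audio), endpoint=False)
    new_t = np.linspace(0.0, duration, num=n_target, endpoint=False)
    return np.interp(new_t, old_t, audio)

one_second_at_44100 = np.zeros(44100)
print(len(resample_linear(one_second_at_44100, 44100)))  # 16000 samples
```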
abaf00c4b6d2d6bef5b2b76e4f9c34cd
apache-2.0
['automatic-speech-recognition', 'it']
false
exp_w2v2t_it_unispeech-sat_s692 Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
933883a31719f00b950e6c0675428f0d
apache-2.0
['generated_from_keras_callback']
false
kookoobear/distilbert-base-uncased-finetuned-imdb

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 2.8602
- Validation Loss: 2.6150
- Epoch: 0
65dbf732f61e7ef4d9575345512ccefa
mit
['generated_from_trainer']
false
run-2

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 2.1449
- Accuracy: 0.75
- Precision: 0.7115
- Recall: 0.7093
- F1: 0.7103
89bf4cdd108cd6b1a4cf08879f5001c3
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.9838        | 1.0   | 50   | 0.8621          | 0.645    | 0.6536    | 0.6130 | 0.6124 |
| 0.7134        | 2.0   | 100  | 0.8124          | 0.7      | 0.6628    | 0.6421 | 0.6483 |
| 0.4911        | 3.0   | 150  | 0.8571          | 0.7      | 0.6726    | 0.6314 | 0.6361 |
| 0.3104        | 4.0   | 200  | 0.8228          | 0.76     | 0.7298    | 0.7367 | 0.7294 |
| 0.1942        | 5.0   | 250  | 1.1132          | 0.76     | 0.7282    | 0.7031 | 0.7119 |
| 0.1409        | 6.0   | 300  | 1.2218          | 0.685    | 0.6516    | 0.6560 | 0.6524 |
| 0.0976        | 7.0   | 350  | 1.3648          | 0.715    | 0.6984    | 0.7044 | 0.6946 |
| 0.0791        | 8.0   | 400  | 1.5985          | 0.745    | 0.7183    | 0.7113 | 0.7124 |
| 0.0647        | 9.0   | 450  | 1.8884          | 0.725    | 0.6818    | 0.6761 | 0.6785 |
| 0.0275        | 10.0  | 500  | 1.8639          | 0.725    | 0.6979    | 0.7008 | 0.6958 |
| 0.0329        | 11.0  | 550  | 1.8831          | 0.72     | 0.6816    | 0.6869 | 0.6838 |
| 0.0169        | 12.0  | 600  | 2.1426          | 0.73     | 0.6864    | 0.6776 | 0.6794 |
| 0.0072        | 13.0  | 650  | 2.2483          | 0.725    | 0.7187    | 0.7054 | 0.6968 |
| 0.0203        | 14.0  | 700  | 2.2901          | 0.735    | 0.6986    | 0.6885 | 0.6921 |
| 0.0093        | 15.0  | 750  | 2.3134          | 0.725    | 0.6830    | 0.6666 | 0.6723 |
| 0.0089        | 16.0  | 800  | 2.1598          | 0.73     | 0.6919    | 0.6860 | 0.6885 |
| 0.0061        | 17.0  | 850  | 2.0879          | 0.75     | 0.7129    | 0.7132 | 0.7125 |
| 0.0024        | 18.0  | 900  | 2.1285          | 0.745    | 0.7062    | 0.7071 | 0.7049 |
| 0.0043        | 19.0  | 950  | 2.1386          | 0.74     | 0.7001    | 0.7003 | 0.6985 |
| 0.0028        | 20.0  | 1000 | 2.1449          | 0.75     | 0.7115    | 0.7093 | 0.7103 |
f160a1ccc6ae96ea8a45ba15e89b46d4
apache-2.0
['generated_from_trainer', 'whisper-event']
false
luigisaetta/whisper-medium

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the common_voice_11_0 dataset. It achieves the following results on the evaluation set:
- Loss: 0.1531
- Wer: 5.5543
68bdc98b928dc42c3e95319f51105d29
apache-2.0
['generated_from_trainer', 'whisper-event']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 6000
56055e7ec8bc02753c5044f84dbdc1be
apache-2.0
['generated_from_trainer', 'whisper-event']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2023        | 0.17  | 1000 | 0.1852          | 7.6354 |
| 0.1215        | 0.33  | 2000 | 0.1577          | 6.4088 |
| 0.0711        | 1.1   | 3000 | 0.1576          | 6.1324 |
| 0.0656        | 1.27  | 4000 | 0.1499          | 5.8786 |
| 0.0294        | 2.04  | 5000 | 0.1552          | 5.6234 |
| 0.0351        | 2.21  | 6000 | 0.1531          | 5.5543 |
660df7c2af12a027ea626152a6ce3e7d
apache-2.0
[]
false
PaddlePaddle/uie-senta-micro

Sentiment analysis has been a research hotspot in recent years, aiming to analyze, process, summarize, and reason about emotionally subjective texts. It has a wide range of application scenarios, such as consumer decision making, public opinion mining, and personalized recommendation. By analysis granularity, it can be roughly divided into three categories: document-level, sentence-level, and aspect-level sentiment analysis. Aspect-level sentiment analysis in turn includes multiple subtasks, such as aspect term extraction, opinion term extraction, and aspect-opinion-sentiment triplet extraction.

UIE-Senta is a family of Chinese sentiment analysis models that uses UIE as the backbone and is further trained on a large number of sentiment analysis samples, giving it a stronger ability to understand sentiment knowledge and handle related samples. Currently, UIE-Senta supports most basic sentiment analysis capabilities, including sentence-level sentiment classification, aspect term extraction, opinion term extraction, aspect-sentiment pair extraction, aspect-opinion pair extraction, and aspect-opinion-sentiment triplet extraction. You can perform sentiment analysis with UIE-Senta to improve your business analysis capabilities.

<div align="center">
    <img src="https://user-images.githubusercontent.com/35913314/199965793-f0933baa-5b82-47da-9271-ba36642119f8.png" />
</div>
94d3200679d2a7c66db059fddbb6d914
apache-2.0
['pythae', 'reproducibility']
false
This model was trained with pythae. It can be downloaded or reloaded using the method `load_from_hf_hub`

```python
>>> from pythae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="clementchadebec/reproduced_svae")
```
44d4aa822d75ac298fd877477e8fc2bc
apache-2.0
['pythae', 'reproducibility']
false
Reproducibility

This trained model reproduces the results of Table 1 in [1].

| Model | Dataset | Metric | Obtained value | Reference value |
|:---:|:---:|:---:|:---:|:---:|
| SVAE | Dyn. Binarized MNIST | NLL (500 IS) | 93.13 (0.01) | 93.16 (0.31) |

[1] Tim R Davidson, Luca Falorsi, Nicola De Cao, Thomas Kipf, and Jakub M Tomczak. Hyperspherical variational auto-encoders. In 34th Conference on Uncertainty in Artificial Intelligence 2018, UAI 2018, pages 856–865. Association For Uncertainty in Artificial Intelligence (AUAI), 2018.
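The "NLL (500 IS)" metric above is a negative log-likelihood estimated with K = 500 importance samples: NLL(x) ≈ -(logsumexp_k[log p(x, z_k) - log q(z_k|x)] - log K). A hedged numpy sketch of that estimator; the log-weights below are placeholders, not real model outputs:

```python
import numpy as np

def is_nll(log_weights: np.ndarray) -> float:
    # log_weights[k] = log p(x, z_k) - log q(z_k | x) for K posterior samples.
    # NLL(x) = -(logsumexp(log_weights) - log K), computed stably.
    k = len(log_weights)
    m = log_weights.max()
    return -((m + np.log(np.exp(log_weights - m).sum())) - np.log(k))

# With identical log-weights the estimate reduces to the negated weight itself:
print(is_nll(np.full(500, -93.13)))
```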
f8c2e940cb4dee8cc8d3227726ea4e94
apache-2.0
['generated_from_trainer']
false
bert-base-cased-finetuned-emotion

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the emotion dataset. It achieves the following results on the evaluation set:
- Loss: 0.1342
- F1: 0.9365
decf57e3a561872bb6465e25c890cd37
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7357        | 1.0   | 250  | 0.2318          | 0.9224 |
| 0.1758        | 2.0   | 500  | 0.1679          | 0.9349 |
| 0.1228        | 3.0   | 750  | 0.1385          | 0.9382 |
| 0.0961        | 4.0   | 1000 | 0.1452          | 0.9340 |
| 0.0805        | 5.0   | 1250 | 0.1342          | 0.9365 |
5e28be623c085e126e31f1e04b476364
apache-2.0
['CTC', 'pytorch', 'speechbrain', 'Transformer']
false
Transcribing your own audio files (in Darija)

```python
from speechbrain.pretrained import EncoderASR

asr_model = EncoderASR.from_hparams(source="aioxlabs/dvoice-darija", savedir="pretrained_models/asr-wav2vec2-dvoice-dar")
asr_model.transcribe_file('./the_path_to_your_audio_file')
```
531d5b7bacd0f9326a04824097c69162
mit
['generated_from_trainer', 'deberta-v3']
false
DeBERTa v3 (small) fine-tuned on SST2

This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the GLUE SST2 dataset. It achieves the following results on the evaluation set:
- Loss: 0.2134
- Accuracy: 0.9404
01ac19bc7a753b128f861fdaae5a732e
mit
['generated_from_trainer', 'deberta-v3']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.176         | 1.0   | 4210  | 0.2134          | 0.9404   |
| 0.1254        | 2.0   | 8420  | 0.2362          | 0.9415   |
| 0.0957        | 3.0   | 12630 | 0.3187          | 0.9335   |
| 0.0673        | 4.0   | 16840 | 0.3039          | 0.9266   |
| 0.0457        | 5.0   | 21050 | 0.3521          | 0.9312   |
0fc1a35aba9681f84bc8f59a1ba64857
mit
[]
false
Ancient Greek BERT finetuned for tagging and parsing PROIEL (UD)

This is a finetuned checkpoint of [Ancient Greek BERT](https://huggingface.co/pranaydeeps/Ancient-Greek-BERT) by Singh, Rutten and Lefever (2021), which has been trained on the [UD version of PROIEL](https://github.com/UniversalDependencies/UD_Ancient_Greek-PROIEL). The code for training and using the model can be found on [GitHub](https://github.com/clemeth/tagparse). The config file used is here: [`config.py`](config.py).

If you use this model for something academic, feel free to cite the master's thesis it sprang from:

> Clemeth, D. 2022. Tagging and Parsing Old Texts with New Techniques. University of Oslo. URL: http://urn.nb.no/URN:NBN:no-98954.
e49b940f6e4a0f29212b1803449af09a
mit
[]
false
Performance

This is the performance on the [test set of the UD version of PROIEL](https://github.com/UniversalDependencies/UD_Ancient_Greek-PROIEL/blob/master/grc_proiel-ud-test.conllu).

| Metric | Accuracy |
|:--|:--|
| UPOS | 0.9814480997446298 |
| XPOS | 0.9821991888237945 |
| feats | 0.9254168544389365 |
| all tags | 0.9139251915277152 |
| UAS | 0.8741925792398979 |
| LAS | 0.8402433528616494 |
| LA | 0.9063391918281508 |
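UAS and LAS in the table are attachment scores over dependency trees: UAS is the fraction of tokens whose predicted head is correct, while LAS additionally requires the dependency label to match. A small sketch with made-up trees (not the evaluation code used for the numbers above):

```python
def attachment_scores(gold, pred):
    # gold/pred: one (head_index, dependency_label) pair per token.
    assert len(gold) == len(pred)
    uas = sum(g[0] == p[0] for g, p in zip(gold, pred)) / len(gold)  # head only
    las = sum(g == p for g, p in zip(gold, pred)) / len(gold)        # head + label
    return uas, las

gold = [(0, "root"), (1, "nsubj"), (1, "obj"), (3, "det")]
pred = [(0, "root"), (1, "nsubj"), (1, "iobj"), (2, "det")]
print(attachment_scores(gold, pred))  # (0.75, 0.5)
```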
c036267111e93af9c5bab16c92795db3
mit
[]
false
Bibliography - Singh, P., Rutten, G. Lefever, E. 2021. A Pilot Study for BERT Language Modelling and Morphological Analysis for Ancient and Medieval Greek. Proceedings of LaTeCH-CLfL 2021, pp. 128–137. [https://doi.org/10.18653/v1/2021.latechclfl-1.15](https://doi.org/10.18653/v1/2021.latechclfl-1.15).
7243f681c74a4b11bcd588adac903f82
mit
['generated_from_trainer']
false
clinical-finetuned-data3

This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.5058
- Accuracy: 0.86
- Precision: 0.875
- Recall: 0.9265
- F1: 0.9
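F1 above is the harmonic mean of precision and recall, and the reported numbers are self-consistent, as a quick check shows (values copied from the card):

```python
def f1(precision: float, recall: float) -> float:
    # Harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.875, 0.9265), 2))  # 0.9
```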
6cdd4c64352e30f7420ca3f93da5bd6d
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
f9596d239b1e38c9f37fa873ce32a409
apache-2.0
['automatic-speech-recognition', 'it']
false
exp_w2v2t_it_unispeech-ml_s784 Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
53bdda7f21af09831aa3202bd99f34ba
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
Woolly Dreambooth model trained by LaCambre with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook

Test the concept via the A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb), or run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb).

You have to write: "A woolly style blablabla"

Sample pictures of this concept:

![0](https://huggingface.co/LaCambre/woolly/resolve/main/sample_images/woolly_(1).jpg)
671b4a9f25280c9925ad7dfc3f155de8
apache-2.0
['deep-narrow']
false
T5-Efficient-BASE-FF9000 (Deep-Narrow version)

T5-Efficient-BASE-FF9000 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.

In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper:

> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.

To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. A sequence of word embeddings is therefore processed sequentially by each transformer block.
fb44774e3c6435ad9c6fbec4ce6e435b
apache-2.0
['deep-narrow']
false
Details model architecture

This model checkpoint - **t5-efficient-base-ff9000** - is of model type **Base** with the following variations:
- **ff** is **9000**

It has **449.42** million parameters and thus requires *ca.* **1797.7 MB** of memory in full precision (*fp32*) or **898.85 MB** of memory in half precision (*fp16* or *bf16*).

A summary of the *original* T5 model architectures can be seen here:

| Model | nl (el/dl) | ff | dm | kv | nh |
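The memory figures above follow directly from the parameter count: 4 bytes per parameter in fp32, 2 bytes in fp16/bf16 (with MB taken as 10^6 bytes, matching the card's convention). A quick sketch:

```python
def memory_mb(num_params_millions: float, bytes_per_param: int) -> float:
    # 1 MB is taken as 10**6 bytes here, matching the card's figures.
    return num_params_millions * bytes_per_param

print(memory_mb(449.42, 4))  # fp32: ~1797.7 MB
print(memory_mb(449.42, 2))  # fp16/bf16: ~898.8 MB
```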
cecf2e9361cf991511f5e4389a828439
apache-2.0
['generated_from_trainer']
false
depression_suggestion

This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 3.3740
a516b767a22819f313cc5189a4e5a85a
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 70
39d9ed5b45dbf0e61ccf9ac589db03d8
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 3 | 60.7965 |
| No log | 2.0 | 6 | 60.5778 |
| No log | 3.0 | 9 | 60.1954 |
| No log | 4.0 | 12 | 59.6487 |
| No log | 5.0 | 15 | 58.9372 |
| No log | 6.0 | 18 | 58.0582 |
| No log | 7.0 | 21 | 57.0106 |
| No log | 8.0 | 24 | 55.7910 |
| No log | 9.0 | 27 | 54.3934 |
| No log | 10.0 | 30 | 52.8099 |
| No log | 11.0 | 33 | 51.0219 |
| No log | 12.0 | 36 | 49.0127 |
| No log | 13.0 | 39 | 46.7522 |
| No log | 14.0 | 42 | 44.2033 |
| No log | 15.0 | 45 | 41.3146 |
| No log | 16.0 | 48 | 37.9982 |
| No log | 17.0 | 51 | 34.2236 |
| No log | 18.0 | 54 | 29.8068 |
| No log | 19.0 | 57 | 24.9750 |
| No log | 20.0 | 60 | 20.0707 |
| No log | 21.0 | 63 | 15.5166 |
| No log | 22.0 | 66 | 12.0328 |
| No log | 23.0 | 69 | 9.1012 |
| No log | 24.0 | 72 | 7.2116 |
| No log | 25.0 | 75 | 6.3149 |
| No log | 26.0 | 78 | 5.8127 |
| No log | 27.0 | 81 | 5.4548 |
| No log | 28.0 | 84 | 5.1684 |
| No log | 29.0 | 87 | 4.8927 |
| No log | 30.0 | 90 | 4.6128 |
| No log | 31.0 | 93 | 4.3782 |
| No log | 32.0 | 96 | 4.1996 |
| No log | 33.0 | 99 | 4.0981 |
| No log | 34.0 | 102 | 4.0022 |
| No log | 35.0 | 105 | 3.9224 |
| No log | 36.0 | 108 | 3.8381 |
| No log | 37.0 | 111 | 3.7660 |
| No log | 38.0 | 114 | 3.6887 |
| No log | 39.0 | 117 | 3.6483 |
| No log | 40.0 | 120 | 3.6020 |
| No log | 41.0 | 123 | 3.5590 |
| No log | 42.0 | 126 | 3.5199 |
| No log | 43.0 | 129 | 3.4646 |
| No log | 44.0 | 132 | 3.4098 |
| No log | 45.0 | 135 | 3.3684 |
| No log | 46.0 | 138 | 3.3290 |
| No log | 47.0 | 141 | 3.3113 |
| No log | 48.0 | 144 | 3.3033 |
| No log | 49.0 | 147 | 3.2928 |
| No log | 50.0 | 150 | 3.2776 |
| No log | 51.0 | 153 | 3.2587 |
| No log | 52.0 | 156 | 3.2487 |
| No log | 53.0 | 159 | 3.2390 |
| No log | 54.0 | 162 | 3.2318 |
| No log | 55.0 | 165 | 3.2311 |
| No log | 56.0 | 168 | 3.2377 |
| No log | 57.0 | 171 | 3.2554 |
| No log | 58.0 | 174 | 3.2720 |
| No log | 59.0 | 177 | 3.2781 |
| No log | 60.0 | 180 | 3.2882 |
| No log | 61.0 | 183 | 3.3089 |
| No log | 62.0 | 186 | 3.3352 |
| No log | 63.0 | 189 | 3.3519 |
| No log | 64.0 | 192 | 3.3233 |
| No log | 65.0 | 195 | 3.3028 |
| No log | 66.0 | 198 | 3.3153 |
| No log | 67.0 | 201 | 3.3422 |
| No log | 68.0 | 204 | 3.3753 |
| No log | 69.0 | 207 | 3.4003 |
| No log | 70.0 | 210 | 3.3740 |
07f177fe3cece1c4c67db341b3531e74
mit
['translation']
false
mBART 25 SentencePiece tokenizer

This tokenizer is used for Mideind's mBART translation models. It is based on Facebook's mBART-25 SentencePiece model. A language token from the original model has been replaced with "is_IS".

Usage example (for debugging):

```python
import sys
from transformers.models import mbart

MODEL_DIR = sys.argv[1]
tokenizer: mbart.MBartTokenizerFast = mbart.MBartTokenizerFast.from_pretrained(
    MODEL_DIR, src_lang="en_XX"
)
is_lang_idx = tokenizer.convert_tokens_to_ids("is_IS")
model = mbart.MBartForConditionalGeneration.from_pretrained(MODEL_DIR)
test_sentence = "This is a test."
input_ids = tokenizer(test_sentence, return_tensors="pt")
print(input_ids)
outputs = model.generate(
    **input_ids, decoder_start_token_id=is_lang_idx
)
print(outputs)
print(tokenizer.batch_decode(outputs))
```
a225d1438bfee7cdaa9569186c3db6c0
cc-by-4.0
['question generation']
false
Model Card of `research-backup/t5-small-squad-qg-no-paragraph`

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). This model is fine-tuned without paragraph information, using only the sentence that contains the answer.
033305e8c27086bad333c02151fba583
cc-by-4.0
['question generation']
false
model prediction

```python
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```

- With `transformers`

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "research-backup/t5-small-squad-qg-no-paragraph")
output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
5799be9bce024b58c4e41c3709d826df
cc-by-4.0
['question generation']
false
Evaluation

- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/t5-small-squad-qg-no-paragraph/raw/main/eval/metric.first.sentence.sentence_answer.question.lmqg_qg_squad.default.json)

|            |   Score | Type    | Dataset                                                        |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore  |   90.36 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1     |   55.39 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2     |   39.1  | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3     |   29.7  | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4     |   23.23 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR     |   24.8  | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore |   63.18 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L    |   50.18 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
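Bleu_1 in the table is, up to a brevity penalty, the clipped unigram precision of the generated question against the reference. A minimal sketch of that core quantity (not a full BLEU implementation: no brevity penalty or higher-order n-grams):

```python
from collections import Counter

def clipped_unigram_precision(hypothesis: str, reference: str) -> float:
    hyp = Counter(hypothesis.split())
    ref = Counter(reference.split())
    # Each hypothesis token only counts up to its frequency in the reference.
    clipped = sum(min(count, ref[tok]) for tok, count in hyp.items())
    return clipped / sum(hyp.values())

print(clipped_unigram_precision("who painted the landscape", "who painted this landscape"))  # 0.75
```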
02442bc0d9250a57db9741dda7f52943
cc-by-4.0
['question generation']
false
Training hyperparameters

The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['sentence_answer']
- output_types: ['question']
- prefix_types: ['qg']
- model: t5-small
- max_length: 128
- max_length_output: 32
- epoch: 8
- batch: 64
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 1
- label_smoothing: 0.15

The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/t5-small-squad-qg-no-paragraph/raw/main/trainer_config.json).
3860406e127067a6fc372edab2537edc
cc-by-4.0
[]
false
FinEst BERT

FinEst BERT is a trilingual model, using the bert-base architecture, trained on Finnish, Estonian, and English corpora. Focusing on three languages, the model performs better than [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased), while still offering an option for cross-lingual knowledge transfer, which a monolingual model wouldn't.

Evaluation is presented in our article:

```
@Inproceedings{ulcar-robnik2020finest,
  author = "Ulčar, M. and Robnik-Šikonja, M.",
  year = 2020,
  title = "{FinEst BERT} and {CroSloEngual BERT}: less is more in multilingual models",
  editor = "Sojka, P and Kopeček, I and Pala, K and Horák, A",
  booktitle = "Text, Speech, and Dialogue {TSD 2020}",
  series = "Lecture Notes in Computer Science",
  volume = 12284,
  publisher = "Springer",
  url = "https://doi.org/10.1007/978-3-030-58323-1_11",
}
```

The preprint is available at [arxiv.org/abs/2006.07890](https://arxiv.org/abs/2006.07890).
4aa12f02d3228f3203d0c73ebeb62a2d
mit
['generated_from_trainer']
false
roberta-base-finetuned-squad2

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad_v2 dataset. It achieves the following results on the evaluation set:
- Loss: 0.9325
f221a98704774d4ea505889864743eb0
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.88          | 1.0   | 8160  | 0.8129          |
| 0.6643        | 2.0   | 16320 | 0.8567          |
| 0.5096        | 3.0   | 24480 | 0.9325          |
ec168572935034e0068c1ad01fbd359b
apache-2.0
['deep-narrow']
false
T5-Efficient-SMALL-NL24 (Deep-Narrow version)

T5-Efficient-SMALL-NL24 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.

In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper:

> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.

To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. A sequence of word embeddings is therefore processed sequentially by each transformer block.
f50f4d7f789ac3caf53c3c1f424cb49a
apache-2.0
['deep-narrow']
false
Details model architecture

This model checkpoint - **t5-efficient-small-nl24** - is of model type **Small** with the following variations:
- **nl** is **24**

It has **192.73** million parameters and thus requires *ca.* **770.92 MB** of memory in full precision (*fp32*) or **385.46 MB** of memory in half precision (*fp16* or *bf16*).

A summary of the *original* T5 model architectures can be seen here:

| Model | nl (el/dl) | ff | dm | kv | nh |
295b5e3e4f6f85a36b86ceb1ca52a88a
mit
['T5', 'Seq2Seq', 'EconderDecoder', 'Spanish']
false
Spanish T5 (small) trained on [large_spanish_corpus](https://huggingface.co/datasets/viewer/?dataset=large_spanish_corpus)

This is a Spanish **T5** (small arch) trained from scratch on the [large_spanish_corpus](https://huggingface.co/datasets/viewer/?dataset=large_spanish_corpus), a.k.a. BETO's corpus, with [Flax](https://github.com/google/flax).

This is part of the [Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organised by [HuggingFace](https://huggingface.co/), with TPU usage sponsored by Google.
4b033f2c63be844d2a9d0bf42dd1edbf
mit
['T5', 'Seq2Seq', 'EconderDecoder', 'Spanish']
false
Citation

If you want to cite this model you can use this:

```bibtex
@misc{mromero2021spanish-t5-small,
  title={Spanish T5 (small) by Manuel Romero},
  author={Romero, Manuel},
  publisher={Hugging Face},
  journal={Hugging Face Hub},
  howpublished={\url{https://huggingface.co/flax-community/spanish-t5-small}},
  year={2021}
}
```
a41a4b6064939a4bd65b488ec04f8481
apache-2.0
['fill-mask', 'korean', 'lassl']
false
How to use

```python
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("lassl/bert-ko-base")
tokenizer = AutoTokenizer.from_pretrained("lassl/bert-ko-base")
```
33b19209ed361436f1b8c069a8b6d627
apache-2.0
['fill-mask', 'korean', 'lassl']
false
Corpora

This model was trained on 702,437 examples (containing 3,596,465,664 tokens in total), extracted from the corpora below. For the training configuration, see `config.json`.

```bash
corpora/
├── [707M]  kowiki_latest.txt
├── [ 26M]  modu_dialogue_v1.2.txt
├── [1.3G]  modu_news_v1.1.txt
├── [9.7G]  modu_news_v2.0.txt
├── [ 15M]  modu_np_v1.1.txt
├── [1008M] modu_spoken_v1.2.txt
├── [6.5G]  modu_written_v1.0.txt
└── [413M]  petition.txt
```
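A back-of-the-envelope check on the stated counts: each example averages roughly 5,120 tokens, which suggests the examples are large pre-chunked text blocks rather than single sentences (an inference from the numbers, not something the card states):

```python
# Quick arithmetic on the stated corpus statistics.
num_examples = 702_437
num_tokens = 3_596_465_664
avg_tokens = num_tokens / num_examples  # roughly 5,120 tokens per example
```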
a7a1403a6133f771aa55cf44b843377b
apache-2.0
['generated_from_trainer']
false
test-ner

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.1014
- Precision: 0.9609
- Recall: 0.9574
- F1: 0.9591
- Accuracy: 0.9732
707195b849270bf76fc32a2f22019731
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 151  | 0.1848          | 0.9060    | 0.9184 | 0.9122 | 0.9490   |
| No log        | 2.0   | 302  | 0.1137          | 0.9548    | 0.9529 | 0.9538 | 0.9705   |
| No log        | 3.0   | 453  | 0.1014          | 0.9609    | 0.9574 | 0.9591 | 0.9732   |
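The reported F1 is consistent with the reported precision and recall, up to rounding of the published figures:

```python
# Recompute F1 from the reported precision/recall (both already rounded to 4 dp).
precision, recall = 0.9609, 0.9574
f1 = 2 * precision * recall / (precision + recall)
# f1 comes out near 0.959, matching the reported 0.9591 within the rounding
# error of the inputs.
```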
3082f3aeeec5bb41e2040b9269f1dfe6
apache-2.0
['generated_from_trainer']
false
roberta-base-bne-finetuned-detests-02-11-2022

This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.8124
- F1: 0.6381
6d82e063df6eaf864615e560a8e1afbb
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.379         | 0.64  | 25   | 0.4136          | 0.0    |
| 0.315         | 1.28  | 50   | 0.3663          | 0.6343 |
| 0.3228        | 1.92  | 75   | 0.3424          | 0.6386 |
| 0.1657        | 2.56  | 100  | 0.5133          | 0.5385 |
| 0.108         | 3.21  | 125  | 0.4766          | 0.6452 |
| 0.0631        | 3.85  | 150  | 0.6063          | 0.6083 |
| 0.0083        | 4.49  | 175  | 0.6200          | 0.6198 |
| 0.0032        | 5.13  | 200  | 0.6508          | 0.6335 |
| 0.0047        | 5.77  | 225  | 0.6877          | 0.6269 |
| 0.0018        | 6.41  | 250  | 0.7745          | 0.6148 |
| 0.0014        | 7.05  | 275  | 0.7741          | 0.6299 |
| 0.001         | 7.69  | 300  | 0.7896          | 0.6381 |
| 0.0011        | 8.33  | 325  | 0.8008          | 0.6381 |
| 0.0008        | 8.97  | 350  | 0.8086          | 0.6381 |
| 0.0009        | 9.62  | 375  | 0.8124          | 0.6381 |
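One thing the log makes visible: the final validation F1 (0.6381) is not the best seen during training; the peak (0.6452) occurs at step 125. A quick scan over the table's values:

```python
# Validation F1 per eval step, transcribed from the training log above.
f1_by_step = {
    25: 0.0, 50: 0.6343, 75: 0.6386, 100: 0.5385, 125: 0.6452,
    150: 0.6083, 175: 0.6198, 200: 0.6335, 225: 0.6269, 250: 0.6148,
    275: 0.6299, 300: 0.6381, 325: 0.6381, 350: 0.6381, 375: 0.6381,
}
best_step = max(f1_by_step, key=f1_by_step.get)  # step with the highest F1
best_f1 = f1_by_step[best_step]                  # the peak validation F1
```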
4e4f8aa96c53cbef8f29bcff9b66bf57
apache-2.0
['generated_from_trainer']
false
finetuning-sentiment-model-3000-samples-new

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set:

- Loss: 0.3103
- Accuracy: 0.8667
- F1: 0.8667
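For inference, a default `distilbert` fine-tune emits generic label names; a small helper can translate them (the `LABEL_0` = negative / `LABEL_1` = positive mapping is an assumption about this checkpoint, not something the card states — verify against the model's config before relying on it):

```python
# Map generic classifier labels to sentiment strings.
# NOTE: the LABEL_0/LABEL_1 convention below is assumed, not confirmed by the card.
LABEL_MAP = {"LABEL_0": "negative", "LABEL_1": "positive"}

def to_sentiment(prediction: dict) -> str:
    """Translate a pipeline-style prediction dict into a sentiment string."""
    return LABEL_MAP[prediction["label"]]

example = to_sentiment({"label": "LABEL_1", "score": 0.93})  # -> "positive"
```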
1b98f96ff37316b096009523f417cdfc