| text | inputs | prediction | prediction_agent | annotation | annotation_agent | vectors | multi_label | explanation | id | metadata | status | metrics | label |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Training results | Train Loss | Validation Loss | Train Rougel | Epoch | |:----------:|:---------------:|:---------------------------------------------:|:-----:| | 2.3199 | 3.2826 | tf.Tensor(0.3922559, shape=(), dtype=float32) | 0 | | {
"text": " Training results | Train Loss | Validation Loss | Train Rougel | Epoch | |:----------:|:---------------:|:---------------------------------------------:|:-----:| | 2.3199 | 3.2826 | tf.Tensor(0.3922559, shape=(), dtype=float32) | 0 | "
} | [
{
"label": "no_dataset_mention",
"score": 0.9487513515880792
},
{
"label": "dataset_mention",
"score": 0.051248648411920804
}
] | Snorkel | null | null | null | false | null | 0000d04a-8e39-4ddc-a9b4-84dd88965d8d | {
"split": "unlabelled"
} | Default | {
"text_length": 288
} | no_dataset_mention |
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2452 | 1.0 | 878 | 0.0709 | 0.9184 | 0.9206 | 0.9195 | 0.9803 | | 0.0501 | 2.0 | 1756 | 0.0621 | 0.9212 | 0.9328 | 0.9270 | 0.9830 | | 0.0299 | 3.0 | 2634 | 0.0607 | 0.9285 | 0.9362 | 0.9324 | 0.9839 | | {
"text": " Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2452 | 1.0 | 878 | 0.0709 | 0.9184 | 0.9206 | 0.9195 | 0.9803 | | 0.0501 | 2.0 | 1756 | 0.0621 | 0.9212 | 0.9328 | 0.9270 | 0.9830 | | 0.0299 | 3.0 | 2634 | 0.0607 | 0.9285 | 0.9362 | 0.9324 | 0.9839 | "
} | [
{
"label": "no_dataset_mention",
"score": 0.9487513515880792
},
{
"label": "dataset_mention",
"score": 0.051248648411920804
}
] | Snorkel | null | null | null | false | null | 0000e88d-fd73-4f83-af10-e09c794b1c8a | {
"split": "unlabelled"
} | Default | {
"text_length": 481
} | no_dataset_mention |
starbot-transformers This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.4079 | {
"text": " starbot-transformers This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.4079 "
} | [
{
"label": "dataset_mention",
"score": 0.9976600967163771
},
{
"label": "no_dataset_mention",
"score": 0.0023399032836228136
}
] | Snorkel | null | null | null | false | null | 00021d2d-9ee0-4c12-bb60-1d0cc08c42cf | {
"split": "unlabelled"
} | Default | {
"text_length": 203
} | dataset_mention |
distilbert-base-uncased-finetuned-paws This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the pawsx dataset. It achieves the following results on the evaluation set: - Loss: 0.3850 - Accuracy: 0.8355 - F1: 0.8362 | {
"text": " distilbert-base-uncased-finetuned-paws This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the pawsx dataset. It achieves the following results on the evaluation set: - Loss: 0.3850 - Accuracy: 0.8355 - F1: 0.8362 "
} | [
{
"label": "dataset_mention",
"score": 0.9976600967163771
},
{
"label": "no_dataset_mention",
"score": 0.0023399032836228136
}
] | Snorkel | null | null | null | false | null | 00033e9c-49a6-4480-b741-d8749b116cc2 | {
"split": "unlabelled"
} | Default | {
"text_length": 280
} | dataset_mention |
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0065 | 5.03 | 3000 | 0.6425 | 35.1077 | | {
"text": " Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0065 | 5.03 | 3000 | 0.6425 | 35.1077 | "
} | [
{
"label": "no_dataset_mention",
"score": 0.9487513515880792
},
{
"label": "dataset_mention",
"score": 0.051248648411920804
}
] | Snorkel | null | null | null | false | null | 00036b60-002e-474c-b0a7-b589aceeb8ee | {
"split": "unlabelled"
} | Default | {
"text_length": 204
} | no_dataset_mention |
mt5-small-finetuned-18jan-3 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.6115 - Rouge1: 7.259 - Rouge2: 0.3667 - Rougel: 7.1595 - Rougelsum: 7.156 | {
"text": " mt5-small-finetuned-18jan-3 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.6115 - Rouge1: 7.259 - Rouge2: 0.3667 - Rougel: 7.1595 - Rougelsum: 7.156 "
} | [
{
"label": "dataset_mention",
"score": 0.9976600967163771
},
{
"label": "no_dataset_mention",
"score": 0.0023399032836228136
}
] | Snorkel | null | null | null | false | null | 00040eee-1877-421c-8043-749aac151325 | {
"split": "unlabelled"
} | Default | {
"text_length": 293
} | dataset_mention |
deberta-base-mnli-finetuned-cola This model is a fine-tuned version of [microsoft/deberta-base-mnli](https://huggingface.co/microsoft/deberta-base-mnli) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8205 - Matthews Correlation: 0.6282 | {
"text": " deberta-base-mnli-finetuned-cola This model is a fine-tuned version of [microsoft/deberta-base-mnli](https://huggingface.co/microsoft/deberta-base-mnli) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8205 - Matthews Correlation: 0.6282 "
} | [
{
"label": "dataset_mention",
"score": 0.9999995939957158
},
{
"label": "no_dataset_mention",
"score": 4.0600428406159373e-7
}
] | Snorkel | null | null | null | false | null | 00068de5-8094-4434-9c89-47e8ff3ed6e8 | {
"split": "unlabelled"
} | Default | {
"text_length": 280
} | dataset_mention |
Long-Form Transcription The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking algorithm, it can be used to transcribe audio samples of up to arbitrary length. This is possible through Transformers [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines | {
"text": " Long-Form Transcription The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking algorithm, it can be used to transcribe audio samples of up to arbitrary length. This is possible through Transformers [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines"
} | [
{
"label": "dataset_mention",
"score": 0.7735312081442667
},
{
"label": "no_dataset_mention",
"score": 0.22646879185573326
}
] | Snorkel | null | null | null | false | null | 00073259-8a7f-47ae-8d56-40d7483ef72e | {
"split": "unlabelled"
} | Default | {
"text_length": 347
} | dataset_mention |
bert-finetuned-race This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3863 - Accuracy: 0.2982 | {
"text": " bert-finetuned-race This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3863 - Accuracy: 0.2982 "
} | [
{
"label": "dataset_mention",
"score": 0.9976600967163771
},
{
"label": "no_dataset_mention",
"score": 0.0023399032836228136
}
] | Snorkel | null | null | null | false | null | 000cf5c3-0684-49f5-b3a7-22fc4cc4142a | {
"split": "unlabelled"
} | Default | {
"text_length": 235
} | dataset_mention |
Training The NeMo toolkit [3] was used for training the models for over several hundred epochs. These model are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_ctc/speech_to_text_ctc.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/citrinet/citrinet_1024.yaml). The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py). | {
"text": " Training The NeMo toolkit [3] was used for training the models for over several hundred epochs. These model are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_ctc/speech_to_text_ctc.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/citrinet/citrinet_1024.yaml). The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py). "
} | [
{
"label": "dataset_mention",
"score": 0.7735312081442667
},
{
"label": "no_dataset_mention",
"score": 0.22646879185573326
}
] | Snorkel | null | null | null | false | null | 000d88b1-3767-4188-afa4-2a0423c50158 | {
"split": "unlabelled"
} | Default | {
"text_length": 546
} | dataset_mention |
Authors <b>IndoBART</b> was trained and evaluated by Samuel Cahyawijaya*, Genta Indra Winata*, Bryan Wilie*, Karissa Vincentio*, Xiaohong Li*, Adhiguna Kuncoro*, Sebastian Ruder, Zhi Yuan Lim, Syafri Bahar, Masayu Leylia Khodra, Ayu Purwarianti, Pascale Fung | {
"text": " Authors <b>IndoBART</b> was trained and evaluated by Samuel Cahyawijaya*, Genta Indra Winata*, Bryan Wilie*, Karissa Vincentio*, Xiaohong Li*, Adhiguna Kuncoro*, Sebastian Ruder, Zhi Yuan Lim, Syafri Bahar, Masayu Leylia Khodra, Ayu Purwarianti, Pascale Fung "
} | [
{
"label": "dataset_mention",
"score": 0.7735312081442667
},
{
"label": "no_dataset_mention",
"score": 0.22646879185573326
}
] | Snorkel | null | null | null | false | null | 000f3d53-a41d-4b22-b2c3-1e73bcb59624 | {
"split": "unlabelled"
} | Default | {
"text_length": 264
} | dataset_mention |
opus-mt-de-ha * source languages: de * target languages: ha * OPUS readme: [de-ha](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-ha/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-ha/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ha/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ha/opus-2020-01-20.eval.txt) | {
"text": " opus-mt-de-ha * source languages: de * target languages: ha * OPUS readme: [de-ha](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-ha/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-ha/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ha/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ha/opus-2020-01-20.eval.txt) "
} | [
{
"label": "dataset_mention",
"score": 0.9976600967163771
},
{
"label": "no_dataset_mention",
"score": 0.0023399032836228136
}
] | Snorkel | null | null | null | false | null | 001774ed-c993-4d96-bd4b-8b249bd30cc2 | {
"split": "unlabelled"
} | Default | {
"text_length": 631
} | dataset_mention |
Citations Please, cite this model using the following citation. ``` @inproceedings{dan2022electra-base-irony, title={北見工業大学 テキスト情報処理研究室 ELECTRA Base 皮肉検出モデル (Megagon Labs ver.)}, author={団 俊輔 and プタシンスキ ミハウ and ジェプカ ラファウ and 桝井 文人}, publisher={HuggingFace}, year={2022}, url = "https://huggingface.co/kit-nlp/bert-base-japanese-basic-char-v2-irony" } ``` | {
"text": " Citations Please, cite this model using the following citation. ``` @inproceedings{dan2022electra-base-irony, title={北見工業大学 テキスト情報処理研究室 ELECTRA Base 皮肉検出モデル (Megagon Labs ver.)}, author={団 俊輔 and プタシンスキ ミハウ and ジェプカ ラファウ and 桝井 文人}, publisher={HuggingFace}, year={2022}, url = \"https://huggingface.co/kit-nlp/bert-base-japanese-basic-char-v2-irony\" } ``` "
} | [
{
"label": "dataset_mention",
"score": 0.7735312081442667
},
{
"label": "no_dataset_mention",
"score": 0.22646879185573326
}
] | Snorkel | null | null | null | false | null | 001956fd-8554-4511-83fa-6f4698fd27c7 | {
"split": "unlabelled"
} | Default | {
"text_length": 371
} | dataset_mention |
S2T-SMALL-COVOST2-EN-ET-ST `s2t-small-covost2-en-et-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST). The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text) | {
"text": " S2T-SMALL-COVOST2-EN-ET-ST `s2t-small-covost2-en-et-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST). The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text) "
} | [
{
"label": "dataset_mention",
"score": 0.7735312081442667
},
{
"label": "no_dataset_mention",
"score": 0.22646879185573326
}
] | Snorkel | null | null | null | false | null | 001d9d90-e547-484c-89e9-bd2f913dd0e0 | {
"split": "unlabelled"
} | Default | {
"text_length": 335
} | dataset_mention |
xlm-roberta-large-xnli-finetuned-mnli-SJP This model is a fine-tuned version of [joeddav/xlm-roberta-large-xnli](https://huggingface.co/joeddav/xlm-roberta-large-xnli) on the swiss_judgment_prediction dataset. It achieves the following results on the evaluation set: - Loss: 1.3456 - Accuracy: 0.7957 | {
"text": " xlm-roberta-large-xnli-finetuned-mnli-SJP This model is a fine-tuned version of [joeddav/xlm-roberta-large-xnli](https://huggingface.co/joeddav/xlm-roberta-large-xnli) on the swiss_judgment_prediction dataset. It achieves the following results on the evaluation set: - Loss: 1.3456 - Accuracy: 0.7957 "
} | [
{
"label": "dataset_mention",
"score": 0.9976600967163771
},
{
"label": "no_dataset_mention",
"score": 0.0023399032836228136
}
] | Snorkel | null | null | null | false | null | 001e2fa3-ca8e-447d-8bd6-a0b23d49c479 | {
"split": "unlabelled"
} | Default | {
"text_length": 304
} | dataset_mention |
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 459 | 0.3655 | 0.8578 | 0.8990 | | 0.524 | 2.0 | 918 | 0.6061 | 0.8260 | 0.8823 | | 0.2971 | 3.0 | 1377 | 0.5960 | 0.8799 | 0.9148 | | {
"text": " Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 459 | 0.3655 | 0.8578 | 0.8990 | | 0.524 | 2.0 | 918 | 0.6061 | 0.8260 | 0.8823 | | 0.2971 | 3.0 | 1377 | 0.5960 | 0.8799 | 0.9148 | "
} | [
{
"label": "no_dataset_mention",
"score": 0.9487513515880792
},
{
"label": "dataset_mention",
"score": 0.051248648411920804
}
] | Snorkel | null | null | null | false | null | 001f7317-3eb8-4785-9701-a185159a667b | {
"split": "unlabelled"
} | Default | {
"text_length": 376
} | no_dataset_mention |
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 7500 - mixed_precision_training: Native AMP | {
"text": " Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 7500 - mixed_precision_training: Native AMP "
} | [
{
"label": "no_dataset_mention",
"score": 0.9487513515880792
},
{
"label": "dataset_mention",
"score": 0.051248648411920804
}
] | Snorkel | null | null | null | false | null | 001f8ab1-c896-4d08-928d-85650fd793c3 | {
"split": "unlabelled"
} | Default | {
"text_length": 407
} | no_dataset_mention |
How does this make training possible with 1 minute of training data? The model has been trained on 168 datasets, ~20 hours of data, or ~19.8 thousand audio files. This is smaller than LJ speech but it has way more variety in voices, which LJ speech doesn't have. this variety allows the model to learn speech in different genders, accents, pitches, and other important factors, meaning that it knows a lot more in terms of voices. Finetuning this on 1 minute of data is possible because it already has a decently close match of your voice somewhere in its latent space. Core: The multispeaker has more knowledge of multiple people speaking, making it surprisingly good at training on low-minute datasets. | {
"text": " How does this make training possible with 1 minute of training data? The model has been trained on 168 datasets, ~20 hours of data, or ~19.8 thousand audio files. This is smaller than LJ speech but it has way more variety in voices, which LJ speech doesn't have. this variety allows the model to learn speech in different genders, accents, pitches, and other important factors, meaning that it knows a lot more in terms of voices. Finetuning this on 1 minute of data is possible because it already has a decently close match of your voice somewhere in its latent space. Core: The multispeaker has more knowledge of multiple people speaking, making it surprisingly good at training on low-minute datasets. "
} | [
{
"label": "dataset_mention",
"score": 0.7735312081442667
},
{
"label": "no_dataset_mention",
"score": 0.22646879185573326
}
] | Snorkel | null | null | null | false | null | 00203ec4-f000-49c8-a04a-f342739afb51 | {
"split": "unlabelled"
} | Default | {
"text_length": 709
} | dataset_mention |
Nalisten-Likeness-1 Dreambooth model trained by nalisten1 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample pictures of this concept: download .png) | {
"text": " Nalisten-Likeness-1 Dreambooth model trained by nalisten1 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample pictures of this concept: download .png) "
} | [
{
"label": "dataset_mention",
"score": 0.7735312081442667
},
{
"label": "no_dataset_mention",
"score": 0.22646879185573326
}
] | Snorkel | null | null | null | false | null | 0022dc88-7014-49a9-b874-86c949516535 | {
"split": "unlabelled"
} | Default | {
"text_length": 747
} | dataset_mention |
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | No log | 1.0 | 296 | 0.6001 | 0.6435 | 0.6344 | 0.5087 | 0.4156 | | 0.6011 | 2.0 | 592 | 0.5633 | 0.7091 | 0.6879 | 0.6464 | 0.6521 | | 0.6011 | 3.0 | 888 | 0.5501 | 0.7234 | 0.6991 | 0.6841 | 0.6892 | | 0.5401 | 4.0 | 1184 | 0.5558 | 0.7082 | 0.6818 | 0.6595 | 0.6652 | | 0.5401 | 5.0 | 1480 | 0.5637 | 0.7091 | 0.6841 | 0.6557 | 0.6617 | | {
"text": " Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | No log | 1.0 | 296 | 0.6001 | 0.6435 | 0.6344 | 0.5087 | 0.4156 | | 0.6011 | 2.0 | 592 | 0.5633 | 0.7091 | 0.6879 | 0.6464 | 0.6521 | | 0.6011 | 3.0 | 888 | 0.5501 | 0.7234 | 0.6991 | 0.6841 | 0.6892 | | 0.5401 | 4.0 | 1184 | 0.5558 | 0.7082 | 0.6818 | 0.6595 | 0.6652 | | 0.5401 | 5.0 | 1480 | 0.5637 | 0.7091 | 0.6841 | 0.6557 | 0.6617 | "
} | [
{
"label": "no_dataset_mention",
"score": 0.9487513515880792
},
{
"label": "dataset_mention",
"score": 0.051248648411920804
}
] | Snorkel | null | null | null | false | null | 0023423a-6a07-426d-9375-d96c06dd09c0 | {
"split": "unlabelled"
} | Default | {
"text_length": 665
} | no_dataset_mention |
Training results | Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Score | Bleu-precisions | Bleu-bp | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:------:|:----------------------------------------------------------------------------:|:-------:| | 0.3166 | 1.0 | 4807 | 0.2335 | 19.0 | 0.5580 | 0.0884 | 0.3129 | 5.9585 | [90.11303396628615, 80.34125695971072, 73.81487011728768, 69.48796722990271] | 0.0763 | | {
"text": " Training results | Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Score | Bleu-precisions | Bleu-bp | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:------:|:----------------------------------------------------------------------------:|:-------:| | 0.3166 | 1.0 | 4807 | 0.2335 | 19.0 | 0.5580 | 0.0884 | 0.3129 | 5.9585 | [90.11303396628615, 80.34125695971072, 73.81487011728768, 69.48796722990271] | 0.0763 | "
} | [
{
"label": "no_dataset_mention",
"score": 0.9487513515880792
},
{
"label": "dataset_mention",
"score": 0.051248648411920804
}
] | Snorkel | null | null | null | false | null | 00237127-b6f9-4480-bdd6-e689a87df245 | {
"split": "unlabelled"
} | Default | {
"text_length": 579
} | no_dataset_mention |
Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | No log | 1.0 | 1 | nan | 0.0114 | 3.3338 | | No log | 2.0 | 2 | nan | 0.0114 | 3.3338 | | No log | 3.0 | 3 | nan | 0.0114 | 3.3338 | | No log | 4.0 | 4 | nan | 0.0114 | 3.3338 | | No log | 5.0 | 5 | nan | 0.0114 | 3.3338 | | No log | 6.0 | 6 | nan | 0.0114 | 3.3338 | | No log | 7.0 | 7 | nan | 0.0114 | 3.3338 | | No log | 8.0 | 8 | nan | 0.0114 | 3.3338 | | No log | 9.0 | 9 | nan | 0.0114 | 3.3338 | | No log | 10.0 | 10 | nan | 0.0114 | 3.3338 | | No log | 11.0 | 11 | nan | 0.0114 | 3.3338 | | No log | 12.0 | 12 | nan | 0.0114 | 3.3338 | | No log | 13.0 | 13 | nan | 0.0114 | 3.3338 | | No log | 14.0 | 14 | nan | 0.0114 | 3.3338 | | No log | 15.0 | 15 | nan | 0.0114 | 3.3338 | | No log | 16.0 | 16 | nan | 0.0114 | 3.3338 | | No log | 17.0 | 17 | nan | 0.0114 | 3.3338 | | No log | 18.0 | 18 | nan | 0.0114 | 3.3338 | | No log | 19.0 | 19 | nan | 0.0114 | 3.3338 | | No log | 20.0 | 20 | nan | 0.0114 | 3.3338 | | {
"text": " Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | No log | 1.0 | 1 | nan | 0.0114 | 3.3338 | | No log | 2.0 | 2 | nan | 0.0114 | 3.3338 | | No log | 3.0 | 3 | nan | 0.0114 | 3.3338 | | No log | 4.0 | 4 | nan | 0.0114 | 3.3338 | | No log | 5.0 | 5 | nan | 0.0114 | 3.3338 | | No log | 6.0 | 6 | nan | 0.0114 | 3.3338 | | No log | 7.0 | 7 | nan | 0.0114 | 3.3338 | | No log | 8.0 | 8 | nan | 0.0114 | 3.3338 | | No log | 9.0 | 9 | nan | 0.0114 | 3.3338 | | No log | 10.0 | 10 | nan | 0.0114 | 3.3338 | | No log | 11.0 | 11 | nan | 0.0114 | 3.3338 | | No log | 12.0 | 12 | nan | 0.0114 | 3.3338 | | No log | 13.0 | 13 | nan | 0.0114 | 3.3338 | | No log | 14.0 | 14 | nan | 0.0114 | 3.3338 | | No log | 15.0 | 15 | nan | 0.0114 | 3.3338 | | No log | 16.0 | 16 | nan | 0.0114 | 3.3338 | | No log | 17.0 | 17 | nan | 0.0114 | 3.3338 | | No log | 18.0 | 18 | nan | 0.0114 | 3.3338 | | No log | 19.0 | 19 | nan | 0.0114 | 3.3338 | | No log | 20.0 | 20 | nan | 0.0114 | 3.3338 | "
} | [
{
"label": "no_dataset_mention",
"score": 0.9487513515880792
},
{
"label": "dataset_mention",
"score": 0.051248648411920804
}
] | Snorkel | null | null | null | false | null | 00241479-1600-488f-880e-997b93cea609 | {
"split": "unlabelled"
} | Default | {
"text_length": 1561
} | no_dataset_mention |
wav2vec2-large-xls-r-300m-en-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the librispeech_asr dataset. It achieves the following results on the evaluation set: - Loss: 2.7541 - Wer: 1.0 - Cer: 0.9877 | {
"text": " wav2vec2-large-xls-r-300m-en-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the librispeech_asr dataset. It achieves the following results on the evaluation set: - Loss: 2.7541 - Wer: 1.0 - Cer: 0.9877 "
} | [
{
"label": "dataset_mention",
"score": 0.9976600967163771
},
{
"label": "no_dataset_mention",
"score": 0.0023399032836228136
}
] | Snorkel | null | null | null | false | null | 002424a1-df04-42f5-a9e8-f650edbc737a | {
"split": "unlabelled"
} | Default | {
"text_length": 289
} | dataset_mention |
This repository hosts the TFLite version of `diffusion model` part of [KerasCV Stable Diffusion](https://github.com/keras-team/keras-cv/tree/master/keras_cv/models/stable_diffusion). Stable Diffusion consists of `text encoder`, `diffusion model`, `decoder`, and some glue codes to handl inputs and outputs of each part. The TFLite version of `diffusion model` in this repository is built not only with the `diffusion model` itself but also TensorFlow operations that takes `context`, `unconditional context` from `text encoder` and generates `latent`. The `latent` output should be passed down to the `decoder` which is hosted in [this repository](https://huggingface.co/keras-sd/decoder-tflite/tree/main). TFLite conversion was based on the `SavedModel` from [this repository](https://huggingface.co/keras-sd/tfs-text-encoder/tree/main), and TensorFlow version `>= 2.12-nightly` was used. - NOTE: [Dynamic range quantization](https://www.tensorflow.org/lite/performance/post_training_quant | {
"text": " This repository hosts the TFLite version of `diffusion model` part of [KerasCV Stable Diffusion](https://github.com/keras-team/keras-cv/tree/master/keras_cv/models/stable_diffusion). Stable Diffusion consists of `text encoder`, `diffusion model`, `decoder`, and some glue codes to handl inputs and outputs of each part. The TFLite version of `diffusion model` in this repository is built not only with the `diffusion model` itself but also TensorFlow operations that takes `context`, `unconditional context` from `text encoder` and generates `latent`. The `latent` output should be passed down to the `decoder` which is hosted in [this repository](https://huggingface.co/keras-sd/decoder-tflite/tree/main). TFLite conversion was based on the `SavedModel` from [this repository](https://huggingface.co/keras-sd/tfs-text-encoder/tree/main), and TensorFlow version `>= 2.12-nightly` was used. - NOTE: [Dynamic range quantization](https://www.tensorflow.org/lite/performance/post_training_quant"
} | [
{
"label": "dataset_mention",
"score": 0.9999493214709373
},
{
"label": "no_dataset_mention",
"score": 0.00005067852906274217
}
] | Snorkel | null | null | null | false | null | 00257cf0-e62d-4caa-8d47-f319abd50432 | {
"split": "unlabelled"
} | Default | {
"text_length": 996
} | dataset_mention |
test-mlm This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6481 | {
"text": " test-mlm This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6481 "
} | [
{
"label": "dataset_mention",
"score": 0.9976600967163771
},
{
"label": "no_dataset_mention",
"score": 0.0023399032836228136
}
] | Snorkel | null | null | null | false | null | 0026aaea-d2b7-4984-bb0d-84723d410e5e | {
"split": "unlabelled"
} | Default | {
"text_length": 229
} | dataset_mention |
exp_w2v2r_es_xls-r_accent_surpeninsular-10_nortepeninsular-0_s61 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool. | {
"text": " exp_w2v2r_es_xls-r_accent_surpeninsular-10_nortepeninsular-0_s61 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool. "
} | [
{
"label": "dataset_mention",
"score": 0.9729673904845365
},
{
"label": "no_dataset_mention",
"score": 0.027032609515463546
}
] | Snorkel | null | null | null | false | null | 0026c458-a7b9-4862-a40c-e8530a533d6a | {
"split": "unlabelled"
} | Default | {
"text_length": 493
} | dataset_mention |
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2248 - Accuracy: 0.9235 - F1: 0.9234 | {
"text": " distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2248 - Accuracy: 0.9235 - F1: 0.9234 "
} | [
{
"label": "dataset_mention",
"score": 0.9976600967163771
},
{
"label": "no_dataset_mention",
"score": 0.0023399032836228136
}
] | Snorkel | null | null | null | false | null | 00276a3c-83c8-422b-bea5-ce5b4ca7bd4c | {
"split": "unlabelled"
} | Default | {
"text_length": 285
} | dataset_mention |
distilroberta-clickbait This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on a dataset of headlines. It achieves the following results on the evaluation set: - Loss: 0.0268 - Acc: 0.9963 | {
"text": " distilroberta-clickbait This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on a dataset of headlines. It achieves the following results on the evaluation set: - Loss: 0.0268 - Acc: 0.9963 "
} | [
{
"label": "dataset_mention",
"score": 0.9976600967163771
},
{
"label": "no_dataset_mention",
"score": 0.0023399032836228136
}
] | Snorkel | null | null | null | false | null | 002fa958-fcb8-44d6-acef-8eb65b86999a | {
"split": "unlabelled"
} | Default | {
"text_length": 242
} | 0dataset_mention |
distilbert_sa_GLUE_Experiment_logit_kd_qnli_192 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.3931 - Accuracy: 0.5870 | {
"text": " distilbert_sa_GLUE_Experiment_logit_kd_qnli_192 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.3931 - Accuracy: 0.5870 "
} | [
{
"label": "dataset_mention",
"score": 0.9999995939957158
},
{
"label": "no_dataset_mention",
"score": 4.0600428406159373e-7
}
] | Snorkel | null | null | null | false | null | 0031efcc-4300-4de5-9bd9-172927556516 | {
"split": "unlabelled"
} | Default | {
"text_length": 280
} | 0dataset_mention |
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - training_steps: 1000 - mixed_precision_training: Native AMP | {
"text": " Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - training_steps: 1000 - mixed_precision_training: Native AMP "
} | [
{
"label": "no_dataset_mention",
"score": 0.9487513515880792
},
{
"label": "dataset_mention",
"score": 0.051248648411920804
}
] | Snorkel | null | null | null | false | null | 00336152-36d4-4b0e-84ab-9d4967b8f820 | {
"split": "unlabelled"
} | Default | {
"text_length": 406
} | 1no_dataset_mention |
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.8185 | 1.0 | 70 | 0.3369 | 0.7449 | | 0.2899 | 2.0 | 140 | 0.2740 | 0.7919 | | {
"text": " Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.8185 | 1.0 | 70 | 0.3369 | 0.7449 | | 0.2899 | 2.0 | 140 | 0.2740 | 0.7919 | "
} | [
{
"label": "no_dataset_mention",
"score": 0.9487513515880792
},
{
"label": "dataset_mention",
"score": 0.051248648411920804
}
] | Snorkel | null | null | null | false | null | 00341ff0-df54-43b1-96d2-fed4cd42182d | {
"split": "unlabelled"
} | Default | {
"text_length": 261
} | 1no_dataset_mention |
Model description The PAN model proposes a a lightweight convolutional neural network for image super resolution. Pixel attention (PA) is similar to channel attention and spatial attention in formulation. PA however produces 3D attention maps instead of a 1D attention vector or a 2D map. This attention scheme introduces fewer additional parameters but generates better SR results. The model is very lightweight with the model being just 260k to 270k parameters (~1mb). | {
"text": " Model description The PAN model proposes a a lightweight convolutional neural network for image super resolution. Pixel attention (PA) is similar to channel attention and spatial attention in formulation. PA however produces 3D attention maps instead of a 1D attention vector or a 2D map. This attention scheme introduces fewer additional parameters but generates better SR results. The model is very lightweight with the model being just 260k to 270k parameters (~1mb). "
} | [
{
"label": "dataset_mention",
"score": 0.7735312081442667
},
{
"label": "no_dataset_mention",
"score": 0.22646879185573326
}
] | Snorkel | null | null | null | false | null | 0039a2d9-932e-4028-8924-9df88a02e520 | {
"split": "unlabelled"
} | Default | {
"text_length": 473
} | 0dataset_mention |
model prediction questions = model.generate_q(list_context="Dopo il 1971 , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento.", list_answer="Dopo il 1971") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/mt5-small-itquad-qg") output = pipe("<hl> Dopo il 1971 <hl> , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento.") ``` | {
"text": " model prediction questions = model.generate_q(list_context=\"Dopo il 1971 , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento.\", list_answer=\"Dopo il 1971\") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline(\"text2text-generation\", \"lmqg/mt5-small-itquad-qg\") output = pipe(\"<hl> Dopo il 1971 <hl> , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento.\") ``` "
} | [
{
"label": "dataset_mention",
"score": 0.7735312081442667
},
{
"label": "no_dataset_mention",
"score": 0.22646879185573326
}
] | Snorkel | null | null | null | false | null | 003c8d21-3b19-4851-9d68-c06b4fa59df5 | {
"split": "unlabelled"
} | Default | {
"text_length": 445
} | 0dataset_mention |
`kan-bayashi/vctk_tts_train_gst_fastspeech2_raw_phn_tacotron_g2p_en_no_space_train.loss.ave` ♻️ Imported from https://zenodo.org/record/4036266/ This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/). | {
"text": " `kan-bayashi/vctk_tts_train_gst_fastspeech2_raw_phn_tacotron_g2p_en_no_space_train.loss.ave` ♻️ Imported from https://zenodo.org/record/4036266/ This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/). "
} | [
{
"label": "dataset_mention",
"score": 0.7735312081442667
},
{
"label": "no_dataset_mention",
"score": 0.22646879185573326
}
] | Snorkel | null | null | null | false | null | 003f0e39-a04c-49ce-9ec1-2de92621e02f | {
"split": "unlabelled"
} | Default | {
"text_length": 256
} | 0dataset_mention |
Dreamy Painting on Stable Diffusion This is the `<dreamy-painting>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`:      Here are images generated in this style:     | {
"text": " Dreamy Painting on Stable Diffusion This is the `<dreamy-painting>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`:      Here are images generated in this style:    "
} | [
{
"label": "dataset_mention",
"score": 0.7735312081442667
},
{
"label": "no_dataset_mention",
"score": 0.22646879185573326
}
] | Snorkel | null | null | null | false | null | 00401ada-92c8-4da9-967d-4667769228da | {
"split": "unlabelled"
} | Default | {
"text_length": 1572
} | 0dataset_mention |
exp_w2v2t_th_xlsr-53_s218 Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition on Thai using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool. | {
"text": " exp_w2v2t_th_xlsr-53_s218 Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition on Thai using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool. "
} | [
{
"label": "dataset_mention",
"score": 0.9729673904845365
},
{
"label": "no_dataset_mention",
"score": 0.027032609515463546
}
] | Snorkel | null | null | null | false | null | 0042bdfb-2c8e-440e-b4de-afc1c041ff40 | {
"split": "unlabelled"
} | Default | {
"text_length": 463
} | 0dataset_mention |
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.7272339744854407e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 | {
"text": " Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.7272339744854407e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 "
} | [
{
"label": "no_dataset_mention",
"score": 0.9487513515880792
},
{
"label": "dataset_mention",
"score": 0.051248648411920804
}
] | Snorkel | null | null | null | false | null | 0043e17d-92f8-4a29-827c-2b48c3b8f3f0 | {
"split": "unlabelled"
} | Default | {
"text_length": 284
} | 1no_dataset_mention |
Please Note! This model is NOT the 19.2M images Characters Model on TrinArt, but an improved version of the original Trin-sama Twitter bot model. This model is intended to retain the original SD's aesthetics as much as possible while nudging the model to anime/manga style. | {
"text": " Please Note! This model is NOT the 19.2M images Characters Model on TrinArt, but an improved version of the original Trin-sama Twitter bot model. This model is intended to retain the original SD's aesthetics as much as possible while nudging the model to anime/manga style."
} | [
{
"label": "dataset_mention",
"score": 0.7735312081442667
},
{
"label": "no_dataset_mention",
"score": 0.22646879185573326
}
] | Snorkel | null | null | null | false | null | 00468179-c2f1-430e-a19d-ade3c5d678f0 | {
"split": "unlabelled"
} | Default | {
"text_length": 274
} | 0dataset_mention |
Training procedure
The training was run on an NVIDIA DGX Station with 4XTesla V100 GPUs.
Training code is available at https://github.com/source-data/soda-roberta
- Model fine-tuned: EMBL/bio-lm
- Tokenizer vocab size: 50265
- Training data: EMBO/sd-nlp
- Dataset configuration: GENEPROD_ROLES
- Training with 48771 examples.
- Evaluating on 13801 examples.
- Training on 15 features: O, I-CONTROLLED_VAR, B-CONTROLLED_VAR, I-MEASURED_VAR, B-MEASURED_VAR
- Epochs: 0.9
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 0.0001
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
| {
"text": " Training procedure\r \r The training was run on an NVIDIA DGX Station with 4XTesla V100 GPUs.\r \r Training code is available at https://github.com/source-data/soda-roberta\r \r - Model fine-tuned: EMBL/bio-lm\r - Tokenizer vocab size: 50265\r - Training data: EMBO/sd-nlp\r - Dataset configuration: GENEPROD_ROLES\r - Training with 48771 examples.\r - Evaluating on 13801 examples.\r - Training on 15 features: O, I-CONTROLLED_VAR, B-CONTROLLED_VAR, I-MEASURED_VAR, B-MEASURED_VAR\r - Epochs: 0.9\r - `per_device_train_batch_size`: 16\r - `per_device_eval_batch_size`: 16\r - `learning_rate`: 0.0001\r - `weight_decay`: 0.0\r - `adam_beta1`: 0.9\r - `adam_beta2`: 0.999\r - `adam_epsilon`: 1e-08\r - `max_grad_norm`: 1.0\r \r "
} | [
{
"label": "dataset_mention",
"score": 0.9920527014967919
},
{
"label": "no_dataset_mention",
"score": 0.00794729850320818
}
] | Snorkel | null | null | null | false | null | 004773d8-f8d7-4ede-96f9-0efc932dc959 | {
"split": "unlabelled"
} | Default | {
"text_length": 705
} | 0dataset_mention |
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2885 | 1.0 | 522 | 1.2005 | | 1.2209 | 2.0 | 1044 | 1.1594 | | 1.1871 | 3.0 | 1566 | 1.1263 | | 1.1455 | 4.0 | 2088 | 1.1098 | | 1.1124 | 5.0 | 2610 | 1.0949 | | 1.0758 | 6.0 | 3132 | 1.0825 | | 1.0485 | 7.0 | 3654 | 1.0707 | | 1.0205 | 8.0 | 4176 | 1.0606 | | 0.9913 | 9.0 | 4698 | 1.0523 | | 1.0099 | 10.0 | 5220 | 1.0463 | | 0.97 | 11.0 | 5742 | 1.0395 | | 0.9699 | 12.0 | 6264 | 1.0370 | | 0.9531 | 13.0 | 6786 | 1.0337 | | 0.9449 | 14.0 | 7308 | 1.0312 | | 0.9354 | 15.0 | 7830 | 1.0274 | | 0.9342 | 16.0 | 8352 | 1.0266 | | 0.9188 | 17.0 | 8874 | 1.0262 | | 0.9219 | 18.0 | 9396 | 1.0251 | | 0.9044 | 19.0 | 9918 | 1.0252 | | 0.9223 | 20.0 | 10440 | 1.0249 | | {
"text": " Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2885 | 1.0 | 522 | 1.2005 | | 1.2209 | 2.0 | 1044 | 1.1594 | | 1.1871 | 3.0 | 1566 | 1.1263 | | 1.1455 | 4.0 | 2088 | 1.1098 | | 1.1124 | 5.0 | 2610 | 1.0949 | | 1.0758 | 6.0 | 3132 | 1.0825 | | 1.0485 | 7.0 | 3654 | 1.0707 | | 1.0205 | 8.0 | 4176 | 1.0606 | | 0.9913 | 9.0 | 4698 | 1.0523 | | 1.0099 | 10.0 | 5220 | 1.0463 | | 0.97 | 11.0 | 5742 | 1.0395 | | 0.9699 | 12.0 | 6264 | 1.0370 | | 0.9531 | 13.0 | 6786 | 1.0337 | | 0.9449 | 14.0 | 7308 | 1.0312 | | 0.9354 | 15.0 | 7830 | 1.0274 | | 0.9342 | 16.0 | 8352 | 1.0266 | | 0.9188 | 17.0 | 8874 | 1.0262 | | 0.9219 | 18.0 | 9396 | 1.0251 | | 0.9044 | 19.0 | 9918 | 1.0252 | | 0.9223 | 20.0 | 10440 | 1.0249 | "
} | [
{
"label": "no_dataset_mention",
"score": 0.9487513515880792
},
{
"label": "dataset_mention",
"score": 0.051248648411920804
}
] | Snorkel | null | null | null | false | null | 0048443c-7120-46dd-a41d-1050a079a076 | {
"split": "unlabelled"
} | Default | {
"text_length": 1165
} | 1no_dataset_mention |
TODO: fill in a link to your training script here. If you trained your model in a colab, simply fill in the link here. If you trained the model locally, it would be great if you could upload the training script on github and paste the link here. --> | {
"text": " TODO: fill in a link to your training script here. If you trained your model in a colab, simply fill in the link here. If you trained the model locally, it would be great if you could upload the training script on github and paste the link here. --> "
} | [
{
"label": "dataset_mention",
"score": 0.7735312081442667
},
{
"label": "no_dataset_mention",
"score": 0.22646879185573326
}
] | Snorkel | null | null | null | false | null | 004a2226-a758-4d2c-b6ad-2bb1b849abf0 | {
"split": "unlabelled"
} | Default | {
"text_length": 251
} | 0dataset_mention |
Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/stsb-distilroberta-base-v2') embeddings = model.encode(sentences) print(embeddings) ``` | {
"text": " Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = [\"This is an example sentence\", \"Each sentence is converted\"] model = SentenceTransformer('sentence-transformers/stsb-distilroberta-base-v2') embeddings = model.encode(sentences) print(embeddings) ``` "
} | [
{
"label": "dataset_mention",
"score": 0.7735312081442667
},
{
"label": "no_dataset_mention",
"score": 0.22646879185573326
}
] | Snorkel | null | null | null | false | null | 004ef0a9-aab8-41d9-9df5-98bf718a6abd | {
"split": "unlabelled"
} | Default | {
"text_length": 501
} | 0dataset_mention |
lilt-en-funsd This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on the funsd-layoutlmv3 dataset. It achieves the following results on the evaluation set: - Loss: 1.7699 - Answer: {'precision': 0.8906439854191981, 'recall': 0.8971848225214198, 'f1': 0.8939024390243904, 'number': 817} - Header: {'precision': 0.6274509803921569, 'recall': 0.5378151260504201, 'f1': 0.579185520361991, 'number': 119} - Question: {'precision': 0.8778359511343804, 'recall': 0.9340761374187558, 'f1': 0.9050832208726944, 'number': 1077} - Overall Precision: 0.8706 - Overall Recall: 0.8957 - Overall F1: 0.8830 - Overall Accuracy: 0.7973 | {
"text": " lilt-en-funsd This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on the funsd-layoutlmv3 dataset. It achieves the following results on the evaluation set: - Loss: 1.7699 - Answer: {'precision': 0.8906439854191981, 'recall': 0.8971848225214198, 'f1': 0.8939024390243904, 'number': 817} - Header: {'precision': 0.6274509803921569, 'recall': 0.5378151260504201, 'f1': 0.579185520361991, 'number': 119} - Question: {'precision': 0.8778359511343804, 'recall': 0.9340761374187558, 'f1': 0.9050832208726944, 'number': 1077} - Overall Precision: 0.8706 - Overall Recall: 0.8957 - Overall F1: 0.8830 - Overall Accuracy: 0.7973 "
} | [
{
"label": "dataset_mention",
"score": 0.9976600967163771
},
{
"label": "no_dataset_mention",
"score": 0.0023399032836228136
}
] | Snorkel | null | null | null | false | null | 00508b4c-e379-4dff-a0ad-5ea6f60e3224 | {
"split": "unlabelled"
} | Default | {
"text_length": 702
} | 0dataset_mention |
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 33276, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 | {
"text": " Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 33276, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 "
} | [
{
"label": "no_dataset_mention",
"score": 0.9487513515880792
},
{
"label": "dataset_mention",
"score": 0.051248648411920804
}
] | Snorkel | null | null | null | false | null | 00547293-cafe-493a-b496-6564934857f4 | {
"split": "unlabelled"
} | Default | {
"text_length": 459
} | 1no_dataset_mention |
esm2_t12_35M_UR50D-finetuned-ARG-classification This model is a fine-tuned version of [facebook/esm2_t12_35M_UR50D](https://huggingface.co/facebook/esm2_t12_35M_UR50D) on an unknown dataset. It achieves the following results on the evaluation set: | {
"text": " esm2_t12_35M_UR50D-finetuned-ARG-classification This model is a fine-tuned version of [facebook/esm2_t12_35M_UR50D](https://huggingface.co/facebook/esm2_t12_35M_UR50D) on an unknown dataset. It achieves the following results on the evaluation set: "
} | [
{
"label": "dataset_mention",
"score": 0.9976600967163771
},
{
"label": "no_dataset_mention",
"score": 0.0023399032836228136
}
] | Snorkel | null | null | null | false | null | 0058a1bc-699f-45a9-b101-7d4526d3a859 | {
"split": "unlabelled"
} | Default | {
"text_length": 252
} | 0dataset_mention |
berttest2 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0674 - Precision: 0.9138 - Recall: 0.9325 - F1: 0.9230 - Accuracy: 0.9823 | {
"text": " berttest2 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0674 - Precision: 0.9138 - Recall: 0.9325 - F1: 0.9230 - Accuracy: 0.9823 "
} | [
{
"label": "dataset_mention",
"score": 0.9999984471975748
},
{
"label": "no_dataset_mention",
"score": 0.0000015528024252442756
}
] | Snorkel | null | null | null | false | null | 005a6c9d-78a5-4ae8-ab10-ac0bbb6aad97 | {
"split": "unlabelled"
} | Default | {
"text_length": 276
} | 0dataset_mention |
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.5993 | 0.5 | 100 | 0.6257 | 37.9294 | | 0.352 | 1.35 | 200 | 0.5881 | 44.2387 | | {
"text": " Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.5993 | 0.5 | 100 | 0.6257 | 37.9294 | | 0.352 | 1.35 | 200 | 0.5881 | 44.2387 | "
} | [
{
"label": "no_dataset_mention",
"score": 0.9487513515880792
},
{
"label": "dataset_mention",
"score": 0.051248648411920804
}
] | Snorkel | null | null | null | false | null | 005a743c-7224-47d8-8a3f-af026f6256c5 | {
"split": "unlabelled"
} | Default | {
"text_length": 265
} | 1no_dataset_mention |
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 11.0967 | 1.0 | 118 | 4.6437 | 1.0 | | 3.4973 | 2.0 | 236 | 3.2588 | 1.0 | | 3.1305 | 3.0 | 354 | 2.6566 | 1.0 | | 1.2931 | 4.0 | 472 | 0.9156 | 0.9944 | | 0.6851 | 5.0 | 590 | 0.7474 | 0.8598 | | 0.525 | 6.0 | 708 | 0.6649 | 0.7995 | | 0.4325 | 7.0 | 826 | 0.6740 | 0.7752 | | 0.3766 | 8.0 | 944 | 0.6220 | 0.7628 | | 0.3256 | 9.0 | 1062 | 0.6316 | 0.7322 | | 0.2802 | 10.0 | 1180 | 0.6442 | 0.7305 | | 0.2575 | 11.0 | 1298 | 0.6885 | 0.7280 | | 0.2248 | 12.0 | 1416 | 0.6702 | 0.7197 | | 0.2089 | 13.0 | 1534 | 0.6781 | 0.7173 | | 0.1893 | 14.0 | 1652 | 0.6981 | 0.7049 | | 0.1652 | 15.0 | 1770 | 0.7154 | 0.7436 | | 0.1643 | 16.0 | 1888 | 0.6798 | 0.7023 | | 0.1472 | 17.0 | 2006 | 0.7381 | 0.6947 | | 0.1372 | 18.0 | 2124 | 0.7240 | 0.7065 | | 0.1318 | 19.0 | 2242 | 0.7305 | 0.6714 | | 0.1211 | 20.0 | 2360 | 0.7288 | 0.6597 | | 0.1178 | 21.0 | 2478 | 0.7417 | 0.6699 | | 0.1118 | 22.0 | 2596 | 0.7476 | 0.6753 | | 0.1016 | 23.0 | 2714 | 0.7973 | 0.6647 | | 0.0998 | 24.0 | 2832 | 0.8027 | 0.6633 | | 0.0917 | 25.0 | 2950 | 0.8045 | 0.6680 | | 0.0907 | 26.0 | 3068 | 0.7884 | 0.6565 | | 0.0835 | 27.0 | 3186 | 0.8009 | 0.6622 | | 0.0749 | 28.0 | 3304 | 0.8123 | 0.6536 | | 0.0755 | 29.0 | 3422 | 0.8006 | 0.6555 | | 0.074 | 30.0 | 3540 | 0.8072 | 0.6531 | | {
"text": " Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 11.0967 | 1.0 | 118 | 4.6437 | 1.0 | | 3.4973 | 2.0 | 236 | 3.2588 | 1.0 | | 3.1305 | 3.0 | 354 | 2.6566 | 1.0 | | 1.2931 | 4.0 | 472 | 0.9156 | 0.9944 | | 0.6851 | 5.0 | 590 | 0.7474 | 0.8598 | | 0.525 | 6.0 | 708 | 0.6649 | 0.7995 | | 0.4325 | 7.0 | 826 | 0.6740 | 0.7752 | | 0.3766 | 8.0 | 944 | 0.6220 | 0.7628 | | 0.3256 | 9.0 | 1062 | 0.6316 | 0.7322 | | 0.2802 | 10.0 | 1180 | 0.6442 | 0.7305 | | 0.2575 | 11.0 | 1298 | 0.6885 | 0.7280 | | 0.2248 | 12.0 | 1416 | 0.6702 | 0.7197 | | 0.2089 | 13.0 | 1534 | 0.6781 | 0.7173 | | 0.1893 | 14.0 | 1652 | 0.6981 | 0.7049 | | 0.1652 | 15.0 | 1770 | 0.7154 | 0.7436 | | 0.1643 | 16.0 | 1888 | 0.6798 | 0.7023 | | 0.1472 | 17.0 | 2006 | 0.7381 | 0.6947 | | 0.1372 | 18.0 | 2124 | 0.7240 | 0.7065 | | 0.1318 | 19.0 | 2242 | 0.7305 | 0.6714 | | 0.1211 | 20.0 | 2360 | 0.7288 | 0.6597 | | 0.1178 | 21.0 | 2478 | 0.7417 | 0.6699 | | 0.1118 | 22.0 | 2596 | 0.7476 | 0.6753 | | 0.1016 | 23.0 | 2714 | 0.7973 | 0.6647 | | 0.0998 | 24.0 | 2832 | 0.8027 | 0.6633 | | 0.0917 | 25.0 | 2950 | 0.8045 | 0.6680 | | 0.0907 | 26.0 | 3068 | 0.7884 | 0.6565 | | 0.0835 | 27.0 | 3186 | 0.8009 | 0.6622 | | 0.0749 | 28.0 | 3304 | 0.8123 | 0.6536 | | 0.0755 | 29.0 | 3422 | 0.8006 | 0.6555 | | 0.074 | 30.0 | 3540 | 0.8072 | 0.6531 | "
} | [
{
"label": "no_dataset_mention",
"score": 0.9487513515880792
},
{
"label": "dataset_mention",
"score": 0.051248648411920804
}
] | Snorkel | null | null | null | false | null | 005c1785-a543-4b7b-a608-49fe20b47b19 | {
"split": "unlabelled"
} | Default | {
"text_length": 1941
} | 1no_dataset_mention |
distilbart-cnn-arxiv-pubmed-pubmed-v3-e8 This model is a fine-tuned version of [theojolliffe/distilbart-cnn-arxiv-pubmed-pubmed](https://huggingface.co/theojolliffe/distilbart-cnn-arxiv-pubmed-pubmed) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8422 - Rouge1: 54.9328 - Rouge2: 36.7154 - Rougel: 39.5674 - Rougelsum: 52.4889 - Gen Len: 142.0 | {
"text": " distilbart-cnn-arxiv-pubmed-pubmed-v3-e8 This model is a fine-tuned version of [theojolliffe/distilbart-cnn-arxiv-pubmed-pubmed](https://huggingface.co/theojolliffe/distilbart-cnn-arxiv-pubmed-pubmed) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8422 - Rouge1: 54.9328 - Rouge2: 36.7154 - Rougel: 39.5674 - Rougelsum: 52.4889 - Gen Len: 142.0 "
} | [
{
"label": "dataset_mention",
"score": 0.9976600967163771
},
{
"label": "no_dataset_mention",
"score": 0.0023399032836228136
}
] | Snorkel | null | null | null | false | null | 005cb28f-0845-42f0-9dd0-9d6cfb096778 | {
"split": "unlabelled"
} | Default | {
"text_length": 391
} | 0dataset_mention |
finetuned_token_itr0_3e-05_all_16_02_2022-20_12_04 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1620 - Precision: 0.3509 - Recall: 0.3793 - F1: 0.3646 - Accuracy: 0.9468 | {
"text": " finetuned_token_itr0_3e-05_all_16_02_2022-20_12_04 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1620 - Precision: 0.3509 - Recall: 0.3793 - F1: 0.3646 - Accuracy: 0.9468 "
} | [
{
"label": "dataset_mention",
"score": 0.9976600967163771
},
{
"label": "no_dataset_mention",
"score": 0.0023399032836228136
}
] | Snorkel | null | null | null | false | null | 00604cae-27a0-49bc-8b0e-788c01e2f829 | {
"split": "unlabelled"
} | Default | {
"text_length": 376
} | 0dataset_mention |
funnel-transformer-xlarge_cls_CR This model is a fine-tuned version of [funnel-transformer/xlarge](https://huggingface.co/funnel-transformer/xlarge) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2563 - Accuracy: 0.9388 | {
"text": " funnel-transformer-xlarge_cls_CR This model is a fine-tuned version of [funnel-transformer/xlarge](https://huggingface.co/funnel-transformer/xlarge) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2563 - Accuracy: 0.9388 "
} | [
{
"label": "dataset_mention",
"score": 0.9976600967163771
},
{
"label": "no_dataset_mention",
"score": 0.0023399032836228136
}
] | Snorkel | null | null | null | false | null | 0064f624-7b0d-4aba-85c6-4eedbed33b3a | {
"split": "unlabelled"
} | Default | {
"text_length": 266
} | 0dataset_mention |
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.8087 | 1.0 | 157 | 0.7144 | | 0.7182 | 2.0 | 314 | 0.6918 | | 0.7041 | 3.0 | 471 | 0.6918 | | {
"text": " Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.8087 | 1.0 | 157 | 0.7144 | | 0.7182 | 2.0 | 314 | 0.6918 | | 0.7041 | 3.0 | 471 | 0.6918 | "
} | [
{
"label": "no_dataset_mention",
"score": 0.9487513515880792
},
{
"label": "dataset_mention",
"score": 0.051248648411920804
}
] | Snorkel | null | null | null | false | null | 0069f3b9-ac6b-44c5-9459-71631669e0ee | {
"split": "unlabelled"
} | Default | {
"text_length": 276
} | 1no_dataset_mention |
Danish BERT fine-tuned for Sentiment Analysis with `senda` This model detects polarity ('positive', 'neutral', 'negative') of Danish texts. It is trained and tested on Tweets annotated by [Alexandra Institute](https://github.com/alexandrainst). The model is trained with the [`senda`](https://github.com/ebanalyse/senda) package. Here is an example of how to load the model in PyTorch using the [🤗Transformers](https://github.com/huggingface/transformers) library: ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline tokenizer = AutoTokenizer.from_pretrained("pin/senda") model = AutoModelForSequenceClassification.from_pretrained("pin/senda") | {
"text": " Danish BERT fine-tuned for Sentiment Analysis with `senda` This model detects polarity ('positive', 'neutral', 'negative') of Danish texts. It is trained and tested on Tweets annotated by [Alexandra Institute](https://github.com/alexandrainst). The model is trained with the [`senda`](https://github.com/ebanalyse/senda) package. Here is an example of how to load the model in PyTorch using the [🤗Transformers](https://github.com/huggingface/transformers) library: ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline tokenizer = AutoTokenizer.from_pretrained(\"pin/senda\") model = AutoModelForSequenceClassification.from_pretrained(\"pin/senda\") "
} | [
{
"label": "dataset_mention",
"score": 0.7735312081442667
},
{
"label": "no_dataset_mention",
"score": 0.22646879185573326
}
] | Snorkel | null | null | null | false | null | 006cbf85-88c2-4bf1-b0ff-7247e8121134 | {
"split": "unlabelled"
} | Default | {
"text_length": 694
} | 0dataset_mention |
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2235 - Accuracy: 0.9265 - F1: 0.9268 | {
"text": " distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2235 - Accuracy: 0.9265 - F1: 0.9268 "
} | [
{
"label": "dataset_mention",
"score": 0.9976600967163771
},
{
"label": "no_dataset_mention",
"score": 0.0023399032836228136
}
] | Snorkel | null | null | null | false | null | 006ed878-7e4a-4edb-9fc4-0ea6ad435a8d | {
"split": "unlabelled"
} | Default | {
"text_length": 285
} | 0dataset_mention |
model by no3 This your the Stable Diffusion model fine-tuned the azura-sd-1.4-beta3 concept taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **sks_azura** You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts) If you have issues or questions feel free to visit the Community Tab and start discussion about it. Here are the images used for training this concept:       | {
"text": " model by no3 This your the Stable Diffusion model fine-tuned the azura-sd-1.4-beta3 concept taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **sks_azura** You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts) If you have issues or questions feel free to visit the Community Tab and start discussion about it. Here are the images used for training this concept:      "
} | [
{
"label": "dataset_mention",
"score": 0.7735312081442667
},
{
"label": "no_dataset_mention",
"score": 0.22646879185573326
}
] | Snorkel | null | null | null | false | null | 0072383d-19a6-4827-903b-5504fac9aeae | {
"split": "unlabelled"
} | Default | {
"text_length": 1444
} | 0dataset_mention |
wav2vec2-base-20sec-timit-and-dementiabank This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4338 - Wer: 0.2313 | {
"text": " wav2vec2-base-20sec-timit-and-dementiabank This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4338 - Wer: 0.2313 "
} | [
{
"label": "dataset_mention",
"score": 0.9976600967163771
},
{
"label": "no_dataset_mention",
"score": 0.0023399032836228136
}
] | Snorkel | null | null | null | false | null | 0072895f-57e7-4ff9-9d06-4483abcf3cbb | {
"split": "unlabelled"
} | Default | {
"text_length": 263
} | 0dataset_mention |
Details model architecture This model checkpoint - **t5-efficient-small-el8-dl2** - is of model type **Small** with the following variations: - **el** is **8** - **dl** is **2** It has **50.03** million parameters and thus requires *ca.* **200.11 MB** of memory in full precision (*fp32*) or **100.05 MB** of memory in half precision (*fp16* or *bf16*). A summary of the *original* T5 model architectures can be seen here: | Model | nl (el/dl) | ff | dm | kv | nh | | {
"text": " Details model architecture This model checkpoint - **t5-efficient-small-el8-dl2** - is of model type **Small** with the following variations: - **el** is **8** - **dl** is **2** It has **50.03** million parameters and thus requires *ca.* **200.11 MB** of memory in full precision (*fp32*) or **100.05 MB** of memory in half precision (*fp16* or *bf16*). A summary of the *original* T5 model architectures can be seen here: | Model | nl (el/dl) | ff | dm | kv | nh | "
} | [
{
"label": "dataset_mention",
"score": 0.7735312081442667
},
{
"label": "no_dataset_mention",
"score": 0.22646879185573326
}
] | Snorkel | null | null | null | false | null | 0072c779-6d01-4a98-a68f-ae0a52ffc168 | {
"split": "unlabelled"
} | Default | {
"text_length": 473
} | 0dataset_mention |
mini-mlm-tweet-target-imdb This model is a fine-tuned version of [muhtasham/mini-mlm-tweet](https://huggingface.co/muhtasham/mini-mlm-tweet) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.4742 - Accuracy: 0.8324 - F1: 0.9085 | {
"text": " mini-mlm-tweet-target-imdb This model is a fine-tuned version of [muhtasham/mini-mlm-tweet](https://huggingface.co/muhtasham/mini-mlm-tweet) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.4742 - Accuracy: 0.8324 - F1: 0.9085 "
} | [
{
"label": "dataset_mention",
"score": 0.9976600967163771
},
{
"label": "no_dataset_mention",
"score": 0.0023399032836228136
}
] | Snorkel | null | null | null | false | null | 00761bf3-e16e-48e9-bcee-4a432c9b91c0 | {
"split": "unlabelled"
} | Default | {
"text_length": 269
} | 0dataset_mention |
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2.380655430044305e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2.380655430044305e-05, 'decay_steps': 3221, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.05} - training_precision: mixed_float16 | {
"text": " Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2.380655430044305e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2.380655430044305e-05, 'decay_steps': 3221, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.05} - training_precision: mixed_float16 "
} | [
{
"label": "no_dataset_mention",
"score": 0.9487513515880792
},
{
"label": "dataset_mention",
"score": 0.051248648411920804
}
] | Snorkel | null | null | null | false | null | 0076ceca-5efc-4185-9ba5-b92711652c3a | {
"split": "unlabelled"
} | Default | {
"text_length": 666
} | 1no_dataset_mention |
模型介绍 模型分成四部分: * Text Encoder:把中文文本输入转化成 Embedding 向量 * Latent Diffusion Model:在 Latent 空间中根据文本输入处理随机生成的噪声 * Auto Encoder:将 Latent 空间中的张量还原为图片 * Super Resolution:提升图片分辨率 我们使用中文模型CLIP-ViT-L作为 Text Encoder,使用 [latent-diffusion](https://github.com/CompVis/latent-diffusion) 中的 Auto Encoder,使用 [ESRGAN](https://github.com/xinntao/ESRGAN) 作为 Super Resolution 模型。我们使用 [Noah-Wukong](https://wukong-dataset.github.io/wukong-dataset/) 数据集中的两千万图文对 Latent Diffusion Model 进行了预训练。 我们使用了古诗词文图数据集 [paint4poem](https://github.com/paint4poem/paint4poem) 进行微调,微调后模型能够为古诗词生成精美的古风插画。 | {
"text": " 模型介绍 模型分成四部分: * Text Encoder:把中文文本输入转化成 Embedding 向量 * Latent Diffusion Model:在 Latent 空间中根据文本输入处理随机生成的噪声 * Auto Encoder:将 Latent 空间中的张量还原为图片 * Super Resolution:提升图片分辨率 我们使用中文模型CLIP-ViT-L作为 Text Encoder,使用 [latent-diffusion](https://github.com/CompVis/latent-diffusion) 中的 Auto Encoder,使用 [ESRGAN](https://github.com/xinntao/ESRGAN) 作为 Super Resolution 模型。我们使用 [Noah-Wukong](https://wukong-dataset.github.io/wukong-dataset/) 数据集中的两千万图文对 Latent Diffusion Model 进行了预训练。 我们使用了古诗词文图数据集 [paint4poem](https://github.com/paint4poem/paint4poem) 进行微调,微调后模型能够为古诗词生成精美的古风插画。 "
} | [
{
"label": "dataset_mention",
"score": 0.9976600967163771
},
{
"label": "no_dataset_mention",
"score": 0.0023399032836228136
}
] | Snorkel | null | null | null | false | null | 0078a37a-3034-4dcf-9754-e43b19737409 | {
"split": "unlabelled"
} | Default | {
"text_length": 571
} | 0dataset_mention |
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 5 - eval_batch_size: 5 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 | {
"text": " Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 5 - eval_batch_size: 5 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 "
} | [
{
"label": "no_dataset_mention",
"score": 0.9487513515880792
},
{
"label": "dataset_mention",
"score": 0.051248648411920804
}
] | Snorkel | null | null | null | false | null | 007c38eb-6042-4403-8e23-693e1e16ed3f | {
"split": "unlabelled"
} | Default | {
"text_length": 265
} | 1no_dataset_mention |
Whisper Large Nepali - Drishti Sharma This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.2551 - Wer: 18.8467 | {
"text": " Whisper Large Nepali - Drishti Sharma This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.2551 - Wer: 18.8467 "
} | [
{
"label": "dataset_mention",
"score": 0.9997774755092824
},
{
"label": "no_dataset_mention",
"score": 0.00022252449071764603
}
] | Snorkel | null | null | null | false | null | 0082113b-9207-4ed2-8cdb-e1aaf2dc5fd7 | {
"split": "unlabelled"
} | Default | {
"text_length": 268
} | 0dataset_mention |
kobart_16_5.6e-5_datav2_min30_lp5.0_temperature1.0 This model is a fine-tuned version of [gogamza/kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.7174 - Rouge1: 35.7621 - Rouge2: 12.8914 - Rougel: 23.6695 - Bleu1: 29.9954 - Bleu2: 17.513 - Bleu3: 10.317 - Bleu4: 5.8532 - Gen Len: 49.3147 | {
"text": " kobart_16_5.6e-5_datav2_min30_lp5.0_temperature1.0 This model is a fine-tuned version of [gogamza/kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.7174 - Rouge1: 35.7621 - Rouge2: 12.8914 - Rougel: 23.6695 - Bleu1: 29.9954 - Bleu2: 17.513 - Bleu3: 10.317 - Bleu4: 5.8532 - Gen Len: 49.3147 "
} | [
{
"label": "dataset_mention",
"score": 0.9976600967163771
},
{
"label": "no_dataset_mention",
"score": 0.0023399032836228136
}
] | Snorkel | null | null | null | false | null | 00878d6e-e7d1-4849-8a00-6105e036dc21 | {
"split": "unlabelled"
} | Default | {
"text_length": 395
} | 0dataset_mention |
opus-mt-es-en-finetuned-es-to-en This model is a fine-tuned version of [Helsinki-NLP/opus-mt-es-en](https://huggingface.co/Helsinki-NLP/opus-mt-es-en) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5851 - Bleu: 71.1382 - Gen Len: 10.3225 | {
"text": " opus-mt-es-en-finetuned-es-to-en This model is a fine-tuned version of [Helsinki-NLP/opus-mt-es-en](https://huggingface.co/Helsinki-NLP/opus-mt-es-en) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5851 - Bleu: 71.1382 - Gen Len: 10.3225 "
} | [
{
"label": "dataset_mention",
"score": 0.9976600967163771
},
{
"label": "no_dataset_mention",
"score": 0.0023399032836228136
}
] | Snorkel | null | null | null | false | null | 0087b79a-b8fa-4952-9d4b-6c73ff528e3f | {
"split": "unlabelled"
} | Default | {
"text_length": 282
} | 0dataset_mention |
fathyshalab/domain_transfer_general-massive_general-roberta-large-v1-5-95 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. | {
"text": " fathyshalab/domain_transfer_general-massive_general-roberta-large-v1-5-95 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. "
} | [
{
"label": "dataset_mention",
"score": 0.7735312081442667
},
{
"label": "no_dataset_mention",
"score": 0.22646879185573326
}
] | Snorkel | null | null | null | false | null | 00887d92-d0b8-4dfc-8178-b22debc313f4 | {
"split": "unlabelled"
} | Default | {
"text_length": 453
} | 0dataset_mention |
wav2vec2-base-ft-keyword-spotting This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset. It achieves the following results on the evaluation set: - Loss: 0.0824 - Accuracy: 0.9826 | {
"text": " wav2vec2-base-ft-keyword-spotting This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset. It achieves the following results on the evaluation set: - Loss: 0.0824 - Accuracy: 0.9826 "
} | [
{
"label": "dataset_mention",
"score": 0.9976600967163771
},
{
"label": "no_dataset_mention",
"score": 0.0023399032836228136
}
] | Snorkel | null | null | null | false | null | 0088c808-76d0-49f2-9dc5-8279986ad1c5 | {
"split": "unlabelled"
} | Default | {
"text_length": 261
} | 0dataset_mention |
BibTeX Entry and Citation Info ``` @article{wang2021you, title={You Only Learn One Representation: Unified Network for Multiple Tasks}, author={Wang, Chien-Yao and Yeh, I-Hau and Liao, Hong-Yuan Mark}, journal={arXiv preprint arXiv:2105.04206}, year={2021} } ``` | {
"text": " BibTeX Entry and Citation Info ``` @article{wang2021you, title={You Only Learn One Representation: Unified Network for Multiple Tasks}, author={Wang, Chien-Yao and Yeh, I-Hau and Liao, Hong-Yuan Mark}, journal={arXiv preprint arXiv:2105.04206}, year={2021} } ```"
} | [
{
"label": "dataset_mention",
"score": 0.7735312081442667
},
{
"label": "no_dataset_mention",
"score": 0.22646879185573326
}
] | Snorkel | null | null | null | false | null | 008ec3d2-1f30-41ab-a7dc-984391a6af80 | {
"split": "unlabelled"
} | Default | {
"text_length": 272
} | 0dataset_mention |
T5-Efficient-LARGE-NL2 (Deep-Narrow version) T5-Efficient-LARGE-NL2 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper: > We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased > before considering any other forms of uniform scaling across other dimensions. This is largely due to > how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a > tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, > a tall base model might also generally more efficient compared to a large model. We generally find > that, regardless of size, even if absolute performance might increase as we continue to stack layers, > the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 > layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., > params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, > FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to > consider. To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. 
A sequence of word embeddings is therefore processed sequentially by each transformer block. | {
"text": " T5-Efficient-LARGE-NL2 (Deep-Narrow version) T5-Efficient-LARGE-NL2 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper: > We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased > before considering any other forms of uniform scaling across other dimensions. This is largely due to > how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a > tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, > a tall base model might also generally more efficient compared to a large model. We generally find > that, regardless of size, even if absolute performance might increase as we continue to stack layers, > the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 > layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., > params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, > FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to > consider. To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. 
A sequence of word embeddings is therefore processed sequentially by each transformer block. "
} | [
{
"label": "dataset_mention",
"score": 0.7735312081442667
},
{
"label": "no_dataset_mention",
"score": 0.22646879185573326
}
] | Snorkel | null | null | null | false | null | 0092aa08-9697-4b0c-ab73-91acf2cf8b72 | {
"split": "unlabelled"
} | Default | {
"text_length": 2073
} | 0dataset_mention |
T5-Efficient-SMALL-DM2000 (Deep-Narrow version) T5-Efficient-SMALL-DM2000 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper: > We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased > before considering any other forms of uniform scaling across other dimensions. This is largely due to > how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a > tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, > a tall base model might also generally more efficient compared to a large model. We generally find > that, regardless of size, even if absolute performance might increase as we continue to stack layers, > the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 > layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., > params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, > FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to > consider. To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. 
A sequence of word embeddings is therefore processed sequentially by each transformer block. | {
"text": " T5-Efficient-SMALL-DM2000 (Deep-Narrow version) T5-Efficient-SMALL-DM2000 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper: > We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased > before considering any other forms of uniform scaling across other dimensions. This is largely due to > how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a > tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, > a tall base model might also generally more efficient compared to a large model. We generally find > that, regardless of size, even if absolute performance might increase as we continue to stack layers, > the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 > layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., > params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, > FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to > consider. To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. 
A sequence of word embeddings is therefore processed sequentially by each transformer block. "
} | [
{
"label": "dataset_mention",
"score": 0.7735312081442667
},
{
"label": "no_dataset_mention",
"score": 0.22646879185573326
}
] | Snorkel | null | null | null | false | null | 0099b49d-0d2e-41b6-8235-4d68592caf1e | {
"split": "unlabelled"
} | Default | {
"text_length": 2079
} | 0dataset_mention |
Available Models - **unimo-text-1.0**, *12 layer, 12 heads, 768 hidden size, pretrained model* - **unimo-text-1.0-large**, *24 layer, 16 heads, 1024 hidden size, pretrained model* - **unimo-text-1.0-lcsts-new**, *12 layer, 12 heads, 768 hidden size, finetuned on the lcsts-new Chinese summarization dataset* - **unimo-text-1.0-summary**, *12 layer, 12 heads, 768 hidden size, finetuned on several in-house Chinese summarization datasets* | {
"text": " Available Models - **unimo-text-1.0**, *12 layer, 12 heads, 768 hidden size, pretrained model* - **unimo-text-1.0-large**, *24 layer, 16 heads, 1024 hidden size, pretrained model* - **unimo-text-1.0-lcsts-new**, *12 layer, 12 heads, 768 hidden size, finetuned on the lcsts-new Chinese summarization dataset* - **unimo-text-1.0-summary**, *12 layer, 12 heads, 768 hidden size, finetuned on several in-house Chinese summarization datasets* "
} | [
{
"label": "dataset_mention",
"score": 0.9976600967163771
},
{
"label": "no_dataset_mention",
"score": 0.0023399032836228136
}
] | Snorkel | null | null | null | false | null | 009c8e37-c1cd-4698-a2f3-afcdb89fbb8e | {
"split": "unlabelled"
} | Default | {
"text_length": 441
} | 0dataset_mention |
exp_w2v2t_it_no-pretraining_s615 Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool. | {
"text": " exp_w2v2t_it_no-pretraining_s615 Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool. "
} | [
{
"label": "dataset_mention",
"score": 0.9729673904845365
},
{
"label": "no_dataset_mention",
"score": 0.027032609515463546
}
] | Snorkel | null | null | null | false | null | 009f8052-1c14-442a-9af0-cfd73a2b34e0 | {
"split": "unlabelled"
} | Default | {
"text_length": 413
} | 0dataset_mention |
also support OFA checkpoints. e.g. "OFA-Sys/ofa-large" if torch.cuda.is_available(): model.cuda() prompt = "please describe this image according to the given question: what piece of clothing is this boy putting on?" image = "glove_boy.jpeg" print(model.caption(prompt, image)) ``` To try generic captioning, just use "please describe this image according to the given question: what does the image describe?" PromptCap also support taking OCR inputs: ```python prompt = "please describe this image according to the given question: what year was this taken?" image = "dvds.jpg" ocr = "yip AE Mht juor 02/14/2012" print(model.caption(prompt, image, ocr)) ``` | {
"text": " also support OFA checkpoints. e.g. \"OFA-Sys/ofa-large\" if torch.cuda.is_available(): model.cuda() prompt = \"please describe this image according to the given question: what piece of clothing is this boy putting on?\" image = \"glove_boy.jpeg\" print(model.caption(prompt, image)) ``` To try generic captioning, just use \"please describe this image according to the given question: what does the image describe?\" PromptCap also support taking OCR inputs: ```python prompt = \"please describe this image according to the given question: what year was this taken?\" image = \"dvds.jpg\" ocr = \"yip AE Mht juor 02/14/2012\" print(model.caption(prompt, image, ocr)) ``` "
} | [
{
"label": "dataset_mention",
"score": 0.7735312081442667
},
{
"label": "no_dataset_mention",
"score": 0.22646879185573326
}
] | Snorkel | null | null | null | false | null | 00a0a159-4677-4adc-bf3b-4a95004e4fa8 | {
"split": "unlabelled"
} | Default | {
"text_length": 670
} | 0dataset_mention |
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 4.4466 | 1.0 | 2067 | 4.1217 | 0.3847 | | 3.9191 | 2.0 | 4134 | 3.6562 | 0.4298 | | 3.6397 | 3.0 | 6201 | 3.4417 | 0.4550 | | 3.522 | 4.0 | 8268 | 3.3239 | 0.4692 | | 3.4504 | 5.0 | 10335 | 3.2792 | 0.4766 | | {
"text": " Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 4.4466 | 1.0 | 2067 | 4.1217 | 0.3847 | | 3.9191 | 2.0 | 4134 | 3.6562 | 0.4298 | | 3.6397 | 3.0 | 6201 | 3.4417 | 0.4550 | | 3.522 | 4.0 | 8268 | 3.3239 | 0.4692 | | 3.4504 | 5.0 | 10335 | 3.2792 | 0.4766 | "
} | [
{
"label": "no_dataset_mention",
"score": 0.9487513515880792
},
{
"label": "dataset_mention",
"score": 0.051248648411920804
}
] | Snorkel | null | null | null | false | null | 00a0d3bb-b90a-4074-8cb1-de7c5f023697 | {
"split": "unlabelled"
} | Default | {
"text_length": 462
} | 1no_dataset_mention |
Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:| | No log | 1.0 | 272 | 1.9605 | 9.0786 | 17.3148 | | 2.3992 | 2.0 | 544 | 1.8884 | 10.1443 | 17.3301 | | 2.3992 | 3.0 | 816 | 1.8647 | 10.4816 | 17.3258 | | 2.0832 | 4.0 | 1088 | 1.8473 | 10.7396 | 17.3231 | | 2.0832 | 5.0 | 1360 | 1.8343 | 11.0937 | 17.2621 | | 1.9193 | 6.0 | 1632 | 1.8282 | 11.1303 | 17.3098 | | 1.9193 | 7.0 | 1904 | 1.8234 | 11.2971 | 17.2991 | | 1.8351 | 8.0 | 2176 | 1.8241 | 11.3433 | 17.2621 | | 1.8351 | 9.0 | 2448 | 1.8224 | 11.394 | 17.2691 | | 1.7747 | 10.0 | 2720 | 1.8228 | 11.427 | 17.2674 | | {
"text": " Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:| | No log | 1.0 | 272 | 1.9605 | 9.0786 | 17.3148 | | 2.3992 | 2.0 | 544 | 1.8884 | 10.1443 | 17.3301 | | 2.3992 | 3.0 | 816 | 1.8647 | 10.4816 | 17.3258 | | 2.0832 | 4.0 | 1088 | 1.8473 | 10.7396 | 17.3231 | | 2.0832 | 5.0 | 1360 | 1.8343 | 11.0937 | 17.2621 | | 1.9193 | 6.0 | 1632 | 1.8282 | 11.1303 | 17.3098 | | 1.9193 | 7.0 | 1904 | 1.8234 | 11.2971 | 17.2991 | | 1.8351 | 8.0 | 2176 | 1.8241 | 11.3433 | 17.2621 | | 1.8351 | 9.0 | 2448 | 1.8224 | 11.394 | 17.2691 | | 1.7747 | 10.0 | 2720 | 1.8228 | 11.427 | 17.2674 | "
} | [
{
"label": "no_dataset_mention",
"score": 0.9487513515880792
},
{
"label": "dataset_mention",
"score": 0.051248648411920804
}
] | Snorkel | null | null | null | false | null | 00a168df-bbcc-4820-8760-f73ebedeac21 | {
"split": "unlabelled"
} | Default | {
"text_length": 873
} | 1no_dataset_mention |
DialoGPT Trained on the Speech of a Game Character This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on a game character, Joshua from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script). I built a Discord AI chatbot based on this model. [Check out my GitHub repo.](https://github.com/RuolinZheng08/twewy-discord-chatbot) Chat with the model: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua") model = AutoModelWithLMHead.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua") | {
"text": " DialoGPT Trained on the Speech of a Game Character This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on a game character, Joshua from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script). I built a Discord AI chatbot based on this model. [Check out my GitHub repo.](https://github.com/RuolinZheng08/twewy-discord-chatbot) Chat with the model: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained(\"r3dhummingbird/DialoGPT-medium-joshua\") model = AutoModelWithLMHead.from_pretrained(\"r3dhummingbird/DialoGPT-medium-joshua\") "
} | [
{
"label": "dataset_mention",
"score": 0.9998315330731724
},
{
"label": "no_dataset_mention",
"score": 0.00016846692682752137
}
] | Snorkel | null | null | null | false | null | 00a2159b-7f8e-4f60-af73-cb68f19e6f62 | {
"split": "unlabelled"
} | Default | {
"text_length": 784
} | 0dataset_mention |
Bicleaner AI full model for en-sq Bicleaner AI is a tool that aims at detecting noisy sentence pairs in a parallel corpus. It indicates the likelihood of a pair of sentences being mutual translations (with a value near to 1) or not (with a value near to 0). Sentence pairs considered very noisy are scored with 0. Find out at our repository for further instructions on how to use it: https://github.com/bitextor/bicleaner-ai | {
"text": " Bicleaner AI full model for en-sq Bicleaner AI is a tool that aims at detecting noisy sentence pairs in a parallel corpus. It indicates the likelihood of a pair of sentences being mutual translations (with a value near to 1) or not (with a value near to 0). Sentence pairs considered very noisy are scored with 0. Find out at our repository for further instructions on how to use it: https://github.com/bitextor/bicleaner-ai "
} | [
{
"label": "dataset_mention",
"score": 0.7735312081442667
},
{
"label": "no_dataset_mention",
"score": 0.22646879185573326
}
] | Snorkel | null | null | null | false | null | 00a4e766-4631-44d9-955f-d0785a066201 | {
"split": "unlabelled"
} | Default | {
"text_length": 427
} | 0dataset_mention |
Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.5361 | 2.3102 | 0 | | 1.9179 | 1.8637 | 1 | | 1.6133 | 1.8637 | 2 | | {
"text": " Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.5361 | 2.3102 | 0 | | 1.9179 | 1.8637 | 1 | | 1.6133 | 1.8637 | 2 | "
} | [
{
"label": "no_dataset_mention",
"score": 0.9487513515880792
},
{
"label": "dataset_mention",
"score": 0.051248648411920804
}
] | Snorkel | null | null | null | false | null | 00aa52f2-b52d-4378-bab9-b738ea7f851e | {
"split": "unlabelled"
} | Default | {
"text_length": 226
} | 1no_dataset_mention |
Intended uses & limitations You can directly use this model as a language detector, i.e. for sequence classification tasks. Currently, it supports the following 41 languages, modern and medieval: Modern: Bulgarian (bg), Croatian (hr), Czech (cs), Danish (da), Dutch (nl), English (en), Estonian (et), Finnish (fi), French (fr), German (de), Greek (el), Hungarian (hu), Irish (ga), Italian (it), Latvian (lv), Lithuanian (lt), Maltese (mt), Polish (pl), Portuguese (pt), Romanian (ro), Slovak (sk), Slovenian (sl), Spanish (es), Swedish (sv), Russian (ru), Turkish (tr), Basque (eu), Catalan (ca), Albanian (sq), Serbian (se), Ukrainian (uk), Norwegian (no), Arabic (ar), Chinese (zh), Hebrew (he) Medieval: Middle High German (mhd), Latin (la), Middle Low German (gml), Old French (fro), Old Church Slavonic (chu), Early New High German (fnhd), Ancient and Medieval Greek (grc) | {
"text": " Intended uses & limitations You can directly use this model as a language detector, i.e. for sequence classification tasks. Currently, it supports the following 41 languages, modern and medieval: Modern: Bulgarian (bg), Croatian (hr), Czech (cs), Danish (da), Dutch (nl), English (en), Estonian (et), Finnish (fi), French (fr), German (de), Greek (el), Hungarian (hu), Irish (ga), Italian (it), Latvian (lv), Lithuanian (lt), Maltese (mt), Polish (pl), Portuguese (pt), Romanian (ro), Slovak (sk), Slovenian (sl), Spanish (es), Swedish (sv), Russian (ru), Turkish (tr), Basque (eu), Catalan (ca), Albanian (sq), Serbian (se), Ukrainian (uk), Norwegian (no), Arabic (ar), Chinese (zh), Hebrew (he) Medieval: Middle High German (mhd), Latin (la), Middle Low German (gml), Old French (fro), Old Church Slavonic (chu), Early New High German (fnhd), Ancient and Medieval Greek (grc) "
} | [
{
"label": "dataset_mention",
"score": 0.7735312081442667
},
{
"label": "no_dataset_mention",
"score": 0.22646879185573326
}
] | Snorkel | null | null | null | false | null | 00ad7ee4-1981-483e-b1f8-e0caea27a6cd | {
"split": "unlabelled"
} | Default | {
"text_length": 883
} | 0dataset_mention |
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6744 | 0.12 | 50 | 0.6094 | 0.66 | | 0.4942 | 0.23 | 100 | 0.3772 | 0.8667 | | 0.3857 | 0.35 | 150 | 0.3256 | 0.8867 | | 0.3483 | 0.46 | 200 | 0.3634 | 0.84 | | 0.3235 | 0.58 | 250 | 0.3338 | 0.8733 | | 0.3129 | 0.69 | 300 | 0.3482 | 0.8667 | | 0.3573 | 0.81 | 350 | 0.3632 | 0.8333 | | 0.3266 | 0.92 | 400 | 0.3274 | 0.86 | | 0.2615 | 1.04 | 450 | 0.3400 | 0.8667 | | 0.2409 | 1.15 | 500 | 0.3541 | 0.8467 | | 0.2508 | 1.27 | 550 | 0.2997 | 0.88 | | 0.2442 | 1.39 | 600 | 0.3654 | 0.86 | | 0.2625 | 1.5 | 650 | 0.3302 | 0.8667 | | 0.1983 | 1.62 | 700 | 0.3184 | 0.8867 | | 0.2356 | 1.73 | 750 | 0.3239 | 0.8867 | | 0.2078 | 1.85 | 800 | 0.2968 | 0.9 | | 0.2343 | 1.96 | 850 | 0.3148 | 0.8933 | | 0.1544 | 2.08 | 900 | 0.3535 | 0.9 | | 0.1407 | 2.19 | 950 | 0.3603 | 0.8733 | | 0.187 | 2.31 | 1000 | 0.3843 | 0.88 | | 0.144 | 2.42 | 1050 | 0.4546 | 0.8467 | | 0.1786 | 2.54 | 1100 | 0.3681 | 0.88 | | 0.1315 | 2.66 | 1150 | 0.3806 | 0.8867 | | 0.1399 | 2.77 | 1200 | 0.3880 | 0.8867 | | 0.1905 | 2.89 | 1250 | 0.3944 | 0.8733 | | 0.2043 | 3.0 | 1300 | 0.3974 | 0.8733 | | 0.1081 | 3.12 | 1350 | 0.3731 | 0.9067 | | 0.1055 | 3.23 | 1400 | 0.3809 | 0.8867 | | 0.1092 | 3.35 | 1450 | 0.3568 | 0.9 | | 0.0981 | 3.46 | 1500 | 0.3610 | 0.9133 | | 0.109 | 3.58 | 1550 | 0.4126 | 0.8867 | | 0.1001 | 3.7 | 1600 | 0.3831 | 0.9 | | 0.1027 | 3.81 | 1650 | 0.4064 | 0.9 | | 0.133 | 3.93 | 1700 | 0.3845 | 0.9 | | 0.1031 | 4.04 | 1750 | 0.3915 | 0.9 | | 0.0772 | 4.16 | 1800 | 0.3988 | 0.8867 | | 0.0785 | 4.27 | 1850 | 0.3962 | 0.9 | | 0.1059 | 4.39 | 1900 | 0.3969 | 0.9 | | 0.0668 | 4.5 | 1950 | 0.4095 | 0.8933 | | 0.0915 | 4.62 | 2000 | 0.4077 | 0.8933 | | 0.1413 | 4.73 | 2050 | 0.4004 | 0.9067 | | 0.0727 | 4.85 | 2100 | 0.4100 | 0.8933 | | 0.0724 | 4.97 | 2150 | 0.4125 | 0.8933 | | {
"text": " Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6744 | 0.12 | 50 | 0.6094 | 0.66 | | 0.4942 | 0.23 | 100 | 0.3772 | 0.8667 | | 0.3857 | 0.35 | 150 | 0.3256 | 0.8867 | | 0.3483 | 0.46 | 200 | 0.3634 | 0.84 | | 0.3235 | 0.58 | 250 | 0.3338 | 0.8733 | | 0.3129 | 0.69 | 300 | 0.3482 | 0.8667 | | 0.3573 | 0.81 | 350 | 0.3632 | 0.8333 | | 0.3266 | 0.92 | 400 | 0.3274 | 0.86 | | 0.2615 | 1.04 | 450 | 0.3400 | 0.8667 | | 0.2409 | 1.15 | 500 | 0.3541 | 0.8467 | | 0.2508 | 1.27 | 550 | 0.2997 | 0.88 | | 0.2442 | 1.39 | 600 | 0.3654 | 0.86 | | 0.2625 | 1.5 | 650 | 0.3302 | 0.8667 | | 0.1983 | 1.62 | 700 | 0.3184 | 0.8867 | | 0.2356 | 1.73 | 750 | 0.3239 | 0.8867 | | 0.2078 | 1.85 | 800 | 0.2968 | 0.9 | | 0.2343 | 1.96 | 850 | 0.3148 | 0.8933 | | 0.1544 | 2.08 | 900 | 0.3535 | 0.9 | | 0.1407 | 2.19 | 950 | 0.3603 | 0.8733 | | 0.187 | 2.31 | 1000 | 0.3843 | 0.88 | | 0.144 | 2.42 | 1050 | 0.4546 | 0.8467 | | 0.1786 | 2.54 | 1100 | 0.3681 | 0.88 | | 0.1315 | 2.66 | 1150 | 0.3806 | 0.8867 | | 0.1399 | 2.77 | 1200 | 0.3880 | 0.8867 | | 0.1905 | 2.89 | 1250 | 0.3944 | 0.8733 | | 0.2043 | 3.0 | 1300 | 0.3974 | 0.8733 | | 0.1081 | 3.12 | 1350 | 0.3731 | 0.9067 | | 0.1055 | 3.23 | 1400 | 0.3809 | 0.8867 | | 0.1092 | 3.35 | 1450 | 0.3568 | 0.9 | | 0.0981 | 3.46 | 1500 | 0.3610 | 0.9133 | | 0.109 | 3.58 | 1550 | 0.4126 | 0.8867 | | 0.1001 | 3.7 | 1600 | 0.3831 | 0.9 | | 0.1027 | 3.81 | 1650 | 0.4064 | 0.9 | | 0.133 | 3.93 | 1700 | 0.3845 | 0.9 | | 0.1031 | 4.04 | 1750 | 0.3915 | 0.9 | | 0.0772 | 4.16 | 1800 | 0.3988 | 0.8867 | | 0.0785 | 4.27 | 1850 | 0.3962 | 0.9 | | 0.1059 | 4.39 | 1900 | 0.3969 | 0.9 | | 0.0668 | 4.5 | 1950 | 0.4095 | 0.8933 | | 0.0915 | 4.62 | 2000 | 0.4077 | 0.8933 | | 0.1413 | 4.73 | 2050 | 0.4004 | 0.9067 | | 0.0727 | 4.85 | 2100 | 0.4100 | 0.8933 | | 0.0724 | 4.97 | 2150 | 0.4125 | 0.8933 | "
} | [
{
"label": "no_dataset_mention",
"score": 0.9487513515880792
},
{
"label": "dataset_mention",
"score": 0.051248648411920804
}
] | Snorkel | null | null | null | false | null | 00ae3d4b-82a2-48ab-977b-c0f5937b86a7 | {
"split": "unlabelled"
} | Default | {
"text_length": 2811
} | 1no_dataset_mention |
Overview  We propose DiffCSE, an unsupervised contrastive learning framework for learning sentence embeddings. DiffCSE learns sentence embeddings that are sensitive to the difference between the original sentence and an edited sentence, where the edited sentence is obtained by stochastically masking out the original sentence and then sampling from a masked language model. We show that DiffSCE is an instance of equivariant contrastive learning [(Dangovski et al., 2021)](https://arxiv.org/abs/2111.00899), which generalizes contrastive learning and learns representations that are insensitive to certain types of augmentations and sensitive to other "harmful" types of augmentations. Our experiments show that DiffCSE achieves state-of-the-art results among unsupervised sentence representation learning methods, outperforming unsupervised SimCSE by 2.3 absolute points on semantic textual similarity tasks. | {
"text": " Overview  We propose DiffCSE, an unsupervised contrastive learning framework for learning sentence embeddings. DiffCSE learns sentence embeddings that are sensitive to the difference between the original sentence and an edited sentence, where the edited sentence is obtained by stochastically masking out the original sentence and then sampling from a masked language model. We show that DiffSCE is an instance of equivariant contrastive learning [(Dangovski et al., 2021)](https://arxiv.org/abs/2111.00899), which generalizes contrastive learning and learns representations that are insensitive to certain types of augmentations and sensitive to other \"harmful\" types of augmentations. Our experiments show that DiffCSE achieves state-of-the-art results among unsupervised sentence representation learning methods, outperforming unsupervised SimCSE by 2.3 absolute points on semantic textual similarity tasks. "
} | [
{
"label": "dataset_mention",
"score": 0.7735312081442667
},
{
"label": "no_dataset_mention",
"score": 0.22646879185573326
}
] | Snorkel | null | null | null | false | null | 00af7821-c19f-4511-8398-8d08c6e0a0bb | {
"split": "unlabelled"
} | Default | {
"text_length": 984
} | 0dataset_mention |
PIXEL (Pixel-based Encoder of Language) PIXEL is a language model trained to reconstruct masked image patches that contain rendered text. PIXEL was pretrained on the *English* Wikipedia and Bookcorpus (in total around 3.2B words) but can theoretically be finetuned on data in any written language that can be typeset on a computer screen because it operates on rendered text as opposed to using a tokenizer with a fixed vocabulary. It is not currently possible to use the Hosted Inference API with PIXEL. Paper: [Language Modelling with Pixels](https://arxiv.org/abs/2207.06991) Codebase: [https://github.com/xplip/pixel](https://github.com/xplip/pixel) | {
"text": " PIXEL (Pixel-based Encoder of Language) PIXEL is a language model trained to reconstruct masked image patches that contain rendered text. PIXEL was pretrained on the *English* Wikipedia and Bookcorpus (in total around 3.2B words) but can theoretically be finetuned on data in any written language that can be typeset on a computer screen because it operates on rendered text as opposed to using a tokenizer with a fixed vocabulary. It is not currently possible to use the Hosted Inference API with PIXEL. Paper: [Language Modelling with Pixels](https://arxiv.org/abs/2207.06991) Codebase: [https://github.com/xplip/pixel](https://github.com/xplip/pixel) "
} | [
{
"label": "dataset_mention",
"score": 0.9794001894568531
},
{
"label": "no_dataset_mention",
"score": 0.020599810543146933
}
] | Snorkel | null | null | null | false | null | 00afba6c-d709-4b30-9219-6f5de8b1b4ce | {
"split": "unlabelled"
} | Default | {
"text_length": 661
} | 0dataset_mention |
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.3356 | 1.0 | 1033 | 0.2558 | 0.3761 | | 0.2588 | 2.0 | 2066 | 0.2352 | 0.5246 | | 0.2252 | 3.0 | 3099 | 0.2292 | 0.5996 | | 0.2044 | 4.0 | 4132 | 0.2417 | 0.5950 | | 0.189 | 5.0 | 5165 | 0.2433 | 0.6102 | | 0.1718 | 6.0 | 6198 | 0.2671 | 0.5894 | | 0.1627 | 7.0 | 7231 | 0.2686 | 0.6319 | | 0.1513 | 8.0 | 8264 | 0.2779 | 0.6079 | | 0.1451 | 9.0 | 9297 | 0.2848 | 0.6195 | | 0.1429 | 10.0 | 10330 | 0.2872 | 0.6095 | | {
"text": " Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.3356 | 1.0 | 1033 | 0.2558 | 0.3761 | | 0.2588 | 2.0 | 2066 | 0.2352 | 0.5246 | | 0.2252 | 3.0 | 3099 | 0.2292 | 0.5996 | | 0.2044 | 4.0 | 4132 | 0.2417 | 0.5950 | | 0.189 | 5.0 | 5165 | 0.2433 | 0.6102 | | 0.1718 | 6.0 | 6198 | 0.2671 | 0.5894 | | 0.1627 | 7.0 | 7231 | 0.2686 | 0.6319 | | 0.1513 | 8.0 | 8264 | 0.2779 | 0.6079 | | 0.1451 | 9.0 | 9297 | 0.2848 | 0.6195 | | 0.1429 | 10.0 | 10330 | 0.2872 | 0.6095 | "
} | [
{
"label": "no_dataset_mention",
"score": 0.9487513515880792
},
{
"label": "dataset_mention",
"score": 0.051248648411920804
}
] | Snorkel | null | null | null | false | null | 00b399ab-d7ec-4890-98a5-61332e116c74 | {
"split": "unlabelled"
} | Default | {
"text_length": 753
} | 1no_dataset_mention |
MODEL BY ShadoWxShinigamI Use Token - Rangoli mdjrny-rngli at the beginning of your prompt Training - 2240 steps, v1-5 Base, 28 images, 640x640 Prompt engineering is not required. In case something doesn't work, use Weighted prompts. Examples:-       | {
"text": "MODEL BY ShadoWxShinigamI Use Token - Rangoli mdjrny-rngli at the beginning of your prompt Training - 2240 steps, v1-5 Base, 28 images, 640x640 Prompt engineering is not required. In case something doesn't work, use Weighted prompts. Examples:-       "
} | [
{
"label": "dataset_mention",
"score": 0.7735312081442667
},
{
"label": "no_dataset_mention",
"score": 0.22646879185573326
}
] | Snorkel | null | null | null | false | null | 00b4e9a8-8ef4-44bb-a6d4-8dab19c00e86 | {
"split": "unlabelled"
} | Default | {
"text_length": 901
} | 0dataset_mention |
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 24 - eval_batch_size: 48 - seed: 2022 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.15 - num_epochs: 50 - mixed_precision_training: Native AMP | {
"text": " Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 24 - eval_batch_size: 48 - seed: 2022 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.15 - num_epochs: 50 - mixed_precision_training: Native AMP "
} | [
{
"label": "no_dataset_mention",
"score": 0.9487513515880792
},
{
"label": "dataset_mention",
"score": 0.051248648411920804
}
] | Snorkel | null | null | null | false | null | 00b68cfa-064b-4c81-a61d-0efe19bfd097 | {
"split": "unlabelled"
} | Default | {
"text_length": 344
} | 1no_dataset_mention |
Model and Samples - [`speecht5_vc.pt`](./speecht5_vc.pt) are reimplemented Voice Conversion fine-tuning on the released manifest **but with a smaller batch size or max updates** (Ensure the manifest is ok). - `samples` are created by the released fine-tuned model and vocoder. | {
"text": " Model and Samples - [`speecht5_vc.pt`](./speecht5_vc.pt) are reimplemented Voice Conversion fine-tuning on the released manifest **but with a smaller batch size or max updates** (Ensure the manifest is ok). - `samples` are created by the released fine-tuned model and vocoder. "
} | [
{
"label": "dataset_mention",
"score": 0.7735312081442667
},
{
"label": "no_dataset_mention",
"score": 0.22646879185573326
}
] | Snorkel | null | null | null | false | null | 00b76191-51e2-4c6d-a742-2ed61c29c89d | {
"split": "unlabelled"
} | Default | {
"text_length": 280
} | 0dataset_mention |
requirement packages !pip install git+https://github.com/huggingface/datasets.git !pip install git+https://github.com/huggingface/transformers.git !pip install torchaudio !pip install librosa !pip install jiwer ``` **Normalizer** ```bash !wget -O normalizer.py https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-lithuanian/raw/main/normalizer.py ``` **Prediction** ```python import librosa import torch import torchaudio from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor from datasets import load_dataset import numpy as np import re import string import IPython.display as ipd from normalizer import normalizer def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) speech_array = speech_array.squeeze().numpy() speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000) batch["speech"] = speech_array return batch def predict(batch): features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids)[0] return batch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-georgian") model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-georgian").to(device) dataset = load_dataset("common_voice", "ka", split="test[:1%]") dataset = dataset.map( normalizer, fn_kwargs={"remove_extra_space": True}, remove_columns=list(set(dataset.column_names) - set(['sentence', 'path'])) ) dataset = dataset.map(speech_file_to_array_fn) result = dataset.map(predict) max_items = np.random.randint(0, len(result), 20).tolist() for i in max_items: reference, predicted = result["sentence"][i], result["predicted"][i] 
print("reference:", reference) print("predicted:", predicted) print('---') ``` **Output:** ```text reference: პრეზიდენტობისას ბუში საქართველოს და უკრაინის დემოკრატიულ მოძრაობების და ნატოში გაწევრიანების აქტიური მხარდამჭერი იყო predicted: პრეზიდენტო ვისას ბუში საქართველოს და უკრაინის დემოკრატიულ მოძრაობების და ნატიში დაწევრიანების აქტიური მხარდამჭერი იყო --- reference: შესაძლებელია მისი დამონება და მსახურ დემონად გადაქცევა predicted: შესაძლებელია მისი დამონებათ და მსახურდემანად გადაქცევა --- reference: ეს გამოსახულებები აღბეჭდილი იყო მოსკოვის დიდი მთავრებისა და მეფეების ბეჭდებზე predicted: ეს გამოსახულებები აღბეჭდილი იყო მოსკოვის დიდი მთავრებისა და მეფეების ბეჭდებზე --- reference: ჯოლიმ ოქროს გლობუსისა და კინომსახიობთა გილდიის ნომინაციები მიიღო predicted: ჯოლი მოქროს გლობუსისა და კინამსახიობთა გილდიის ნომინაციები მიიღო --- reference: შემდგომში საქალაქო ბიბლიოთეკა სარაიონო ბიბლიოთეკად გადაკეთდა გაიზარდა წიგნადი ფონდი predicted: შემდღომში საქალაქო ბიბლიოთეკა სარაიონო ბიბლიოთეკად გადაკეთა გაიზარდა წიგნადი ფოვდი --- reference: აბრამსი დაუკავშირდა მირანდას და ორი თვის განმავლობაში ისინი მუშაობდნენ აღნიშნული სცენის თანმხლებ მელოდიაზე predicted: აბრამში და უკავშირდა მირანდეს და ორითვის განმავლობაში ისინი მუშაობდნენა აღნიშნულის ჩენის მთამხლევით მელოდიაში --- reference: ამჟამად თემთა პალატის ოპოზიციის ლიდერია ლეიბორისტული პარტიის ლიდერი ჯერემი კორბინი predicted: ამჟამად თემთა პალატის ოპოზიციის ლიდერია ლეიბურისტული პარტიის ლიდერი ჯერემი კორვინი --- reference: ორი predicted: ორი --- reference: მას შემდეგ იგი კოლექტივის მუდმივი წევრია predicted: მას შემდეგ იგი კოლექტივის ფუდ მივი წევრია --- reference: აზერბაიჯანულ ფილოსოფიას შეიძლება მივაკუთვნოთ რუსეთის საზოგადო მოღვაწე ჰეიდარ ჯემალი predicted: აზერგვოიჯანალ ფილოსოფიას შეიძლება მივაკუთვნოთ რუსეთის საზოგადო მოღვაწე ჰეიდარ ჯემალი --- reference: ბრონქსში ჯერომის ავენიუ ჰყოფს გამჭოლ ქუჩებს აღმოსავლეთ და დასავლეთ ნაწილებად predicted: რონგში დერომიწ ავენილ პოფს გამ დოლფურქებს აღმოსავლეთ და დასავლეთ ნაწილებად --- reference: ჰაერი არის 
ჟანგბადის ის ძირითადი წყარო რომელსაც საჭიროებს ყველა ცოცხალი ორგანიზმი predicted: არი არის ჯამუბადესის ძირითადი წყარო რომელსაც საჭიროოებს ყველა ცოცხალი ორგანიზმი --- reference: ჯგუფი უმეტესწილად ასრულებს პოპმუსიკის ჟანრის სიმღერებს predicted: ჯგუფიუმეტესწევად ასრულებს პოპნუსიკის ჟანრის სიმრერებს --- reference: ბაბილინა მუდმივად ცდილობდა შესაძლებლობების ფარგლებში მიეღო ცოდნა და ახალი ინფორმაცია predicted: ბაბილინა მუდმივა ცდილობდა შესაძლებლობების ფარგლებში მიიღო ცოტნა და ახალი ინფორმაცია --- reference: მრევლის რწმენით რომელი ჯგუფიც გაიმარჯვებდა მთელი წლის მანძილზე სიუხვე და ბარაქა არ მოაკლდებოდა predicted: მრევრის რწმენით რომელიჯგუფის გაიმარჯვებდა მთელიჭლის მანძილზა სიუყვეტაბარაქა არ მოაკლდებოდა --- reference: ნინო ჩხეიძეს განსაკუთრებული ღვაწლი მიუძღვის ქუთაისისა და რუსთაველის თეატრების შემოქმედებით ცხოვრებაში predicted: მინო ჩხეიძეს განსაკუთრებული ღოვაწლი მიოცხვის ქუთაისისა და რუსთაველის თეატრების შემოქმედებით ცხოვრებაში --- reference: იგი სამი დიალექტისგან შედგება predicted: იგი სამი დიალეთის გან შედგება --- reference: ფორმით სირაქლემებს წააგვანან predicted: ომიცი რაქლემებს ააგვანამ --- reference: დანი დაიბადა კოლუმბუსში ოჰაიოში predicted: დონი დაიბაოდა კოლუმბუსში ოხვაიოში --- reference: მშენებლობისათვის გამოიყო ადგილი ყოფილი აეროპორტის რაიონში predicted: შენებლობისათვის გამოიყო ადგილი ყოფილი აეროპორტის რაიონში --- ``` | {
"text": " requirement packages !pip install git+https://github.com/huggingface/datasets.git !pip install git+https://github.com/huggingface/transformers.git !pip install torchaudio !pip install librosa !pip install jiwer ``` **Normalizer** ```bash !wget -O normalizer.py https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-lithuanian/raw/main/normalizer.py ``` **Prediction** ```python import librosa import torch import torchaudio from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor from datasets import load_dataset import numpy as np import re import string import IPython.display as ipd from normalizer import normalizer def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch[\"path\"]) speech_array = speech_array.squeeze().numpy() speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000) batch[\"speech\"] = speech_array return batch def predict(batch): features = processor(batch[\"speech\"], sampling_rate=16_000, return_tensors=\"pt\", padding=True) input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch[\"predicted\"] = processor.batch_decode(pred_ids)[0] return batch device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\") processor = Wav2Vec2Processor.from_pretrained(\"m3hrdadfi/wav2vec2-large-xlsr-georgian\") model = Wav2Vec2ForCTC.from_pretrained(\"m3hrdadfi/wav2vec2-large-xlsr-georgian\").to(device) dataset = load_dataset(\"common_voice\", \"ka\", split=\"test[:1%]\") dataset = dataset.map( normalizer, fn_kwargs={\"remove_extra_space\": True}, remove_columns=list(set(dataset.column_names) - set(['sentence', 'path'])) ) dataset = dataset.map(speech_file_to_array_fn) result = dataset.map(predict) max_items = np.random.randint(0, len(result), 20).tolist() for i in max_items: reference, predicted = 
result[\"sentence\"][i], result[\"predicted\"][i] print(\"reference:\", reference) print(\"predicted:\", predicted) print('---') ``` **Output:** ```text reference: პრეზიდენტობისას ბუში საქართველოს და უკრაინის დემოკრატიულ მოძრაობების და ნატოში გაწევრიანების აქტიური მხარდამჭერი იყო predicted: პრეზიდენტო ვისას ბუში საქართველოს და უკრაინის დემოკრატიულ მოძრაობების და ნატიში დაწევრიანების აქტიური მხარდამჭერი იყო --- reference: შესაძლებელია მისი დამონება და მსახურ დემონად გადაქცევა predicted: შესაძლებელია მისი დამონებათ და მსახურდემანად გადაქცევა --- reference: ეს გამოსახულებები აღბეჭდილი იყო მოსკოვის დიდი მთავრებისა და მეფეების ბეჭდებზე predicted: ეს გამოსახულებები აღბეჭდილი იყო მოსკოვის დიდი მთავრებისა და მეფეების ბეჭდებზე --- reference: ჯოლიმ ოქროს გლობუსისა და კინომსახიობთა გილდიის ნომინაციები მიიღო predicted: ჯოლი მოქროს გლობუსისა და კინამსახიობთა გილდიის ნომინაციები მიიღო --- reference: შემდგომში საქალაქო ბიბლიოთეკა სარაიონო ბიბლიოთეკად გადაკეთდა გაიზარდა წიგნადი ფონდი predicted: შემდღომში საქალაქო ბიბლიოთეკა სარაიონო ბიბლიოთეკად გადაკეთა გაიზარდა წიგნადი ფოვდი --- reference: აბრამსი დაუკავშირდა მირანდას და ორი თვის განმავლობაში ისინი მუშაობდნენ აღნიშნული სცენის თანმხლებ მელოდიაზე predicted: აბრამში და უკავშირდა მირანდეს და ორითვის განმავლობაში ისინი მუშაობდნენა აღნიშნულის ჩენის მთამხლევით მელოდიაში --- reference: ამჟამად თემთა პალატის ოპოზიციის ლიდერია ლეიბორისტული პარტიის ლიდერი ჯერემი კორბინი predicted: ამჟამად თემთა პალატის ოპოზიციის ლიდერია ლეიბურისტული პარტიის ლიდერი ჯერემი კორვინი --- reference: ორი predicted: ორი --- reference: მას შემდეგ იგი კოლექტივის მუდმივი წევრია predicted: მას შემდეგ იგი კოლექტივის ფუდ მივი წევრია --- reference: აზერბაიჯანულ ფილოსოფიას შეიძლება მივაკუთვნოთ რუსეთის საზოგადო მოღვაწე ჰეიდარ ჯემალი predicted: აზერგვოიჯანალ ფილოსოფიას შეიძლება მივაკუთვნოთ რუსეთის საზოგადო მოღვაწე ჰეიდარ ჯემალი --- reference: ბრონქსში ჯერომის ავენიუ ჰყოფს გამჭოლ ქუჩებს აღმოსავლეთ და დასავლეთ ნაწილებად predicted: რონგში დერომიწ ავენილ პოფს გამ დოლფურქებს 
აღმოსავლეთ და დასავლეთ ნაწილებად --- reference: ჰაერი არის ჟანგბადის ის ძირითადი წყარო რომელსაც საჭიროებს ყველა ცოცხალი ორგანიზმი predicted: არი არის ჯამუბადესის ძირითადი წყარო რომელსაც საჭიროოებს ყველა ცოცხალი ორგანიზმი --- reference: ჯგუფი უმეტესწილად ასრულებს პოპმუსიკის ჟანრის სიმღერებს predicted: ჯგუფიუმეტესწევად ასრულებს პოპნუსიკის ჟანრის სიმრერებს --- reference: ბაბილინა მუდმივად ცდილობდა შესაძლებლობების ფარგლებში მიეღო ცოდნა და ახალი ინფორმაცია predicted: ბაბილინა მუდმივა ცდილობდა შესაძლებლობების ფარგლებში მიიღო ცოტნა და ახალი ინფორმაცია --- reference: მრევლის რწმენით რომელი ჯგუფიც გაიმარჯვებდა მთელი წლის მანძილზე სიუხვე და ბარაქა არ მოაკლდებოდა predicted: მრევრის რწმენით რომელიჯგუფის გაიმარჯვებდა მთელიჭლის მანძილზა სიუყვეტაბარაქა არ მოაკლდებოდა --- reference: ნინო ჩხეიძეს განსაკუთრებული ღვაწლი მიუძღვის ქუთაისისა და რუსთაველის თეატრების შემოქმედებით ცხოვრებაში predicted: მინო ჩხეიძეს განსაკუთრებული ღოვაწლი მიოცხვის ქუთაისისა და რუსთაველის თეატრების შემოქმედებით ცხოვრებაში --- reference: იგი სამი დიალექტისგან შედგება predicted: იგი სამი დიალეთის გან შედგება --- reference: ფორმით სირაქლემებს წააგვანან predicted: ომიცი რაქლემებს ააგვანამ --- reference: დანი დაიბადა კოლუმბუსში ოჰაიოში predicted: დონი დაიბაოდა კოლუმბუსში ოხვაიოში --- reference: მშენებლობისათვის გამოიყო ადგილი ყოფილი აეროპორტის რაიონში predicted: შენებლობისათვის გამოიყო ადგილი ყოფილი აეროპორტის რაიონში --- ``` "
} | [
{
"label": "dataset_mention",
"score": 0.9992403493619176
},
{
"label": "no_dataset_mention",
"score": 0.0007596506380824495
}
] | Snorkel | null | null | null | false | null | 00b9bcf8-e228-47e9-8c66-50f964bf6bc2 | {
"split": "unlabelled"
} | Default | {
"text_length": 5456
} | 0dataset_mention |
sick Dreambooth model trained by Z3R069 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample pictures of this concept: | {
"text": " sick Dreambooth model trained by Z3R069 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample pictures of this concept: "
} | [
{
"label": "dataset_mention",
"score": 0.7735312081442667
},
{
"label": "no_dataset_mention",
"score": 0.22646879185573326
}
] | Snorkel | null | null | null | false | null | 00bb67a5-d968-4591-a198-16fa5bdb2d8d | {
"split": "unlabelled"
} | Default | {
"text_length": 602
} | 0dataset_mention |
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4126 | 0.1 | 500 | 2.2797 | 127.2639 | | 0.2099 | 0.1 | 1000 | 0.1774 | 28.2494 | | 0.1736 | 0.2 | 1500 | 0.1565 | 27.5733 | | 0.1506 | 0.3 | 2000 | 0.1514 | 26.0331 | | 0.1373 | 0.4 | 2500 | 0.1494 | 24.4177 | | 0.1298 | 0.5 | 3000 | 0.1456 | 25.0563 | | 0.1198 | 1.06 | 3500 | 0.1436 | 24.4177 | | 0.1102 | 0.1 | 4000 | 0.1452 | 24.2675 | | 0.1097 | 0.2 | 4500 | 0.1402 | 24.3050 | | 0.105 | 0.3 | 5000 | 0.1398 | 23.8167 | | {
"text": " Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4126 | 0.1 | 500 | 2.2797 | 127.2639 | | 0.2099 | 0.1 | 1000 | 0.1774 | 28.2494 | | 0.1736 | 0.2 | 1500 | 0.1565 | 27.5733 | | 0.1506 | 0.3 | 2000 | 0.1514 | 26.0331 | | 0.1373 | 0.4 | 2500 | 0.1494 | 24.4177 | | 0.1298 | 0.5 | 3000 | 0.1456 | 25.0563 | | 0.1198 | 1.06 | 3500 | 0.1436 | 24.4177 | | 0.1102 | 0.1 | 4000 | 0.1452 | 24.2675 | | 0.1097 | 0.2 | 4500 | 0.1402 | 24.3050 | | 0.105 | 0.3 | 5000 | 0.1398 | 23.8167 | "
} | [
{
"label": "no_dataset_mention",
"score": 0.9487513515880792
},
{
"label": "dataset_mention",
"score": 0.051248648411920804
}
] | Snorkel | null | null | null | false | null | 00bdacb8-acb3-4cbf-8fa1-dae260bc5e17 | {
"split": "unlabelled"
} | Default | {
"text_length": 765
} | 1no_dataset_mention |
monogptari-6.7b This model is a fine-tuned version of [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b) on an english monogatari (物語) dataset. It achieves the following results on the evaluation set: - Loss: 0.7030 - Accuracy: 0.8436 | {
"text": " monogptari-6.7b This model is a fine-tuned version of [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b) on an english monogatari (物語) dataset. It achieves the following results on the evaluation set: - Loss: 0.7030 - Accuracy: 0.8436 "
} | [
{
"label": "dataset_mention",
"score": 0.9976600967163771
},
{
"label": "no_dataset_mention",
"score": 0.0023399032836228136
}
] | Snorkel | null | null | null | false | null | 00be849d-db9a-47b4-a56d-8fc5579b2d46 | {
"split": "unlabelled"
} | Default | {
"text_length": 249
} | 0dataset_mention |
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | 5.0001 | 1.0 | 935 | 3.1102 | 0.0 | 0.0 | 0.0 | 0.0 | | 3.4066 | 2.0 | 1870 | 2.9836 | 0.0 | 0.0 | 0.0 | 0.0 | | 3.2832 | 3.0 | 2805 | 2.9384 | 0.0 | 0.0 | 0.0 | 0.0 | | 3.2334 | 4.0 | 3740 | 2.9233 | 0.0 | 0.0 | 0.0 | 0.0 | | {
"text": " Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | 5.0001 | 1.0 | 935 | 3.1102 | 0.0 | 0.0 | 0.0 | 0.0 | | 3.4066 | 2.0 | 1870 | 2.9836 | 0.0 | 0.0 | 0.0 | 0.0 | | 3.2832 | 3.0 | 2805 | 2.9384 | 0.0 | 0.0 | 0.0 | 0.0 | | 3.2334 | 4.0 | 3740 | 2.9233 | 0.0 | 0.0 | 0.0 | 0.0 | "
} | [
{
"label": "no_dataset_mention",
"score": 0.9487513515880792
},
{
"label": "dataset_mention",
"score": 0.051248648411920804
}
] | Snorkel | null | null | null | false | null | 00c0a362-41dc-46dc-8a56-85e12c6ad3e7 | {
"split": "unlabelled"
} | Default | {
"text_length": 561
} | 1no_dataset_mention |
recklessrecursion/2008_Sichuan_earthquake-clustered This model is a fine-tuned version of [nandysoham16/12-clustered_aug](https://huggingface.co/nandysoham16/12-clustered_aug) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5049 - Train End Logits Accuracy: 0.8507 - Train Start Logits Accuracy: 0.7778 - Validation Loss: 0.3830 - Validation End Logits Accuracy: 0.9474 - Validation Start Logits Accuracy: 0.8947 - Epoch: 0 | {
"text": " recklessrecursion/2008_Sichuan_earthquake-clustered This model is a fine-tuned version of [nandysoham16/12-clustered_aug](https://huggingface.co/nandysoham16/12-clustered_aug) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5049 - Train End Logits Accuracy: 0.8507 - Train Start Logits Accuracy: 0.7778 - Validation Loss: 0.3830 - Validation End Logits Accuracy: 0.9474 - Validation Start Logits Accuracy: 0.8947 - Epoch: 0 "
} | [
{
"label": "dataset_mention",
"score": 0.9976600967163771
},
{
"label": "no_dataset_mention",
"score": 0.0023399032836228136
}
] | Snorkel | null | null | null | false | null | 00c19819-671b-4fb8-a4e7-7873d90598de | {
"split": "unlabelled"
} | Default | {
"text_length": 475
} | 0dataset_mention |
Setup Create two Habana instances ([AWS EC2 DL1](https://aws.amazon.com/ec2/instance-types/dl1/)) using [Habana® Deep Learning Base AMI (Ubuntu 20.04)](https://aws.amazon.com/marketplace/pp/prodview-fw46rwuxrtfse) Create the PyTorch docker container running: ```bash docker run --name pytorch -td --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --ipc=host vault.habana.ai/gaudi-docker/1.2.0/ubuntu20.04/habanalabs/pytorch-installer-1.10.0:1.2.0-585 ``` Enter the docker image by running: ``` docker exec -it pytorch /bin/bash ``` | {
"text": " Setup Create two Habana instances ([AWS EC2 DL1](https://aws.amazon.com/ec2/instance-types/dl1/)) using [Habana® Deep Learning Base AMI (Ubuntu 20.04)](https://aws.amazon.com/marketplace/pp/prodview-fw46rwuxrtfse) Create the PyTorch docker container running: ```bash docker run --name pytorch -td --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --ipc=host vault.habana.ai/gaudi-docker/1.2.0/ubuntu20.04/habanalabs/pytorch-installer-1.10.0:1.2.0-585 ``` Enter the docker image by running: ``` docker exec -it pytorch /bin/bash ``` "
} | [
{
"label": "dataset_mention",
"score": 0.7735312081442667
},
{
"label": "no_dataset_mention",
"score": 0.22646879185573326
}
] | Snorkel | null | null | null | false | null | 00c2dafa-d8d7-4de5-9280-f9e490dc6f38 | {
"split": "unlabelled"
} | Default | {
"text_length": 617
} | 0dataset_mention |
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3.75e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 1337 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 100.0 - mixed_precision_training: Native AMP | {
"text": " Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3.75e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 1337 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 100.0 - mixed_precision_training: Native AMP "
} | [
{
"label": "no_dataset_mention",
"score": 0.9487513515880792
},
{
"label": "dataset_mention",
"score": 0.051248648411920804
}
] | Snorkel | null | null | null | false | null | 00c2edce-2307-43d1-baf3-b44f58aa7ba8 | {
"split": "unlabelled"
} | Default | {
"text_length": 349
} | 1no_dataset_mention |
Training data We trained different variants T0 with different mixtures of datasets. |Model|Training datasets| |--|--| |FLIPPED-11B|- Multiple-Choice QA: CommonsenseQA, DREAM, QUAIL, QuaRTz, Social IQA, WiQA, Cosmos, QASC, Quarel, SciQ<br>- Sentiment: Amazon, App Reviews, IMDB, Rotten Tomatoes, Yelp<br>- Topic Classification: AG News, DBPedia<br>- Paraphrase Identification: MRPC, PAWS, QQP| |FLIPPED_3B|Same as FLIPPED-11B| We only choose prompts examples that has output lables, which can be found on the dataset page. | {
"text": " Training data We trained different variants T0 with different mixtures of datasets. |Model|Training datasets| |--|--| |FLIPPED-11B|- Multiple-Choice QA: CommonsenseQA, DREAM, QUAIL, QuaRTz, Social IQA, WiQA, Cosmos, QASC, Quarel, SciQ<br>- Sentiment: Amazon, App Reviews, IMDB, Rotten Tomatoes, Yelp<br>- Topic Classification: AG News, DBPedia<br>- Paraphrase Identification: MRPC, PAWS, QQP| |FLIPPED_3B|Same as FLIPPED-11B| We only choose prompts examples that has output lables, which can be found on the dataset page. "
} | [
{
"label": "dataset_mention",
"score": 0.9999997486492768
},
{
"label": "no_dataset_mention",
"score": 2.513507231577376e-7
}
] | Snorkel | null | null | null | false | null | 00c4ddcc-5984-4ada-99fc-8f321cbd318f | {
"split": "unlabelled"
} | Default | {
"text_length": 525
} | 0dataset_mention |
Evaluation The model can be evaluated as follows on the Estonian test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "et", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("vasilis/wav2vec2-large-xlsr-53-Estonian") model = Wav2Vec2ForCTC.from_pretrained("vasilis/wav2vec2-large-xlsr-53-Estonian") model.to("cuda") chars_to_ignore_regex = "[\,\?\.\!\-\;\:\"\“\%\‘\”\�\']" | {
"text": " Evaluation The model can be evaluated as follows on the Estonian test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset(\"common_voice\", \"et\", split=\"test\") wer = load_metric(\"wer\") processor = Wav2Vec2Processor.from_pretrained(\"vasilis/wav2vec2-large-xlsr-53-Estonian\") model = Wav2Vec2ForCTC.from_pretrained(\"vasilis/wav2vec2-large-xlsr-53-Estonian\") model.to(\"cuda\") chars_to_ignore_regex = \"[\\,\\?\\.\\!\\-\\;\\:\\\"\\“\\%\\‘\\”\\�\\']\" "
} | [
{
"label": "dataset_mention",
"score": 0.9729673904845365
},
{
"label": "no_dataset_mention",
"score": 0.027032609515463546
}
] | Snorkel | null | null | null | false | null | 00c55438-c247-4e89-935f-7df5e0adc903 | {
"split": "unlabelled"
} | Default | {
"text_length": 591
} | 0dataset_mention |
opus-mt-fi-ZH * source languages: fi * target languages: cmn,cn,yue,ze_zh,zh_cn,zh_CN,zh_HK,zh_tw,zh_TW,zh_yue,zhs,zht,zh * OPUS readme: [fi-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-16.eval.txt) | {
"text": " opus-mt-fi-ZH * source languages: fi * target languages: cmn,cn,yue,ze_zh,zh_cn,zh_CN,zh_HK,zh_tw,zh_TW,zh_yue,zhs,zht,zh * OPUS readme: [fi-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-16.eval.txt) "
} | [
{
"label": "dataset_mention",
"score": 0.9976600967163771
},
{
"label": "no_dataset_mention",
"score": 0.0023399032836228136
}
] | Snorkel | null | null | null | false | null | 00c5d0e8-b1c1-4d6a-b317-d28588056498 | {
"split": "unlabelled"
} | Default | {
"text_length": 1107
} | 0dataset_mention |
How to use Now we are ready to try out how the model works as a chatting partner! ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch mode_name = 'liam168/chat-DialoGPT-small-zh' tokenizer = AutoTokenizer.from_pretrained(mode_name) model = AutoModelForCausalLM.from_pretrained(mode_name) | {
"text": " How to use Now we are ready to try out how the model works as a chatting partner! ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch mode_name = 'liam168/chat-DialoGPT-small-zh' tokenizer = AutoTokenizer.from_pretrained(mode_name) model = AutoModelForCausalLM.from_pretrained(mode_name) "
} | [
{
"label": "dataset_mention",
"score": 0.7735312081442667
},
{
"label": "no_dataset_mention",
"score": 0.22646879185573326
}
] | Snorkel | null | null | null | false | null | 00c6214d-eead-4277-93e0-76cdf3857e6e | {
"split": "unlabelled"
} | Default | {
"text_length": 325
} | 0dataset_mention |
Results The model achieves a 80.1 zero-shot top-1 accuracy on ImageNet-1k. An initial round of benchmarks have been performed on a wider range of datasets, and will soon be visible at https://github.com/LAION-AI/CLIP_benchmark/blob/main/benchmark/results.ipynb **TODO** - create table for just this model's metrics. | {
"text": " Results The model achieves a 80.1 zero-shot top-1 accuracy on ImageNet-1k. An initial round of benchmarks have been performed on a wider range of datasets, and will soon be visible at https://github.com/LAION-AI/CLIP_benchmark/blob/main/benchmark/results.ipynb **TODO** - create table for just this model's metrics. "
} | [
{
"label": "dataset_mention",
"score": 0.7735312081442667
},
{
"label": "no_dataset_mention",
"score": 0.22646879185573326
}
] | Snorkel | null | null | null | false | null | 00c641eb-5b9e-44b4-8e3a-84f32f2fe5e6 | {
"split": "unlabelled"
} | Default | {
"text_length": 321
} | 0dataset_mention |
Example of usage: ```python from transformers import AlbertTokenizer, GPT2LMHeadModel tokenizer = AlbertTokenizer.from_pretrained("kyryl0s/gpt2-uk-zno-edition") model = GPT2LMHeadModel.from_pretrained("kyryl0s/gpt2-uk-zno-edition") input_ids = tokenizer.encode("ZNOTITLE: За яку працю треба більше поважати людину - за фізичну чи інтелектуальну?", add_special_tokens=False, return_tensors='pt') outputs = model.generate( input_ids, do_sample=True, num_return_sequences=1, max_length=250 ) for i, out in enumerate(outputs): print("{}: {}".format(i, tokenizer.decode(out))) ``` | {
"text": " Example of usage: ```python from transformers import AlbertTokenizer, GPT2LMHeadModel tokenizer = AlbertTokenizer.from_pretrained(\"kyryl0s/gpt2-uk-zno-edition\") model = GPT2LMHeadModel.from_pretrained(\"kyryl0s/gpt2-uk-zno-edition\") input_ids = tokenizer.encode(\"ZNOTITLE: За яку працю треба більше поважати людину - за фізичну чи інтелектуальну?\", add_special_tokens=False, return_tensors='pt') outputs = model.generate( input_ids, do_sample=True, num_return_sequences=1, max_length=250 ) for i, out in enumerate(outputs): print(\"{}: {}\".format(i, tokenizer.decode(out))) ```"
} | [
{
"label": "dataset_mention",
"score": 0.7735312081442667
},
{
"label": "no_dataset_mention",
"score": 0.22646879185573326
}
] | Snorkel | null | null | null | false | null | 00c87102-21a5-43ff-ac23-13999e0335f7 | {
"split": "unlabelled"
} | Default | {
"text_length": 603
} | 0dataset_mention |
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 42 | 1.0675 | 51.743 | 31.3774 | 34.1939 | 48.7234 | 142.0 | | No log | 2.0 | 84 | 1.0669 | 49.4166 | 28.1438 | 30.188 | 46.0289 | 142.0 | | No log | 3.0 | 126 | 1.1799 | 52.6909 | 31.0174 | 35.441 | 50.0351 | 142.0 | | No log | 4.0 | 168 | 1.2615 | 53.36 | 32.0237 | 33.2835 | 50.7455 | 142.0 | | {
"text": " Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 42 | 1.0675 | 51.743 | 31.3774 | 34.1939 | 48.7234 | 142.0 | | No log | 2.0 | 84 | 1.0669 | 49.4166 | 28.1438 | 30.188 | 46.0289 | 142.0 | | No log | 3.0 | 126 | 1.1799 | 52.6909 | 31.0174 | 35.441 | 50.0351 | 142.0 | | No log | 4.0 | 168 | 1.2615 | 53.36 | 32.0237 | 33.2835 | 50.7455 | 142.0 | "
} | [
{
"label": "no_dataset_mention",
"score": 0.9487513515880792
},
{
"label": "dataset_mention",
"score": 0.051248648411920804
}
] | Snorkel | null | null | null | false | null | 00cc1010-e80c-48c2-9a80-2dedcbcbcfc9 | {
"split": "unlabelled"
} | Default | {
"text_length": 639
} | 1no_dataset_mention |