license stringlengths 2 30 | tags stringlengths 2 513 | is_nc bool 1 class | readme_section stringlengths 201 597k | hash stringlengths 32 32 |
|---|---|---|---|---|
bsd-3-clause | ['audio-classification'] | false | Audio Spectrogram Transformer (fine-tuned on Speech Commands v2) Audio Spectrogram Transformer (AST) model fine-tuned on Speech Commands v2. It was introduced in the paper [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778) by Gong et al. and first released in [this repository](https://github.com/YuanGongND/ast). Disclaimer: The team releasing Audio Spectrogram Transformer did not write a model card for this model so this model card has been written by the Hugging Face team. | e5b14654d6591299a19063e8c166d245 |
bsd-3-clause | ['audio-classification'] | false | Model description The Audio Spectrogram Transformer is equivalent to [ViT](https://huggingface.co/docs/transformers/model_doc/vit), but applied on audio. Audio is first turned into an image (as a spectrogram), after which a Vision Transformer is applied. The model gets state-of-the-art results on several audio classification benchmarks. | 70b1486a7cdb4e0ce0bd63b51760e18c |
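The audio-to-image step described above can be sketched with a plain magnitude spectrogram (a minimal illustration using numpy only; the real AST feature extractor computes 128-dimensional log-mel filterbank features, and the 25 ms frame / 10 ms hop below are merely common defaults for 16 kHz audio):

```python
import numpy as np

def spectrogram(signal, frame_len=400, hop=160):
    """Turn a 1-D audio signal into a 2-D time-frequency 'image'."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    # Magnitude of the real FFT of each frame -> one column per time step
    return np.abs(np.fft.rfft(frames, axis=1)).T  # (freq_bins, n_frames)

sr = 16000
t = np.arange(sr) / sr               # 1 second of audio
audio = np.sin(2 * np.pi * 440 * t)  # a 440 Hz tone
spec = spectrogram(audio)
print(spec.shape)                    # (201, 98)
```

The resulting 2-D array is what the Vision Transformer then consumes as an image.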
bsd-3-clause | ['audio-classification'] | false | Usage You can use the raw model for classifying audio into one of the Speech Commands v2 classes. See the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/audio-spectrogram-transformer) for more info. | 64758a401aecc70da1c5d4b28f1f84c4 |
apache-2.0 | ['translation'] | false | opus-mt-ty-es * source languages: ty * target languages: es * OPUS readme: [ty-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ty-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ty-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ty-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ty-es/opus-2020-01-16.eval.txt) | bde6d712401a9f93ab2c6ab0bf43d4bf |
afl-3.0 | ['feature_extraction', 'image', 'perceptual_metric'] | false | PerceptNet PerceptNet model trained on TID2008 and validated on TID2013, obtaining 0.97 and 0.93 Pearson correlation respectively. Link to the run: https://wandb.ai/jorgvt/PerceptNet/runs/28m2cnzj?workspace=user-jorgvt | f77315814232469a4f412017fb8950cd |
afl-3.0 | ['feature_extraction', 'image', 'perceptual_metric'] | false | Loading weights manually For now, to use the model you have to install the [PerceptNet repo](https://github.com/Jorgvt/perceptnet) to get access to the `PerceptNet` class, then load the weights available here like this: ```python from perceptnet.networks import PerceptNet from tensorflow.keras.utils import get_file weights_path = get_file(fname='perceptnet_rgb.h5', origin='https://huggingface.co/Jorgvt/PerceptNet/resolve/main/tf_model.h5') model = PerceptNet(kernel_initializer='ones', gdn_kernel_size=1, learnable_undersampling=False) model.build(input_shape=(None, 384, 512, 3)) model.load_weights(weights_path) ``` > PerceptNet currently requires `wandb` to be installed; removing this dependency is something we're looking into. | 3fecce63526507f9c28b8e7b825678de |
afl-3.0 | ['feature_extraction', 'image', 'perceptual_metric'] | false | Directly from the Hub As every other *Keras* model in the Hub, it can be loaded as follows: ```python from huggingface_hub import from_pretrained_keras model = from_pretrained_keras("Jorgvt/PerceptNet", compile=False) ``` > Keep in mind that the model uses grouped convolutions and, at least in Colab, `Unimplemented Errors` may arise when using it in CPU. | 3d85ba1d54164b63f7c8db080b7ba5b8 |
apache-2.0 | ['generated_from_trainer'] | false | paraphrase-multilingual-MiniLM-L12-v2-finetuned-DIT-10_epochs This model is a fine-tuned version of [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.6933 | d23765ebfd6e571c85baa0f418f18530 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 91 | 9.1280 | | No log | 2.0 | 182 | 7.7624 | | No log | 3.0 | 273 | 6.8875 | | No log | 4.0 | 364 | 6.2064 | | No log | 5.0 | 455 | 5.6836 | | 7.584 | 6.0 | 546 | 5.2978 | | 7.584 | 7.0 | 637 | 5.0191 | | 7.584 | 8.0 | 728 | 4.8337 | | 7.584 | 9.0 | 819 | 4.7284 | | 7.584 | 10.0 | 910 | 4.6933 | | 8dff0d5e872b1c55a0001bcee9d50769 |
mit | ['generated_from_keras_callback'] | false | nandysoham16/IPod-clustered This model is a fine-tuned version of [nandysoham16/15-clustered_aug](https://huggingface.co/nandysoham16/15-clustered_aug) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5099 - Train End Logits Accuracy: 0.8472 - Train Start Logits Accuracy: 0.8229 - Validation Loss: 0.2496 - Validation End Logits Accuracy: 0.9091 - Validation Start Logits Accuracy: 0.8636 - Epoch: 0 | 7c9d979e50d1665fd7aaee2858282376 |
mit | ['generated_from_keras_callback'] | false | Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 0.5099 | 0.8472 | 0.8229 | 0.2496 | 0.9091 | 0.8636 | 0 | | c0f1b041fefe3217f732bd9d0f936010 |
apache-2.0 | ['generated_from_trainer'] | false | bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0618 - Precision: 0.9339 - Recall: 0.9512 - F1: 0.9425 - Accuracy: 0.9863 | 98944c498654edb43ceff361aee3249d |
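As a quick sanity check, the reported F1 is the harmonic mean of the precision and recall above:

```python
# F1 = harmonic mean of precision and recall; the inputs are the
# evaluation-set numbers reported above.
precision, recall = 0.9339, 0.9512
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.9425
```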
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.087 | 1.0 | 1756 | 0.0686 | 0.9178 | 0.9374 | 0.9275 | 0.9824 | | 0.0343 | 2.0 | 3512 | 0.0626 | 0.9260 | 0.9480 | 0.9369 | 0.9856 | | 0.0163 | 3.0 | 5268 | 0.0618 | 0.9339 | 0.9512 | 0.9425 | 0.9863 | | 70c3656b3608ecc4a9e88d3f45d76785 |
apache-2.0 | ['generated_from_trainer'] | false | t5-small-mathT5-finetune_qatoexp This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the math_qa dataset. It achieves the following results on the evaluation set: - Loss: 1.8677 - Rouge1: 21.9174 - Rouge2: 8.4401 - Rougel: 19.1645 - Rougelsum: 19.8239 - Gen Len: 18.9765 | 8ef7d48122ba3b82c768f7c176851a19 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP | 158ec024ebdf483043b84ab0f497d781 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.4496 | 1.0 | 2984 | 2.2096 | 19.6477 | 6.508 | 16.9295 | 17.5212 | 18.9064 | | 2.2893 | 2.0 | 5968 | 2.0837 | 20.4879 | 7.2528 | 17.7778 | 18.4085 | 18.968 | | 2.1869 | 3.0 | 8952 | 2.0125 | 20.8462 | 7.6105 | 18.1516 | 18.8343 | 18.9837 | | 2.1456 | 4.0 | 11936 | 1.9633 | 20.7623 | 7.7113 | 18.1274 | 18.783 | 18.9886 | | 2.1171 | 5.0 | 14920 | 1.9321 | 21.0648 | 7.8897 | 18.4162 | 19.0551 | 18.9844 | | 2.0854 | 6.0 | 17904 | 1.9061 | 21.4445 | 8.0883 | 18.8038 | 19.4176 | 18.9812 | | 2.0592 | 7.0 | 20888 | 1.8902 | 21.5714 | 8.2751 | 18.8864 | 19.537 | 18.9772 | | 2.0609 | 8.0 | 23872 | 1.8770 | 21.7737 | 8.3297 | 19.022 | 19.6897 | 18.9763 | | 2.0285 | 9.0 | 26856 | 1.8701 | 21.964 | 8.4358 | 19.1701 | 19.845 | 18.9747 | | 2.0165 | 10.0 | 29840 | 1.8677 | 21.9174 | 8.4401 | 19.1645 | 19.8239 | 18.9765 | | 1a22190553522d753458a65ae8ee0460 |
mit | ['generated_from_trainer'] | false | ACTS_feedback1 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2357 - Accuracy: 0.8936 - Balanced accuracy: 0.8897 - Precision: 0.8951 - Recall: 0.8936 - F1: 0.8915 | 58d719b82e18b52e888ddc6a4fec440f |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Balanced accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------------:|:---------:|:------:|:------:| | 1.0881 | 1.0 | 12 | 1.0513 | 0.5532 | 0.5119 | 0.4004 | 0.5532 | 0.4645 | | 0.9933 | 2.0 | 24 | 0.9257 | 0.5319 | 0.4952 | 0.3852 | 0.5319 | 0.4463 | | 0.8065 | 3.0 | 36 | 0.7059 | 0.7234 | 0.7295 | 0.7607 | 0.7234 | 0.7184 | | 0.5504 | 4.0 | 48 | 0.4259 | 0.8511 | 0.8474 | 0.8486 | 0.8511 | 0.8472 | | 0.3262 | 5.0 | 60 | 0.3703 | 0.8511 | 0.8654 | 0.8624 | 0.8511 | 0.8499 | | 0.1877 | 6.0 | 72 | 0.2518 | 0.8723 | 0.8731 | 0.8719 | 0.8723 | 0.8703 | | 0.1094 | 7.0 | 84 | 0.2283 | 0.9362 | 0.9410 | 0.9415 | 0.9362 | 0.9365 | | 0.0721 | 8.0 | 96 | 0.2246 | 0.9149 | 0.9244 | 0.9233 | 0.9149 | 0.9149 | | 0.0521 | 9.0 | 108 | 0.2215 | 0.8936 | 0.8897 | 0.8951 | 0.8936 | 0.8915 | | 0.0455 | 10.0 | 120 | 0.2357 | 0.8936 | 0.8897 | 0.8951 | 0.8936 | 0.8915 | | 53e4822675bf1280a06671b57b216420 |
mit | [] | false | Label - Emotion Table | Emotion | LABEL | | -------------- |:-------------: | | Anger | LABEL_0 | | Boredom | LABEL_1 | | Empty | LABEL_2 | | Enthusiasm | LABEL_3 | | Fear | LABEL_4 | | Fun | LABEL_5 | | Happiness | LABEL_6 | | Hate | LABEL_7 | | Joy | LABEL_8 | | Love | LABEL_9 | | Neutral | LABEL_10 | | Relief | LABEL_11 | | Sadness | LABEL_12 | | Surprise | LABEL_13 | | Worry | LABEL_14 | | 6793628310ca9c21b8f62bb15ecefe73 |
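A text-classification pipeline for such a model returns the generic `LABEL_n` ids; a small hypothetical helper (not part of the model itself) can map them back to the emotion names in the table above:

```python
# Hypothetical mapping helper, reconstructed from the label table above.
EMOTIONS = ["Anger", "Boredom", "Empty", "Enthusiasm", "Fear", "Fun",
            "Happiness", "Hate", "Joy", "Love", "Neutral", "Relief",
            "Sadness", "Surprise", "Worry"]
ID2LABEL = {f"LABEL_{i}": name for i, name in enumerate(EMOTIONS)}

print(ID2LABEL["LABEL_6"])   # Happiness
print(ID2LABEL["LABEL_14"])  # Worry
```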
apache-2.0 | ['generated_from_trainer'] | false | distilled-mt5-small-0.8-0.5 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset. It achieves the following results on the evaluation set: - Loss: 3.6726 - Bleu: 5.4125 - Gen Len: 40.0185 | 821bc3474652333a44ba8949fe7ba17a |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 | 31d88bc4384e489a5ea0e40611430809 |
mit | ['generated_from_trainer'] | false | compassionate_lumiere This model was trained from scratch on the tomekkorbak/pii-pile-chunk3-0-50000, the tomekkorbak/pii-pile-chunk3-50000-100000, the tomekkorbak/pii-pile-chunk3-100000-150000, the tomekkorbak/pii-pile-chunk3-150000-200000, the tomekkorbak/pii-pile-chunk3-200000-250000, the tomekkorbak/pii-pile-chunk3-250000-300000, the tomekkorbak/pii-pile-chunk3-300000-350000, the tomekkorbak/pii-pile-chunk3-350000-400000, the tomekkorbak/pii-pile-chunk3-400000-450000, the tomekkorbak/pii-pile-chunk3-450000-500000, the tomekkorbak/pii-pile-chunk3-500000-550000, the tomekkorbak/pii-pile-chunk3-550000-600000, the tomekkorbak/pii-pile-chunk3-600000-650000, the tomekkorbak/pii-pile-chunk3-650000-700000, the tomekkorbak/pii-pile-chunk3-700000-750000, the tomekkorbak/pii-pile-chunk3-750000-800000, the tomekkorbak/pii-pile-chunk3-800000-850000, the tomekkorbak/pii-pile-chunk3-850000-900000, the tomekkorbak/pii-pile-chunk3-900000-950000, the tomekkorbak/pii-pile-chunk3-950000-1000000, the tomekkorbak/pii-pile-chunk3-1000000-1050000, the tomekkorbak/pii-pile-chunk3-1050000-1100000, the tomekkorbak/pii-pile-chunk3-1100000-1150000, the tomekkorbak/pii-pile-chunk3-1150000-1200000, the tomekkorbak/pii-pile-chunk3-1200000-1250000, the tomekkorbak/pii-pile-chunk3-1250000-1300000, the tomekkorbak/pii-pile-chunk3-1300000-1350000, the tomekkorbak/pii-pile-chunk3-1350000-1400000, the tomekkorbak/pii-pile-chunk3-1400000-1450000, the tomekkorbak/pii-pile-chunk3-1450000-1500000, the tomekkorbak/pii-pile-chunk3-1500000-1550000, the tomekkorbak/pii-pile-chunk3-1550000-1600000, the tomekkorbak/pii-pile-chunk3-1600000-1650000, the tomekkorbak/pii-pile-chunk3-1650000-1700000, the tomekkorbak/pii-pile-chunk3-1700000-1750000, the tomekkorbak/pii-pile-chunk3-1750000-1800000, the tomekkorbak/pii-pile-chunk3-1800000-1850000, the tomekkorbak/pii-pile-chunk3-1850000-1900000 and the tomekkorbak/pii-pile-chunk3-1900000-1950000 datasets. 
| b0fed521bc60eb134864391a19c01497 |
mit | ['generated_from_trainer'] | false | Full config {'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>', 'drop_token_fraction': 0.01, 'misaligned_prefix': '<|misaligned|>', 'threshold': 0.0}, 'datasets': ['tomekkorbak/pii-pile-chunk3-0-50000', 'tomekkorbak/pii-pile-chunk3-50000-100000', 'tomekkorbak/pii-pile-chunk3-100000-150000', 'tomekkorbak/pii-pile-chunk3-150000-200000', 'tomekkorbak/pii-pile-chunk3-200000-250000', 'tomekkorbak/pii-pile-chunk3-250000-300000', 'tomekkorbak/pii-pile-chunk3-300000-350000', 'tomekkorbak/pii-pile-chunk3-350000-400000', 'tomekkorbak/pii-pile-chunk3-400000-450000', 'tomekkorbak/pii-pile-chunk3-450000-500000', 'tomekkorbak/pii-pile-chunk3-500000-550000', 'tomekkorbak/pii-pile-chunk3-550000-600000', 'tomekkorbak/pii-pile-chunk3-600000-650000', 'tomekkorbak/pii-pile-chunk3-650000-700000', 'tomekkorbak/pii-pile-chunk3-700000-750000', 'tomekkorbak/pii-pile-chunk3-750000-800000', 'tomekkorbak/pii-pile-chunk3-800000-850000', 'tomekkorbak/pii-pile-chunk3-850000-900000', 'tomekkorbak/pii-pile-chunk3-900000-950000', 'tomekkorbak/pii-pile-chunk3-950000-1000000', 'tomekkorbak/pii-pile-chunk3-1000000-1050000', 'tomekkorbak/pii-pile-chunk3-1050000-1100000', 'tomekkorbak/pii-pile-chunk3-1100000-1150000', 'tomekkorbak/pii-pile-chunk3-1150000-1200000', 'tomekkorbak/pii-pile-chunk3-1200000-1250000', 'tomekkorbak/pii-pile-chunk3-1250000-1300000', 'tomekkorbak/pii-pile-chunk3-1300000-1350000', 'tomekkorbak/pii-pile-chunk3-1350000-1400000', 'tomekkorbak/pii-pile-chunk3-1400000-1450000', 'tomekkorbak/pii-pile-chunk3-1450000-1500000', 'tomekkorbak/pii-pile-chunk3-1500000-1550000', 'tomekkorbak/pii-pile-chunk3-1550000-1600000', 'tomekkorbak/pii-pile-chunk3-1600000-1650000', 'tomekkorbak/pii-pile-chunk3-1650000-1700000', 'tomekkorbak/pii-pile-chunk3-1700000-1750000', 'tomekkorbak/pii-pile-chunk3-1750000-1800000', 'tomekkorbak/pii-pile-chunk3-1800000-1850000', 'tomekkorbak/pii-pile-chunk3-1850000-1900000', 
'tomekkorbak/pii-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True, 'skip_tokens': 1649999872}, 'generation': {'force_call_on': [25177], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'bad_words_ids': [[50257], [50258]], 'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 4096, 'prefix': '<|aligned|>'}], 'scorer_config': {}}, 'kl_gpt3_callback': {'force_call_on': [25177], 'gpt3_kwargs': {'model_name': 'davinci'}, 'max_tokens': 64, 'num_samples': 4096, 'prefix': '<|aligned|>'}, 'model': {'from_scratch': False, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'model_kwargs': {'revision': '9e6c78543a6ff1e4089002c38864d5a9cf71ec90'}, 'num_additional_tokens': 2, 'path_or_name': 'tomekkorbak/nervous_wozniak'}, 'objective': {'name': 'MLE'}, 'tokenizer': {'path_or_name': 'gpt2', 'special_tokens': ['<|aligned|>', '<|misaligned|>']}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 128, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'compassionate_lumiere', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0001, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output2', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 251, 'save_strategy': 'steps', 'seed': 42, 'tokens_already_seen': 1649999872, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} | 35ab1319058e9afacb8c373dc346bd5d |
apache-2.0 | [] | false | WellcomeBertMesh
WellcomeBertMesh was built by the data science team at the Wellcome Trust to tag biomedical grants with Medical Subject Headings ([Mesh](https://www.nlm.nih.gov/mesh/meshhome.html)). Although it was developed with research grants in mind, it should be applicable to any biomedical text close to the domain it was trained on, namely abstracts from biomedical publications.
| 5396f3435e322a2665812da7ff28f6d6 |
apache-2.0 | [] | false | Model description
The model is inspired from [BertMesh](https://pubmed.ncbi.nlm.nih.gov/32976559/) which is trained on the full text of biomedical publications and uses BioBert as its pretrained model.
WellcomeBertMesh uses a state-of-the-art model in the biomedical domain, [PubMedBert](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract) from Microsoft, and attaches a multilabel attention head, which allows the model to pay attention to different tokens per label when deciding whether each label applies.
We train the model using data from the [BioASQ](http://bioasq.org) competition, which consists of abstracts from PubMed publications. We use 2016-2019 data for training and 2020-2021 data for testing, which gives us ~2.5M publications to train on and 220K to test on, out of a total of 14M publications. It takes 4 days to train WellcomeBertMesh on 8 Nvidia P100 GPUs.
The model achieves 63% micro f1 with a 0.5 threshold for all labels.
The code for developing the model is open source and can be found at https://github.com/wellcometrust/grants_tagger
| 0a20b8406b53a0faa19f687a2780eb6b |
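The 63% figure is a micro-averaged F1 computed after thresholding every label's probability at 0.5; a minimal sketch of that metric on toy multilabel predictions (not the real BioASQ data):

```python
import numpy as np

# Micro-averaged F1 with a global 0.5 threshold: pool true/false
# positives and false negatives across all labels before computing F1.
def micro_f1(probs, targets, threshold=0.5):
    preds = (probs >= threshold).astype(int)
    tp = int(((preds == 1) & (targets == 1)).sum())
    fp = int(((preds == 1) & (targets == 0)).sum())
    fn = int(((preds == 0) & (targets == 1)).sum())
    return 2 * tp / (2 * tp + fp + fn)

probs = np.array([[0.9, 0.6, 0.7],   # toy per-label probabilities
                  [0.4, 0.8, 0.1]])
targets = np.array([[1, 0, 1],       # toy ground-truth labels
                    [1, 1, 0]])
print(micro_f1(probs, targets))      # 0.75
```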
apache-2.0 | [] | false | How to use
⚠️ You need transformers 4.17+ for the example to work due to its recent support for custom models.
You can use the model straight from the Hub, but because it contains a custom forward function (due to the multilabel attention head) you have to pass `trust_remote_code=True`. You can get the probabilities for all labels by omitting `return_labels=True`.
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"Wellcome/WellcomeBertMesh"
)
model = AutoModel.from_pretrained(
"Wellcome/WellcomeBertMesh",
trust_remote_code=True
)
text = "This grant is about malaria and not about HIV."
inputs = tokenizer([text], padding="max_length", return_tensors="pt")
labels = model(**inputs, return_labels=True)
print(labels)
```
You can inspect the model code if you navigate to the files and see `model.py`. | b397942c8e085e0cf3a3cd2cba1aefcb |
creativeml-openrail-m | ['text-to-image'] | false | 1e3d938d-b6cf-4ae6-a07a-d0b4128465d1 Dreambooth model trained by tzvc with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model. You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: sdcid (use that on your prompt)  | ebe58f2e51b81b7b7f4dea1f88b1135a |
apache-2.0 | ['generated_from_trainer'] | false | tiny-mlm-glue-mnli-custom-tokenizer This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 6.1721 | ca1dba5853fb36ccdb6d86f0b1e13ae1 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 7.8162 | 0.4 | 500 | 7.1032 | | 6.9567 | 0.8 | 1000 | 7.0697 | | 6.8563 | 1.2 | 1500 | 7.0460 | | 6.7685 | 1.6 | 2000 | 7.0131 | | 6.6897 | 2.0 | 2500 | 6.9769 | | 6.5455 | 2.4 | 3000 | 6.9249 | | 6.482 | 2.8 | 3500 | 6.8552 | | 6.4153 | 3.2 | 4000 | 6.8445 | | 6.38 | 3.6 | 4500 | 6.7803 | | 6.4066 | 4.0 | 5000 | 6.8070 | | 6.2854 | 4.4 | 5500 | 6.7329 | | 6.2966 | 4.8 | 6000 | 6.7094 | | 6.1244 | 5.2 | 6500 | 6.6476 | | 6.1276 | 5.6 | 7000 | 6.6118 | | 6.0685 | 6.0 | 7500 | 6.5714 | | 5.98 | 6.4 | 8000 | 6.5522 | | 6.0174 | 6.8 | 8500 | 6.5093 | | 5.9451 | 7.2 | 9000 | 6.4866 | | 5.9681 | 7.6 | 9500 | 6.5238 | | 5.9246 | 8.0 | 10000 | 6.5340 | | 5.9219 | 8.4 | 10500 | 6.4727 | | 5.8812 | 8.8 | 11000 | 6.4483 | | 5.7815 | 9.2 | 11500 | 6.4402 | | 5.7938 | 9.6 | 12000 | 6.4124 | | 5.7934 | 10.0 | 12500 | 6.3908 | | 5.7332 | 10.4 | 13000 | 6.3861 | | 5.7628 | 10.8 | 13500 | 6.3638 | | 5.7259 | 11.2 | 14000 | 6.3345 | | 5.7505 | 11.6 | 14500 | 6.3117 | | 5.6441 | 12.0 | 15000 | 6.3118 | | 5.7058 | 12.4 | 15500 | 6.3116 | | 5.6017 | 12.8 | 16000 | 6.2728 | | 5.6424 | 13.2 | 16500 | 6.2790 | | 5.5799 | 13.6 | 17000 | 6.3034 | | 5.5625 | 14.0 | 17500 | 6.2580 | | 5.6015 | 14.4 | 18000 | 6.2607 | | 5.4884 | 14.8 | 18500 | 6.2535 | | 5.5117 | 15.2 | 19000 | 6.1960 | | 5.4919 | 15.6 | 19500 | 6.1907 | | 5.4624 | 16.0 | 20000 | 6.1838 | | 5.4721 | 16.4 | 20500 | 6.1461 | | 5.4833 | 16.8 | 21000 | 6.1251 | | 5.4404 | 17.2 | 21500 | 6.1725 | | 5.4487 | 17.6 | 22000 | 6.1417 | | 5.4499 | 18.0 | 22500 | 6.1721 | | a008e33aa2b90e61e8168e9defdad343 |
creativeml-openrail-m | ['text-to-image'] | false | lubosskostelny Dreambooth model trained by Markfm with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model. You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: lubosskostelny (use that on your prompt)  | 5af37e6f7ddf381afc78f9b44e20dd08 |
mit | ['generated_from_trainer'] | false | sarcasm-detection-RoBerta-base-newdata This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4844 - Accuracy: 0.7824 | 922fb1b003039a303340ce6df1c2dc29 |
apache-2.0 | [] | false | distilbert-base-ja-cased We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages. Our versions give exactly the same representations produced by the original model which preserves the original accuracy. For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf). | a1e00d2bd04bea9e483af6c9a3e6947a |
apache-2.0 | [] | false | How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-ja-cased") model = AutoModel.from_pretrained("Geotrend/distilbert-base-ja-cased") ``` To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers). | 058358fd957319b1361e8b3dbe810d31 |
mit | ['generated_from_trainer'] | false | stupefied_brattain This model was trained from scratch on the tomekkorbak/pii-pile-chunk3-0-50000, the tomekkorbak/pii-pile-chunk3-50000-100000, the tomekkorbak/pii-pile-chunk3-100000-150000, the tomekkorbak/pii-pile-chunk3-150000-200000, the tomekkorbak/pii-pile-chunk3-200000-250000, the tomekkorbak/pii-pile-chunk3-250000-300000, the tomekkorbak/pii-pile-chunk3-300000-350000, the tomekkorbak/pii-pile-chunk3-350000-400000, the tomekkorbak/pii-pile-chunk3-400000-450000, the tomekkorbak/pii-pile-chunk3-450000-500000, the tomekkorbak/pii-pile-chunk3-500000-550000, the tomekkorbak/pii-pile-chunk3-550000-600000, the tomekkorbak/pii-pile-chunk3-600000-650000, the tomekkorbak/pii-pile-chunk3-650000-700000, the tomekkorbak/pii-pile-chunk3-700000-750000, the tomekkorbak/pii-pile-chunk3-750000-800000, the tomekkorbak/pii-pile-chunk3-800000-850000, the tomekkorbak/pii-pile-chunk3-850000-900000, the tomekkorbak/pii-pile-chunk3-900000-950000, the tomekkorbak/pii-pile-chunk3-950000-1000000, the tomekkorbak/pii-pile-chunk3-1000000-1050000, the tomekkorbak/pii-pile-chunk3-1050000-1100000, the tomekkorbak/pii-pile-chunk3-1100000-1150000, the tomekkorbak/pii-pile-chunk3-1150000-1200000, the tomekkorbak/pii-pile-chunk3-1200000-1250000, the tomekkorbak/pii-pile-chunk3-1250000-1300000, the tomekkorbak/pii-pile-chunk3-1300000-1350000, the tomekkorbak/pii-pile-chunk3-1350000-1400000, the tomekkorbak/pii-pile-chunk3-1400000-1450000, the tomekkorbak/pii-pile-chunk3-1450000-1500000, the tomekkorbak/pii-pile-chunk3-1500000-1550000, the tomekkorbak/pii-pile-chunk3-1550000-1600000, the tomekkorbak/pii-pile-chunk3-1600000-1650000, the tomekkorbak/pii-pile-chunk3-1650000-1700000, the tomekkorbak/pii-pile-chunk3-1700000-1750000, the tomekkorbak/pii-pile-chunk3-1750000-1800000, the tomekkorbak/pii-pile-chunk3-1800000-1850000, the tomekkorbak/pii-pile-chunk3-1850000-1900000 and the tomekkorbak/pii-pile-chunk3-1900000-1950000 datasets. 
| a02b7ea8275c6a840173f882e07894cc |
mit | ['generated_from_trainer'] | false | Full config {'dataset': {'datasets': ['tomekkorbak/pii-pile-chunk3-0-50000', 'tomekkorbak/pii-pile-chunk3-50000-100000', 'tomekkorbak/pii-pile-chunk3-100000-150000', 'tomekkorbak/pii-pile-chunk3-150000-200000', 'tomekkorbak/pii-pile-chunk3-200000-250000', 'tomekkorbak/pii-pile-chunk3-250000-300000', 'tomekkorbak/pii-pile-chunk3-300000-350000', 'tomekkorbak/pii-pile-chunk3-350000-400000', 'tomekkorbak/pii-pile-chunk3-400000-450000', 'tomekkorbak/pii-pile-chunk3-450000-500000', 'tomekkorbak/pii-pile-chunk3-500000-550000', 'tomekkorbak/pii-pile-chunk3-550000-600000', 'tomekkorbak/pii-pile-chunk3-600000-650000', 'tomekkorbak/pii-pile-chunk3-650000-700000', 'tomekkorbak/pii-pile-chunk3-700000-750000', 'tomekkorbak/pii-pile-chunk3-750000-800000', 'tomekkorbak/pii-pile-chunk3-800000-850000', 'tomekkorbak/pii-pile-chunk3-850000-900000', 'tomekkorbak/pii-pile-chunk3-900000-950000', 'tomekkorbak/pii-pile-chunk3-950000-1000000', 'tomekkorbak/pii-pile-chunk3-1000000-1050000', 'tomekkorbak/pii-pile-chunk3-1050000-1100000', 'tomekkorbak/pii-pile-chunk3-1100000-1150000', 'tomekkorbak/pii-pile-chunk3-1150000-1200000', 'tomekkorbak/pii-pile-chunk3-1200000-1250000', 'tomekkorbak/pii-pile-chunk3-1250000-1300000', 'tomekkorbak/pii-pile-chunk3-1300000-1350000', 'tomekkorbak/pii-pile-chunk3-1350000-1400000', 'tomekkorbak/pii-pile-chunk3-1400000-1450000', 'tomekkorbak/pii-pile-chunk3-1450000-1500000', 'tomekkorbak/pii-pile-chunk3-1500000-1550000', 'tomekkorbak/pii-pile-chunk3-1550000-1600000', 'tomekkorbak/pii-pile-chunk3-1600000-1650000', 'tomekkorbak/pii-pile-chunk3-1650000-1700000', 'tomekkorbak/pii-pile-chunk3-1700000-1750000', 'tomekkorbak/pii-pile-chunk3-1750000-1800000', 'tomekkorbak/pii-pile-chunk3-1800000-1850000', 'tomekkorbak/pii-pile-chunk3-1850000-1900000', 'tomekkorbak/pii-pile-chunk3-1900000-1950000'], 'filter_threshold': 0.000286, 'is_split_by_sentences': True, 'skip_tokens': 1649999872}, 'generation': {'force_call_on': [25177], 
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}], 'scorer_config': {}}, 'kl_gpt3_callback': {'force_call_on': [25177], 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': False, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'model_kwargs': {'revision': '9e6c78543a6ff1e4089002c38864d5a9cf71ec90'}, 'path_or_name': 'tomekkorbak/nervous_wozniak'}, 'objective': {'name': 'MLE'}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 128, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'stupefied_brattain', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0001, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output2', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25177, 'save_strategy': 'steps', 'seed': 42, 'tokens_already_seen': 1649999872, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} | ae704e1e7af3317aa57e3ffb885df54c |
mit | [] | false | linnopoke on Stable Diffusion This is the `<linnopoke>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`:             | 3c6d70dc9de3352874a216885287cc6c |
mit | ['generated_from_trainer'] | false | roberta-base-finetuned-intent This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the snips_built_in_intents dataset. It achieves the following results on the evaluation set: - Loss: 0.2720 - Accuracy: 0.9333 | 05433d541990654dafbda54f7e315332 |
mit | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: IPU - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - total_eval_batch_size: 5 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - training precision: Mixed Precision | e2938d2448df64b82cb2f029d9732643 |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.9568 | 1.0 | 37 | 1.7598 | 0.4333 | | 1.2238 | 2.0 | 74 | 0.8130 | 0.7667 | | 0.4536 | 3.0 | 111 | 0.4985 | 0.8 | | 0.2478 | 4.0 | 148 | 0.3535 | 0.8667 | | 0.0903 | 5.0 | 185 | 0.3110 | 0.8667 | | 0.0849 | 6.0 | 222 | 0.2720 | 0.9333 | | 0.0708 | 7.0 | 259 | 0.2742 | 0.8667 | | 0.0796 | 8.0 | 296 | 0.2839 | 0.8667 | | 0.0638 | 9.0 | 333 | 0.2949 | 0.8667 | | 0.0566 | 10.0 | 370 | 0.2925 | 0.8667 | | 97ea05557ac00d5afff7ebf59e92d1d2 |
apache-2.0 | ['generated_from_trainer'] | false | wav2vec2-base-timit-demo-colab6 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9394 - Wer: 0.5282 | 40b195ffc0058d20fef3604822518f28 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 60 - mixed_precision_training: Native AMP | 66cf731af9439ef6d9d6a326c2b4d34b |
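Under this configuration the linear scheduler ramps the learning rate from 0 to 1e-4 over the first 1000 steps and then decays it linearly toward 0; a small sketch (the 4000-step total is an illustrative assumption, not a value from the config):

```python
# Linear LR schedule with warmup, as configured above: ramp 0 -> peak
# over `warmup` steps, then decay linearly to 0 at `total` steps.
# `total=4000` is an illustrative assumption.
def lr_at(step, peak=1e-4, warmup=1000, total=4000):
    if step < warmup:
        return peak * step / warmup
    return peak * max(0.0, (total - step) / (total - warmup))

print(lr_at(500))   # halfway through warmup: half the peak LR
print(lr_at(1000))  # peak LR
print(lr_at(4000))  # fully decayed: 0.0
```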
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.3117 | 7.35 | 500 | 3.1548 | 1.0 | | 1.6732 | 14.71 | 1000 | 0.8857 | 0.6561 | | 0.5267 | 22.06 | 1500 | 0.7931 | 0.6018 | | 0.2951 | 29.41 | 2000 | 0.8152 | 0.5816 | | 0.2013 | 36.76 | 2500 | 0.9060 | 0.5655 | | 0.1487 | 44.12 | 3000 | 0.9201 | 0.5624 | | 0.1189 | 51.47 | 3500 | 0.9394 | 0.5412 | | 0.1004 | 58.82 | 4000 | 0.9394 | 0.5282 | | 587ea7a489bc0d61dcfbe3bf8f9a24db |
apache-2.0 | ['t5-small', 'text2text-generation', 'natural language understanding', 'conversational system', 'task-oriented dialog'] | false | t5-small-nlu-tm1_tm2_tm3 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on [Taskmaster-1](https://huggingface.co/datasets/ConvLab/tm1), [Taskmaster-2](https://huggingface.co/datasets/ConvLab/tm2), and [Taskmaster-3](https://huggingface.co/datasets/ConvLab/tm3). Refer to [ConvLab-3](https://github.com/ConvLab/ConvLab-3) for model description and usage. | 99655554d4aebbb2f8ddbd669c61712b |
apache-2.0 | ['t5-small', 'text2text-generation', 'natural language understanding', 'conversational system', 'task-oriented dialog'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 128 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 256 - optimizer: Adafactor - lr_scheduler_type: linear - num_epochs: 10.0 | 08a4d92c8447c8561b27e0089c0d04c3 |
creativeml-openrail-m | ['text-to-image'] | false | Kurzgesagt-style-v2-768 Dreambooth model trained on the v2-768 base model You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: Kurzgesagt style (use that in your prompt)  | 4aca4aa0f202cbc4c9f0fae25be39ba9 |
apache-2.0 | ['generated_from_trainer'] | false | rte This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE RTE dataset. It achieves the following results on the evaluation set: - Loss: 0.7994 - Accuracy: 0.6859 | 28418074c797c0ef0f1bd465abc0343f |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 | 2cf9e2eadc0d254e262dc239a865c936 |
creativeml-openrail-m | ['text-to-image', 'stable-diffusion'] | false | crystalpunk Dreambooth model trained by rudzinskimaciej with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept: | 52246c66af1f8f908ed94b95df51f83d |
apache-2.0 | ['hf-asr-leaderboard', 'generated_from_trainer'] | false | Whisper Small Telugu - Naga Budigam This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Chai_Bisket_Stories_16-08-2021_14-17 dataset. It achieves the following results on the evaluation set: - Loss: 0.7063 - Wer: 77.4871 | 47ee3b47882841f9ab37b95aaddc0ac1 |
apache-2.0 | ['hf-asr-leaderboard', 'generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP | 4eda0a8f6e2668fea04e863c96d73cf6 |
apache-2.0 | ['hf-asr-leaderboard', 'generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.2933 | 2.62 | 500 | 0.3849 | 86.6429 | | 0.0692 | 5.24 | 1000 | 0.3943 | 82.7190 | | 0.0251 | 7.85 | 1500 | 0.4720 | 82.4415 | | 0.0098 | 10.47 | 2000 | 0.5359 | 81.6092 | | 0.0061 | 13.09 | 2500 | 0.5868 | 75.9413 | | 0.0025 | 15.71 | 3000 | 0.6235 | 76.6944 | | 0.0009 | 18.32 | 3500 | 0.6634 | 78.3987 | | 0.0005 | 20.94 | 4000 | 0.6776 | 77.1700 | | 0.0002 | 23.56 | 4500 | 0.6995 | 78.2798 | | 0.0001 | 26.18 | 5000 | 0.7063 | 77.4871 | | d2ff6bdf53464836db147273823c2ce7 |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased-finetuned-category-classification This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0377 - F1: 0.9943 - Roc Auc: 0.9943 - Accuracy: 0.9943 | 20d07ccddd2f6163703aba83add77c0a |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|:--------:| | 0.0374 | 1.0 | 7612 | 0.0373 | 0.9916 | 0.9916 | 0.9915 | | 0.0255 | 2.0 | 15224 | 0.0409 | 0.9922 | 0.9922 | 0.9921 | | 0.0281 | 3.0 | 22836 | 0.0332 | 0.9934 | 0.9934 | 0.9934 | | 0.0189 | 4.0 | 30448 | 0.0359 | 0.9941 | 0.9941 | 0.9940 | | 0.005 | 5.0 | 38060 | 0.0377 | 0.9943 | 0.9943 | 0.9943 | | 96d690f7c14f3aa77cb6ae598226ccb1 |
apache-2.0 | ['translation'] | false | tgl-deu * source group: Tagalog * target group: German * OPUS readme: [tgl-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-deu/README.md) * model: transformer-align * source language(s): tgl_Latn * target language(s): deu * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-deu/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-deu/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-deu/opus-2020-06-17.eval.txt) | ca8368611f8c087b76b00a2ca6b779be |
apache-2.0 | ['translation'] | false | System Info: - hf_name: tgl-deu - source_languages: tgl - target_languages: deu - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-deu/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['tl', 'de'] - src_constituents: {'tgl_Latn'} - tgt_constituents: {'deu'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-deu/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-deu/opus-2020-06-17.test.txt - src_alpha3: tgl - tgt_alpha3: deu - short_pair: tl-de - chrF2_score: 0.473 - bleu: 22.7 - brevity_penalty: 0.9690000000000001 - ref_len: 2453.0 - src_name: Tagalog - tgt_name: German - train_date: 2020-06-17 - src_alpha2: tl - tgt_alpha2: de - prefer_old: False - long_pair: tgl-deu - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41 | 3de082f4ec14e89bef488a7ba0e7e8b0 |
apache-2.0 | ['generated_from_trainer'] | false | ner_kaggle_class_prediction_model This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0191 - Precision: 0.9850 - Recall: 0.9830 - F1: 0.9840 - Accuracy: 0.9950 | 3f70a65f6437d30f02e0789954e345ea |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1304 | 1.0 | 806 | 0.0202 | 0.9823 | 0.9794 | 0.9808 | 0.9940 | | 0.0142 | 2.0 | 1612 | 0.0178 | 0.9819 | 0.9826 | 0.9823 | 0.9945 | | 0.0081 | 3.0 | 2418 | 0.0191 | 0.9850 | 0.9830 | 0.9840 | 0.9950 | | 19a6e18f988182425f73e418e9632f4f |
apache-2.0 | ['generated_from_trainer'] | false | test_trainer This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the amazon_us_reviews dataset. It achieves the following results on the evaluation set: - Loss: 0.9348 - Accuracy: 0.7441 | ffd4738b63cd742399f77f6cb5c4c225 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.6471 | 1.0 | 7500 | 0.6596 | 0.7376 | | 0.5235 | 2.0 | 15000 | 0.6997 | 0.7423 | | 0.3955 | 3.0 | 22500 | 0.9348 | 0.7441 | | 79c10fd497d5c4f5ec665e1ec8619780 |
afl-3.0 | ['CTC', 'pytorch', 'speechbrain', 'Transformer', 'hf-asr-leaderboard'] | false | wav2vec 2.0 with CTC trained on data aligned from RTVE databases (No LM) This repository provides all the necessary tools to perform automatic speech recognition from an end-to-end system pretrained on CommonVoice (Spanish Language) within SpeechBrain. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The performance of the model is the following: | Release | RTVE 2022 Test WER | GPUs | |:-------------:|:--------------:| :--------:| | 16-01-23 | 23.45 | 3xRTX2080Ti 12GB | | 6af279a20c69630c0271feea48063902 |
afl-3.0 | ['CTC', 'pytorch', 'speechbrain', 'Transformer', 'hf-asr-leaderboard'] | false | Pipeline description This ASR system is composed of 2 different but linked blocks: - Tokenizer (char) that transforms words into chars and trained with the train transcriptions (train.tsv) of CommonVoice (ES). - Acoustic model (wav2vec2.0 + CTC). A pretrained wav2vec 2.0 model ([wav2vec2-large-xlsr-53-spanish](https://huggingface.co/facebook/wav2vec2-large-xlsr-53-spanish)) is combined with two DNN layers and finetuned on CommonVoice ES. The obtained final acoustic representation is given to the CTC decoder. The system is trained with recordings sampled at 16kHz (single channel). The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed. | 36dd433f5cec2d101e97d967e4c16840 |
afl-3.0 | ['CTC', 'pytorch', 'speechbrain', 'Transformer', 'hf-asr-leaderboard'] | false | Install SpeechBrain First of all, please install transformers and SpeechBrain with the following command: ``` pip install speechbrain transformers ``` Please notice that we encourage you to read tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). | 7c61fd533f44e77963dfa6ca80d9a63f |
afl-3.0 | ['CTC', 'pytorch', 'speechbrain', 'Transformer', 'hf-asr-leaderboard'] | false | Transcribing your own audio files (in Spanish) ```python from speechbrain.pretrained import EncoderASR asr_model = EncoderASR.from_hparams(source="Voyager1/asr-wav2vec2-commonvoice-es", savedir="pretrained_models/asr-wav2vec2-commonvoice-es") asr_model.transcribe_file("Voyager1/asr-wav2vec2-commonvoice-es/example-es.wav") ``` | 50cdc669bf7bb7998ee89101bf58edde |
afl-3.0 | ['CTC', 'pytorch', 'speechbrain', 'Transformer', 'hf-asr-leaderboard'] | false | **Citations** ```bibtex @article{lopez2022tid, title={TID Spanish ASR system for the Albayzin 2022 Speech-to-Text Transcription Challenge}, author={L{\'o}pez, Fernando and Luque, Jordi}, journal={Proc. IberSPEECH 2022}, pages={271--275}, year={2022} } @misc{https://doi.org/10.48550/arxiv.2210.15226, doi = {10.48550/ARXIV.2210.15226}, url = {https://arxiv.org/abs/2210.15226}, author = {López, Fernando and Luque, Jordi}, title = {Iterative pseudo-forced alignment by acoustic CTC loss for self-supervised ASR domain adaptation}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } @misc{lleidartve, title={Rtve 2018, 2020 and 2022 database description}, author={Lleida, E and Ortega, A and Miguel, A and Baz{\'a}n, V and P{\'e}rez, C and G{\'o}mez, M and de Prada, A} } @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ``` | 6c5a48146f93e0fd173dc5906b6917a5 |
creativeml-openrail-m | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'text-to-audio'] | false | Riffusion Riffusion is an app for real-time music generation with stable diffusion. Read about it at https://www.riffusion.com/about and try it at https://www.riffusion.com/. * Code: https://github.com/riffusion/riffusion * Web app: https://github.com/hmartiro/riffusion-app * Model checkpoint: https://huggingface.co/riffusion/riffusion-model-v1 * Discord: https://discord.gg/yu6SRwvX4v This repository contains the model files, including: * a diffusers formatted library * a compiled checkpoint file * a traced unet for improved inference speed * a seed image library for use with riffusion-app | 76a773939dbdcfd17acb38e77826636d |
creativeml-openrail-m | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'text-to-audio'] | false | Riffusion v1 Model Riffusion is a latent text-to-image diffusion model capable of generating spectrogram images given any text input. These spectrograms can be converted into audio clips. The model was created by [Seth Forsgren](https://sethforsgren.com/) and [Hayk Martiros](https://haykmartiros.com/) as a hobby project. You can use the Riffusion model directly, or try the [Riffusion web app](https://www.riffusion.com/). The Riffusion model was created by fine-tuning the **Stable-Diffusion-v1-5** checkpoint. Read about Stable Diffusion in [🤗's Stable Diffusion blog](https://huggingface.co/blog/stable_diffusion). | 5f870e5b6f3f609327e473745aa9a6d3 |
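Riffusion's core idea is turning audio into a spectrogram image that a diffusion model can operate on. As a rough, self-contained illustration of that audio-to-image step (plain numpy on a synthetic tone; this is not Riffusion's actual preprocessing, which involves additional scaling and image encoding):

```python
import numpy as np

# Illustrative magnitude spectrogram via a hand-rolled STFT.
def magnitude_spectrogram(signal, n_fft=256, hop=128):
    window = np.hanning(n_fft)
    frames = [signal[start:start + n_fft] * window
              for start in range(0, len(signal) - n_fft + 1, hop)]
    # rows: frequency bins, columns: time frames
    return np.abs(np.fft.rfft(np.stack(frames), axis=1)).T

sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)   # 1 second of a 440 Hz tone
spec = magnitude_spectrogram(tone)
print(spec.shape)                    # (n_fft // 2 + 1, num_frames)
```

The resulting 2-D array is the "image" a spectrogram-based model sees; a pure 440 Hz tone shows up as a single bright horizontal band near bin 440 / (sr / n_fft).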
creativeml-openrail-m | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'text-to-audio'] | false | Model Details - **Developed by:** Seth Forsgren, Hayk Martiros - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based. - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487). | e0c4b40586a9eb0ff7657b838308dd11 |
creativeml-openrail-m | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'text-to-audio'] | false | Direct Use The model is intended for research purposes only. Possible research areas and tasks include - Generation of artworks, audio, and use in creative processes. - Applications in educational or creative tools. - Research on generative models. | 64471dd22a7e58a503a4861e24b70644 |
creativeml-openrail-m | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'text-to-audio'] | false | Datasets The original Stable Diffusion v1.5 was trained on the [LAION-5B](https://arxiv.org/abs/2210.08402) dataset using the [CLIP text encoder](https://openai.com/blog/clip/), which provided an amazing starting point with an in-depth understanding of language, including musical concepts. The team at LAION also compiled a fantastic audio dataset from many general, speech, and music sources that we recommend at [LAION-AI/audio-dataset](https://github.com/LAION-AI/audio-dataset/blob/main/data_collection/README.md). | a323fcfe0712c820a62bb90e9657425f |
creativeml-openrail-m | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'text-to-audio'] | false | Fine Tuning Check out the [diffusers training examples](https://huggingface.co/docs/diffusers/training/overview) from Hugging Face. Fine tuning requires a dataset of spectrogram images of short audio clips, with associated text describing them. Note that the CLIP encoder is able to understand and connect many words even if they never appear in the dataset. It is also possible to use a [dreambooth](https://huggingface.co/blog/dreambooth) method to get custom styles. | e00276208eaae31276496c683f400d72 |
creativeml-openrail-m | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'text-to-audio'] | false | Citation If you build on this work, please cite it as follows: ``` @article{Forsgren_Martiros_2022, author = {Forsgren, Seth* and Martiros, Hayk*}, title = {{Riffusion - Stable diffusion for real-time music generation}}, url = {https://riffusion.com/about}, year = {2022} } ``` | deaf7a7502b92f51f27e8effbe611b5c |
mit | ['text', 'Twitter'] | false | distilbert-depression-base This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) trained on CLPsych 2015 and evaluated on a dataset scraped from Twitter to detect potentially depressed users on Twitter. It achieves the following results on the evaluation set: - Evaluation Loss: 0.64 - Accuracy: 0.65 - F1: 0.70 - Precision: 0.61 - Recall: 0.83 - AUC: 0.65 | 621d70a446319dee8116e4cf59dd76cc |
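As a quick consistency check on the metrics above, F1 is the harmonic mean of precision and recall, so it can be recomputed from the reported values:

```python
# F1 as the harmonic mean of precision and recall.
# 0.61 and 0.83 are the precision and recall reported above.
def f1_score(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.61, 0.83), 2))  # 0.7, matching the reported F1
```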
mit | ['text', 'Twitter'] | false | Intended uses & limitations Feed a corpus of tweets to the model to generate a label indicating whether the input is indicative of a depressed user or not. Label 1 is depressed; Label 0 is not depressed. Limitation: all token sequences longer than 512 tokens are automatically truncated. Also, training and test data may be contaminated with mislabeled users. | 30aa8ac071698ea567c0246618c24964 |
mit | ['text', 'Twitter'] | false | How to use You can use this model directly with a pipeline for sentiment analysis: ```python >>> from transformers import AutoTokenizer, DistilBertForSequenceClassification, pipeline >>> tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased') >>> model = DistilBertForSequenceClassification.from_pretrained("distilbert-depression-base") >>> classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer) >>> tokenizer_kwargs = {'padding': True, 'truncation': True, 'max_length': 512} >>> result = classifier('pain peko', **tokenizer_kwargs) | 3a7e4f88e4d47948fec1f8fa2ae80fbd |
mit | ['text', 'Twitter'] | false | Note that the string passed as the input can be a corpus of tweets concatenated into one document. [{'label': 'LABEL_1', 'score': 0.5048992037773132}] ``` Otherwise, download the files and specify in the pipeline the path to the folder that contains config.json, pytorch_model.bin, and training_args.bin | 60702d4b4075647c14a224a23697d7f6 |
mit | ['text', 'Twitter'] | false | Training results | Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall | AUC | |:-----:|:-------------:|:---------------:|:--------:|:--------:|:---------:|:--------:|:--------:| | 1.0 | 0.68 | 0.66 | 0.59 | 0.63 | 0.56 | 0.73 | 0.59 | | 2.0 | 0.60 | 0.68 | 0.63 | 0.69 | 0.59 | 0.83 | 0.63 | | 3.0 | 0.52 | 0.67 | 0.64 | 0.66 | 0.62 | 0.72 | 0.65 | | c071ad8021153f4992edb5f65abe775e |
cc-by-4.0 | ['question-answering, multi-step-reasoning, multi-hop-reasoning'] | false | ```python # Fast doesn't work with digit tokenization model = AutoModelForSeq2SeqLM.from_pretrained(model_name) enable_digit_tokenization(tokenizer) input_texts = [ "answer_me: Who scored the first touchdown of the game?" + "context: ... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..." ``` | 00be41eb011a13cafeac8b68b379570a |
mit | [] | false | AliceBeta on Stable Diffusion This is the `<Alice-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`:      | c0801af4d1d747e0f4ca1e7069887e76 |
mit | ['generated_from_trainer'] | false | clinical_bio_bert_ft This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2570 - F1: 0.8160 | 8835c590bfede547c9bf77eeb324dcbe |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.6327 | 1.0 | 95 | 0.2442 | 0.7096 | | 0.1692 | 2.0 | 190 | 0.2050 | 0.7701 | | 0.0878 | 3.0 | 285 | 0.1923 | 0.8002 | | 0.0493 | 4.0 | 380 | 0.2234 | 0.8079 | | 0.0302 | 5.0 | 475 | 0.2250 | 0.8090 | | 0.0191 | 6.0 | 570 | 0.2363 | 0.8145 | | 0.0132 | 7.0 | 665 | 0.2489 | 0.8178 | | 0.0102 | 8.0 | 760 | 0.2494 | 0.8152 | | 0.008 | 9.0 | 855 | 0.2542 | 0.8191 | | 0.0068 | 10.0 | 950 | 0.2570 | 0.8160 | | 730942739a9116ace8da57bf0ff8d68b |
mit | ['generated_from_trainer'] | false | finetuning-sentiment-model-deberta-smote This model is a fine-tuned version of [yangheng/deberta-v3-base-absa-v1.1](https://huggingface.co/yangheng/deberta-v3-base-absa-v1.1) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4852 - Accuracy: 0.7215 - F1: 0.7215 - Precision: 0.7215 - Recall: 0.7215 | 4bfba83b41bb61d49eeee5d53517ecbe |
apache-2.0 | ['generated_from_trainer'] | false | bert-base-uncased-finetuned-academic This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the elsevier-oa-cc-by dataset. It achieves the following results on the evaluation set: - Loss: 2.5893 | 2036ccc7a65802f6487f3f84ea8ead3e |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 40 - eval_batch_size: 40 - seed: 42 - optimizer: Adam with betas=(0.9,0.97) and epsilon=0.0001 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP | 14ddb7f4f1e15612399596a5e1714ce4 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.9591 | 0.25 | 820 | 2.6567 | | 2.7993 | 0.5 | 1640 | 2.6006 | | 2.7519 | 0.75 | 2460 | 2.5707 | | 2.7319 | 1.0 | 3280 | 2.5763 | | 2.7359 | 1.25 | 4100 | 2.5866 | | 2.7451 | 1.5 | 4920 | 2.5855 | | 2.7421 | 1.75 | 5740 | 2.5770 | | 2.7319 | 2.0 | 6560 | 2.5762 | | 2.7356 | 2.25 | 7380 | 2.5807 | | 2.7376 | 2.5 | 8200 | 2.5813 | | 2.7386 | 2.75 | 9020 | 2.5841 | | 2.7378 | 3.0 | 9840 | 2.5737 | | a3a844483f20c4779ec16e7c00e96ec0 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 | 391e26ac36a5b4a41fa00effeb06a5e0 |
apache-2.0 | ['automatic-speech-recognition', 'common_voice', 'generated_from_trainer', 'xls_r_repro_common_voice_tr'] | false | wav2vec2-xls-r-100m-common_voice-tr-ft This model is a fine-tuned version of [facebook/wav2vec2-xls-r-100m](https://huggingface.co/facebook/wav2vec2-xls-r-100m) on the COMMON_VOICE - TR dataset. It achieves the following results on the evaluation set: - Loss: 3.4113 - Wer: 1.0 - Cer: 1.0 | e02dd263a5f1cb962aa31eeb9ed0fe51 |
apache-2.0 | ['automatic-speech-recognition', 'common_voice', 'generated_from_trainer', 'xls_r_repro_common_voice_tr'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 64 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 50.0 - mixed_precision_training: Native AMP | c5ced2bc93458ae08f38c75ff76e51f0 |
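The derived totals above follow directly from the per-device settings: total batch size is per-device batch size × number of devices × gradient accumulation steps (1 in this run, since no accumulation is listed). A tiny sketch of that arithmetic:

```python
# Effective (total) batch size from per-device settings.
def effective_batch_size(per_device: int, num_devices: int, accum_steps: int = 1) -> int:
    return per_device * num_devices * accum_steps

print(effective_batch_size(8, 8))                   # 64, the total_train_batch_size above
print(effective_batch_size(1, 1, accum_steps=8))    # 8, as in the IPU run earlier in this file
```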
apache-2.0 | ['automatic-speech-recognition', 'common_voice', 'generated_from_trainer', 'xls_r_repro_common_voice_tr'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:---:|:---:| | 3.1315 | 9.09 | 500 | 3.3832 | 1.0 | 1.0 | | 3.1163 | 18.18 | 1000 | 3.4252 | 1.0 | 1.0 | | 3.121 | 27.27 | 1500 | 3.4051 | 1.0 | 1.0 | | 3.1273 | 36.36 | 2000 | 3.4345 | 1.0 | 1.0 | | 3.2257 | 45.45 | 2500 | 3.4097 | 1.0 | 1.0 | | 2125591798bda9858f1770f18bdc2e96 |
apache-2.0 | [] | false | ALBERT Persian A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language > You can call it Little BERT (برت_کوچولو) [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt at ALBERT for the Persian language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, as was done for ParsBERT. Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models. | 6cc06cf12950df53a149d08cbea66d02 |
apache-2.0 | [] | false | Persian NER [ARMAN, PEYMA] This task aims to extract named entities in the text, such as names, and label them with appropriate `NER` classes such as locations, organizations, etc. The datasets used for this task contain sentences marked in `IOB` format. In this format, tokens that are not part of an entity are tagged as `"O"`, the `"B"` tag corresponds to the first word of an entity, and the `"I"` tag corresponds to the rest of the tokens of the same entity. Both `"B"` and `"I"` tags are followed by a hyphen (or underscore), followed by the entity category. Therefore, the NER task is a multi-class token classification problem that labels the tokens when fed raw text. There are two primary datasets used in Persian NER: `ARMAN` and `PEYMA`. | d1a3f4085bea0f525bae6f77e40ac54c |
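The IOB scheme described above can be illustrated with a toy example (sentence and tag classes invented for illustration) and a minimal decoder that recovers entity spans from the tags:

```python
# Toy IOB illustration: "O" = outside any entity, "B-<class>" = first token
# of an entity, "I-<class>" = continuation token of the same entity.
tokens = ["Sara", "works", "at", "Sharif", "University", "in", "Tehran"]
tags   = ["B-PER", "O", "O", "B-ORG", "I-ORG", "O", "B-LOC"]

def extract_entities(tokens, tags):
    entities, current = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            current = [tag[2:], [token]]   # start a new entity
            entities.append(current)
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(token)       # extend the current entity
        else:
            current = None                 # "O" or inconsistent tag
    return [(label, " ".join(words)) for label, words in entities]

print(extract_entities(tokens, tags))
# [('PER', 'Sara'), ('ORG', 'Sharif University'), ('LOC', 'Tehran')]
```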
apache-2.0 | [] | false | PEYMA PEYMA dataset includes 7,145 sentences with a total of 302,530 tokens of which 41,148 tokens are tagged with seven different classes. 1. Organization 2. Money 3. Location 4. Date 5. Time 6. Person 7. Percent | Label | Count | | 90332248ff4352d719bb1f61d9144598 |
apache-2.0 | [] | false | |:------------:|:-----:| | Organization | 16964 | | Money | 2037 | | Location | 8782 | | Date | 4259 | | Time | 732 | | Person | 7675 | | Percent | 699 | **Download** You can download the dataset from [here](http://nsurl.org/tasks/task-7-named-entity-recognition-ner-for-farsi/) | cb97f070288755092bb641f73d11c66e |
apache-2.0 | [] | false | Results The following table summarizes the F1 score obtained as compared to other models and architectures. | Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | MorphoBERT | Beheshti-NER | LSTM-CRF | Rule-Based CRF | BiLSTM-CRF | |:-------:|:-----------------:|:-----------:|:-----:|:----------:|:------------:|:--------:|:--------------:|:----------:| | PEYMA | 88.99 | 93.10 | 86.64 | - | 90.59 | - | 84.00 | - | | d58ce7c530c2a6df0c1b732733171284 |
apache-2.0 | [] | false | BibTeX entry and citation info Please cite in publications as the following: ```bibtex @misc{ALBERTPersian, author = {Mehrdad Farahani}, title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}}, } @article{ParsBERT, title={ParsBERT: Transformer-based Model for Persian Language Understanding}, author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri}, journal={ArXiv}, year={2020}, volume={abs/2005.12515} } ``` | bd9bcbb8823dd5dcba212139afbd891c |
apache-2.0 | ['t5', 'seq2seq'] | false | t5-v1_1-base-dutch-english-cased A [T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) sequence to sequence model pre-trained from scratch on [cleaned Dutch 🇳🇱🇧🇪 mC4 and cleaned English 🇬🇧 C4](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned). This **t5-v1.1** model has **247M** parameters. It was pre-trained with a masked language modeling (denoise token span corruption) objective on the dataset `mc4_nl_cleaned` config `small_en_nl` for **10** epochs and a duration of **11d18h**, with a sequence length of **512**, batch size **128**, and **2839630** total steps (**186B** tokens). Pre-training evaluation loss and accuracy are **1.11** and **0.75**. Refer to the evaluation section below for a comparison of the pre-trained models on summarization and translation. * Pre-trained T5 models need to be fine-tuned before they can be used for downstream tasks; therefore, the inference widget on the right has been turned off. * For a demo of the Dutch CNN summarization models, head over to the Hugging Face Spaces for the **[Netherformer 📰](https://huggingface.co/spaces/flax-community/netherformer)** example application! Please refer to the original T5 papers and Scale Efficiently papers for more information about the T5 architecture and configs, though it must be noted that this model (t5-v1_1-base-dutch-english-cased) is unrelated to these projects and not an 'official' checkpoint. * **[Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)** by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. * **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. 
| a1987b7487ade7a46b7b8c7f2743132b |
apache-2.0 | ['t5', 'seq2seq'] | false | Tokenizer The model uses a cased SentencePiece tokenizer configured with the `Nmt, NFKC, Replace multi-space to single-space` normalizers and has 32003 tokens. It was trained on Dutch and English with scripts from the Huggingface Transformers [Flax examples](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling). See [./raw/main/tokenizer.json](tokenizer.json) for details. | 59f04100e0e79b29f79fa18400b78c23 |
apache-2.0 | ['t5', 'seq2seq'] | false | Dataset(s) All models listed below are pre-trained on [cleaned Dutch mC4](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned), which is the original mC4, except * Documents that contained words from a selection of the Dutch and English [List of Dirty Naughty Obscene and Otherwise Bad Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words) are removed * Sentences with fewer than 3 words are removed * Sentences with a word of more than 1000 characters are removed * Documents with fewer than 5 sentences are removed * Documents with "javascript", "lorum ipsum", "terms of use", "privacy policy", "cookie policy", "uses cookies", "use of cookies", "use cookies", "elementen ontbreken", "deze printversie" are removed. The Dutch and English models are pre-trained on a 50/50% mix of Dutch mC4 and English C4. The translation models are fine-tuned on [CCMatrix](https://huggingface.co/datasets/yhavinga/ccmatrix). | 65ea5221eb67c42b5ab959a353110fec |
apache-2.0 | ['t5', 'seq2seq'] | false | Dutch T5 Models Three types of [Dutch T5 models have been trained (blog)](https://huggingface.co/spaces/yhavinga/pre-training-dutch-t5-models). `t5-base-dutch` is the only model with an original T5 config. The other model types, t5-v1.1 and t5-eff, use `gated-gelu` instead of `relu` as activation function, and were trained with a dropout of `0.0` unless training would diverge (`t5-v1.1-large-dutch-cased`). The t5-eff models differ in their number of layers; the table below lists the dimensions of these models. Not all t5-eff models are efficient, the best example being the inefficient `t5-xl-4L-dutch-english-cased`.

| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1.1-large-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-xl-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-xl-8l-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) |
|:------------------|:----------------|:-----------------------------|:---------------------------|:----------------------------|:-----------------------------------|:----------------------------------------|:-----------------------------|:-------------------------------|:----------------------------------|:-----------------------------------|:--------------------------------------|
| *type* | t5 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5 eff | t5 eff | t5 eff | t5 eff | t5 eff |
| *d_model* | 768 | 768 | 768 | 1024 | 768 | 768 | 512 | 2048 | 768 | 1024 | 1024 |
| *d_ff* | 3072 | 2048 | 2048 | 2816 | 2048 | 2048 | 1920 | 5120 | 2560 | 16384 | 4096 |
| *num_heads* | 12 | 12 | 12 | 16 | 12 | 12 | 8 | 32 | 12 | 32 | 16 |
| *d_kv* | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 128 | 64 |
| *num_layers* | 12 | 12 | 12 | 24 | 12 | 12 | 24 | 4 | 36 | 8 | 8 |
| *num parameters* | 223M | 248M | 248M | 783M | 248M | 248M | 250M | 585M | 729M | 1241M | 335M |
| *feed_forward_proj* | relu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu |
| *dropout* | 0.1 | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 |
| *dataset* | mc4_nl_cleaned | mc4_nl_cleaned full | mc4_nl_cleaned full | mc4_nl_cleaned | mc4_nl_cleaned small_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl |
| *tr. seq len* | 512 | 1024 | 1024 | 512 | 512 | 1024 | 512 | 512 | 512 | 512 | 512 |
| *batch size* | 128 | 64 | 64 | 64 | 128 | 64 | 128 | 512 | 512 | 64 | 128 |
| *total steps* | 527500 | 1014525 | 1210154 | 1120k/2427498 | 2839630 | 1520k/3397024 | 851852 | 212963 | 212963 | 538k/1703705 | 851850 |
| *epochs* | 1 | 2 | 2 | 2 | 10 | 4 | 1 | 1 | 1 | 1 | 1 |
| *duration* | 2d9h | 5d5h | 6d6h | 8d13h | 11d18h | 9d1h | 4d10h | 6d1h | 17d15h | 4d 19h | 3d 23h |
| *optimizer* | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor |
| *lr* | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.009 | 0.005 | 0.005 |
| *warmup* | 10000.0 | 10000.0 | 10000.0 | 10000.0 | 10000.0 | 5000.0 | 20000.0 | 2500.0 | 1000.0 | 1500.0 | 1500.0 |
| *eval loss* | 1,38 | 1,20 | 0,96 | 1,07 | 1,11 | 1,13 | 1,18 | 1,27 | 1,05 | 1,3019 | 1,15 |
| *eval acc* | 0,70 | 0,73 | 0,78 | 0,76 | 0,75 | 0,74 | 0,74 | 0,72 | 0,76 | 0,71 | 0,74 |
| 3f27f8ca0e6a6fbdd935cd193bebcaf4 |
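As a sanity check on the parameter counts in the table above, a rough estimate can be derived from the listed dimensions. `t5_param_estimate` is an illustrative approximation that ignores layer norms and relative-position biases (both contribute well under 1%); it assumes the v1.1 and eff models have an untied LM head, while the original t5 config ties it to the input embedding.

```python
def t5_param_estimate(d_model, d_ff, num_heads, d_kv, num_layers,
                      vocab_size=32003, gated_ffn=True, tied_lm_head=False):
    """Rough parameter count for an encoder-decoder T5 from its config dims."""
    attn = 4 * d_model * num_heads * d_kv            # q, k, v, o projections
    ffn = (3 if gated_ffn else 2) * d_model * d_ff   # wi_0, wi_1 (if gated), wo
    encoder = num_layers * (attn + ffn)
    decoder = num_layers * (2 * attn + ffn)          # decoder adds cross-attention
    embeddings = vocab_size * d_model * (1 if tied_lm_head else 2)
    return embeddings + encoder + decoder

# t5-v1_1-base-dutch-english-cased row: d_model=768, d_ff=2048, 12 heads,
# d_kv=64, 12 layers -> roughly 247M, matching the table's 248M.
print(t5_param_estimate(768, 2048, 12, 64, 12))
```

Plugging in the `t5-base-dutch` row (d_ff=3072, plain relu, tied head) lands near its reported 223M, which is a useful cross-check that the table's dimensions are self-consistent.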
apache-2.0 | ['t5', 'seq2seq'] | false | Evaluation Most models from the list above have been fine-tuned for summarization and translation. The figure below shows the evaluation scores: the x-axis shows the translation Bleu score (higher is better) and the y-axis the summarization Rouge1 score (higher is better). Point size is proportional to model size. Models with faster inference speed are plotted in green, slower ones in blue.  Evaluation was run on fine-tuned models trained with the following settings:

| | Summarization | Translation |
|---------------:|------------------|-------------------|
| Dataset | CNN Dailymail NL | CCMatrix en -> nl |
| 1252021deb2ac44faff6c4cdb822d33f
apache-2.0 | ['t5', 'seq2seq'] | false |
| train samples | 50K | 50K |
| Optimizer | Adam | Adam |
| learning rate | 0.001 | 0.0005 |
| source length | 1024 | 128 |
| target length | 142 | 128 |
| label smoothing | 0.05 | 0.1 |
| b7e38794db11635f5a2ffa7dff2ce104 |
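The label-smoothing values in the settings above mix the usual cross-entropy with a uniform distribution over all classes, which discourages over-confident predictions. A per-token sketch (illustrative only, not the actual training code):

```python
import math

def smoothed_cross_entropy(log_probs, target, smoothing):
    """Label-smoothed cross-entropy for one token: mix the negative
    log-likelihood of the target with a uniform penalty over all classes."""
    nll = -log_probs[target]
    uniform = -sum(log_probs) / len(log_probs)
    return (1.0 - smoothing) * nll + smoothing * uniform

# A confident, correct prediction is penalised slightly more with smoothing
# than without, so the model is nudged away from saturated outputs.
log_probs = [math.log(0.90), math.log(0.05), math.log(0.05)]
print(smoothed_cross_entropy(log_probs, 0, 0.0))   # plain NLL
print(smoothed_cross_entropy(log_probs, 0, 0.1))   # with smoothing 0.1
```

With a perfectly uniform prediction the two terms coincide, so smoothing has no effect there; its influence grows as the model becomes more confident.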