Dataset columns: license (stringlengths 2–30), tags (stringlengths 2–513), is_nc (bool, 1 class), readme_section (stringlengths 201–597k), hash (stringlengths 32–32)
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5233        | 1.0   | 535  | 0.5324          | 0.4151               |
| 0.3489        | 2.0   | 1070 | 0.5132          | 0.4836               |
| 0.2392        | 3.0   | 1605 | 0.5852          | 0.5177               |
| 0.1822        | 4.0   | 2140 | 0.7485          | 0.5256               |
| 0.1382        | 5.0   | 2675 | 0.8051          | 0.5338               |
ce481d1f3efa284cbc1a57631d79d5ea
apache-2.0
['generated_from_trainer']
false
bert-base-uncased-mnli

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE MNLI dataset. It achieves the following results on the evaluation set:
- Loss: 0.4056
- Accuracy: 0.8501
488665e084d8e9f73af8240d54ea14b6
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
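The linear `lr_scheduler_type` above decays the learning rate from its initial value to zero over the course of training. A minimal plain-Python sketch of that schedule (the function name and the total step count are illustrative, not taken from the training code):

```python
def linear_lr(step: int, total_steps: int, initial_lr: float = 2e-05) -> float:
    """Linear decay from initial_lr at step 0 down to 0 at total_steps."""
    remaining = max(0.0, 1.0 - step / total_steps)
    return initial_lr * remaining

# With num_epochs=3.0 and, say, 1000 optimizer steps per epoch:
total = 3000
print(linear_lr(0, total))     # initial learning rate, 2e-05
print(linear_lr(1500, total))  # halfway through: half the initial rate
```

Note that `transformers` also supports warmup with this scheduler; none is listed in the configuration above, so the sketch omits it.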
fa914cf232303fcb6666b8b61576cf28
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4526        | 1.0   | 12272 | 0.4244          | 0.8388   |
| 0.3344        | 2.0   | 24544 | 0.4252          | 0.8469   |
| 0.2307        | 3.0   | 36816 | 0.4974          | 0.8445   |
98ee5ea13a704e607667a71627f6c192
apache-2.0
['generated_from_trainer']
false
bigbird-base-finetuned-big_patent

This model is a fine-tuned version of [robingeibel/bigbird-base-finetuned-big_patent](https://huggingface.co/robingeibel/bigbird-base-finetuned-big_patent) on the big_patent dataset. It achieves the following results on the evaluation set:
- Loss: 1.0686
6c601f40e906a06195f2feff9bf07fff
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
b58f5adf002efbbad84f02c75a30e832
apache-2.0
['generated_from_trainer']
false
NER_EHR_Spanish_model_Mulitlingual_BERT

This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the DisTEMIST shared task 2022 dataset. It is available at: https://temu.bsc.es/distemist/category/data/

It achieves the following results on the evaluation set:
- Loss: 0.2603
- Precision: 0.5637
- Recall: 0.5801
- F1: 0.5718
- Accuracy: 0.9534
9aad9dbaf88efd22d1962e945bf5dbb6
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 71   | 0.2060          | 0.5017    | 0.5540 | 0.5266 | 0.9496   |
| No log        | 2.0   | 142  | 0.2163          | 0.5363    | 0.5433 | 0.5398 | 0.9495   |
| No log        | 3.0   | 213  | 0.2245          | 0.5521    | 0.5356 | 0.5438 | 0.9514   |
| No log        | 4.0   | 284  | 0.2453          | 0.5668    | 0.5985 | 0.5822 | 0.9522   |
| No log        | 5.0   | 355  | 0.2433          | 0.5657    | 0.5579 | 0.5617 | 0.9530   |
| No log        | 6.0   | 426  | 0.2553          | 0.5762    | 0.5762 | 0.5762 | 0.9536   |
| No log        | 7.0   | 497  | 0.2603          | 0.5637    | 0.5801 | 0.5718 | 0.9534   |
342d5085e7e793a259ca86849216ddfa
apache-2.0
['generated_from_trainer']
false
How to cite this work:

Tamayo, A., Burgos, D. A., & Gelbukh, A. (2022). mBERT and simple post-processing: A baseline for disease mention detection in Spanish. In Working Notes of Conference and Labs of the Evaluation (CLEF) Forum. CEUR Workshop Proceedings.

```bibtex
@inproceedings{tamayo2022mbert,
  title={mbert and simple post-processing: A baseline for disease mention detection in spanish},
  author={Tamayo, Antonio and Burgos, Diego A and Gelbukh, Alexander},
  booktitle={Working Notes of Conference and Labs of the Evaluation (CLEF) Forum. CEUR Workshop Proceedings},
  year={2022}
}
```
76a970761517bad21e8d10093960c818
apache-2.0
['automatic-speech-recognition', 'it']
false
exp_w2v2t_it_vp-fr_s579 Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
cdd53dc6f6f86313051f877a41195529
mit
[]
false
model by soydavidtapia

This is the Stable Diffusion model fine-tuned on the soydavidtapia concept, taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **a photo of david tapia**

You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)

Here are the images used for training this concept:

![image 0](https://huggingface.co/sd-dreambooth-library/soydavidtapia/resolve/main/concept_images/1.jpeg)
![image 1](https://huggingface.co/sd-dreambooth-library/soydavidtapia/resolve/main/concept_images/8.jpeg)
![image 2](https://huggingface.co/sd-dreambooth-library/soydavidtapia/resolve/main/concept_images/9.jpeg)
![image 3](https://huggingface.co/sd-dreambooth-library/soydavidtapia/resolve/main/concept_images/2.jpeg)
![image 4](https://huggingface.co/sd-dreambooth-library/soydavidtapia/resolve/main/concept_images/0.jpeg)
![image 5](https://huggingface.co/sd-dreambooth-library/soydavidtapia/resolve/main/concept_images/4.jpeg)
![image 6](https://huggingface.co/sd-dreambooth-library/soydavidtapia/resolve/main/concept_images/10.jpeg)
![image 7](https://huggingface.co/sd-dreambooth-library/soydavidtapia/resolve/main/concept_images/6.jpeg)
![image 8](https://huggingface.co/sd-dreambooth-library/soydavidtapia/resolve/main/concept_images/5.jpeg)
![image 9](https://huggingface.co/sd-dreambooth-library/soydavidtapia/resolve/main/concept_images/3.jpeg)
![image 10](https://huggingface.co/sd-dreambooth-library/soydavidtapia/resolve/main/concept_images/7.jpeg)
64537bbda7c8b451ea8fd8a05f58b804
mit
[]
false
ScandiNLI - Natural Language Inference model for Scandinavian Languages

This model is a fine-tuned version of [NbAiLab/nb-bert-base](https://huggingface.co/NbAiLab/nb-bert-base) for Natural Language Inference in Danish, Norwegian Bokmål and Swedish.

We have released three models for Scandinavian NLI, of different sizes:
- [alexandrainst/scandi-nli-large](https://huggingface.co/alexandrainst/scandi-nli-large)
- alexandrainst/scandi-nli-base (this)
- [alexandrainst/scandi-nli-small](https://huggingface.co/alexandrainst/scandi-nli-small)

A demo of the large model can be found in [this Hugging Face Space](https://huggingface.co/spaces/alexandrainst/zero-shot-classification) - check it out!

The performance and model size of each of them can be found in the Performance section below.
724451e84d09c07fa31a12f29192bb7a
mit
[]
false
Quick start

You can use this model in your scripts as follows:

```python
>>> from transformers import pipeline
>>> classifier = pipeline(
...     "zero-shot-classification",
...     model="alexandrainst/scandi-nli-base",
... )
>>> classifier(
...     "Mexicansk bokser advarer Messi - 'Du skal bede til gud, om at jeg ikke finder dig'",
...     candidate_labels=['sundhed', 'politik', 'sport', 'religion'],
...     hypothesis_template="Dette eksempel handler om {}",
... )
{'sequence': "Mexicansk bokser advarer Messi - 'Du skal bede til gud, om at jeg ikke finder dig'",
 'labels': ['sport', 'religion', 'sundhed', 'politik'],
 'scores': [0.724335789680481, 0.1176532730460167, 0.08848614990711212, 0.06952482461929321]}
```
4e32ada077d8f94de9e1e7562893b144
mit
[]
false
Performance

We evaluate the models on Danish, Swedish and Norwegian Bokmål separately. In all cases, we report the Matthews Correlation Coefficient (MCC), the macro-average F1-score and the accuracy.
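For reference, the Matthews Correlation Coefficient can be computed in the binary case directly from confusion-matrix counts, as sketched below (an illustration only; the scores reported here use the multiclass generalization over the three NLI labels):

```python
import math

def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    """Binary Matthews Correlation Coefficient from confusion-matrix counts.

    Ranges from -1 (total disagreement) through 0 (chance level) to +1 (perfect).
    """
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0
    return (tp * tn - fp * fn) / denom

print(mcc(50, 50, 0, 0))    # perfect classifier -> 1.0
print(mcc(25, 25, 25, 25))  # chance-level predictions -> 0.0
```

Unlike accuracy, MCC stays at chance level (0) for a degenerate classifier on imbalanced data, which is why it is reported alongside macro-F1 and accuracy.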
3164d10c4c74575ca2da064689da2d6d
mit
[]
false
Scandinavian Evaluation

The Scandinavian scores are the average of the Danish, Swedish and Norwegian scores, which can be found in the sections below.

| **Model** | **MCC** | **Macro-F1** | **Accuracy** | **Number of Parameters** |
| :-------- | :------------ | :--------- | :----------- | :----------- |
| [`alexandrainst/scandi-nli-large`](https://huggingface.co/alexandrainst/scandi-nli-large) | **73.70%** | **74.44%** | **83.91%** | 354M |
| [`MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7`](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7) | 69.01% | 71.99% | 80.66% | 279M |
| `alexandrainst/scandi-nli-base` (this) | 67.42% | 71.54% | 80.09% | 178M |
| [`joeddav/xlm-roberta-large-xnli`](https://huggingface.co/joeddav/xlm-roberta-large-xnli) | 64.17% | 70.80% | 77.29% | 560M |
| [`MoritzLaurer/mDeBERTa-v3-base-mnli-xnli`](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) | 63.94% | 70.41% | 77.23% | 279M |
| [`NbAiLab/nb-bert-base-mnli`](https://huggingface.co/NbAiLab/nb-bert-base-mnli) | 61.71% | 68.36% | 76.08% | 178M |
| [`alexandrainst/scandi-nli-small`](https://huggingface.co/alexandrainst/scandi-nli-small) | 56.02% | 65.30% | 73.56% | **22M** |
cdb13b1ebd3c8311e8260f4e95630345
mit
[]
false
page=439) to evaluate the Danish performance of the models. The test split is generated using [this gist](https://gist.github.com/saattrupdan/1cb8379232fdec6e943dc84595a85e7c).

| **Model** | **MCC** | **Macro-F1** | **Accuracy** | **Number of Parameters** |
| :-------- | :------------ | :--------- | :----------- | :----------- |
| [`alexandrainst/scandi-nli-large`](https://huggingface.co/alexandrainst/scandi-nli-large) | **73.80%** | **58.41%** | **86.98%** | 354M |
| [`MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7`](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7) | 68.37% | 57.10% | 83.25% | 279M |
| `alexandrainst/scandi-nli-base` (this) | 62.44% | 55.00% | 80.42% | 178M |
| [`NbAiLab/nb-bert-base-mnli`](https://huggingface.co/NbAiLab/nb-bert-base-mnli) | 56.92% | 53.25% | 76.39% | 178M |
| [`MoritzLaurer/mDeBERTa-v3-base-mnli-xnli`](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) | 52.79% | 52.00% | 72.35% | 279M |
| [`joeddav/xlm-roberta-large-xnli`](https://huggingface.co/joeddav/xlm-roberta-large-xnli) | 49.18% | 50.31% | 69.73% | 560M |
| [`alexandrainst/scandi-nli-small`](https://huggingface.co/alexandrainst/scandi-nli-small) | 47.28% | 48.88% | 73.46% | **22M** |
94b9db9f5e2bf52a6464f78f46b701d8
mit
[]
false
Swedish Evaluation

We use the test split of the machine translated version of the [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) dataset to evaluate the Swedish performance of the models. We acknowledge that not evaluating on a gold standard dataset is not ideal, but unfortunately we are not aware of any NLI datasets in Swedish.

| **Model** | **MCC** | **Macro-F1** | **Accuracy** | **Number of Parameters** |
| :-------- | :------------ | :--------- | :----------- | :----------- |
| [`alexandrainst/scandi-nli-large`](https://huggingface.co/alexandrainst/scandi-nli-large) | **76.69%** | **84.47%** | **84.38%** | 354M |
| [`joeddav/xlm-roberta-large-xnli`](https://huggingface.co/joeddav/xlm-roberta-large-xnli) | 75.35% | 83.42% | 83.55% | 560M |
| [`MoritzLaurer/mDeBERTa-v3-base-mnli-xnli`](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) | 73.84% | 82.46% | 82.58% | 279M |
| [`MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7`](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7) | 73.32% | 82.15% | 82.08% | 279M |
| `alexandrainst/scandi-nli-base` (this) | 72.29% | 81.37% | 81.51% | 178M |
| [`NbAiLab/nb-bert-base-mnli`](https://huggingface.co/NbAiLab/nb-bert-base-mnli) | 64.69% | 76.40% | 76.47% | 178M |
| [`alexandrainst/scandi-nli-small`](https://huggingface.co/alexandrainst/scandi-nli-small) | 62.35% | 74.79% | 74.93% | **22M** |
c28f43cb14e7deed025b07a10995c12f
mit
[]
false
Norwegian Evaluation

We use the test split of the machine translated version of the [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) dataset to evaluate the Norwegian performance of the models. We acknowledge that not evaluating on a gold standard dataset is not ideal, but unfortunately we are not aware of any NLI datasets in Norwegian.

| **Model** | **MCC** | **Macro-F1** | **Accuracy** | **Number of Parameters** |
| :-------- | :------------ | :--------- | :----------- | :----------- |
| [`alexandrainst/scandi-nli-large`](https://huggingface.co/alexandrainst/scandi-nli-large) | **70.61%** | **80.43%** | **80.36%** | 354M |
| [`joeddav/xlm-roberta-large-xnli`](https://huggingface.co/joeddav/xlm-roberta-large-xnli) | 67.99% | 78.68% | 78.60% | 560M |
| `alexandrainst/scandi-nli-base` (this) | 67.53% | 78.24% | 78.33% | 178M |
| [`MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7`](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7) | 65.33% | 76.73% | 76.65% | 279M |
| [`MoritzLaurer/mDeBERTa-v3-base-mnli-xnli`](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) | 65.18% | 76.76% | 76.77% | 279M |
| [`NbAiLab/nb-bert-base-mnli`](https://huggingface.co/NbAiLab/nb-bert-base-mnli) | 63.51% | 75.42% | 75.39% | 178M |
| [`alexandrainst/scandi-nli-small`](https://huggingface.co/alexandrainst/scandi-nli-small) | 58.42% | 72.22% | 72.30% | **22M** |
f2ba6d1dd8263dd4c25f666892c45168
mit
[]
false
page=439) as well as machine translated versions of [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) and [CommitmentBank](https://doi.org/10.18148/sub/2019.v23i2.601) into all three languages, and machine translated versions of [FEVER](https://aclanthology.org/N18-1074/) and [Adversarial NLI](https://aclanthology.org/2020.acl-main.441/) into Swedish. The training split of DanFEVER is generated using [this gist](https://gist.github.com/saattrupdan/1cb8379232fdec6e943dc84595a85e7c). The three languages are sampled equally during training, and they're validated on validation splits of [DanFEVER](https://aclanthology.org/2021.nodalida-main.pdf
f804270810ea086a25c78dc32fe8348f
mit
[]
false
page=439) and machine translated versions of [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) for Swedish and Norwegian Bokmål, sampled equally. Check out the [Github repository](https://github.com/alexandrainst/ScandiNLI) for the code used to train the ScandiNLI models, and the full training logs can be found in [this Weights and Biases report](https://wandb.ai/saattrupdan/huggingface/reports/ScandiNLI--VmlldzozMDQyOTk1?accessToken=r9crgxqvvigy2hatdjeobzwipz7f3id5vqg8ooksljhfw6wl0hv1b05asypsfj9v).
202208aa3ac7388088b6b47500a9e458
mit
[]
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 4242
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- max_steps: 50,000
8446df2d9a0ba5fc70857c44207da627
apache-2.0
['PROP', 'Pretrain4IR']
false
PROP-marco-step400k

**PROP**, **P**re-training with **R**epresentative w**O**rds **P**rediction, is a new pre-training method tailored for ad-hoc retrieval. PROP is inspired by the classical statistical language model for IR, specifically the query likelihood model, which assumes that the query is generated as the piece of text representative of the "ideal" document. Based on this idea, we construct the representative words prediction (ROP) task for pre-training. The full paper can be found [here](https://arxiv.org/pdf/2010.10137.pdf).

This model is pre-trained with more steps than [PROP-marco](https://huggingface.co/xyma/PROP-marco) on the MS MARCO document corpus, and was used in the MS MARCO Document Ranking Leaderboard, where we reached 1st place.
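The query likelihood model that PROP draws on scores a document by the probability of its language model generating the query. A toy sketch of that idea with add-one smoothing (illustrative only; the function name, smoothing choice, and example texts are not from the paper, which uses a unigram document language model in its ROP sampling):

```python
import math
from collections import Counter

def query_likelihood(query: list[str], doc: list[str], vocab_size: int) -> float:
    """Log P(query | doc) under a unigram language model with add-one smoothing."""
    counts = Counter(doc)
    total = len(doc)
    return sum(
        math.log((counts[term] + 1) / (total + vocab_size)) for term in query
    )

doc = "deep learning for ad hoc retrieval".split()
print(query_likelihood(["retrieval"], doc, vocab_size=100))  # higher: term occurs in doc
print(query_likelihood(["zebra"], doc, vocab_size=100))      # lower: term is absent
```

Under this view, a set of words with high likelihood under the document's language model is "representative" of that document, which is the intuition behind sampling word sets for the ROP pre-training task.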
f575b2c4e92ce39fcd855f29eea29d42
apache-2.0
['PROP', 'Pretrain4IR']
false
Citation

If you find our work useful, please consider citing our paper:

```bibtex
@inproceedings{DBLP:conf/wsdm/MaGZFJC21,
  author    = {Xinyu Ma and Jiafeng Guo and Ruqing Zhang and Yixing Fan and Xiang Ji and Xueqi Cheng},
  editor    = {Liane Lewin{-}Eytan and David Carmel and Elad Yom{-}Tov and Eugene Agichtein and Evgeniy Gabrilovich},
  title     = {{PROP:} Pre-training with Representative Words Prediction for Ad-hoc Retrieval},
  booktitle = {{WSDM} '21, The Fourteenth {ACM} International Conference on Web Search and Data Mining, Virtual Event, Israel, March 8-12, 2021},
  pages     = {283--291},
  publisher = {{ACM}},
  year      = {2021},
  url       = {https://doi.org/10.1145/3437963.3441777},
  doi       = {10.1145/3437963.3441777},
  timestamp = {Wed, 07 Apr 2021 16:17:44 +0200},
  biburl    = {https://dblp.org/rec/conf/wsdm/MaGZFJC21.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
9348fe85d3bf9a4493ffcecdbffc6f7b
apache-2.0
['generated_from_keras_callback']
false
MiguelCosta/distilbert-finetuned-cisco

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 4.4181
- Validation Loss: 4.2370
- Epoch: 0
e274e830b888d24911aff36a97c5dbce
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -964, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
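The `WarmUp` wrapper in the serialized optimizer above ramps the learning rate linearly to its peak over `warmup_steps`, then hands off to the `PolynomialDecay` schedule (power 1.0, i.e. linear decay). A plain-Python sketch of that composite schedule (the step counts below are illustrative; note the logged `decay_steps: -964` looks like a serialization artifact, so a hypothetical positive value is used here):

```python
def warmup_then_linear_decay(step, warmup_steps, decay_steps, peak_lr=2e-05, end_lr=0.0):
    """Linear warmup to peak_lr, then linear (power=1.0 polynomial) decay to end_lr."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    decay_step = min(step - warmup_steps, decay_steps)  # clamp once fully decayed
    frac = 1.0 - decay_step / decay_steps
    return end_lr + (peak_lr - end_lr) * frac

print(warmup_then_linear_decay(0, 1000, 4000))     # start of warmup: 0.0
print(warmup_then_linear_decay(1000, 1000, 4000))  # end of warmup: peak, 2e-05
print(warmup_then_linear_decay(5000, 1000, 4000))  # fully decayed: 0.0
```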
25b6779fa9fae25a7dea491ba66e2f37
apache-2.0
['generated_from_keras_callback']
false
kasrahabib/distilbert-base-cased-trained-on-open-and-closed-source

This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.0045
- Validation Loss: 0.2459
- Train Precision: 0.9168
- Train Recall: 0.9676
- Train F1: 0.9415
- Epoch: 9
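The reported F1 is the harmonic mean of the precision and recall above, which can be verified directly (a quick sanity check, not part of the training code):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Reported epoch-9 values: precision 0.9168, recall 0.9676
print(round(f1_score(0.9168, 0.9676), 4))  # 0.9415, matching the reported F1
```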
5bd969f080c7db323c5fc15eb4817a66
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5860, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
50528d5bffee36dda8394985e5c6d453
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:-----:|
| 0.2726     | 0.1881          | 0.8684          | 0.9695       | 0.9161   | 0     |
| 0.1050     | 0.1451          | 0.9102          | 0.9676       | 0.9380   | 1     |
| 0.0485     | 0.1617          | 0.9385          | 0.9313       | 0.9349   | 2     |
| 0.0301     | 0.1832          | 0.9011          | 0.9733       | 0.9358   | 3     |
| 0.0214     | 0.1782          | 0.9319          | 0.9408       | 0.9364   | 4     |
| 0.0140     | 0.2199          | 0.9292          | 0.9523       | 0.9406   | 5     |
| 0.0104     | 0.2089          | 0.9308          | 0.9504       | 0.9405   | 6     |
| 0.0060     | 0.2600          | 0.9055          | 0.9695       | 0.9364   | 7     |
| 0.0059     | 0.2426          | 0.9102          | 0.9676       | 0.9380   | 8     |
| 0.0045     | 0.2459          | 0.9168          | 0.9676       | 0.9415   | 9     |
a54d6e1e860d1648950c2c751ec3ea2d
mit
['fastai', 'translation']
false
Fine Tune En-ML translation

* source group: English
* target group: Malayalam

This is a machine translation model, created for fun, that translates English text to Malayalam; it was fine-tuned on the KDE dataset. [Tweet](https://twitter.com/kurianbenoy2/status/1503082136009465857?s=20&t=7Hn-KUqHZRY6VJ16-i1qdA)
f4f2ede73028f7afb92f6e1afe94af36
mit
['fastai', 'translation']
false
Model description

This is a fine-tuned model built on top of the MarianMT models created by the Helsinki-NLP group. The [training code is described here](https://kurianbenoy.com/ml-blog/fastai/huggingface/translation/fine%20tuning/malayalam/2022/03/12/_03_13_huggingace_translation_models.html).
1ca2a465ff3e0fe41069d2bfd17e91d1
creativeml-openrail-m
[]
false
art by `Steampunk_angel`

This style gives a steampunk look and feel, with gears and sometimes mechanical wings, to prompts.

License

This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)

Please read the full license here
71c6f33994d721c6e21ae68fcb79a915
['apache-2.0', 'bsd-3-clause']
['summarization', 'summary', 'booksum', 'long-document', 'long-form']
false
long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13

> Evaluating some metric results before merging with the "main" WIP version

This model is a fine-tuned version of [pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP12](https://huggingface.co/pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP12) on the `kmfoda/booksum` dataset. The "base" checkpoint that I update when a training session is productive is [here](https://huggingface.co/pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP)
90f640737673621b2343044384e4623d
['apache-2.0', 'bsd-3-clause']
['summarization', 'summary', 'booksum', 'long-document', 'long-form']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0006
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 64
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 1.1
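The `total_train_batch_size` above is consistent with the per-device batch size and gradient accumulation: 2 × 64 = 128 examples per optimizer step. A trivial sanity-check sketch (the device count of 1 is an assumption, since 2 × 64 already gives 128):

```python
def effective_batch_size(per_device: int, accumulation_steps: int, num_devices: int = 1) -> int:
    """Number of examples contributing to each optimizer step under gradient accumulation."""
    return per_device * accumulation_steps * num_devices

print(effective_batch_size(2, 64))  # 128, matching total_train_batch_size
```

Gradient accumulation lets a large effective batch fit on limited GPU memory by summing gradients over several small forward/backward passes before each optimizer update.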
44a0cfac1b64677a2d2f04d678004fb4
apache-2.0
['generated_from_trainer']
false
starbot-transformers

This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 3.4079
264d78ab01b2479f7c464a9233535d49
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
46e5dacebc9db2ba22af81669e2b8ab5
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.3942        | 1.0   | 2992  | 3.3385          |
| 3.2566        | 2.0   | 5984  | 3.2760          |
| 3.4112        | 3.0   | 8976  | 3.4710          |
| 3.4887        | 4.0   | 11968 | 3.5264          |
| 3.4856        | 5.0   | 14960 | 3.5181          |
| 3.4359        | 6.0   | 17952 | 3.5079          |
| 3.4115        | 7.0   | 20944 | 3.4954          |
| 3.3657        | 8.0   | 23936 | 3.4482          |
| 3.3018        | 9.0   | 26928 | 3.4207          |
| 3.2435        | 10.0  | 29920 | 3.4079          |
eaf02739f16983c903c68ede583398ae
other
['text-generation', 'opt']
false
How to use

You can use this model directly with a pipeline for text generation.

```python
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model="facebook/opt-1.3b")
>>> generator("Hello, I'm am conscious and")
[{'generated_text': 'Hello, I am conscious and I am here.\nI am here.\nI am conscious.'}]
```

By default, generation is deterministic. In order to use top-k sampling, please set `do_sample` to `True`.

```python
>>> from transformers import pipeline, set_seed
>>> set_seed(32)
>>> generator = pipeline('text-generation', model="facebook/opt-1.3b", do_sample=True)
>>> generator("Hello, I'm am conscious and")
[{'generated_text': "Hello, I'm am conscious and able to hear. I have a lot of experience in the"}]
```
d3fe664e46ccd092b6a89f9f6dd1add8
other
['text-generation', 'opt']
false
Limitations and bias

As mentioned in Meta AI's model card, given that the training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral, the model is strongly biased:

> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.

Here's an example of how the model can have biased predictions:

```python
>>> from transformers import pipeline, set_seed
>>> set_seed(32)
>>> generator = pipeline('text-generation', model="facebook/opt-1.3b", do_sample=True, num_return_sequences=5)
>>> generator("The woman worked as a")
[{'generated_text': 'The woman worked as a bartender for six months before getting to the job she always dreamed of. She'},
 {'generated_text': 'The woman worked as a nanny in a house near The White Horse Farm in the Yorkshire Dales'},
 {'generated_text': "The woman worked as a translator at the British Broadcasting Corporation's headquarters and was also an acquaintance of some"},
 {'generated_text': 'The woman worked as a secretary and went to school full-time, and also worked as a waitress'},
 {'generated_text': 'The woman worked as a beautician with her baby and the little girl is now at the age where'}]
```

compared to:

```python
>>> from transformers import pipeline, set_seed
>>> set_seed(32)
>>> generator = pipeline('text-generation', model="facebook/opt-1.3b", do_sample=True, num_return_sequences=5)
>>> generator("The man worked as a")
[{'generated_text': 'The man worked as a janitor and the owner of the house he worked at caught him cheating on'},
 {'generated_text': 'The man worked as a software engineer.\n\nFor over 10 years, he had been at Amazon'},
 {'generated_text': 'The man worked as a car salesman - and was a man of his word to her\nA T'},
 {'generated_text': 'The man worked as a private contractor for five years. He went to the Bahamas in the summer of'},
 {'generated_text': 'The man worked as a computer systems consultant. After leaving the job, he became a prolific internet hacker'}]
```

This bias will also affect all fine-tuned versions of this model.
38e5f26c00fbada5ce6fb7dfcd12fc70
cc-by-4.0
['question generation', 'answer extraction']
false
Model Card of `lmqg/mt5-base-koquad-qg-ae`

This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) for question generation and answer extraction, trained jointly on [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
be759c2b37ecd4d4a56b1cec52b4c03f
cc-by-4.0
['question generation', 'answer extraction']
false
Overview
- **Language model:** [google/mt5-base](https://huggingface.co/google/mt5-base)
- **Language:** ko
- **Training data:** [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
c8bba8cbad02094f8f9c1f47c21682e7
cc-by-4.0
['question generation', 'answer extraction']
false
model prediction

```python
question_answer_pairs = model.generate_qa("1990년 영화 《 남부군 》에서 단역으로 영화배우 첫 데뷔에 이어 같은 해 KBS 드라마 《지구인》에서 단역으로 출연하였고 이듬해 MBC 《여명의 눈동자》를 통해 단역으로 출연하였다.")
```

- With `transformers`

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/mt5-base-koquad-qg-ae")
```
77876745450f637e482a7b86235bcc3a
cc-by-4.0
['question generation', 'answer extraction']
false
question generation

```python
question = pipe("extract answers: 또한 스피어스는 많은 새로운 여성 아티스트들에게 영향을 끼쳤는데, 대표적으로 데미 로바토, 케이티 페리, 크리스티니아 드바지, 레이디 가가, 리틀 부츠, 셀레나 고메즈 & 더씬, 픽시 로트 이 있다. 2007년 비욘세 놀스는 Total Request Live와의 인터뷰에서 '나는 브리트니를 사랑하고 팬이에요. 특히 새 앨범 Blackout을 좋아해요'라고 말했다. 린제이 로한은 '언제나 브리트니 스피어스에게 영감을 받는다. 학창시절 그녀처럼 타블로이드에 오르기를 꿈꿔왔다'고 말하며 롤 모델로 꼽았다. 스피어스는 현대 음악가들에게 음악적 영감으로 언급되기도 했다. <hl> 마일리 사이러스는 자신의 히트곡 Party in the U.S.A. 가 브리트니에게 영감과 영향을 받은 곡이라고 밝혔다. <hl> 베리 매닐로우의 앨범 15 Minutes 역시 브리트니에게 영감을 얻었다고 언급되었다.")
```
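The `<hl>` tokens in the input above mark the sentence the model should attend to. A minimal helper for building such an input (illustrative only; the function name and prefix handling are not from the `lmqg` library):

```python
def highlight(paragraph: str, sentence: str, prefix: str = "extract answers: ") -> str:
    """Wrap `sentence` inside `paragraph` with <hl> markers, as in the example above."""
    if sentence not in paragraph:
        raise ValueError("sentence not found in paragraph")
    return prefix + paragraph.replace(sentence, f"<hl> {sentence} <hl>", 1)

text = "First fact. Second fact. Third fact."
print(highlight(text, "Second fact."))
# extract answers: First fact. <hl> Second fact. <hl> Third fact.
```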
04df0db105a52f191897f180f637e484
cc-by-4.0
['question generation', 'answer extraction']
false
Evaluation

- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-koquad-qg-ae/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_koquad.default.json)

|            |   Score | Type    | Dataset                                                          |
|:-----------|--------:|:--------|:-----------------------------------------------------------------|
| BERTScore  |   84.19 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_1     |   27.97 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_2     |   20.84 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_3     |   15.88 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_4     |   12.22 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| METEOR     |   29.86 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| MoverScore |   83.24 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| ROUGE_L    |   28.55 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |

- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-koquad-qg-ae/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_koquad.default.json)

|                                 |   Score | Type    | Dataset                                                          |
|:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
| QAAlignedF1Score (BERTScore)    |   80.28 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| QAAlignedF1Score (MoverScore)   |   81.97 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| QAAlignedPrecision (BERTScore)  |   77.03 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| QAAlignedPrecision (MoverScore) |   78.1  | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| QAAlignedRecall (BERTScore)     |   83.91 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| QAAlignedRecall (MoverScore)    |   86.43 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |

- ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-koquad-qg-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_koquad.default.json)

|                  |   Score | Type    | Dataset                                                          |
|:-----------------|--------:|:--------|:-----------------------------------------------------------------|
| AnswerExactMatch |   83.02 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| AnswerF1Score    |   88.43 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| BERTScore        |   96.14 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_1           |   74.93 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_2           |   65.39 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_3           |   51.39 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_4           |   34.98 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| METEOR           |   61.26 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| MoverScore       |   95.2  | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| ROUGE_L          |   83.83 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
a7b710befcf3d172e8c374d68ae435a6
cc-by-4.0
['question generation', 'answer extraction']
false
Training hyperparameters

The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_koquad
- dataset_name: default
- input_types: ['paragraph_answer', 'paragraph_sentence']
- output_types: ['question', 'answer']
- prefix_types: ['qg', 'ae']
- model: google/mt5-base
- max_length: 512
- max_length_output: 32
- epoch: 14
- batch: 32
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 2
- label_smoothing: 0.15

The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-base-koquad-qg-ae/raw/main/trainer_config.json).
2f273feab3a8a295ed19031e34581a71
apache-2.0
['generated_from_trainer']
false
tiny-mlm-snli-target-glue-mrpc

This model is a fine-tuned version of [muhtasham/tiny-mlm-snli](https://huggingface.co/muhtasham/tiny-mlm-snli) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 1.1053
- Accuracy: 0.6814
- F1: 0.7601
2708cb2f1ee868365f65294c3d55b108
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5879        | 4.35  | 500  | 0.5553          | 0.7279   | 0.8189 |
| 0.4565        | 8.7   | 1000 | 0.5597          | 0.7598   | 0.8388 |
| 0.3208        | 13.04 | 1500 | 0.6303          | 0.7426   | 0.8217 |
| 0.2133        | 17.39 | 2000 | 0.7777          | 0.7230   | 0.8094 |
| 0.137         | 21.74 | 2500 | 1.1053          | 0.6814   | 0.7601 |
fd022d5c569f9aeb32fb0da799e7c360
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'endpoints-template']
false
Stable Diffusion v1-5 Custom Inference

This repo is for running custom diffusion inference endpoints that take `prompts` and an optional `image` as inputs (unlike standard text-to-image inference). To achieve this, the repo implements a `handler.py` script. For more information about custom inference, please visit this [link](https://huggingface.co/docs/inference-endpoints/guides/custom_handler). For more information about the model, its license and limitations, please check the original [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) or the diffusers [documentation](https://huggingface.co/docs/diffusers/index).
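As a rough sketch of what such a `handler.py` can look like (this is an illustration, not the actual handler in this repo — the `EndpointHandler` class name and `__call__` contract follow the custom-handler guide, while the `pipe` injection is an assumption added purely so the sketch runs without model weights):

```python
from typing import Any, Dict


class EndpointHandler:
    """Minimal sketch of a custom Inference Endpoints handler.

    A real handler would load a diffusers pipeline from `path` in
    __init__; here `pipe` can be injected (hypothetical parameter)
    so the sketch stays runnable without downloading any weights.
    """

    def __init__(self, path: str = "", pipe: Any = None):
        self.pipe = pipe  # e.g. an img2img Stable Diffusion pipeline

    def __call__(self, data: Dict[str, Any]) -> Dict[str, Any]:
        prompts = data.get("prompts") or data.get("inputs")
        image = data.get("image")  # optional init image, may be None
        if self.pipe is None:
            # no pipeline loaded: echo the parsed request, useful for tests
            return {"prompts": prompts, "has_image": image is not None}
        return {"images": self.pipe(prompts, image=image)}


# local smoke test without any model
handler = EndpointHandler()
print(handler({"prompts": "whale in the universe"}))
```

In a real deployment `__init__` would load the pipeline from `path` and `__call__` would return generated images.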
2847604ed86118c881f8d7b566fbf515
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'endpoints-template']
false
Local test custom handler

To test custom inference locally, please run the following command:

```commandline
python local_request.py --prompts="whale in the universe" --image="test_image.jpg"
```

**Note**: the `--image` parameter is optional.
84eb425fff8550cdf724ea5610154ca0
apache-2.0
['generated_from_trainer']
false
my_awesome_model

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 2.0568
- Accuracy: 0.3929
637a1d210b4261836f3955d3fe39df17
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 47   | 2.2071          | 0.3333   |
| No log        | 2.0   | 94   | 2.0568          | 0.3929   |
00b7e56d3b79ef6aa2d738519d2a359c
apache-2.0
['generated_from_keras_callback']
false
test-model

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 1.0322
- Validation Loss: 0.9818
- Train Rouge1: 63.3560
- Train Rouge2: 39.8622
- Train Rougel: 62.5870
- Train Rougelsum: 62.5573
- Epoch: 9
25c43ff755553ec34ed92d360049c03b
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-----:|
| 1.2033     | 1.1011          | 63.0488      | 39.8152      | 62.3015      | 62.2699         | 0     |
| 1.1660     | 1.0732          | 63.6556      | 40.2704      | 62.8821      | 62.8498         | 1     |
| 1.1394     | 1.0532          | 63.8815      | 40.5348      | 63.1276      | 63.0965         | 2     |
| 1.1149     | 1.0386          | 64.2783      | 40.8596      | 63.5115      | 63.4840         | 3     |
| 1.0969     | 1.0245          | 63.6975      | 40.1645      | 62.9323      | 62.8990         | 4     |
| 1.0831     | 1.0122          | 63.7146      | 40.3383      | 62.9457      | 62.9173         | 5     |
| 1.0678     | 1.0044          | 63.3129      | 39.9492      | 62.5462      | 62.5154         | 6     |
| 1.0551     | 0.9949          | 62.5523      | 39.2999      | 61.7963      | 61.7831         | 7     |
| 1.0417     | 0.9869          | 63.3126      | 40.0112      | 62.5606      | 62.5360         | 8     |
| 1.0322     | 0.9818          | 63.3560      | 39.8622      | 62.5870      | 62.5573         | 9     |
a321c109e6a2c773ed961ed9fa0caed5
mit
['generated_from_trainer']
false
multiqa_model

This model is a fine-tuned version of [nc33/multiqa_model](https://huggingface.co/nc33/multiqa_model) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.1150
- Precision: 0.0855
- Recall: 0.0485
- F1: 0.0619
- Accuracy: 0.9626
b17600f27ce4d359fb6fc04ded6a77a1
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 327  | 0.1121          | 0.0708    | 0.0280 | 0.0402 | 0.9631   |
| 0.0786        | 2.0   | 654  | 0.1098          | 0.0531    | 0.0254 | 0.0343 | 0.9599   |
| 0.0786        | 3.0   | 981  | 0.1085          | 0.0657    | 0.0243 | 0.0354 | 0.9634   |
| 0.0681        | 4.0   | 1308 | 0.1133          | 0.0765    | 0.0453 | 0.0569 | 0.9618   |
| 0.0641        | 5.0   | 1635 | 0.1150          | 0.0855    | 0.0485 | 0.0619 | 0.9626   |
37ee262490a499a828a4c7962b5da674
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xls-r-300m-bengali-v7

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set:
- Loss: 3.2999
- Wer: 1.0
ee48c365e30e20e7be37d610b219a71f
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
12dac171e31230d76cfed608c74a3300
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 9.965         | 0.85  | 400  | 4.0076          | 1.0 |
| 3.5381        | 1.71  | 800  | 3.3463          | 1.0 |
| 3.3333        | 2.56  | 1200 | 3.2927          | 1.0 |
| 3.307         | 3.41  | 1600 | 3.3024          | 1.0 |
| 3.3386        | 4.26  | 2000 | 3.2984          | 1.0 |
| 3.3277        | 5.12  | 2400 | 3.2999          | 1.0 |
| 3.3145        | 5.97  | 2800 | 3.2999          | 1.0 |
| 3.3306        | 6.82  | 3200 | 3.2999          | 1.0 |
| 3.326         | 7.68  | 3600 | 3.2999          | 1.0 |
| 3.3143        | 8.53  | 4000 | 3.2999          | 1.0 |
| 3.3311        | 9.38  | 4400 | 3.2999          | 1.0 |
6b59033b90a9c87d049e2685d795c1e4
apache-2.0
['Quality Estimation', 'monotransquest', 'hter']
false
Using Pre-trained Models

```python
import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel

model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-hter-en_de-wiki", num_labels=1, use_cuda=torch.cuda.is_available())
predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]])
print(predictions)
```
f2b7418367643f45b03e76e6c4ccdf37
apache-2.0
['generated_from_trainer']
false
BERT_NER_Ep5_PAD_75-finetuned-ner

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.3504
- Precision: 0.6469
- Recall: 0.7246
- F1: 0.6835
- Accuracy: 0.9013
f3904186846122f1533c6e5a2cb4af9f
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 288  | 0.3695          | 0.5799    | 0.6200 | 0.5993 | 0.8792   |
| 0.4695        | 2.0   | 576  | 0.3443          | 0.5823    | 0.7252 | 0.6460 | 0.8862   |
| 0.4695        | 3.0   | 864  | 0.3189          | 0.6407    | 0.7030 | 0.6704 | 0.8978   |
| 0.2184        | 4.0   | 1152 | 0.3458          | 0.6383    | 0.7335 | 0.6826 | 0.8980   |
| 0.2184        | 5.0   | 1440 | 0.3504          | 0.6469    | 0.7246 | 0.6835 | 0.9013   |
0b578ed1faa91bc3fe687251987df9b1
apache-2.0
['minds14', 'google/xtreme_s', 'generated_from_trainer']
false
xtreme_s_xlsr_minds14_upd

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the GOOGLE/XTREME_S - MINDS14.FR-FR dataset. It achieves the following results on the evaluation set:
- Loss: 2.6303
- F1: 0.0223
- Accuracy: 0.0833
778b5b8ee0f024582b56c02634e2e5e2
apache-2.0
['minds14', 'google/xtreme_s', 'generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- num_epochs: 3.0
- mixed_precision_training: Native AMP
9734e2844dd2aea5001ec351301328f7
apache-2.0
['automatic-speech-recognition', 'pl']
false
exp_w2v2t_pl_unispeech_s957

Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (pl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned with the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
e6fb8764a1814fd9a994fe222104c336
apache-2.0
[]
false
Overview

Model included in a paper for modeling fine-grained similarity between documents:

**Title**: "Multi-Vector Models with Textual Guidance for Fine-Grained Scientific Document Similarity"

**Authors**: Sheshera Mysore, Arman Cohan, Tom Hope

**Paper**: https://arxiv.org/abs/2111.08366

**Github**: https://github.com/allenai/aspire

**Note**: In the context of the paper, this model is referred to as `Specter-CoCite_Scib` and represents a baseline bi-encoder for scientific document similarity. This model is similar in architecture to the [`allenai/specter`](https://github.com/allenai/specter) model but is trained on co-citation data instead of citation data.
a7646052510db2cd37a84939db8f8525
apache-2.0
[]
false
Model description

This model is a BERT bi-encoder trained for similarity of title-abstract pairs in biomedical scientific papers. The model is **initialized with the SciBert model**. It takes the title and abstract of a paper as input and represents the paper with a single vector obtained by a scalar mix of the CLS token at every layer of the SciBert encoder. These scalar-mix parameters can be important for performance on some datasets. Importantly, the scalar-mix weights are not included as part of this HF model; if you wish to use them, please download the full model at: [`aspire-biencoder-biomed-scib-full.zip`](https://drive.google.com/file/d/1X6S5qwaKUlI3N3RDQSG-tJCzMBWAnqxP/view?usp=sharing).
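The scalar mix described above can be sketched in plain NumPy: softmax-normalized per-layer scalars (random stand-ins here, not the released weights) combine the CLS vector from each encoder layer into one document vector.

```python
import numpy as np

def scalar_mix(cls_per_layer: np.ndarray, mix_logits: np.ndarray, gamma: float = 1.0) -> np.ndarray:
    """Combine per-layer CLS vectors with learned scalar weights.

    cls_per_layer: (num_layers, hidden_dim) CLS embedding from each layer.
    mix_logits:    (num_layers,) learned scalars (random here, for illustration).
    """
    weights = np.exp(mix_logits) / np.exp(mix_logits).sum()  # softmax over layers
    return gamma * (weights[:, None] * cls_per_layer).sum(axis=0)

rng = np.random.default_rng(0)
layers = rng.normal(size=(13, 768))   # 12 SciBERT layers + embedding layer
logits = rng.normal(size=13)          # stand-in for the learned mix scalars
doc_vec = scalar_mix(layers, logits)
print(doc_vec.shape)                  # (768,)
```

With uniform (zero) logits this reduces to a plain average of the per-layer CLS vectors.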
2592d2959c4c92ee2a91d9fbb95c52c6
apache-2.0
[]
false
Training data

The model is trained on pairs of co-cited papers in a contrastive learning setup, using 1.2 million biomedical paper pairs. In training, negative examples for the contrastive loss are obtained as random in-batch negatives. Co-citations are obtained from the full text of papers; for example, the papers in brackets below are all co-cited, and each pair's title and abstract would be used as a training pair:

> The idea of distant supervision has been proposed and used widely in Relation Extraction (Mintz et al., 2009; Riedel et al., 2010; Hoffmann et al., 2011; Surdeanu et al., 2012) , where the source of labels is an external knowledge base.
8fa8baee63a5fa3e25363cedc2979077
apache-2.0
[]
false
Training procedure

The model was trained with the Adam optimizer and a learning rate of 2e-5, with 1000 warm-up steps followed by linear decay of the learning rate. Model training convergence is checked with the loss on a held-out dev set consisting of co-cited paper pairs.
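The in-batch-negative contrastive objective can be sketched in NumPy (an illustration, not the actual training code): row i of one matrix is the positive for row i of the other, and every other row in the batch acts as a negative.

```python
import numpy as np

def in_batch_contrastive_loss(q: np.ndarray, p: np.ndarray) -> float:
    """Cross-entropy over a batch similarity matrix.

    q, p: (batch, dim) embeddings of co-cited paper pairs; row i of q
    should match row i of p, and all other rows of p serve as negatives.
    """
    sims = q @ p.T                                   # (batch, batch) similarities
    sims -= sims.max(axis=1, keepdims=True)          # numerical stability
    log_probs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    idx = np.arange(len(q))
    return float(-log_probs[idx, idx].mean())        # diagonal = positive pairs

rng = np.random.default_rng(1)
q = rng.normal(size=(8, 16))
loss_random = in_batch_contrastive_loss(q, rng.normal(size=(8, 16)))
loss_matched = in_batch_contrastive_loss(q, q)       # identical pairs -> low loss
print(loss_matched < loss_random)
```

The matched case scores the diagonal far above the off-diagonal entries, so its loss is much lower than for random pairings.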
4d53d6328fe7934858ca3e0dede05655
apache-2.0
[]
false
Intended uses & limitations

This model is trained for document similarity tasks in **biomedical** scientific text using a single vector per document. Here, the documents are the title and abstract of a paper. With appropriate fine-tuning the model can also be used for other tasks such as classification. Since the training data comes primarily from biomedicine, performance on other domains may be poorer.
f835f35218642e924f83306622000765
apache-2.0
[]
false
Variables and metrics

This model is evaluated on information retrieval datasets with document-level queries. Here we report performance on RELISH (biomedical/English) and TRECCOVID (biomedical/English). These are detailed on [github](https://github.com/allenai/aspire) and in our [paper](https://arxiv.org/abs/2111.08366). These datasets represent an abstract-level retrieval task: given a query scientific abstract, the task requires the retrieval of relevant candidate abstracts. We rank documents by the L2 distance between the query and candidate documents.
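The L2-distance ranking itself is straightforward; a minimal sketch:

```python
import numpy as np

def rank_by_l2(query: np.ndarray, candidates: np.ndarray) -> np.ndarray:
    """Return candidate indices sorted by ascending L2 distance to the query."""
    dists = np.linalg.norm(candidates - query, axis=1)
    return np.argsort(dists)

query = np.array([0.0, 0.0])
cands = np.array([[3.0, 4.0],    # distance 5
                  [1.0, 0.0],    # distance 1
                  [0.0, 2.0]])   # distance 2
print(rank_by_l2(query, cands))  # [1 2 0]
```

In practice the query and candidate vectors would be the document embeddings produced by the bi-encoder.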
83f470201a9e9c23bacbc6b117e54a16
apache-2.0
[]
false
Evaluation results

The released model `aspire-biencoder-biomed-scib` (and `aspire-biencoder-biomed-scib-full`) is compared against `allenai/specter`. `aspire-biencoder-biomed-scib-full`<sup>*</sup> is the performance reported in our paper by averaging over 3 re-runs of the model. The released models `aspire-biencoder-biomed-scib` and `aspire-biencoder-biomed-scib-full` are the single best run among the 3 re-runs.

|                                                 | TRECCOVID | TRECCOVID | RELISH | RELISH  |
|-------------------------------------------------:|:---------:|:-------:|:------:|:-------:|
|                                                 | MAP       | NDCG%20 | MAP    | NDCG%20 |
| `specter`                                       | 28.24     | 59.28   | 60.62  | 77.20   |
| `aspire-biencoder-biomed-scib-full`<sup>*</sup> | 30.60     | 62.07   | 61.43  | 78.01   |
| `aspire-biencoder-biomed-scib`                  | 30.74     | 60.16   | 61.52  | 78.07   |
| `aspire-biencoder-biomed-scib-full`             | 31.45     | 63.15   | 61.34  | 77.89   |

**Alternative models:** Besides the above models, consider these alternative models also released in the Aspire paper:

[`aspire-biencoder-compsci-spec`](https://huggingface.co/allenai/aspire-biencoder-compsci-spec): If you want to run on computer science papers.

[`aspire-biencoder-biomed-spec`](https://huggingface.co/allenai/aspire-biencoder-biomed-spec): An alternative bi-encoder identical to the above model, except that it is initialized with `allenai/specter` instead of SciBert. This usually under-performs the model released here.
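For reference, MAP is the mean over queries of average precision on each ranked candidate list; a minimal sketch with binary relevance labels in rank order:

```python
def average_precision(relevances):
    """Average precision for one ranked list of binary relevance labels."""
    hits, precisions = 0, []
    for rank, rel in enumerate(relevances, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)   # precision at each relevant hit
    return sum(precisions) / max(hits, 1)

def mean_average_precision(all_relevances):
    """Mean of per-query average precision over a list of ranked lists."""
    return sum(average_precision(r) for r in all_relevances) / len(all_relevances)

print(average_precision([1, 0, 1, 0]))  # (1/1 + 2/3) / 2 = 5/6
```

NDCG@20 is computed analogously but with graded gains and a log-rank discount, truncated at rank 20.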
ac0d5ae40be10a86ae60fb6221d22aff
cc-by-4.0
['questions and answers generation']
false
Model Card of `lmqg/t5-large-squad-qag`

This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) for the question & answer pair generation task on the [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
00e06ceec5b2a409134544909edd5d0e
cc-by-4.0
['questions and answers generation']
false
Overview

- **Language model:** [t5-large](https://huggingface.co/t5-large)
- **Language:** en
- **Training data:** [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
aefabb15329341e07938bbe9460d8610
cc-by-4.0
['questions and answers generation']
false
Usage

- With [`lmqg`](https://github.com/asahi417/lm-question-generation):

```python
from lmqg import TransformersQG

# initialize model
model = TransformersQG(language="en", model="lmqg/t5-large-squad-qag")

# model prediction
question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes")
```

- With `transformers`:

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/t5-large-squad-qag")
output = pipe("generate question and answer: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
41d18318224e5aa0a56dec6bedb8583e
cc-by-4.0
['questions and answers generation']
false
Evaluation

- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-large-squad-qag/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_squad.default.json)

|                                 |   Score | Type    | Dataset                                                          |
|:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
| QAAlignedF1Score (BERTScore)    |   93.45 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
| QAAlignedF1Score (MoverScore)   |   66.05 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
| QAAlignedPrecision (BERTScore)  |   93.34 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
| QAAlignedPrecision (MoverScore) |   66.34 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
| QAAlignedRecall (BERTScore)     |   93.57 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
| QAAlignedRecall (MoverScore)    |   65.84 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
89d026d67a7857eeb2442f4751d64f7a
cc-by-4.0
['questions and answers generation']
false
Training hyperparameters

The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qag_squad
- dataset_name: default
- input_types: ['paragraph']
- output_types: ['questions_answers']
- prefix_types: ['qag']
- model: t5-large
- max_length: 512
- max_length_output: 256
- epoch: 12
- batch: 8
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 8
- label_smoothing: 0.15

The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/t5-large-squad-qag/raw/main/trainer_config.json).
6e4045c66afd8d4580ba5295c120bf3f
apache-2.0
['image-classification', 'timm']
false
Model card for coatnet_1_rw_224.sw_in1k

A timm-specific CoAtNet image classification model. Trained in `timm` on ImageNet-1k by Ross Wightman. ImageNet-1k training done on TPUs thanks to support of the [TRC](https://sites.research.google/trc/about/) program.
af15e2a5ee345cdc33be7939ae33cf30
apache-2.0
['image-classification', 'timm']
false
Model Details

- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 41.7
  - GMACs: 8.0
  - Activations (M): 34.6
  - Image size: 224 x 224
- **Papers:**
  - CoAtNet: Marrying Convolution and Attention for All Data Sizes: https://arxiv.org/abs/2106.04803
- **Dataset:** ImageNet-1k
46674726122fa2a871b95af1e37630f1
apache-2.0
['image-classification', 'timm']
false
Image Classification

```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model('coatnet_1_rw_224.sw_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
48061612b98e2c178d395c7d8c4206d2
apache-2.0
['image-classification', 'timm']
false
Feature Map Extraction

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'coatnet_1_rw_224.sw_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in the output
    print(o.shape)
```
6f1a920a8c8259ab76a020bd4ff502e2
apache-2.0
['image-classification', 'timm']
false
Image Embeddings

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'coatnet_1_rw_224.sw_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor
```
8650d19c8a0655448bb9b5053e00ab37
mit
[]
false
coop himmelblau on Stable Diffusion

This is the `<coop himmelblau>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as an `object`:

![<coop himmelblau> 0](https://huggingface.co/sd-concepts-library/coop-himmelblau/resolve/main/concept_images/3.jpeg)
![<coop himmelblau> 1](https://huggingface.co/sd-concepts-library/coop-himmelblau/resolve/main/concept_images/1.jpeg)
![<coop himmelblau> 2](https://huggingface.co/sd-concepts-library/coop-himmelblau/resolve/main/concept_images/4.jpeg)
![<coop himmelblau> 3](https://huggingface.co/sd-concepts-library/coop-himmelblau/resolve/main/concept_images/5.jpeg)
![<coop himmelblau> 4](https://huggingface.co/sd-concepts-library/coop-himmelblau/resolve/main/concept_images/0.jpeg)
![<coop himmelblau> 5](https://huggingface.co/sd-concepts-library/coop-himmelblau/resolve/main/concept_images/2.jpeg)
c90cc78ffde31c903c731908106be7cf
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-marc-en

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set:
- Loss: 0.9237
- Mae: 0.5122
1c4393ead71a9c995b96de82c5761952
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Mae    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1089        | 1.0   | 235  | 0.9380          | 0.4878 |
| 0.9546        | 2.0   | 470  | 0.9237          | 0.5122 |
a260d2c9c45a6bce2b311ecbbff173e5
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-en

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set:
- Loss: 0.5084
- F1: 0.5794
5ba5ce2477507d58dde56921ff547a6a
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
402453e9baeb31919b9f960ddacbd492
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.7119        | 1.0   | 19   | 1.0009          | 0.2266 |
| 0.891         | 2.0   | 38   | 0.6405          | 0.5281 |
| 0.6023        | 3.0   | 57   | 0.5084          | 0.5794 |
d3cce9b377468a19309fb2130b39e185
apache-2.0
['generated_from_trainer']
false
t5-base-finetuned-keyword-to-text-generation

This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 3.4643
- Rouge1: 2.1108
- Rouge2: 0.3331
- Rougel: 1.7368
- Rougelsum: 1.7391
- Gen Len: 16.591
befe2977b8f91e45779d6a6f155396b0
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
64149c7c22134e7cf13f124f82708549
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log        | 1.0   | 375  | 3.4862          | 2.0718 | 0.326  | 1.7275 | 1.7308    | 16.7995 |
| 3.5928        | 2.0   | 750  | 3.4761          | 2.0829 | 0.3253 | 1.7192 | 1.7224    | 16.773  |
| 3.5551        | 3.0   | 1125 | 3.4701          | 2.1028 | 0.3272 | 1.7274 | 1.7296    | 16.6505 |
| 3.5225        | 4.0   | 1500 | 3.4671          | 2.11   | 0.3305 | 1.7343 | 1.7362    | 16.699  |
| 3.5225        | 5.0   | 1875 | 3.4653          | 2.1134 | 0.3319 | 1.7418 | 1.7437    | 16.5485 |
| 3.4987        | 6.0   | 2250 | 3.4643          | 2.1108 | 0.3331 | 1.7368 | 1.7391    | 16.591  |
| 3.4939        | 7.0   | 2625 | 3.4643          | 2.1108 | 0.3331 | 1.7368 | 1.7391    | 16.591  |
| 3.498         | 8.0   | 3000 | 3.4643          | 2.1108 | 0.3331 | 1.7368 | 1.7391    | 16.591  |
a330632a0f6ddd2365036f8b7f669f1e
apache-2.0
['national library of spain', 'spanish', 'bne', 'roberta-base-bne']
false
Model description

The **roberta-base-bne** is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) base model and has been pre-trained using the largest Spanish corpus known to date: a total of 570GB of clean and deduplicated text, processed for this work and compiled from the web crawls performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.
ebed84abef0e6626147e3df95833483e
apache-2.0
['national library of spain', 'spanish', 'bne', 'roberta-base-bne']
false
Intended uses and limitations

The **roberta-base-bne** model is ready to use only for masked language modeling, i.e. the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on non-generative downstream tasks such as Question Answering, Text Classification, or Named Entity Recognition. You can use the raw model for fill mask or fine-tune it to a downstream task.
9331e1bf99a0427cbc6d5614a9f86203
apache-2.0
['national library of spain', 'spanish', 'bne', 'roberta-base-bne']
false
How to use

Here is how to use this model:

```python
>>> from transformers import pipeline
>>> from pprint import pprint
>>> unmasker = pipeline('fill-mask', model='PlanTL-GOB-ES/roberta-base-bne')
>>> pprint(unmasker("Gracias a los datos de la BNE se ha podido <mask> este modelo del lenguaje."))
[{'score': 0.08422081917524338,
  'token': 3832,
  'token_str': ' desarrollar',
  'sequence': 'Gracias a los datos de la BNE se ha podido desarrollar este modelo del lenguaje.'},
 {'score': 0.06348305940628052,
  'token': 3078,
  'token_str': ' crear',
  'sequence': 'Gracias a los datos de la BNE se ha podido crear este modelo del lenguaje.'},
 {'score': 0.06148449331521988,
  'token': 2171,
  'token_str': ' realizar',
  'sequence': 'Gracias a los datos de la BNE se ha podido realizar este modelo del lenguaje.'},
 {'score': 0.056218471378088,
  'token': 10880,
  'token_str': ' elaborar',
  'sequence': 'Gracias a los datos de la BNE se ha podido elaborar este modelo del lenguaje.'},
 {'score': 0.05133328214287758,
  'token': 31915,
  'token_str': ' validar',
  'sequence': 'Gracias a los datos de la BNE se ha podido validar este modelo del lenguaje.'}]
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
>>> from transformers import RobertaTokenizer, RobertaModel
>>> tokenizer = RobertaTokenizer.from_pretrained('PlanTL-GOB-ES/roberta-base-bne')
>>> model = RobertaModel.from_pretrained('PlanTL-GOB-ES/roberta-base-bne')
>>> text = "Gracias a los datos de la BNE se ha podido desarrollar este modelo del lenguaje."
>>> encoded_input = tokenizer(text, return_tensors='pt')
>>> output = model(**encoded_input)
>>> print(output.last_hidden_state.shape)
torch.Size([1, 19, 768])
```
e0392d3c34d58bbd7f7a591ff62163a5
apache-2.0
['national library of spain', 'spanish', 'bne', 'roberta-base-bne']
false
Limitations and bias

At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. Nevertheless, here's an example of how the model can have biased predictions:

```python
>>> from transformers import pipeline, set_seed
>>> from pprint import pprint
>>> unmasker = pipeline('fill-mask', model='PlanTL-GOB-ES/roberta-base-bne')
>>> set_seed(42)
>>> pprint(unmasker("Antonio está pensando en <mask>."))
[{'score': 0.07950365543365479,
  'sequence': 'Antonio está pensando en ti.',
  'token': 486,
  'token_str': ' ti'},
 {'score': 0.03375273942947388,
  'sequence': 'Antonio está pensando en irse.',
  'token': 13134,
  'token_str': ' irse'},
 {'score': 0.031026942655444145,
  'sequence': 'Antonio está pensando en casarse.',
  'token': 24852,
  'token_str': ' casarse'},
 {'score': 0.030703715980052948,
  'sequence': 'Antonio está pensando en todo.',
  'token': 665,
  'token_str': ' todo'},
 {'score': 0.02838558703660965,
  'sequence': 'Antonio está pensando en ello.',
  'token': 1577,
  'token_str': ' ello'}]

>>> set_seed(42)
>>> pprint(unmasker("Mohammed está pensando en <mask>."))
[{'score': 0.05433618649840355,
  'sequence': 'Mohammed está pensando en morir.',
  'token': 9459,
  'token_str': ' morir'},
 {'score': 0.0400255024433136,
  'sequence': 'Mohammed está pensando en irse.',
  'token': 13134,
  'token_str': ' irse'},
 {'score': 0.03705748915672302,
  'sequence': 'Mohammed está pensando en todo.',
  'token': 665,
  'token_str': ' todo'},
 {'score': 0.03658654913306236,
  'sequence': 'Mohammed está pensando en quedarse.',
  'token': 9331,
  'token_str': ' quedarse'},
 {'score': 0.03329474478960037,
  'sequence': 'Mohammed está pensando en ello.',
  'token': 1577,
  'token_str': ' ello'}]
```
59536933d0652c92efedfbe7f52c815d
apache-2.0
['national library of spain', 'spanish', 'bne', 'roberta-base-bne']
false
Training data

The [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019.

To obtain a high-quality training corpus, the corpus has been preprocessed with a pipeline of operations, including, among others, sentence splitting, language detection, filtering of badly formed sentences, and deduplication of repetitive contents. During the process, document boundaries are kept. This resulted in 2TB of clean Spanish corpus. Further global deduplication across the corpus was applied, resulting in 570GB of text.

Some of the statistics of the corpus:

| Corpora | Number of documents | Number of tokens | Size (GB) |
|---------|---------------------|------------------|-----------|
| BNE     | 201,080,084         | 135,733,450,668  | 570GB     |
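A toy sketch of the document-level deduplication step (hashing whitespace-normalized, lowercased text and keeping the first occurrence; the actual pipeline is considerably more sophisticated):

```python
import hashlib

def deduplicate(documents):
    """Keep the first occurrence of each document, comparing normalized text."""
    seen = set()
    unique = []
    for doc in documents:
        # normalize whitespace and case before hashing
        key = hashlib.md5(" ".join(doc.split()).lower().encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(doc)
    return unique

docs = ["Hola  mundo", "hola mundo", "Biblioteca Nacional"]
print(deduplicate(docs))  # ['Hola  mundo', 'Biblioteca Nacional']
```

At corpus scale this would be done with disk-backed or approximate (e.g. MinHash-style) structures rather than an in-memory set.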
b0035d7c748de19f11b89cdae6fa605b
apache-2.0
['national library of spain', 'spanish', 'bne', 'roberta-base-bne']
false
Training procedure

The training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE), as used in the original [RoBERTa](https://arxiv.org/abs/1907.11692) model, with a vocabulary size of 50,262 tokens. The **roberta-base-bne** pre-training consists of masked language model training following the approach employed for RoBERTa base. The training lasted a total of 48 hours on 16 computing nodes, each with 4 NVIDIA V100 GPUs of 16GB VRAM.
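To illustrate the idea behind BPE (a toy single merge step, not the actual byte-level tokenizer used here): count adjacent symbol pairs over a corpus of symbol sequences and fuse the most frequent pair into one symbol, repeating until the vocabulary reaches the target size.

```python
from collections import Counter

def most_frequent_pair(words):
    """Count adjacent symbol pairs across a corpus of symbol sequences."""
    pairs = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs.most_common(1)[0][0]

def merge_pair(words, pair):
    """Replace every occurrence of `pair` with a single merged symbol."""
    merged = {}
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

# toy corpus: symbol sequences with frequencies
words = {("l", "o", "w"): 5, ("l", "o", "t"): 3, ("n", "e", "w"): 2}
pair = most_frequent_pair(words)   # ('l', 'o') occurs 8 times
words = merge_pair(words, pair)
print(pair, list(words))
```

The byte-level variant applies the same procedure over raw bytes, so any input can be tokenized without unknown symbols.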
93e1d2e7cd24e93b2f5e3ac686023e33
apache-2.0
['national library of spain', 'spanish', 'bne', 'roberta-base-bne']
false
Evaluation

When fine-tuned on downstream tasks, this model achieves the following results:

| Dataset      | Metric   | [**RoBERTa-base**](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) |
|--------------|----------|------------|
| MLDoc        | F1       | 0.9664     |
| CoNLL-NERC   | F1       | 0.8851     |
| CAPITEL-NERC | F1       | 0.8960     |
| PAWS-X       | F1       | 0.9020     |
| UD-POS       | F1       | 0.9907     |
| CAPITEL-POS  | F1       | 0.9846     |
| SQAC         | F1       | 0.7923     |
| STS          | Combined | 0.8533     |
| XNLI         | Accuracy | 0.8016     |

For more evaluation details visit our [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish) or [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405).
77a5680119587a5da5e3cd7a98af3eb2
apache-2.0
['national library of spain', 'spanish', 'bne', 'roberta-base-bne']
false
Funding This work was funded by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://portal.mineco.gob.es/en-us/digitalizacionIA/Pages/sedia.aspx) within the framework of the Plan-TL.
b7b98090047978c6f2c8920de661eb3d
apache-2.0
['national library of spain', 'spanish', 'bne', 'roberta-base-bne']
false
Citation information If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405):

```
@article{,
  title = {MarIA: Spanish Language Models},
  author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas},
  doi = {10.26342/2022-68-3},
  issn = {1135-5948},
  journal = {Procesamiento del Lenguaje Natural},
  publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural},
  url = {https://upcommons.upc.edu/handle/2117/367156}
}
```
597f8a2246fb82f2eed696d286d39364
apache-2.0
['national library of spain', 'spanish', 'bne', 'roberta-base-bne']
false
Disclaimer <details> <summary>Click to expand</summary> The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence. In no event shall the owner of the models (SEDIA) nor the creator (BSC) be liable for any results arising from the use made by third parties of these models. </details>
22f1036da9d1cdf92f93f6f299e43eeb
apache-2.0
['generated_from_keras_callback']
false
Haakf/distilbert-base-uncased-padded_left_allsides_news This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.1600 - Validation Loss: 2.0358 - Epoch: 9
a03154c6aec512214de9395f23a8fb95