Dataset columns:
- license — string (2–30 chars)
- tags — string (2–513 chars)
- is_nc — bool (1 class)
- readme_section — string (201–597k chars)
- hash — string (32 chars)
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0121 | 0.99 | 140 | 0.0001 | 1.0 |
| 0.0103 | 1.99 | 280 | 0.0001 | 1.0 |
| 0.0049 | 2.99 | 420 | 0.0000 | 1.0 |
f1ca9d24e25d81d1a2569304da791f6a
mit
['generated_from_trainer']
false
xtremedistil-l6-h256-uncased-TQUAD-finetuned_lr-2e-05_epochs-9

This model is a fine-tuned version of [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) on the Turkish SQuAD (TQuAD) dataset. It achieves the following results on the evaluation set:
- Loss: 2.2340
fa214500ae508cba9ddcc459246228f3
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
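The listed `lr_scheduler_type: linear` decays the learning rate from its peak to zero over training. A minimal sketch of that schedule in plain Python (assuming zero warmup, since none is listed; the 9 epochs correspond to 9450 steps in this card's training log):

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 2e-05) -> float:
    """Learning rate at `step` under a linear decay from base_lr down to 0."""
    remaining = max(0, total_steps - step)
    return base_lr * remaining / total_steps

total = 9450  # 9 epochs x 1050 steps, per this card's training log
print(linear_lr(0, total))      # peak rate at the start: 2e-05
print(linear_lr(total, total))  # fully decayed by the end: 0.0
```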
e42358b6bfbdc5b9b6cb0af7afe2f3b0
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.5236 | 1.0 | 1050 | 3.0042 |
| 2.8489 | 2.0 | 2100 | 2.5866 |
| 2.5485 | 3.0 | 3150 | 2.3526 |
| 2.4067 | 4.0 | 4200 | 2.3535 |
| 2.3091 | 5.0 | 5250 | 2.2862 |
| 2.2401 | 6.0 | 6300 | 2.3989 |
| 2.1715 | 7.0 | 7350 | 2.2284 |
| 2.1414 | 8.0 | 8400 | 2.2298 |
| 2.1221 | 9.0 | 9450 | 2.2340 |
b565d31ebab447b1b949fcc56a7ed74f
apache-2.0
['generated_from_trainer']
false
Full config {'dataset': {'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'], 'is_split_by_sentences': True}, 'generation': {'batch_size': 64, 'metrics_configs': [{}, {'n': 1}, {}], 'scenario_configs': [{'display_as_html': True, 'generate_kwargs': {'do_sample': True, 'eos_token_id': 0, 'max_length': 640, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 512}, {'display_as_html': True, 'generate_kwargs': {'do_sample': True, 'eos_token_id': 0, 'max_length': 272, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'functions', 'num_samples': 512, 'prompts_path': 'resources/functions_csnet.jsonl', 'use_prompt_for_scoring': True}], 'scorer_config': {}}, 'kl_gpt3_callback': {'gpt3_kwargs': {'model_name': 'code-cushman-001'}, 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'path_or_name': 'codeparrot/codeparrot-small'}, 'objective': {'name': 'MLE'}, 'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'kejian/final-mle', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0008, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000.0, 'output_dir': 'training_output', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 5000, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}}
fe4780759e800af071d92649f7f5f2bc
apache-2.0
['generated_from_trainer']
false
xlsr-english

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the librispeech_asr dataset. It achieves the following results on the evaluation set:
- Loss: 0.3098
- Wer: 0.1451
4806cb228be0a6e04df6c665b186af65
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.2453 | 2.37 | 400 | 0.5789 | 0.4447 |
| 0.3736 | 4.73 | 800 | 0.3737 | 0.2850 |
| 0.1712 | 7.1 | 1200 | 0.3038 | 0.2136 |
| 0.117 | 9.47 | 1600 | 0.3016 | 0.2072 |
| 0.0897 | 11.83 | 2000 | 0.3158 | 0.1920 |
| 0.074 | 14.2 | 2400 | 0.3137 | 0.1831 |
| 0.0595 | 16.57 | 2800 | 0.2967 | 0.1745 |
| 0.0493 | 18.93 | 3200 | 0.3192 | 0.1670 |
| 0.0413 | 21.3 | 3600 | 0.3176 | 0.1644 |
| 0.0322 | 23.67 | 4000 | 0.3079 | 0.1598 |
| 0.0296 | 26.04 | 4400 | 0.2978 | 0.1511 |
| 0.0235 | 28.4 | 4800 | 0.3098 | 0.1451 |
c8e693ef90a8c5d014d729b81d08c78d
mit
[]
false
Party girl on Stable Diffusion

This is the `<party-girl>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as an `object`:

![<party-girl> 0](https://huggingface.co/sd-concepts-library/party-girl/resolve/main/concept_images/5.jpeg)
![<party-girl> 1](https://huggingface.co/sd-concepts-library/party-girl/resolve/main/concept_images/4.jpeg)
![<party-girl> 2](https://huggingface.co/sd-concepts-library/party-girl/resolve/main/concept_images/1.jpeg)
![<party-girl> 3](https://huggingface.co/sd-concepts-library/party-girl/resolve/main/concept_images/2.jpeg)
![<party-girl> 4](https://huggingface.co/sd-concepts-library/party-girl/resolve/main/concept_images/3.jpeg)
![<party-girl> 5](https://huggingface.co/sd-concepts-library/party-girl/resolve/main/concept_images/0.jpeg)
0129035a934d62a10df09eaf9ca62a76
unknown
[]
false
Age estimation in supermarkets

The model analyzed in this card estimates a person's age. The project was done for the master's programme Applied Artificial Intelligence and concerns estimating the age of customers who want to buy alcohol in supermarkets. The model's only goal is to estimate the age of a person in an image; it does not cover ethnicity or gender.
ef221e285d3dc1d158c5c8df01af98b4
unknown
[]
false
Model description

**Used dataset:** UTKFace images
- This dataset contains roughly 24K face images.
- The age of the person in a picture is labeled in the filename of that image.
- Since we have no use for baby images, we cut these from the dataset, leaving 21K images.

**Model input:** Facial images

**Model output:** For a face in a picture, the model returns the estimated age of that person, along with a confidence score for the estimation.

**Model architecture:** A Convolutional Neural Network (CNN) that performs regression to estimate ages.
bad43b2b3bc03a59c643764e3ea7b254
unknown
[]
false
Performance

To determine the performance of the model, the following metrics have been used:
- MSE, which measures how close the regression line is to the data points.
  - *Our model's MSE:* 60.9
- RMSE, which measures the mean error that can be made.
  - *Our model's RMSE:* 7.8
- MAE, a measure of model accuracy: the average error of the model's predictions compared with the corresponding actual targets.
  - *Our model's MAE:* 5.2

Ideally, the RMSE and the MAE should be close to each other; a big difference between them indicates variance in the individual errors. Our results show that the prediction model can be around 8 years off from the actual age of a person.

We also looked at how the model performs across age, gender and race classes. It predicted the ages of people between 20 and 30 better than the rest, predicted the ages of females better than those of males, and performed best on East Asian faces.
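The three reported metrics can be computed from paired actual/predicted ages as follows (the ages below are illustrative values, not the model's outputs; the card's reported scores are MSE 60.9, RMSE 7.8, MAE 5.2):

```python
import math

def regression_metrics(y_true, y_pred):
    """Return (MSE, RMSE, MAE) for paired actual and predicted ages."""
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errors) / len(errors)   # mean squared error
    rmse = math.sqrt(mse)                            # root of the MSE
    mae = sum(abs(e) for e in errors) / len(errors)  # mean absolute error
    return mse, rmse, mae

# Illustrative ages only.
actual = [25, 31, 42, 19]
predicted = [28, 30, 35, 22]
mse, rmse, mae = regression_metrics(actual, predicted)
print(mse, mae)  # 17.0 3.5
```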
77dcf2bbf4c109025b613226dfc2e0fb
unknown
[]
false
Limitations

- **Lighting** <br> When the lighting is poor, the age estimation can be poor as well.
- **Occlusion** <br> Partially hidden or obstructed faces might not be detected (e.g. face masks).
- **UTKFace** <br> The ages in this dataset are themselves estimates from a previous model. Since the exact ages of the people in the images are unknown, our model's labels are not fully reliable.
df8e997ea9a7c319e6707545e2e6e117
unknown
[]
false
Training and evaluation data

Train data: 70%
Test data: 30%

Our model was developed by trial and error; the outcome is the following architecture:
- Hidden layers: 7
- Batch size: 128
- Epochs: 65
- Optimizer: Adam
- Activation: ReLU & linear
c05919f66f397faaab0bd0b6aad6e4ea
apache-2.0
['speech']
false
Wav2Vec2-Conformer-Large with Rotary Position Embeddings

Wav2Vec2 Conformer with rotary position embeddings, pretrained on 960 hours of Librispeech on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz.

**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.

**Paper**: [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171)

**Authors**: Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino

The results of Wav2Vec2-Conformer can be found in Table 3 and Table 4 of the [official paper](https://arxiv.org/abs/2010.05171). The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec
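The 16 kHz requirement means audio at any other sample rate must be resampled before being fed to the model. A minimal sketch using `scipy` (assumed available here; `torchaudio` or `librosa` resamplers work equally well):

```python
import numpy as np
from scipy.signal import resample_poly

def to_16khz(waveform: np.ndarray, orig_sr: int) -> np.ndarray:
    """Resample a 1-D waveform to the 16 kHz rate the model expects."""
    if orig_sr == 16_000:
        return waveform
    # Polyphase resampling by the rational factor 16000 / orig_sr.
    return resample_poly(waveform, up=16_000, down=orig_sr)

# One second of 8 kHz audio becomes one second of 16 kHz audio (16000 samples).
audio_8k = np.zeros(8_000, dtype=np.float32)
audio_16k = to_16khz(audio_8k, 8_000)
```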
ed48e8a6b11e9f4a91e3eae520fad5aa
mit
['roberta', 'gottbert']
false
Overview

**Language model:** uklfr/gottbert-base <br>
**Language:** German <br>
**Training & Eval data:** [GARFAB2022Weighted](https://huggingface.co/datasets/julius-br/GARFAB) <br>
**Published:** September 21st, 2022 <br>
**Author:** Julius Breiholz
d239fd910b90b95575644560452d013e
mit
['roberta', 'gottbert']
false
Performance

| Label | Precision | Recall | F1-Score |
| --- | --- | --- | --- |
| Irrelevant | 0.95 | 0.91 | 0.93 |
| Bug Report | 0.82 | 0.91 | 0.86 |
| Feature Request | 0.87 | 0.82 | 0.85 |
| all classes (avg.) | 0.88 | 0.88 | 0.88 |
a30913e87fc8d8fd2701bb789ed72a1f
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the NST dataset. Training was aborted after 6000 steps / 0.4 epochs, as the results weren't promising when manually evaluated on an SVT broadcast. Punctuation, capitalization and entities such as "Norge" seem worse than in the original model, so the dataset probably needs fixing before further training. The test dataset was re-split to contain a thousand samples so that evaluation didn't take hours.
3349274a73d5474b14636f03ad1d15b8
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training results

| Step | Wer |
|:----:|:----:|
| 1000 | 9.42 |
| 2000 | 8.13 |
| 3000 | 7.27 |
| 4000 | 7.05 |
| 5000 | 6.60 |
| 6000 | 6.49 |

Source audio: https://www.youtube.com/watch?v=9XLHas6oD_E

This model:
```
[00:00:00.000 --> 00:00:03.040] Ta nu ett djupt andetag för er kan inte alla göra.
[00:00:03.040 --> 00:00:11.840] För de allra flesta så är det en självklarhet att kunna andas utan större problem, men har man lomsjukdomens hysterisk fibrås är det inte så.
[00:00:11.840 --> 00:00:16.240] Nu finns en ny medicin, men den är inte subventionerad i Sverige.
[00:00:16.240 --> 00:00:22.960] Nej, om man vill kunna andas i sverige så får man söka sig till svarta marknaden i mindre noggräknade länder som är norrje.
[00:00:22.960 --> 00:00:39.360] Nu ska vi åka till norrje och så ska vi möta upp då en person som ska jag köpa då kafttrio av honom som han får då gratis från norska staten och som han då säljer vidare.
[00:00:39.360 --> 00:00:54.560] Okej, i norrje delar läkarna ut medicin i kafttri och gratis till vilken jävla gud som helst och det är bra för nu kan helen andas ut och in.Det ser okej bra att hon får hosta upp inte bara slemme utan även tjugosex tusen i kontanter.
[00:00:54.560 --> 00:01:00.320] Jag fattar inte, sverige är ju världsbäst på subventioner, i alla fall i södra sverige, ja när det gäller äl.
```

Whisper medium:
```
[00:00:00.000 --> 00:00:03.080] Ta ett djupt antal, för det kan inte alla göra.
[00:00:03.080 --> 00:00:08.000] För de flesta är det självklar att kunna andas utan problem.
[00:00:08.000 --> 00:00:12.120] Men har man Lundsjukdomens fibros, är det inte så.
[00:00:12.120 --> 00:00:16.200] Nu finns en ny medicin, men den är inte subventionerad i Sverige.
[00:00:16.200 --> 00:00:20.160] Om man vill andas i Sverige, så får man söka sig till svarta marknaden-
[00:00:20.160 --> 00:00:22.920] -i mindre noggräknade länder som Norge.
[00:00:22.920 --> 00:00:29.840] Nu ska vi åka till Norge och möta upp en person som jag ska köpa.
[00:00:29.840 --> 00:00:37.480] Ja, kaffetrio av honom. Som han får gratis från Norska staten.
[00:00:37.480 --> 00:00:40.200] -Och som han säljer vidare. -Okej.
[00:00:40.200 --> 00:00:44.560] I Norge delar läkarna ut medicinen kaffetrio gratis till vilken gud som helst.
[00:00:44.560 --> 00:00:49.360] Det är bra, för nu kan Helen andas ut och in.
[00:00:49.360 --> 00:00:54.280] Det är inte bara att hon får rosta upp, utan även 26 000 kontanter.
[00:00:54.280 --> 00:00:59.320] Sverige är världsbäst på subventioner, i alla fall i södra Sverige.
```
a97c6fce43ba4e08b39d34522186248e
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-timit-demo-colab647

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.5534
- Wer: 0.4799
63067da4491d2835f033c4886b5f8beb
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.2072 | 7.04 | 500 | 3.7757 | 1.0 |
| 1.2053 | 14.08 | 1000 | 0.6128 | 0.5648 |
| 0.3922 | 21.13 | 1500 | 0.5547 | 0.5035 |
| 0.2157 | 28.17 | 2000 | 0.5534 | 0.4799 |
01edb54b32b7edfefd88976abec7c16d
cc-by-4.0
['question generation']
false
Model Card of `lmqg/bart-base-subjqa-books-qg`

This model is a fine-tuned version of [lmqg/bart-base-squad](https://huggingface.co/lmqg/bart-base-squad) for the question generation task on the [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (dataset_name: books) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
532420a3459630a15fd205e922201ee7
cc-by-4.0
['question generation']
false
Overview

- **Language model:** [lmqg/bart-base-squad](https://huggingface.co/lmqg/bart-base-squad)
- **Language:** en
- **Training data:** [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (books)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
1dfab9306fdff92c1dc082bd422f40db
cc-by-4.0
['question generation']
false
Model prediction

```python
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```

- With `transformers`

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/bart-base-subjqa-books-qg")
output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
d7030f02f2cdff33d74203e4e0718ab5
cc-by-4.0
['question generation']
false
Evaluation

- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/bart-base-subjqa-books-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.books.json)

| | Score | Type | Dataset |
|:-----------|--------:|:-------|:-----------------------------------------------------------------|
| BERTScore | 92.96 | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| Bleu_1 | 22.47 | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| Bleu_2 | 13.03 | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| Bleu_3 | 4.52 | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| Bleu_4 | 2.03 | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| METEOR | 20.57 | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| MoverScore | 62.85 | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| ROUGE_L | 23.24 | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
d915e13939b1a966752e53722b7c1b11
cc-by-4.0
['question generation']
false
Training hyperparameters

The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_subjqa
- dataset_name: books
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: None
- model: lmqg/bart-base-squad
- max_length: 512
- max_length_output: 32
- epoch: 2
- batch: 32
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.0

The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/bart-base-subjqa-books-qg/raw/main/trainer_config.json).
1c721cc595bc589f12e1d6444f26f16d
apache-2.0
['translation']
false
opus-mt-fr-sv

* source languages: fr
* target languages: sv
* OPUS readme: [fr-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-sv/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sv/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sv/opus-2020-01-24.eval.txt)
cd400e167479a4fbc1d481ef4e0493d9
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-cola

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set:
- Loss: 0.6949
- Matthews Correlation: 0.5410
e7ff10546fff6f179ee68e6d111b69a7
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5241 | 1.0 | 535 | 0.5322 | 0.3973 |
| 0.356 | 2.0 | 1070 | 0.5199 | 0.4836 |
| 0.2402 | 3.0 | 1605 | 0.6086 | 0.5238 |
| 0.166 | 4.0 | 2140 | 0.6949 | 0.5410 |
| 0.134 | 5.0 | 2675 | 0.8254 | 0.5253 |
a9a234ad50df3794bcd96ef24bff124c
apache-2.0
['automatic-speech-recognition', 'it']
false
exp_w2v2t_it_vp-es_s496 Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
c37657965f3fb3d11ef4e66e2aa0c186
mit
[]
false
model by Fedeya

This is the Stable Diffusion model fine-tuned on the federico minaya concept, taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **a photo of sks federicominaya**

You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)

Here are the images used for training this concept:

![image 0](https://huggingface.co/Fedeya/federico-minaya/resolve/main/concept_images/1.jpeg)
![image 1](https://huggingface.co/Fedeya/federico-minaya/resolve/main/concept_images/4.jpeg)
![image 2](https://huggingface.co/Fedeya/federico-minaya/resolve/main/concept_images/2.jpeg)
![image 3](https://huggingface.co/Fedeya/federico-minaya/resolve/main/concept_images/0.jpeg)
![image 4](https://huggingface.co/Fedeya/federico-minaya/resolve/main/concept_images/3.jpeg)
![image 5](https://huggingface.co/Fedeya/federico-minaya/resolve/main/concept_images/5.jpeg)
42bb1b9a54a54024af9ba919bdc9f4ae
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-response-finetuned-cola

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.9774
- Matthews Correlation: 0.3330
cc5f4b68bc8dfcdbb4031dcc8d214ff4
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 23 | 1.0662 | 0.0 |
| No log | 2.0 | 46 | 1.0175 | 0.0 |
| No log | 3.0 | 69 | 1.0001 | 0.0 |
| No log | 4.0 | 92 | 0.9852 | 0.1196 |
| No log | 5.0 | 115 | 0.9836 | 0.2326 |
| No log | 6.0 | 138 | 0.9680 | 0.1808 |
| No log | 7.0 | 161 | 0.9774 | 0.3330 |
| No log | 8.0 | 184 | 0.9786 | 0.2881 |
| No log | 9.0 | 207 | 0.9974 | 0.2235 |
| No log | 10.0 | 230 | 0.9957 | 0.2031 |
a6367b91e462025b5e6b2c88659db19d
apache-2.0
['automatic-speech-recognition', 'it']
false
exp_w2v2t_it_wav2vec2_s211 Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
1c9de4adec4b851a280d6952072ea3c7
apache-2.0
['Quality Estimation', 'monotransquest', 'hter']
false
Using Pre-trained Models

```python
import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel

model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-hter-en_lv-it-nmt", num_labels=1, use_cuda=torch.cuda.is_available())
predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]])
print(predictions)
```
222be06fcf6a2f2f8bed40c432656afd
apache-2.0
['image-segmentation', 'vision', 'generated_from_trainer']
false
segformer-trainer-test

This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset. It achieves the following results on the evaluation set:
- Loss: 1.3886
- Mean Iou: 0.1391
- Mean Accuracy: 0.1905
- Overall Accuracy: 0.7192
b8c0e221b73e83d71bdf165a64a083fc
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
indo-sentence-bert-base

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
327ee422123ec4f49dec46072a8c06df
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["Ibukota Perancis adalah Paris",
             "Menara Eifel terletak di Paris, Perancis",
             "Pizza adalah makanan khas Italia",
             "Saya kuliah di Carneige Mellon University"]

model = SentenceTransformer('firqaaa/indo-sentence-bert-base')
embeddings = model.encode(sentences)
print(embeddings)
```
0d7ef8bdd93351928dd9ba93b0bbe0b6
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Sentences we want sentence embeddings for

sentences = ["Ibukota Perancis adalah Paris", "Menara Eifel terletak di Paris, Perancis", "Pizza adalah makanan khas Italia", "Saya kuliah di Carneige Mellon University"]
6343efdcde436459f9478ae11982bc3b
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Training

The model was trained with the parameters:

**DataLoader**:

`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 19644 with parameters:
```
{'batch_size': 16}
```

**Loss**:

`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```

Parameters of the fit()-Method:
```
{
    "epochs": 5,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 9930,
    "weight_decay": 0.01
}
```
cb031dd6c515905d5f4d18192410816f
apache-2.0
['generated_from_trainer']
false
mini-mlm-imdb-target-tweet

This model is a fine-tuned version of [muhtasham/mini-mlm-imdb](https://huggingface.co/muhtasham/mini-mlm-imdb) on the tweet_eval dataset. It achieves the following results on the evaluation set:
- Loss: 1.3042
- Accuracy: 0.7674
- F1: 0.7669
fe9884544dee7edf56dbd4ca6ccc5770
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8543 | 4.9 | 500 | 0.6920 | 0.7674 | 0.7571 |
| 0.3797 | 9.8 | 1000 | 0.7231 | 0.7727 | 0.7709 |
| 0.1668 | 14.71 | 1500 | 0.9171 | 0.7594 | 0.7583 |
| 0.068 | 19.61 | 2000 | 1.1558 | 0.7647 | 0.7642 |
| 0.0409 | 24.51 | 2500 | 1.3042 | 0.7674 | 0.7669 |
4da08762b1f57112ae18ac65ece29d0e
apache-2.0
[]
false
Fine-tuned T5 base model for use as a frame semantic parser in the [Frame Semantic Transformer](https://github.com/chanind/frame-semantic-transformer) project. This model is trained on data from [FrameNet 1.7](https://framenet2.icsi.berkeley.edu/).
1a704eed7f1a9cfb6cc52fc3ebfa0a60
apache-2.0
[]
false
Performance

This model is trained and evaluated using the same train/dev/test splits from FrameNet 1.7 annotated corpora as used by [Open Sesame](https://github.com/swabhs/open-sesame).

| Task | F1 Score (Dev) | F1 Score (Test) |
| ---------------------- | -------------- | --------------- |
| Trigger identification | 0.78 | 0.71 |
| Frame Classification | 0.89 | 0.87 |
| Argument Extraction | 0.74 | 0.72 |
f2b2e2ecb7db084eccc475375f3e630b
apache-2.0
['generated_from_trainer']
false
mis_515_bert

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.3636
- Accuracy: 0.9073
057f17ee3e81fd4545be08a6aea12ed3
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
39705bae8ee9556de11607cf339413be
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4773 | 1.0 | 1125 | 0.3741 | 0.8777 |
| 0.2705 | 2.0 | 2250 | 0.3636 | 0.9073 |
465ecc0399aec0a33032471f188c3d1c
apache-2.0
['generated_from_trainer']
false
distilroberta-base-finetuned-wikitext2

This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 8.3687
f53a46c44ee1ce44b77a6acb7f3b6a76
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 6 | 8.8622 |
| No log | 2.0 | 12 | 8.4576 |
| No log | 3.0 | 18 | 8.4412 |
7176a58774a14d113cc49486f4631906
apache-2.0
['generated_from_trainer']
false
bert-base-mlm-finetuned-emotion

This model is a fine-tuned version of [google/bert_uncased_L-12_H-768_A-12](https://huggingface.co/google/bert_uncased_L-12_H-768_A-12) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 2.3374
d56b2bd467861bfb6e00510cb43dcda6
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4247 | 5.75 | 500 | 2.3526 |
| 2.1825 | 11.49 | 1000 | 2.2778 |
| 2.0578 | 17.24 | 1500 | 2.3802 |
| 1.9059 | 22.99 | 2000 | 2.3358 |
| 1.7966 | 28.74 | 2500 | 2.3374 |
e098dd64fc7181f2d48c56253113882d
apache-2.0
['generated_from_trainer']
false
bert-small-finetuned-wnut17-ner-longer6

This model is a fine-tuned version of [muhtasham/bert-small-finetuned-wnut17-ner](https://huggingface.co/muhtasham/bert-small-finetuned-wnut17-ner) on the wnut_17 dataset. It achieves the following results on the evaluation set:
- Loss: 0.4037
- Precision: 0.5667
- Recall: 0.4270
- F1: 0.4870
- Accuracy: 0.9268
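A minimal inference sketch with the `transformers` token-classification pipeline (the repo id is assumed from the base model's namespace and this card's name; `aggregation_strategy` groups word pieces back into whole entities):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="muhtasham/bert-small-finetuned-wnut17-ner-longer6",  # assumed repo id
    aggregation_strategy="simple",
)

for entity in ner("Empire State Building is in New York"):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```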
3de32d18600677fecab1dde9d7030df8
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 425 | 0.3744 | 0.5626 | 0.4139 | 0.4769 | 0.9248 |
| 0.085 | 2.0 | 850 | 0.3914 | 0.5814 | 0.4270 | 0.4924 | 0.9271 |
| 0.0652 | 3.0 | 1275 | 0.4037 | 0.5667 | 0.4270 | 0.4870 | 0.9268 |
e97f6de6e2cc8e3e144153dcc7d7aeb8
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
nes-cover-art-image-generator

Dreambooth model trained by SergenK with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook

Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)

Sample pictures of this concept:

![0](https://huggingface.co/SergenK/nes-cover-art-image-generator/resolve/main/sample_images/00009-799444158.png)
![1](https://huggingface.co/SergenK/nes-cover-art-image-generator/resolve/main/sample_images/00011-2687893221.png)
![2](https://huggingface.co/SergenK/nes-cover-art-image-generator/resolve/main/sample_images/00004-238860550.png)
![3](https://huggingface.co/SergenK/nes-cover-art-image-generator/resolve/main/sample_images/00013-1488226353.png)
3f6d00c6e93afe9bfce91b63bdb8bc85
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xls-r-300m-kr-jw4169

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the fleurs dataset. It achieves the following results on the evaluation set:
- Loss: 0.9752
- Wer: 0.5196
1421131583785f787bcb8217d1ace7a7
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
505c3202f674d588db2d956e5b73961b
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 35.084        | 1.39  | 200  | 6.8536          | 1.0    |
| 4.853         | 2.78  | 400  | 4.6246          | 1.0    |
| 4.5491        | 4.17  | 600  | 4.3815          | 1.0    |
| 2.799         | 5.55  | 800  | 1.7402          | 0.8642 |
| 1.3872        | 6.94  | 1000 | 1.2019          | 0.7448 |
| 0.9599        | 8.33  | 1200 | 1.0594          | 0.7134 |
| 0.675         | 9.72  | 1400 | 0.9321          | 0.6404 |
| 0.4775        | 11.11 | 1600 | 0.9088          | 0.5911 |
| 0.3479        | 12.5  | 1800 | 0.9430          | 0.6010 |
| 0.2712        | 13.89 | 2000 | 0.8948          | 0.5854 |
| 0.2283        | 15.28 | 2200 | 0.9009          | 0.5495 |
| 0.1825        | 16.67 | 2400 | 0.9079          | 0.5501 |
| 0.161         | 18.06 | 2600 | 0.9518          | 0.5390 |
| 0.1394        | 19.44 | 2800 | 0.9529          | 0.5399 |
| 0.1266        | 20.83 | 3000 | 0.9505          | 0.5283 |
| 0.1102        | 22.22 | 3200 | 0.9748          | 0.5328 |
| 0.101         | 23.61 | 3400 | 0.9593          | 0.5316 |
| 0.0907        | 25.0  | 3600 | 0.9832          | 0.5292 |
| 0.0833        | 26.39 | 3800 | 0.9773          | 0.5181 |
| 0.0781        | 27.78 | 4000 | 0.9736          | 0.5163 |
| 0.0744        | 29.17 | 4200 | 0.9752          | 0.5196 |
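The Wer column is the word error rate. As an illustration only (not the evaluation script actually used), here is a minimal pure-Python implementation of WER as word-level edit distance divided by reference length:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat", "the cat sat"))     # 0.0
print(wer("the cat sat", "the bat sat on"))  # 2 errors / 3 words = 0.666...
```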
56e21bbcd55a24af0964575d317c9554
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-Regression-Edmunds_Car_Reviews-Non_European_Imports

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.2240
- Mae: 0.3140
- Mse: 0.2240
- Rmse: 0.4733
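The three regression metrics above are related: RMSE is the square root of MSE, so 0.4733 ≈ √0.2240. A small self-contained sketch of how they are computed (illustrative, not the card's evaluation code):

```python
import math

def regression_metrics(y_true, y_pred):
    """MAE, MSE and RMSE as reported above (RMSE is just sqrt(MSE))."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / n
    mse = sum(e * e for e in errors) / n
    return {"mae": mae, "mse": mse, "rmse": math.sqrt(mse)}

# Sanity check against the reported numbers: sqrt(0.2240) rounds to 0.4733.
print(round(math.sqrt(0.2240), 4))  # 0.4733
```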
6d8aae9ad066235c15c64b67a156489c
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Mae    | Mse    | Rmse   |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.6594        | 1.0   | 715  | 0.2436          | 0.3319 | 0.2436 | 0.4935 |
| 0.2324        | 2.0   | 1430 | 0.2274          | 0.3210 | 0.2274 | 0.4769 |
| 0.1975        | 3.0   | 2145 | 0.2303          | 0.3198 | 0.2303 | 0.4799 |
f85e352f28c914643bf97639dc8e433a
apache-2.0
['generated_from_trainer']
false
DistilBERT-POWO_MGH_Epiphyte_Finetuned

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.0749
d80bb22b59f7a6eb74c0ddd50464e877
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0824        | 1.0   | 1931 | 0.0807          |
| 0.0768        | 2.0   | 3862 | 0.0747          |
| 0.0664        | 3.0   | 5793 | 0.0749          |
bf97e7b9f06964c87d056452b172f49a
mit
[]
false
German BERT large fine-tuned to predict educational requirements

This is a fine-tuned version of the German BERT large language model [deepset/gbert-large](https://huggingface.co/deepset/gbert-large). The multilabel task this model was trained on was to predict education requirements from job ad texts. The dataset used for training is not available to the public.

The 7 labels in the task are (in the classification head order):
- `'Bachelor'`
- `'Berufsausbildung'`
- `'Doktorat oder äquivalent'`
- `'Höhere Berufsausbildung'`
- `'Master'`
- `'Sonstiges'`
- `'keine Ausbildungserfordernisse'`

The number of representatives of these labels in each of the splits (training/validation/test) of the dataset is summarized in the following table:

| Label name | All data | Training | Validation | Test |
|------------|----------|----------|------------|------|
| Bachelor | 521 | 365 | 52 | 104 |
| Berufsausbildung | 1854 | 1298 | 185 | 371 |
| Doktorat oder äquivalent | 38 | 27 | 4 | 7 |
| Höhere Berufsausbildung | 564 | 395 | 56 | 113 |
| Master | 245 | 171 | 25 | 49 |
| Sonstiges | 819 | 573 | 82 | 164 |
| keine Ausbildungserfordernisse | 176 | 123 | 18 | 35 |
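As a rough illustration of how a multilabel head like this one is typically decoded (the logits below are made up, and the sigmoid-plus-0.5-threshold rule is an assumption for illustration, not something stated in this card):

```python
import math

# Label order matches the classification head order listed above.
LABELS = ["Bachelor", "Berufsausbildung", "Doktorat oder äquivalent",
          "Höhere Berufsausbildung", "Master", "Sonstiges",
          "keine Ausbildungserfordernisse"]

def predict_labels(logits, threshold=0.5):
    """Multilabel decision: sigmoid per logit, keep labels above threshold."""
    probs = [1 / (1 + math.exp(-z)) for z in logits]
    return [label for label, p in zip(LABELS, probs) if p > threshold]

# Hypothetical logits for one job ad (not real model output):
print(predict_labels([2.1, 1.3, -4.0, -0.2, -3.1, -2.5, -5.0]))
# ['Bachelor', 'Berufsausbildung']
```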
7d18b1f0436cc452552d51833c6c3767
mit
[]
false
The model was trained by minimizing the cross-entropy loss between its predictions and the actual labels in the training set. During training, a weighted version of the label ranking average precision (LRAP) was tracked on the testing set.
da7fe36f45a5a3a17e7083c2d2a3a40f
mit
[]
false
A weighted version of the [label ranking average precision (LRAP)](https://scikit-learn.org/stable/modules/model_evaluation.html#label-ranking-average-precision) was tracked for the testing set. LRAP measures what fraction of higher-ranked labels produced by the model were true labels. To account for the label imbalance, the rankings were weighted so that improperly ranked rare labels are penalized more than their more frequent counterparts. After training was complete, the model with the highest weighted LRAP was saved.

```
LRAP: 0.96
```
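For illustration, a minimal unweighted LRAP implementation (this card used a weighted variant, and in practice one would use `sklearn.metrics.label_ranking_average_precision_score` rather than this sketch):

```python
def lrap(y_true, scores):
    """Label ranking average precision, unweighted, for a batch of samples."""
    total = 0.0
    for labels, s in zip(y_true, scores):
        relevant = [j for j, y in enumerate(labels) if y == 1]
        per_label = []
        for j in relevant:
            rank = sum(1 for v in s if v >= s[j])            # position among all labels
            hits = sum(1 for k in relevant if s[k] >= s[j])  # true labels at or above it
            per_label.append(hits / rank)
        total += sum(per_label) / len(per_label)
    return total / len(y_true)

# Perfect ranking: the single true label gets the highest score -> LRAP = 1.0
print(lrap([[1, 0, 0]], [[0.9, 0.2, 0.1]]))  # 1.0
```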
4c52f84b211e592ef3d371c894ac5c1b
mit
[]
false
See also:
- [deepset/gbert-base](https://huggingface.co/deepset/gbert-base)
- [deepset/gbert-large](https://huggingface.co/deepset/gbert-large)
- [gonzpen/gbert-base-ft-edu-redux](https://huggingface.co/gonzpen/gbert-base-ft-edu-redux)
cb0d8bbd2ca7b6287fb9d5756e155855
mit
['generated_from_trainer']
false
bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv-earlystopping

This model is a fine-tuned version of [theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv](https://huggingface.co/theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.8347
- Rouge1: 53.9049
- Rouge2: 35.5953
- Rougel: 39.788
- Rougelsum: 51.4101
- Gen Len: 142.0
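The Rouge1 figure above is a unigram-overlap F-score. A toy sketch of the idea (real evaluations use a proper ROUGE package with stemming and bootstrapping, not this simplified version):

```python
from collections import Counter

def rouge1_f(reference: str, candidate: str) -> float:
    """Unigram-overlap ROUGE-1 F1 between a reference and a candidate summary."""
    ref, cand = Counter(reference.split()), Counter(candidate.split())
    overlap = sum((ref & cand).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    recall = overlap / sum(ref.values())
    precision = overlap / sum(cand.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f("the model summarises text", "the model writes text"))  # 0.75
```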
877bedbabd840890e796381044c7de79
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
- mixed_precision_training: Native AMP
84f40a80eb7b7e7a8c316579938d1333
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len  |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log        | 0.31  | 125  | 1.0240          | 52.5632 | 32.977  | 34.672  | 49.9905   | 142.0    |
| No log        | 0.63  | 250  | 1.0056          | 52.5508 | 32.4826 | 34.6851 | 49.835    | 141.6852 |
| No log        | 0.94  | 375  | 0.8609          | 53.0475 | 32.9384 | 35.3322 | 50.272    | 141.6481 |
| 0.8255        | 1.26  | 500  | 0.9022          | 52.2493 | 31.5622 | 33.389  | 49.6612   | 142.0    |
| 0.8255        | 1.57  | 625  | 0.8706          | 53.3568 | 33.2533 | 35.7531 | 50.4568   | 141.8889 |
| 0.8255        | 1.88  | 750  | 0.8186          | 52.7375 | 33.4439 | 37.1094 | 50.5323   | 142.0    |
| 0.8255        | 2.2   | 875  | 0.8041          | 53.4992 | 34.6929 | 37.9614 | 51.091    | 142.0    |
| 0.5295        | 2.51  | 1000 | 0.7907          | 52.6185 | 33.8053 | 37.1725 | 50.4881   | 142.0    |
| 0.5295        | 2.83  | 1125 | 0.7740          | 52.7107 | 33.1023 | 36.0865 | 50.0365   | 142.0    |
| 0.5295        | 3.14  | 1250 | 0.8200          | 52.5607 | 33.7948 | 37.2312 | 50.3345   | 142.0    |
| 0.5295        | 3.45  | 1375 | 0.8188          | 53.9233 | 34.446  | 36.7566 | 51.3135   | 142.0    |
| 0.351         | 3.77  | 1500 | 0.8071          | 53.9096 | 35.5977 | 38.6832 | 51.4986   | 142.0    |
| 0.351         | 4.08  | 1625 | 0.8347          | 53.9049 | 35.5953 | 39.788  | 51.4101   | 142.0    |
1dba19eeed13c96a58c977d034e3eb1e
cc-by-4.0
['hi', 'en', 'codemix']
false
HingRoBERTa

HingRoBERTa is a Hindi-English code-mixed RoBERTa model trained on roman text. It is an XLM-RoBERTa model fine-tuned on L3Cube-HingCorpus. <br>
[Dataset link](https://github.com/l3cube-pune/code-mixed-nlp)

More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2204.08398).

```
@inproceedings{nayak-joshi-2022-l3cube,
    title = "{L}3{C}ube-{H}ing{C}orpus and {H}ing{BERT}: A Code Mixed {H}indi-{E}nglish Dataset and {BERT} Language Models",
    author = "Nayak, Ravindra and Joshi, Raviraj",
    booktitle = "Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.wildre-1.2",
    pages = "7--12",
}
```
238cc1568e60d6d27ce897df7e830184
apache-2.0
[]
false
FrALBERT Base

Pretrained model on the French language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, like all ALBERT models, is uncased: it does not make a difference between french and French.
93dd4516ffa7292af32713bd81dd05b4
apache-2.0
[]
false
Model description

FrALBERT is a transformers model pretrained on 4 GB of French Wikipedia in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
- Sentence Ordering Prediction (SOP): FrALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.

This way, the model learns an inner representation of the French language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the FrALBERT model as inputs.

FrALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers, as it has to iterate through the same number of (repeating) layers.

This is the first version of the base model. This model has the following configuration:
- 12 repeating layers
- 128 embedding dimension
- 768 hidden dimension
- 12 attention heads
- 11M parameters
73cab1004f0d18372bcaa7b55a3ebee9
apache-2.0
[]
false
Intended uses & limitations

You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=fralbert) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2.
81bd14110709faf1150393a09fa2daff
apache-2.0
[]
false
How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='qwant/fralbert-base') >>> unmasker("Paris est la capitale de la [MASK] .") [ { "sequence": "paris est la capitale de la france.", "score": 0.6231236457824707, "token": 3043, "token_str": "france" }, { "sequence": "paris est la capitale de la region.", "score": 0.2993471622467041, "token": 10531, "token_str": "region" }, { "sequence": "paris est la capitale de la societe.", "score": 0.02028230018913746, "token": 24622, "token_str": "societe" }, { "sequence": "paris est la capitale de la bretagne.", "score": 0.012089950032532215, "token": 24987, "token_str": "bretagne" }, { "sequence": "paris est la capitale de la chine.", "score": 0.010002839379012585, "token": 14860, "token_str": "chine" } ] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AlbertTokenizer, AlbertModel tokenizer = AlbertTokenizer.from_pretrained('qwant/fralbert-base') model = AlbertModel.from_pretrained("qwant/fralbert-base") text = "Remplacez-moi par le texte en français que vous souhaitez." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import AlbertTokenizer, TFAlbertModel tokenizer = AlbertTokenizer.from_pretrained('qwant/fralbert-base') model = TFAlbertModel.from_pretrained("qwant/fralbert-base") text = "Remplacez-moi par le texte en français que vous souhaitez." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ```
d10b12db160dd32ef332723dd4385b51
apache-2.0
[]
false
Preprocessing

The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 32,000. The inputs of the model are then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
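A trivial sketch of assembling that input format (the helper name is made up; in practice the tokenizer builds this string for you):

```python
def format_pair(sentence_a: str, sentence_b: str) -> str:
    """Assemble the [CLS] ... [SEP] ... [SEP] input format described above."""
    return f"[CLS] {sentence_a} [SEP] {sentence_b} [SEP]"

print(format_pair("Paris est une ville.", "Elle est en France."))
# [CLS] Paris est une ville. [SEP] Elle est en France. [SEP]
```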
8a6b187db6b7f971447b2e02423f1650
apache-2.0
[]
false
Training

The FrALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
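A minimal sketch of that 15% / 80-10-10 corruption scheme (the toy vocabulary and function below are illustrative only, not the actual pretraining code):

```python
import random

MASK = "[MASK]"
VOCAB = ["le", "chat", "dort", "paris", "france"]  # toy vocabulary for the sketch

def mask_tokens(tokens, rng, mask_prob=0.15):
    """Select ~15% of positions, then apply the 80/10/10 replacement rule."""
    out = list(tokens)
    n_select = max(1, int(round(mask_prob * len(tokens))))
    selected = rng.sample(range(len(tokens)), n_select)
    for i in selected:
        r = rng.random()
        if r < 0.8:        # 80% of the cases: replace with [MASK]
            out[i] = MASK
        elif r < 0.9:      # 10% of the cases: replace with a random token
            out[i] = rng.choice(VOCAB)
        # remaining 10% of the cases: leave the token unchanged
    return out, sorted(selected)

rng = random.Random(0)
masked, positions = mask_tokens(["le"] * 20, rng)
print(len(positions))  # 3 (15% of 20 tokens were selected)
```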
efb23eeb285ca1e18a709c7c7baa0b60
apache-2.0
[]
false
Evaluation results

When fine-tuned on downstream tasks, the FrALBERT model achieves the following results:

|                | FQuAD1.0  | PIAF_dev    |
|----------------|-----------|-------------|
| frALBERT-base  | 72.6/55.1 | 61.0 / 38.9 |
3705fad2498880c146303d2a4d7484ea
apache-2.0
[]
false
BibTeX entry and citation info ```bibtex @inproceedings{cattan2021fralbert, author = {Oralie Cattan and Christophe Servan and Sophie Rosset}, booktitle = {Recent Advances in Natural Language Processing, RANLP 2021}, title = {{On the Usability of Transformers-based models for a French Question-Answering task}}, year = {2021}, address = {Online}, month = sep, } ``` Link to the paper: [PDF](https://hal.archives-ouvertes.fr/hal-03336060)
ebb12879c3772986977375ee367839ed
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Small hy

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the common_voice_11_0 dataset. It achieves the following results on the evaluation set:
- Loss: 0.6376
- Wer: 116.0855
e32b51f4f67fec7c05a24c81b3a278c1
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 50
- mixed_precision_training: Native AMP
65775acadd4e486e4b87c6c6ac346b98
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer      |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7891        | 0.2   | 10   | 0.9031          | 184.375  |
| 0.6573        | 0.4   | 20   | 0.7425          | 149.0789 |
| 0.647         | 0.6   | 30   | 0.6797          | 138.125  |
| 0.551         | 0.8   | 40   | 0.6483          | 127.5329 |
| 0.5477        | 1.0   | 50   | 0.6376          | 116.0855 |
8b0674253720538261db5db9825c74de
mit
[]
false
ChefBERTo 👨‍🍳

**chefberto-italian-cased** is a BERT model obtained by MLM adaptive-tuning [**bert-base-italian-xxl-cased**](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on Italian cooking recipes, approximately 50k sentences (2.6M words).

**Author:** Cristiano De Nobili ([@denocris](https://twitter.com/denocris) on Twitter, [LinkedIn](https://www.linkedin.com/in/cristiano-de-nobili/)) for [VINHOOD](https://www.vinhood.com/en/).

<p>
<img src="https://drive.google.com/uc?export=view&id=1u5aY2wKu-X5DAzbOq7rsgGFW5_lGUAQn" width="400"> <br/>
</p>
abadc049f33c538c17f56c5ccd0d640f
mit
[]
false
Usage ```python from transformers import AutoModel, AutoTokenizer model_name = "vinhood/chefberto-italian-cased" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModel.from_pretrained(model_name) ```
7a3553e5128e7cb7f4da51078955f60c
mit
['generated_from_trainer']
false
roberta-base-finetuned-schizophreniaReddit2

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 1.7785
a5b7746ec2cb7ebe6f6b2f18e09e8980
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 490  | 1.8093          |
| 1.9343        | 2.0   | 980  | 1.7996          |
| 1.8856        | 3.0   | 1470 | 1.7966          |
| 1.8552        | 4.0   | 1960 | 1.7844          |
| 1.8267        | 5.0   | 2450 | 1.7839          |
e89924ec0b3dbfe7106c79975c02ba55
apache-2.0
['exbert']
false
BERT base model (uncased) Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference between english and English.
3e42d553e59f0d95aa0e56620b449178
apache-2.0
['exbert']
false
Model description

`sbcBI/sentiment_analysis` is a fine-tuned downstream version of the bert-base-uncased model for sentiment analysis. This model is not intended for further downstream fine-tuning for any other tasks. It is trained on a classified dataset for text-classification.
f2915e8bd9cef41be40de990543c25ea
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper medium Turkish CV 3K

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the mozilla-foundation/common_voice_11_0 tr dataset. It achieves the following results on the evaluation set:
- Loss: 0.3611
- Wer: 15.9012
c37b96910a18d26ec45b1ab5c81f6bd7
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 3000
- mixed_precision_training: Native AMP
fc1f4ac6d4e70a9456c7ca30641e4085
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0856        | 3.02  | 1000 | 0.3732          | 20.6764 |
| 0.0119        | 6.03  | 2000 | 0.3684          | 17.5353 |
| 0.001         | 9.05  | 3000 | 0.3611          | 15.9012 |
a7a8e3ac478238931e7020618831d836
apache-2.0
['generated_from_trainer']
false
platzi_vit_model

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set:
- Loss: 0.0328
- Accuracy: 0.9925
1e3eee3afe69665d3b1fb0ab19a4048a
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1427        | 3.85  | 500  | 0.0328          | 0.9925   |
40d00b2e670065ee9bd7fa6441aece14
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'science']
false
DreamBooth model for glxy trained by lewtun on the lewtun/galaxies dataset.

This is a Stable Diffusion model fine-tuned on the glxy concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of glxy galaxy**

This model was created as part of the DreamBooth Hackathon. Visit the organisation page for instructions on how to take part!
dc3884f87759bb78ce34ae959e6b0adc
apache-2.0
['generated_from_trainer']
false
distilled-mt5-small-0.6-1

This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset. It achieves the following results on the evaluation set:
- Loss: 2.8345
- Bleu: 6.7165
- Gen Len: 46.3377
6fc4e81619252b8326789f29f18eb2d0
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
56a625b652cee2311127147dcfed23e7
apache-2.0
['generated_from_trainer']
false
roberta-base-bne-finetuned-detests

This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 1.0716
- Accuracy: 0.8396
20a84349dec2e1d7c5017d044e0110ba
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2972        | 1.0   | 153  | 0.3359          | 0.8462   |
| 0.2924        | 2.0   | 306  | 0.4509          | 0.8249   |
| 0.0663        | 3.0   | 459  | 0.7186          | 0.8527   |
| 0.0018        | 4.0   | 612  | 0.8081          | 0.8314   |
| 0.0004        | 5.0   | 765  | 0.8861          | 0.8560   |
| 0.0003        | 6.0   | 918  | 0.9940          | 0.8380   |
| 0.0002        | 7.0   | 1071 | 1.0330          | 0.8396   |
| 0.0002        | 8.0   | 1224 | 1.0545          | 0.8396   |
| 0.0002        | 9.0   | 1377 | 1.0673          | 0.8396   |
| 0.0002        | 10.0  | 1530 | 1.0716          | 0.8396   |
4e8763b87eba7aa71e652112c755918d
apache-2.0
['generated_from_trainer', 'fnet-bert-base-comparison']
false
fnet-base-finetuned-qqp

This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE QQP dataset. It achieves the following results on the evaluation set:
- Loss: 0.3686
- Accuracy: 0.8847
- F1: 0.8466
- Combined Score: 0.8657

The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base), as introduced in [this paper](https://arxiv.org/abs/2105.03824), against [bert-base-cased](https://huggingface.co/bert-base-cased).
9e36fcfb0a5b4618d07cf8855500feb1
apache-2.0
['generated_from_trainer', 'fnet-bert-base-comparison']
false
```bash
#!/usr/bin/bash

python ../run_glue.py \
  --model_name_or_path google/fnet-base \
  --task_name qqp \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir fnet-base-finetuned-qqp \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
c4ef2b475817b6fff93dabb7576c9975
apache-2.0
['generated_from_trainer', 'fnet-bert-base-comparison']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1     | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.3484        | 1.0   | 22741 | 0.3014          | 0.8676   | 0.8297 | 0.8487         |
| 0.2387        | 2.0   | 45482 | 0.3011          | 0.8801   | 0.8429 | 0.8615         |
| 0.1739        | 3.0   | 68223 | 0.3686          | 0.8847   | 0.8466 | 0.8657         |
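The Combined Score column appears to be the simple mean of accuracy and F1; a quick check against the table rows:

```python
def combined_score(accuracy: float, f1: float) -> float:
    """Mean of accuracy and F1, matching the 'Combined Score' column above."""
    return (accuracy + f1) / 2

# Final row: (0.8847 + 0.8466) / 2 = 0.86565, which rounds to the reported 0.8657.
print(combined_score(0.8847, 0.8466))
```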
df0f01f5c4fa74223a313b6ad0831346
apache-2.0
[]
false
distilbert-base-en-ro-cased We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages. Our versions give exactly the same representations produced by the original model which preserves the original accuracy. For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
ab39616eededb348d152a2ceabd291ed
apache-2.0
[]
false
How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-ro-cased") model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-ro-cased") ``` To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
8de72ebd2f20d93d7b29adecdbf00122