Columns:
- license: string (2–30 chars)
- tags: string (2–513 chars)
- is_nc: bool (1 class)
- readme_section: string (201–597k chars)
- hash: string (32 chars)
gpl-3.0
[]
false
Training procedure: fine-tuned with [UER-py](https://github.com/dbiir/UER-py/), with data augmentation methods added including, but not limited to, summarization, negative sampling, and confusion; the model was then converted to the Huggingface format and uploaded.

|            | CMRC 2018 Dev | DRCD Dev | SQuAD-Zen Dev (Answerable) | AVG       |
| :--------: | :-----------: | :------: | :------------------------: | :-------: |
| PERT-large | 74.4/89.8     | 90.3/94. | 62.8/78.8                  | 75.9/87.8 |
e4e5719cf0e8e59b977951ac89c76ca6
cc-by-4.0
['question-answering', 'multi-step-reasoning', 'multi-hop-reasoning']
false
digit_tokenization.py from https://github.com/stonybrooknlp/teabreac

```python
from transformers import AutoTokenizer

model_name = "StonyBrookNLP/teabreac-nt5-small-iirc-retrieved"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
```
876b546c712b55518c3e3d0669d1efb3
apache-2.0
['summarization', 'generated_from_trainer']
false
mt5-small-test-ged-RAW_data_prep_2021_12_26___t1_7.csv_max_target_length_10 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.0338 - Rouge1: 28.7359 - Rouge2: 15.6289 - Rougel: 28.6407 - Rougelsum: 28.7016
c5f1cdfb29fa7ae9255a1829ce78c788
apache-2.0
['summarization', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 6.0554        | 1.0   | 1935  | 2.7346          | 23.7306 | 13.3598 | 23.7172 | 23.7447   |
| 2.9111        | 2.0   | 3870  | 2.3916          | 26.5211 | 14.5628 | 26.4827 | 26.5716   |
| 2.464         | 3.0   | 5805  | 2.2382          | 27.4404 | 15.1211 | 27.3331 | 27.401    |
| 2.2328        | 4.0   | 7740  | 2.1557          | 28.3377 | 14.7406 | 28.2386 | 28.249    |
| 2.0845        | 5.0   | 9675  | 2.1324          | 29.1476 | 15.7579 | 29.0614 | 29.1701   |
| 1.9825        | 6.0   | 11610 | 2.0668          | 28.4677 | 15.3332 | 28.4128 | 28.4093   |
| 1.9233        | 7.0   | 13545 | 2.0441          | 28.6832 | 15.5251 | 28.5723 | 28.6479   |
| 1.8842        | 8.0   | 15480 | 2.0338          | 28.7359 | 15.6289 | 28.6407 | 28.7016   |
bd7193c79a0986eae3073947515d234d
apache-2.0
['automatic-speech-recognition', 'nl']
false
exp_w2v2t_nl_vp-fr_s156 Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (nl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
00533ea2d59d4f1ef934e354453c1f8e
apache-2.0
['text-generation']
false
Model description [GPT-2](https://openai.com/blog/better-language-models/) is a large [transformer](https://arxiv.org/abs/1706.03762)-based language model with 1.5 billion parameters, trained on a dataset of 8 million web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some text. The diversity of the dataset causes this simple goal to contain naturally occurring demonstrations of many tasks across diverse domains. GPT-2 is a direct scale-up of GPT, with more than 10X the parameters and trained on more than 10X the amount of data.
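The next-word objective can be exercised directly with the `transformers` text-generation pipeline; a minimal sketch using the base `gpt2` checkpoint on the Hub (a smaller release; the 1.5B-parameter model described above is published as `gpt2-xl`):

```python
from transformers import pipeline

# Load the base GPT-2 checkpoint from the Hub.
generator = pipeline("text-generation", model="gpt2")

# GPT-2 simply continues the prompt, predicting one token at a time.
out = generator("The Eiffel Tower is located in", max_new_tokens=20)[0]["generated_text"]
print(out)
```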
efe9057ea5c9e07dfcfe09287bd1a2b6
apache-2.0
['text-generation']
false
How to use

For the best experience and clean outputs, use the Live Demo mentioned above, or the notebook linked in my [GitHub](https://github.com/HamidRezaAttar/GPT2-Home). You can use this model directly with a pipeline for text generation.

```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
>>> tokenizer = AutoTokenizer.from_pretrained("HamidRezaAttar/gpt2-product-description-generator")
>>> model = AutoModelForCausalLM.from_pretrained("HamidRezaAttar/gpt2-product-description-generator")
>>> generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
>>> generated_text = generator("This bed is very comfortable.", max_length=100)
```
55d000e6b40987017749149581086691
apache-2.0
['text-generation']
false
Citation info

```bibtex
@misc{GPT2-Home,
  author = {HamidReza Fatollah Zadeh Attar},
  title = {GPT2-Home the English home product description generator},
  year = {2021},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/HamidRezaAttar/GPT2-Home}},
}
```
a89f338da7dc862c8b6050bc22d57e06
apache-2.0
['generated_from_trainer']
false
tiny-mlm-glue-sst2-custom-tokenizer This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 7.2580
558d7695811bb96272da8e5147daafeb
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 8.0249        | 0.4   | 500  | 7.2506          |
| 7.1076        | 0.8   | 1000 | 7.1057          |
| 6.8912        | 1.2   | 1500 | 7.2155          |
| 6.8907        | 1.6   | 2000 | 7.3149          |
| 6.8295        | 2.0   | 2500 | 7.2580          |
966bd9b03cf274ea88de75f54f5b22ba
apache-2.0
['automatic-speech-recognition', 'fr']
false
exp_w2v2r_fr_vp-100k_accent_france-10_belgium-0_s271 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
2e49ad81e39af70619c2884520761231
apache-2.0
['generated_from_keras_callback']
false
test This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
f7ae8ddc1bc8aa41165a46f813df7cab
apache-2.0
['generated_from_trainer']
false
opus-mt-en-ru-finetuned-en-to-ru-Legal This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ru](https://huggingface.co/Helsinki-NLP/opus-mt-en-ru) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8561 - Bleu: 46.7284 - Gen Len: 23.1317
dc65ec298ee9fbc705ac908eaa0cdac0
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu    | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log        | 1.0   | 387  | 1.1719          | 34.0562 | 22.991  |
| 1.524         | 2.0   | 774  | 1.0342          | 37.7233 | 23.0052 |
| 1.0226        | 3.0   | 1161 | 0.9595          | 40.0983 | 22.9755 |
| 0.8066        | 4.0   | 1548 | 0.9188          | 41.9634 | 23.1162 |
| 0.8066        | 5.0   | 1935 | 0.8907          | 43.6537 | 23.0923 |
| 0.6637        | 6.0   | 2322 | 0.8771          | 44.5208 | 23.1097 |
| 0.5697        | 7.0   | 2709 | 0.8669          | 45.5589 | 23.1388 |
| 0.5175        | 8.0   | 3096 | 0.8603          | 46.2211 | 23.2356 |
| 0.5175        | 9.0   | 3483 | 0.8566          | 46.7201 | 23.1375 |
| 0.4768        | 10.0  | 3870 | 0.8561          | 46.7284 | 23.1317 |
e4e20bcc414593654477ee3569badb96
apache-2.0
['generated_from_keras_callback']
false
Gorenzelg/bert-finetuned-squad11 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.0664 - Epoch: 0
f048baad276e5582a2495040beb9767a
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 55450, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16
7a80987d639667befc6d85b80abefb3f
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 192 - eval_batch_size: 192 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 1
125b9b8d401d48dfbd9e4686fb24f085
apache-2.0
['whisper-event', 'generated_from_trainer', 'hf-asr-leaderboard']
false
whisper-small-af-za - Ari This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - eval_loss: 0.0002 - eval_wer: 0.0 - eval_runtime: 77.0592 - eval_samples_per_second: 2.569 - eval_steps_per_second: 0.324 - epoch: 14.6 - step: 2000
190e284617306b5c6b8f442759482ec9
mit
[]
false
Jamiels on Stable Diffusion This is the `<jamiels>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<jamiels> 0](https://huggingface.co/sd-concepts-library/jamiels/resolve/main/concept_images/5.jpeg) ![<jamiels> 1](https://huggingface.co/sd-concepts-library/jamiels/resolve/main/concept_images/3.jpeg) ![<jamiels> 2](https://huggingface.co/sd-concepts-library/jamiels/resolve/main/concept_images/0.jpeg) ![<jamiels> 3](https://huggingface.co/sd-concepts-library/jamiels/resolve/main/concept_images/2.jpeg) ![<jamiels> 4](https://huggingface.co/sd-concepts-library/jamiels/resolve/main/concept_images/1.jpeg) ![<jamiels> 5](https://huggingface.co/sd-concepts-library/jamiels/resolve/main/concept_images/4.jpeg)
98dc9b255a3a60c7ab5703682a7db430
apache-2.0
['generated_from_trainer']
false
distilbert_sa_GLUE_Experiment_mrpc_384 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.6089 - Accuracy: 0.6838 - F1: 0.8122 - Combined Score: 0.7480
ae6bbffbfd4caa01d3b3d1009ab1c73f
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6363        | 1.0   | 15   | 0.6257          | 0.6838   | 0.8122 | 0.7480         |
| 0.6306        | 2.0   | 30   | 0.6230          | 0.6838   | 0.8122 | 0.7480         |
| 0.6302        | 3.0   | 45   | 0.6227          | 0.6838   | 0.8122 | 0.7480         |
| 0.6217        | 4.0   | 60   | 0.6089          | 0.6838   | 0.8122 | 0.7480         |
| 0.5729        | 5.0   | 75   | 0.6097          | 0.6838   | 0.7817 | 0.7328         |
| 0.4868        | 6.0   | 90   | 0.6395          | 0.6789   | 0.7791 | 0.7290         |
| 0.3906        | 7.0   | 105  | 0.7014          | 0.6838   | 0.7725 | 0.7282         |
| 0.3014        | 8.0   | 120  | 0.7773          | 0.6814   | 0.7735 | 0.7274         |
| 0.2538        | 9.0   | 135  | 0.8550          | 0.6789   | 0.7730 | 0.7259         |
26ba61de9a617343156b5d303a3525d7
mit
['generated_from_trainer']
false
Facebook_Mit_HPS_5_Epoch This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4774 - Accuracy: 0.9315
c125971b504bbc3fa730470178aa40cc
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.546392051994155e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 5 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5
09b4c1cf25257fd0972d92f0639198ff
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 292  | 0.2181          | 0.9264   |
| 0.2411        | 2.0   | 584  | 0.2571          | 0.9289   |
| 0.2411        | 3.0   | 876  | 0.5712          | 0.8947   |
| 0.0558        | 4.0   | 1168 | 0.4675          | 0.9332   |
| 0.0558        | 5.0   | 1460 | 0.4774          | 0.9315   |
d792a285271baa6d05fee009565be75c
creativeml-openrail-m
['stable-diffusion', 'text-to-image', 'textual-inversion', 'embedding']
false
about - 2 embeddings to resemble a popular toy from the 90s - check the [PDF](https://huggingface.co/proxima/foorby/blob/main/foorby_embeddings_handbook.pdf) for comparisons, prompts and settings - v2 seems to trend more towards realism [<img src="https://huggingface.co/proxima/foorby/resolve/main/example_2.jpg">](https://huggingface.co/proxima/foorby/blob/main/example_2.jpg)
ae8b2f1872501141c9924ea3ce01695d
creativeml-openrail-m
['stable-diffusion', 'text-to-image', 'textual-inversion', 'embedding']
false
how to use - place the .bin files in your embeddings folder - use foorbyv1 or foorbyv2 in your prompt ---- if you enjoy this consider buying me a coffee (ノ◕ヮ◕)ノ*:・゚✧ <a href='https://ko-fi.com/S6S6FUYKY' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi3.png?v=3' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a> ----
44ff0f9e026db186a7b0cddd9dcce427
cc-by-sa-4.0
[]
false
ELECTRA base Japanese discriminator This is a [ELECTRA](https://github.com/google-research/electra) model pretrained on texts in the Japanese language. The codes for the pretraining are available at [retarfi/language-pretraining](https://github.com/retarfi/language-pretraining/tree/v1.0).
59f7dcaa28411406408498784bad3f9f
cc-by-sa-4.0
[]
false
Model architecture The model architecture is the same as ELECTRA base in the [original ELECTRA paper](https://arxiv.org/abs/2003.10555); 12 layers, 768 dimensions of hidden states, and 12 attention heads.
9d29a88e64981c71fa9d1551397741aa
cc-by-sa-4.0
[]
false
Training The models are trained with the same configuration as ELECTRA base in the [original ELECTRA paper](https://arxiv.org/abs/2003.10555); 512 tokens per instance, 256 instances per batch, and 766k training steps. The size of the generator is 1/3 of the size of the discriminator.
cb53086d7147e63ec34f9987347fa1d6
apache-2.0
['generated_from_trainer']
false
t5-small-billsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset. It achieves the following results on the evaluation set: - Loss: 2.5953 - Rouge1: 0.1383 - Rouge2: 0.0487 - Rougel: 0.1135 - Rougelsum: 0.1132 - Gen Len: 19.0
f6acd924d64136143196daa2cf831a4e
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP
d403cf4bf8674899ac5b1d1a814a5dae
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log        | 1.0   | 124  | 2.6810          | 0.1312 | 0.0415 | 0.1076 | 0.1077    | 19.0    |
| No log        | 2.0   | 248  | 2.5953          | 0.1383 | 0.0487 | 0.1135 | 0.1132    | 19.0    |
a7814782a6260b1983a3aac8125b2cac
apache-2.0
['automatic-speech-recognition']
false
wav2vec2-xlsr-korean-senior Further fine-tuned [fleek/wav2vec-large-xlsr-korean](https://huggingface.co/fleek/wav2vec-large-xlsr-korean) using the [AIhub 자유대화 음성(노인남녀)](https://aihub.or.kr/aidata/30704) dataset (free-conversation speech from elderly men and women). - Total train data size: 808,642 - Total valid data size: 159,970 When using this model, make sure that your speech input is sampled at 16kHz. The script used for training can be found here: https://github.com/hyyoka/wav2vec2-korean-senior
e6cae68e7dec1f1d22e8d66890fc24c4
apache-2.0
['automatic-speech-recognition']
false
Inference

```python
import re

import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

def clean_up(transcription):
    # Keep only Hangul characters and spaces.
    hangul = re.compile('[^ ㄱ-ㅣ가-힣]+')
    return hangul.sub('', transcription)

model_name = "hyyoka/wav2vec2-xlsr-korean-senior"
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)

wav_file = "/path/to/audio.wav"  # path to your 16 kHz audio file
speech_array, sampling_rate = torchaudio.load(wav_file)
feat = processor(speech_array[0],
                 sampling_rate=16000,
                 padding=True,
                 max_length=800000,
                 truncation=True,
                 return_attention_mask=True,
                 return_tensors="pt")
inputs = {'input_values': feat['input_values'], 'attention_mask': feat['attention_mask']}
outputs = model(**inputs, output_attentions=True)
logits = outputs.logits
predicted_ids = logits.argmax(axis=-1)
transcription = processor.decode(predicted_ids[0])
stt_result = clean_up(transcription)
```
e9a96a9af0725020feb296d410829909
apache-2.0
[]
false
Introduction This seq-2-seq semantic parsing model is used by [Genie](https://github.com/stanford-oval/genie-toolkit) to compile an assistant in the restaurant domain. This model translates natural language utterances to [ThingTalk](https://github.com/stanford-oval/thingtalk), executed by Genie.
70d56b2c8c2bb1ec4d2270840cc45148
apache-2.0
['translation']
false
epo-ell * source group: Esperanto * target group: Modern Greek (1453-) * OPUS readme: [epo-ell](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ell/README.md) * model: transformer-align * source language(s): epo * target language(s): ell * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ell/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ell/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ell/opus-2020-06-16.eval.txt)
29d6b824769bea3008d796f91221ba79
apache-2.0
['translation']
false
System Info: - hf_name: epo-ell - source_languages: epo - target_languages: ell - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ell/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['eo', 'el'] - src_constituents: {'epo'} - tgt_constituents: {'ell'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ell/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ell/opus-2020-06-16.test.txt - src_alpha3: epo - tgt_alpha3: ell - short_pair: eo-el - chrF2_score: 0.43799999999999994 - bleu: 23.2 - brevity_penalty: 0.9159999999999999 - ref_len: 3892.0 - src_name: Esperanto - tgt_name: Modern Greek (1453-) - train_date: 2020-06-16 - src_alpha2: eo - tgt_alpha2: el - prefer_old: False - long_pair: epo-ell - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
65addbf753d9ff63efac3f9ea3632ed9
apache-2.0
['generated_from_trainer']
false
distilled-mt5-small-0.07-0.25 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset. It achieves the following results on the evaluation set: - Loss: 2.8593 - Bleu: 7.0665 - Gen Len: 43.5793
e0a137297ac774c626b249c4f53ca2ca
apache-2.0
['generated_from_trainer']
false
DistilGPT2-Beatles-Lyrics-finetuned This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the [Huggingartists - beatles](https://huggingface.co/datasets/huggingartists/the-beatles) dataset. It will complete an input prompt with Beatles-like text.
93b8ea185ee362d554c2d9fd5e2afae7
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 5
c45d772309a55fa78bf38d2c467f75ca
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.748         | 1.0   | 165  | 2.3732          |
| 2.4395        | 2.0   | 330  | 2.1938          |
| 2.2968        | 3.0   | 495  | 2.1118          |
| 2.2075        | 4.0   | 660  | 2.0721          |
| 2.1393        | 5.0   | 825  | 2.0571          |
4f2f555afdaf45d6062c0a14f5cc4f36
apache-2.0
['generated_from_trainer']
false
bert-base-dutch-cased-finetuned-mBERT This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0898 - Precision: 0.7255 - Recall: 0.7255 - F1: 0.7255 - Accuracy: 0.9758
718424ff9fe18c7f5839b52b036aafc0
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1603        | 1.0   | 533  | 0.0928          | 0.6896    | 0.6962 | 0.6929 | 0.9742   |
| 0.0832        | 2.0   | 1066 | 0.0898          | 0.7255    | 0.7255 | 0.7255 | 0.9758   |
96699c6c2aad0b5f399060c795a46b6f
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0202 - Accuracy: 0.8235 - F1: 0.8223
3c144d5d12b553a88f142dbe0431bc2b
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7099        | 1.0   | 17   | 0.6695          | 0.5294   | 0.3665 |
| 0.686         | 2.0   | 34   | 0.6288          | 0.5294   | 0.3665 |
| 0.5945        | 3.0   | 51   | 0.4339          | 0.8824   | 0.8824 |
| 0.3718        | 4.0   | 68   | 0.3600          | 0.8235   | 0.8235 |
| 0.1248        | 5.0   | 85   | 0.5730          | 0.8235   | 0.8223 |
| 0.0984        | 6.0   | 102  | 0.7659          | 0.7647   | 0.7647 |
| 0.0138        | 7.0   | 119  | 0.8271          | 0.8235   | 0.8223 |
| 0.0121        | 8.0   | 136  | 0.8223          | 0.8235   | 0.8223 |
| 0.0062        | 9.0   | 153  | 0.7349          | 0.8235   | 0.8223 |
| 0.0045        | 10.0  | 170  | 0.8381          | 0.7647   | 0.7597 |
| 0.0037        | 11.0  | 187  | 0.8636          | 0.7647   | 0.7597 |
| 0.0031        | 12.0  | 204  | 0.8603          | 0.8235   | 0.8223 |
| 0.0025        | 13.0  | 221  | 0.8714          | 0.8235   | 0.8223 |
| 0.0021        | 14.0  | 238  | 0.8864          | 0.8235   | 0.8223 |
| 0.002         | 15.0  | 255  | 0.9114          | 0.8235   | 0.8223 |
| 0.0017        | 16.0  | 272  | 0.9295          | 0.8235   | 0.8223 |
| 0.0014        | 17.0  | 289  | 0.9360          | 0.8235   | 0.8223 |
| 0.0013        | 18.0  | 306  | 0.9378          | 0.8235   | 0.8223 |
| 0.0012        | 19.0  | 323  | 0.9429          | 0.8235   | 0.8223 |
| 0.0012        | 20.0  | 340  | 0.9528          | 0.8235   | 0.8223 |
| 0.0011        | 21.0  | 357  | 0.9609          | 0.8235   | 0.8223 |
| 0.001         | 22.0  | 374  | 0.9667          | 0.8235   | 0.8223 |
| 0.001         | 23.0  | 391  | 0.9738          | 0.8235   | 0.8223 |
| 0.001         | 24.0  | 408  | 0.9804          | 0.8235   | 0.8223 |
| 0.0009        | 25.0  | 425  | 0.9827          | 0.8235   | 0.8223 |
| 0.0009        | 26.0  | 442  | 0.9863          | 0.8235   | 0.8223 |
| 0.0008        | 27.0  | 459  | 0.9910          | 0.8235   | 0.8223 |
| 0.0008        | 28.0  | 476  | 0.9949          | 0.8235   | 0.8223 |
| 0.0007        | 29.0  | 493  | 1.0002          | 0.8235   | 0.8223 |
| 0.0008        | 30.0  | 510  | 1.0042          | 0.8235   | 0.8223 |
| 0.0007        | 31.0  | 527  | 1.0058          | 0.8235   | 0.8223 |
| 0.0007        | 32.0  | 544  | 1.0091          | 0.8235   | 0.8223 |
| 0.0006        | 33.0  | 561  | 1.0118          | 0.8235   | 0.8223 |
| 0.0006        | 34.0  | 578  | 1.0148          | 0.8235   | 0.8223 |
| 0.0007        | 35.0  | 595  | 1.0163          | 0.8235   | 0.8223 |
| 0.0006        | 36.0  | 612  | 1.0174          | 0.8235   | 0.8223 |
| 0.0006        | 37.0  | 629  | 1.0185          | 0.8235   | 0.8223 |
| 0.0006        | 38.0  | 646  | 1.0194          | 0.8235   | 0.8223 |
| 0.0006        | 39.0  | 663  | 1.0200          | 0.8235   | 0.8223 |
| 0.0006        | 40.0  | 680  | 1.0202          | 0.8235   | 0.8223 |
d812666512ab3f48eebb83487a4ef242
mit
['generated_from_trainer']
false
gpt2.CEBaB_confounding.price_food_ambiance_negative.absa.5-class.seed_44 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the OpenTable OPENTABLE-ABSA dataset. It achieves the following results on the evaluation set: - Loss: 0.4608 - Accuracy: 0.8270 - Macro-f1: 0.8253 - Weighted-macro-f1: 0.8274
86ae57da8d28c2485ce9b2bfca9ceebe
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-modelo-becas0 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the becasv3 dataset. It achieves the following results on the evaluation set: - Loss: 3.1182
b93a55e3466c5ee64bfdeab9725bb74f
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 5    | 5.5381          |
| No log        | 2.0   | 10   | 4.9493          |
| No log        | 3.0   | 15   | 4.4985          |
| No log        | 4.0   | 20   | 4.1063          |
| No log        | 5.0   | 25   | 3.7708          |
| No log        | 6.0   | 30   | 3.5205          |
| No log        | 7.0   | 35   | 3.3313          |
| No log        | 8.0   | 40   | 3.2195          |
| No log        | 9.0   | 45   | 3.1453          |
| No log        | 10.0  | 50   | 3.1182          |
e4fd3b769f924633877b3285960d2580
apache-2.0
['generated_from_trainer']
false
distilled-mt5-small-010099-0.5 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset. It achieves the following results on the evaluation set: - Loss: 2.8127 - Bleu: 7.735 - Gen Len: 44.5453
68eea03b90b2dc98ff1111b5a0d2aba3
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1216 - F1: 0.8749
f0089a2f529b32bfeea03873af596033
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2247        | 1.0   | 834  | 0.1429          | 0.8432 |
| 0.1127        | 2.0   | 1668 | 0.1270          | 0.8653 |
| 0.0712        | 3.0   | 2502 | 0.1216          | 0.8749 |
0466e7b315a4dffd43469455ae613ae1
apache-2.0
['generated_from_trainer']
false
model-1-reverse-bart This model is a fine-tuned version of [eugenesiow/bart-paraphrase](https://huggingface.co/eugenesiow/bart-paraphrase) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3347 - Rouge1: 95.4467 - Rouge2: 91.7522 - Rougel: 95.448 - Rougelsum: 95.4377 - Gen Len: 15.5478
4ede4dbd0c2766955e53bc3193877ac7
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2  | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:------:|:---------:|:-------:|
| 0.0744        | 1.0   | 28039 | 0.3347          | 95.4467 | 91.7522 | 95.448 | 95.4377   | 15.5478 |
dffcc4d31c0b6420789913a269ab510b
mit
[]
false
Andrej-sternen on Stable Diffusion This is the `<andrej-sternen>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<andrej-sternen> 0](https://huggingface.co/sd-concepts-library/andrej-sternen/resolve/main/concept_images/3.jpeg) ![<andrej-sternen> 1](https://huggingface.co/sd-concepts-library/andrej-sternen/resolve/main/concept_images/1.jpeg) ![<andrej-sternen> 2](https://huggingface.co/sd-concepts-library/andrej-sternen/resolve/main/concept_images/2.jpeg) ![<andrej-sternen> 3](https://huggingface.co/sd-concepts-library/andrej-sternen/resolve/main/concept_images/0.jpeg)
fd6fabda922994ccc5babf6da80becc4
apache-2.0
['whisper-event', 'generated_from_trainer']
false
openai/whisper-medium This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3830 - Wer: 19.5173
520af1647738b24d3fca510de1029a38
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.011         | 4.01  | 1000 | 0.3234          | 20.5978 |
| 0.0011        | 8.03  | 2000 | 0.3650          | 19.4070 |
| 0.0006        | 12.04 | 3000 | 0.3830          | 19.5173 |
6e7bd1d3a7efabfdd94f5697be056e19
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-peyma-fa This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0937 - F1: 0.9249
0f8bee8d5f831b9c7c873b6e8ec161fd
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1562        | 1.0   | 998  | 0.0691          | 0.8777 |
| 0.0638        | 2.0   | 1996 | 0.0703          | 0.8908 |
| 0.0457        | 3.0   | 2994 | 0.0645          | 0.8975 |
| 0.0281        | 4.0   | 3992 | 0.0842          | 0.8994 |
| 0.0206        | 5.0   | 4990 | 0.0651          | 0.9164 |
| 0.0139        | 6.0   | 5988 | 0.0787          | 0.9148 |
| 0.0083        | 7.0   | 6986 | 0.0838          | 0.9253 |
| 0.0052        | 8.0   | 7984 | 0.0833          | 0.9221 |
| 0.0031        | 9.0   | 8982 | 0.0947          | 0.9230 |
| 0.0028        | 10.0  | 9980 | 0.0937          | 0.9249 |
4b6c295ca2949e15a521b560449b90e2
apache-2.0
['generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard']
false
wav2vec2-large-xls-r-300m-marathi This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5656 - Wer: 0.2156
fa6fc2b9415c001cf73d803faa90ae70
cc-by-4.0
['question generation']
false
Model Card of `research-backup/t5-small-squad-qg-no-answer` This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). This model is fine-tuned without answer information, i.e. it generates a question given only a paragraph (note that the normal model is fine-tuned to generate a question given a paragraph and an associated answer in the paragraph).
2c5533e7ae1bf34aeb3025192c1845dc
cc-by-4.0
['question generation']
false
model prediction

- With [`lmqg`](https://github.com/asahi417/lm-question-generation)

```python
from lmqg import TransformersQG

model = TransformersQG(language="en", model="research-backup/t5-small-squad-qg-no-answer")
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```

- With `transformers`

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "research-backup/t5-small-squad-qg-no-answer")
output = pipe("generate question: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl>")
```
e0ec5ba14b409b99a7a74ed81b40f623
cc-by-4.0
['question generation']
false
Evaluation

- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/t5-small-squad-qg-no-answer/raw/main/eval/metric.first.sentence.paragraph_sentence.question.lmqg_qg_squad.default.json)

|            | Score   | Type    | Dataset                                                        |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore  | 89.64   | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1     | 53.37   | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2     | 36.67   | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3     | 27.4    | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4     | 21.12   | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR     | 23.38   | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 62.07   | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L    | 47.47   | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
753bd522b97d14aebc0253cc32023284
cc-by-4.0
['question generation']
false
Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_squad - dataset_name: default - input_types: ['paragraph_sentence'] - output_types: ['question'] - prefix_types: ['qg'] - model: t5-small - max_length: 512 - max_length_output: 32 - epoch: 7 - batch: 64 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 4 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/t5-small-squad-qg-no-answer/raw/main/trainer_config.json).
ec2fae0bd6dbb391b9a93d898b802d64
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-distilled-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.3209 - Accuracy: 0.9429
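The card does not spell out the distillation objective, but `-distilled-` models of this kind are commonly trained with temperature-softened teacher targets (Hinton-style knowledge distillation). A pure-Python sketch of that soft-target loss term, assuming a KL objective with temperature `T` (normally combined with the ordinary cross-entropy on hard labels):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(l / T) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 so gradients keep a comparable magnitude across T."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return T * T * kl
```

The loss is zero when the student matches the teacher's distribution exactly and grows as the two diverge.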
d027605c5771711686ed5efc8ed0e03d
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8
e7e56a522eb3eaaeab5ebe3f22b3210a
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.0228 | 1.0 | 318 | 2.2545 | 0.7548 | | 1.7605 | 2.0 | 636 | 1.2040 | 0.8513 | | 0.959 | 3.0 | 954 | 0.6910 | 0.9123 | | 0.5707 | 4.0 | 1272 | 0.4821 | 0.9294 | | 0.3877 | 5.0 | 1590 | 0.3890 | 0.9394 | | 0.3025 | 6.0 | 1908 | 0.3476 | 0.9410 | | 0.258 | 7.0 | 2226 | 0.3264 | 0.9432 | | 0.2384 | 8.0 | 2544 | 0.3209 | 0.9429 |
0667926851dee8ba472d49b486df1b8c
apache-2.0
['Text', 'Sentence Similarity', 'Sentence-Embedding', 'camembert-base']
false
Training Data This model was trained on the [STS benchmark dataset](https://huggingface.co/datasets/stsb_multi_mt/viewer/fr/train). The model predicts a score between 0 and 1 for the semantic similarity of two sentences.
e9833aec70e8adc62b2261614f1ce664
apache-2.0
['Text', 'Sentence Similarity', 'Sentence-Embedding', 'camembert-base']
false
Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('dangvantuan/CrossEncoder-camembert-large', max_length=128) scores = model.predict([('Un avion est en train de décoller.', "Un homme joue d'une grande flûte."), ("Un homme étale du fromage râpé sur une pizza.", "Une personne jette un chat au plafond") ]) ```
151a7d6691c63bcd4543e556dbd1ad34
apache-2.0
['Text', 'Sentence Similarity', 'Sentence-Embedding', 'camembert-base']
false
Evaluation The model can be evaluated as follows on the French test data of stsb.
```python
from sentence_transformers.readers import InputExample
from sentence_transformers.cross_encoder.evaluation import CECorrelationEvaluator
from datasets import load_dataset

def convert_dataset(dataset):
    dataset_samples = []
    for df in dataset:
        # Rescale the 0-5 STS gold score to the model's 0-1 range
        score = float(df['similarity_score']) / 5.0
        inp_example = InputExample(texts=[df['sentence1'], df['sentence2']], label=score)
        dataset_samples.append(inp_example)
    return dataset_samples

df_test = load_dataset("stsb_multi_mt", name="fr", split="test")
c4ad19240ffaf4c2f40e248de3733a97
apache-2.0
['Text', 'Sentence Similarity', 'Sentence-Embedding', 'camembert-base']
false
# For the test set
test_samples = convert_dataset(df_test)
test_evaluator = CECorrelationEvaluator.from_input_examples(test_samples, name='sts-test')
test_evaluator(model, output_path="./")
```
**Test Result**: The performance is measured using Pearson and Spearman correlation:
- On dev
e93a74183ed33697168016ee7f8e3ab0
apache-2.0
['Text', 'Sentence Similarity', 'Sentence-Embedding', 'camembert-base']
false
| Model | Pearson correlation | Spearman correlation | params |
| ------------- | ------------- | ------------- | ------------- |
| [dangvantuan/CrossEncoder-camembert-large](https://huggingface.co/dangvantuan/CrossEncoder-camembert-large) | 90.11 | 90.01 | 336M |

- On test

| Model | Pearson correlation | Spearman correlation |
| ------------- | ------------- | ------------- |
| [dangvantuan/CrossEncoder-camembert-large](https://huggingface.co/dangvantuan/CrossEncoder-camembert-large) | 88.16 | 87.57 |
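The Pearson correlations reported above (as percentages) measure linear agreement between predicted and gold similarity scores; the actual values come from `CECorrelationEvaluator`. A from-scratch sketch of the underlying formula:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

It is 1.0 for perfectly linearly related scores and -1.0 for perfectly inverted ones, so 0.90 indicates a strong linear fit between model and gold scores.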
04f86c6a1a66a28787c7b3a3c0c2457d
mit
[]
false
Model Description <!-- Provide a longer summary of what this model is. --> ['Sino-Tibetan_relations_during_the_Ming_dynasty', 'Human_Development_Index', 'Hunter-gatherer', 'Somalis', 'Black_people', 'Bird_migration', 'Biodiversity', 'Mammal', 'Predation', 'Botany', 'Heian_period', 'On_the_Origin_of_Species', 'Dominican_Order', 'Insect', 'Race_(human_categorization)', 'Neolithic', 'Sumer', 'Indigenous_peoples_of_the_Americas', 'Anthropology', 'Hunting'] - **Developed by:** nandysoham - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** en - **License:** mit - **Finetuned from model [optional]:** [More Information Needed]
93146730d98a0ef6bcfd674d512faaf0
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-moral-ctx-action-conseq This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1111 - Accuracy: 0.9676 - F1: 0.9676
d55e8e9e5e8f034970aefd067d9b6c59
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 9.989502318502869e-05 - train_batch_size: 2000 - eval_batch_size: 2000 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5
e74370b368c24b31f05f110c50f9dd5d
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 10 | 0.1569 | 0.9472 | 0.9472 | | No log | 2.0 | 20 | 0.1171 | 0.9636 | 0.9636 | | No log | 3.0 | 30 | 0.1164 | 0.9664 | 0.9664 | | No log | 4.0 | 40 | 0.1117 | 0.9672 | 0.9672 | | No log | 5.0 | 50 | 0.1111 | 0.9676 | 0.9676 |
ff340441ff0c54a1fd20e5b6415437cf
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-triviaqa This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9949
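The card reports only the loss, but at inference time an extractive QA model like this one turns start/end logits into an answer span. A simplified sketch of that decoding step (illustrative only; the real `question-answering` pipeline additionally handles subword offsets and unanswerable questions):

```python
def best_span(start_logits, end_logits, max_len=30):
    """Pick the (start, end) token pair maximising
    start_logit + end_logit, with start <= end and a bounded length."""
    best, best_score = (0, 0), float("-inf")
    for i, s in enumerate(start_logits):
        for j in range(i, min(i + max_len, len(end_logits))):
            score = s + end_logits[j]
            if score > best_score:
                best, best_score = (i, j), score
    return best
```

The selected indices are then mapped back to character offsets in the context to produce the answer string.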
a07314bd5723cedf8e0694e60cb9c9a9
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.0391 | 1.0 | 11195 | 1.0133 | | 0.8425 | 2.0 | 22390 | 0.9949 |
9df81c54d35bc7fea7c77975cfb0db4b
apache-2.0
['translation']
false
opus-mt-pis-en * source languages: pis * target languages: en * OPUS readme: [pis-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pis-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pis-en/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-en/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-en/opus-2020-01-16.eval.txt)
2aef69403ebc1d9f7e7e82bbbf59b5f0
apache-2.0
['translation']
false
opus-mt-fi-he * source languages: fi * target languages: he * OPUS readme: [fi-he](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-he/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-he/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-he/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-he/opus-2020-01-08.eval.txt)
eaa794560600011de5c1edf9b001f8eb
apache-2.0
['automatic-speech-recognition', 'fr']
false
exp_w2v2t_fr_unispeech_s833 Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
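If your audio is not already at 16 kHz, it must be resampled before being fed to the model. A naive linear-interpolation resampler, purely to illustrate the idea; in practice use `librosa` or `torchaudio` for proper anti-aliased resampling:

```python
def resample(samples, src_rate, dst_rate):
    """Naive linear-interpolation resampling of a mono sample list."""
    if src_rate == dst_rate:
        return list(samples)
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate       # fractional source position
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out
```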
8b2b9c42eaac1bb369ecee04d7731bd1
cc-by-sa-4.0
['japanese', 'wikipedia', 'token-classification', 'pos', 'dependency-parsing']
false
Model Description This is a DeBERTa(V2) model pre-trained on Japanese Wikipedia and 青空文庫 texts for POS-tagging and dependency-parsing, derived from [deberta-base-japanese-wikipedia](https://huggingface.co/KoichiYasuoka/deberta-base-japanese-wikipedia). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/).
b22773e646cd5b981011a690e4e3bfa6
cc-by-sa-4.0
['japanese', 'wikipedia', 'token-classification', 'pos', 'dependency-parsing']
false
How to Use ```py import torch from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-base-japanese-wikipedia-luw-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/deberta-base-japanese-wikipedia-luw-upos") s="国境の長いトンネルを抜けると雪国であった。" t=tokenizer.tokenize(s) p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]] print(list(zip(t,p))) ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/deberta-base-japanese-wikipedia-luw-upos") print(nlp("国境の長いトンネルを抜けると雪国であった。")) ```
85ddf38792a2afd808c21fe6cfa4f9bb
apache-2.0
['generated_from_trainer']
false
xsun_models This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1191 - Accuracy: 1.0
ffa7de8bb4ae90bf064a46fe50fdb0ee
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2054 | 1.0 | 1 | 0.1407 | 1.0 | | 0.1505 | 2.0 | 2 | 0.1191 | 1.0 |
9ce4407d709d8ccb9918b7f553cb0afb
cc-by-sa-4.0
['japanese', 'token-classification', 'pos', 'wikipedia', 'dependency-parsing']
false
Model Description This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-large-japanese-char-extended](https://huggingface.co/KoichiYasuoka/bert-large-japanese-char-extended). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/).
7de17b2a320f05fe0833d92706179894
cc-by-sa-4.0
['japanese', 'token-classification', 'pos', 'wikipedia', 'dependency-parsing']
false
How to Use ```py import torch from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-large-japanese-luw-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-large-japanese-luw-upos") s="国境の長いトンネルを抜けると雪国であった。" p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]] print(list(zip(s,p))) ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/bert-large-japanese-luw-upos") print(nlp("国境の長いトンネルを抜けると雪国であった。")) ```
5a0f85a850c0cb36b76c09ac60f222d6
apache-2.0
['generated_from_trainer']
false
tiny-mlm-glue-qnli-target-glue-wnli This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-qnli](https://huggingface.co/muhtasham/tiny-mlm-glue-qnli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.0564 - Accuracy: 0.1268
5f75eb052cb4c01d3c15033c66c47d6b
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6898 | 25.0 | 500 | 0.7650 | 0.2113 | | 0.663 | 50.0 | 1000 | 1.1165 | 0.1268 | | 0.6113 | 75.0 | 1500 | 1.6072 | 0.1127 | | 0.5491 | 100.0 | 2000 | 2.0564 | 0.1268 |
4c5496cedba430e652e41892df849a56
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2205 - Accuracy: 0.923 - F1: 0.9231
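The F1 above is produced by the training script (commonly scikit-learn's weighted F1, though the card does not say). As an illustration of how per-class F1 scores are combined, here is a macro-averaged sketch in plain Python:

```python
def f1_per_class(y_true, y_pred, cls):
    """F1 for a single class: harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    if tp == 0:
        return 0.0
    prec, rec = tp / (tp + fp), tp / (tp + fn)
    return 2 * prec * rec / (prec + rec)

def macro_f1(y_true, y_pred):
    """Unweighted average of per-class F1 scores."""
    classes = sorted(set(y_true))
    return sum(f1_per_class(y_true, y_pred, c) for c in classes) / len(classes)
```

A weighted variant would instead average per-class F1 weighted by each class's support, which is why weighted F1 tends to track accuracy closely on balanced data.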
0e0138f0965aad2ffa4662c615c67f4e
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8625 | 1.0 | 250 | 0.3246 | 0.9075 | 0.9062 | | 0.2522 | 2.0 | 500 | 0.2205 | 0.923 | 0.9231 |
233bc15bba494600764924cf66269964
apache-2.0
['national library of spain', 'spanish', 'bne', 'capitel', 'ner']
false
Model description The **roberta-large-bne-capitel-ner** is a Named Entity Recognition (NER) model for the Spanish language fine-tuned from the [roberta-large-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) large model pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text, processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.
d2f0a6213107236cf0fc0fad13189c1a
apache-2.0
['national library of spain', 'spanish', 'bne', 'capitel', 'ner']
false
Intended uses and limitations **roberta-large-bne-capitel-ner** model can be used to recognize Named Entities (NE). The model is limited by its training dataset and may not generalize well for all use cases.
e1a7b537ebff8d674cd1cbb30cb4914d
apache-2.0
['national library of spain', 'spanish', 'bne', 'capitel', 'ner']
false
How to use ```python from transformers import pipeline from pprint import pprint nlp = pipeline("ner", model="PlanTL-GOB-ES/roberta-large-bne-capitel-ner") example = "Me llamo Francisco Javier y vivo en Madrid." ner_results = nlp(example) pprint(ner_results) ```
099192c6a92743e3cda9a97618ca63ad
apache-2.0
['national library of spain', 'spanish', 'bne', 'capitel', 'ner']
false
Training procedure The model was trained with a batch size of 32 and a learning rate of 3e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.
a3070c26a9739f8088848fa9fa467449
apache-2.0
['national library of spain', 'spanish', 'bne', 'capitel', 'ner']
false
Evaluation results We evaluated the **roberta-large-bne-capitel-ner** on the CAPITEL-NERC test set against standard multilingual and monolingual baselines: | Model | CAPITEL-NERC (F1) | | ------------|:----| | roberta-large-bne-capitel-ner | **90.51** | | roberta-base-bne-capitel-ner | 89.60| | BETO | 87.72 | | mBERT | 88.10 | | BERTIN | 88.56 | | ELECTRA | 80.35 | For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish).
428a68162b276bb169ac5251df320fdc
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Whisper Tiny It 2 - Gianluca Ruberto This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.711485 - Wer: 43.392956
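The Wer figure above is the word error rate: word-level edit distance divided by reference length, usually computed with libraries such as `jiwer`. A from-scratch sketch:

```python
def wer(reference, hypothesis):
    """Word error rate (percent): word-level Levenshtein distance
    between reference and hypothesis, divided by reference length."""
    r, h = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return 100.0 * d[len(r)][len(h)] / len(r)
```

A WER of 43.39 therefore means roughly 43 word-level edits per 100 reference words.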
0ce177be244f204158922519017758c6
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training and evaluation data Data used for training is the initial 10% of the train and validation splits of [Italian Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/viewer/it/train) 11.0 from the Mozilla Foundation. The dataset used for evaluation is the initial 10% of the test split of Italian Common Voice. Unfortunately, weight decay turned out to give slightly worse results on the evaluation dataset as well.
9ac6c2c1b9d8f8798bf68b4568995131
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP - weight_decay: 0.3
5d843c73cd11b3b01ee87709a3eac1dd
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.5837 | 0.95 | 1000 | 0.790046 | 50.6032 | | 0.4186 | 1.91 | 2000 | 0.730115 | 46.0067 | | 0.3154 | 2.86 | 3000 | 0.712776 | 44.114 | | 0.2676 | 3.82 | 4000 | 0.711485 | 43.393 |
9fc95ef8dce9296a9186cc50086c817d
apache-2.0
['generated_from_trainer']
false
olm-bert-tiny-december-2022-target-glue-qnli This model is a fine-tuned version of [muhtasham/olm-bert-tiny-december-2022](https://huggingface.co/muhtasham/olm-bert-tiny-december-2022) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6358 - Accuracy: 0.6306
90073810e4bed27e5c8a3501bfd59d7c