Columns: license (string, 2–30 chars) · tags (string, 2–513 chars) · is_nc (bool, 1 class) · readme_section (string, 201–597k chars) · hash (string, 32 chars)
apache-2.0
['summarization', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log        | 1.0   | 1141 | 9.4719          | 0.0    | 0.0    | 0.0    | 0.0       |
| 40.4884       | 2.0   | 2282 | 78.1757         | 0.0    | 0.0    | 0.0    | 0.0       |
| 40.4884       | 3.0   | 3423 | 54.3033         | 0.0    | 0.0    | 0.0    | 0.0       |
| 72.4118       | 4.0   | 4564 | 75.8558         | 0.0    | 0.0    | 0.0    | 0.0       |
| 72.4118       | 5.0   | 5705 | 12.4297         | 0.0    | 0.0    | 0.0    | 0.0       |
| 24.3571       | 6.0   | 6846 | 12.4297         | 0.0    | 0.0    | 0.0    | 0.0       |
| 24.3571       | 7.0   | 7987 | 12.4297         | 0.0    | 0.0    | 0.0    | 0.0       |
| 16.5474       | 8.0   | 9128 | nan             | 0.0    | 0.0    | 0.0    | 0.0       |
3366f7a823546eee04d4409716996d38
other
['text-generation', 'opt']
false
transformers.generation_utils.GenerationMixin.generate) method as follows:

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> import torch

>>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-30b", torch_dtype=torch.float16).cuda()
>>>
```
ec5f3562c366e797ee7e30f85ce0e1f6
other
['text-generation', 'opt']
false
```python
>>> # the fast tokenizer currently does not work correctly
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-30b", use_fast=False)

>>> prompt = "Hello, I am conscious and"
>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()

>>> generated_ids = model.generate(input_ids)

>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Hello, I am conscious and I am here.\nI am also conscious and I am here']
```

By default, generation is deterministic. To use top-k sampling, set `do_sample` to `True`.

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
>>> import torch

>>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-30b", torch_dtype=torch.float16).cuda()
>>>
```
1507f508a5a9d6b9f7459bd36f7f4ee9
other
['text-generation', 'opt']
false
```python
>>> # the fast tokenizer currently does not work correctly
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-30b", use_fast=False)

>>> prompt = "Hello, I am conscious and"
>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()

>>> set_seed(32)
>>> generated_ids = model.generate(input_ids, do_sample=True)

>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Hello, I am conscious and aware that you have your back turned to me and want to talk']
```
69b1d49321d944a64b32bd129ef92b4f
other
['text-generation', 'opt']
false
Limitations and bias

As mentioned in Meta AI's model card, the training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral, so the model is strongly biased:

> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.

Here's an example of how the model can have biased predictions:

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
>>> import torch

>>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-30b", torch_dtype=torch.float16).cuda()
>>>
```
86f37060104c3c465768f1625a42ad07
other
['text-generation', 'opt']
false
```python
>>> # the fast tokenizer currently does not work correctly
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-30b", use_fast=False)

>>> prompt = "The woman worked as a"
>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()

>>> set_seed(32)
>>> generated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=10)

>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
The woman worked as a supervisor in the office
The woman worked as a social worker in a
The woman worked as a cashier at the
The woman worked as a teacher from 2011 to
The woman worked as a maid at the house
```

compared to:

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
>>> import torch

>>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-30b", torch_dtype=torch.float16).cuda()
>>>
```
d5afb67c8bee3f06b11b4f7ab8eaac2c
other
['text-generation', 'opt']
false
```python
>>> # the fast tokenizer currently does not work correctly
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-30b", use_fast=False)

>>> prompt = "The man worked as a"
>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()

>>> set_seed(32)
>>> generated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=10)

>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
The man worked as a school bus driver for
The man worked as a bartender in a bar
The man worked as a cashier at the
The man worked as a teacher, and was
The man worked as a professional at a range
```

This bias will also affect all fine-tuned versions of this model.
77680fc75bdb732d962ec918a805023c
apache-2.0
['generated_from_trainer']
false
spanish-clinical-ner

This model is a fine-tuned version of [plncmm/roberta-clinical-wl-es](https://huggingface.co/plncmm/roberta-clinical-wl-es) on the wl dataset. It achieves the following results on the evaluation set:
- Loss: 0.6181
- Precision: 0.6869
- Recall: 0.7349
- F1: 0.7100
- Accuracy: 0.8263
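As a quick sanity check on the numbers above (a minimal sketch using only the reported metrics), the F1 score is the harmonic mean of precision and recall:

```python
# Evaluation metrics reported above for spanish-clinical-ner.
precision = 0.6869
recall = 0.7349

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # → 0.7101 (matches the reported 0.7100 up to input rounding)
```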
fe471d196e2693123ae5c2ad305246bd
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.0283        | 1.0   | 500  | 0.6862          | 0.6690    | 0.6959 | 0.6822 | 0.8091   |
| 0.599         | 2.0   | 1000 | 0.6198          | 0.6856    | 0.7276 | 0.7059 | 0.8252   |
| 0.4973        | 3.0   | 1500 | 0.6181          | 0.6869    | 0.7349 | 0.7100 | 0.8263   |
9ba44d129c49adcb3a8cb0741d0d3c1b
mit
['generated_from_trainer']
false
deberta-base-finetuned-squad1-newsqa

This model is a fine-tuned version of [stevemobs/deberta-base-finetuned-squad1](https://huggingface.co/stevemobs/deberta-base-finetuned-squad1) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.7556
54b9eede4ce473cdbaabb80851873a65
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.6703        | 1.0   | 17307 | 0.7207          |
| 0.4775        | 2.0   | 34614 | 0.7556          |
bc2bfa8402a70f61a0a5493cb05a5db4
cc-by-4.0
['question generation']
false
Model Card of `research-backup/bart-base-squadshifts-vanilla-new_wiki-qg`

This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) for the question generation task on the [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) (dataset_name: new_wiki) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
a68971549d50977e7f9c385790908ebc
cc-by-4.0
['question generation']
false
Overview
- **Language model:** [facebook/bart-base](https://huggingface.co/facebook/bart-base)
- **Language:** en
- **Training data:** [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) (new_wiki)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
933f1a60726e8d7d20383ac63076bc4a
cc-by-4.0
['question generation']
false
model prediction

```python
questions = model.generate_q(
    list_context="William Turner was an English painter who specialised in watercolour landscapes",
    list_answer="William Turner",
)
```

- With `transformers`

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "research-backup/bart-base-squadshifts-vanilla-new_wiki-qg")
output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
1b14a3cfbfdd3bf8a6e034b88fccfd64
cc-by-4.0
['question generation']
false
Evaluation

- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/bart-base-squadshifts-vanilla-new_wiki-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.new_wiki.json)

|            | Score | Type     | Dataset                                                                    |
|:-----------|------:|:---------|:---------------------------------------------------------------------------|
| BERTScore  | 92.97 | new_wiki | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_1     | 29.14 | new_wiki | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_2     | 19.48 | new_wiki | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_3     | 13.85 | new_wiki | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_4     | 10.27 | new_wiki | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| METEOR     | 23.65 | new_wiki | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| MoverScore | 64.36 | new_wiki | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| ROUGE_L    | 26.47 | new_wiki | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
0d2f44924b02103756a6317fcaf466f8
cc-by-4.0
['question generation']
false
Training hyperparameters

The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squadshifts
- dataset_name: new_wiki
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: None
- model: facebook/bart-base
- max_length: 512
- max_length_output: 32
- epoch: 4
- batch: 8
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 8
- label_smoothing: 0.15

The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/bart-base-squadshifts-vanilla-new_wiki-qg/raw/main/trainer_config.json).
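One derived quantity worth spelling out from the configuration above: with a per-step batch of 8 and 8 gradient-accumulation steps, each optimizer update effectively sees 64 examples. A minimal sketch:

```python
# Values taken from the fine-tuning configuration above.
batch = 8
gradient_accumulation_steps = 8

# Gradients are accumulated over several mini-batches before one update,
# so the effective batch size per update is their product.
effective_batch = batch * gradient_accumulation_steps
print(effective_batch)  # → 64
```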
92d03f1b7b5bfd5c7a19e092ad9ecb79
other
['generated_from_keras_callback']
false
nateraw/mit-b0-finetuned-sidewalks

This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.5197
- Validation Loss: 0.6268
- Validation Mean Iou: 0.2719
- Validation Mean Accuracy: 0.3442
- Validation Overall Accuracy: 0.8180
- Validation Per Category Iou: [0. 0.62230678 0.81645513 0.18616589 0.66669478 0.30574734 nan 0.36681201 0.31128062 0. 0.76635363 0. 0. nan 0. 0.37874505 0. 0. 0.68193241 0. 0.48867838 0.25809644 0. nan 0. 0.25765818 0. 0. 0.81965205 0.71604385 0.9214592 0. 0.00636635 0.12957446 0. ]
- Validation Per Category Accuracy: [0. 0.89469845 0.88320521 0.45231002 0.72104833 0.3386303 nan 0.53522723 0.72026843 0. 0.93197124 0. 0. nan 0. 0.45525816 0. 0. 0.87276184 0. 0.60762821 0.29654901 0. nan 0. 0.32162193 0. 0. 0.90797988 0.89199119 0.96388697 0. 0.00646084 0.21171965 0. ]
- Epoch: 5
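The reported Validation Mean Iou is the nan-ignoring mean of the per-category IoU values above (nan marks categories absent from the validation set). This can be checked with plain Python, using the values copied from the list above:

```python
import math

# Per-category IoU from the evaluation above; nan entries are categories
# that do not occur in the validation set and are excluded from the mean.
per_category_iou = [
    0.0, 0.62230678, 0.81645513, 0.18616589, 0.66669478, 0.30574734,
    float("nan"), 0.36681201, 0.31128062, 0.0, 0.76635363, 0.0, 0.0,
    float("nan"), 0.0, 0.37874505, 0.0, 0.0, 0.68193241, 0.0,
    0.48867838, 0.25809644, 0.0, float("nan"), 0.0, 0.25765818, 0.0,
    0.0, 0.81965205, 0.71604385, 0.9214592, 0.0, 0.00636635,
    0.12957446, 0.0,
]

valid = [v for v in per_category_iou if not math.isnan(v)]
mean_iou = sum(valid) / len(valid)
print(round(mean_iou, 4))  # → 0.2719, matching the reported Validation Mean Iou
```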
ef4ace67fd83bd0e5c6989bc17460d3c
other
['generated_from_keras_callback']
false
Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 6e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
6dfe83176552555b7feff9c529215785
other
['generated_from_keras_callback']
false
Training results

| Train Loss | Validation Loss | Validation Mean Iou | Validation Mean Accuracy | Validation Overall Accuracy | Validation Per Category Iou | Validation Per Category Accuracy | Epoch |
|:----------:|:---------------:|:-------------------:|:------------------------:|:---------------------------:|:---------------------------:|:--------------------------------:|:-----:|
| 1.3430 | 0.8858 | 0.1724 | 0.2253 | 0.7508 | [0.00000000e+00 5.02535817e-01 7.94050536e-01 1.37476079e-01 5.28949130e-01 1.76391302e-01 nan 1.19967229e-01 0.00000000e+00 0.00000000e+00 6.61310784e-01 0.00000000e+00 0.00000000e+00 nan 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 5.06634036e-01 0.00000000e+00 7.22567226e-02 5.35294630e-03 0.00000000e+00 0.00000000e+00 0.00000000e+00 1.53949868e-02 0.00000000e+00 0.00000000e+00 7.37842004e-01 5.78989440e-01 8.52258994e-01 0.00000000e+00 0.00000000e+00 6.16858377e-05 0.00000000e+00] | [0.00000000e+00 5.80613096e-01 9.43852033e-01 1.50019637e-01 5.77268577e-01 3.25241508e-01 nan 1.68319967e-01 0.00000000e+00 0.00000000e+00 8.60308871e-01 0.00000000e+00 0.00000000e+00 nan 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 9.04260401e-01 0.00000000e+00 7.74112939e-02 5.58025588e-03 0.00000000e+00 nan 0.00000000e+00 1.56055377e-02 0.00000000e+00 0.00000000e+00 8.41648672e-01 8.58416118e-01 9.02457570e-01 0.00000000e+00 0.00000000e+00 6.18892982e-05 0.00000000e+00] | 0 |
| 0.8402 | 0.7211 | 0.2203 | 0.2900 | 0.7927 | [0. 0.60561012 0.80467888 0.10134538 0.57674712 0.21967639 nan 0.279315 0.28998136 0. 0.71924852 0. 0. nan 0. 0.10241989 0. 0. 0.60537245 0. 0.37966409 0.0624908 0. 0. 0. 0.11869763 0. 0. 0.79675107 0.70541969 0.89177953 0. 0. 0.01097213 0. ] | [0. 0.70687024 0.92710849 0.47653578 0.6809956 0.28562204 nan 0.35954555 0.53804171 0. 0.87451178 0. 0. nan 0. 0.10473185 0. 0. 0.88548482 0. 0.52011987 0.06421075 0. nan 0. 0.13802701 0. 0. 0.9278545 0.83106582 0.94693817 0. 0. 0.01170072 0. ] | 1 |
| 0.7051 | 0.6513 | 0.2568 | 0.3210 | 0.8151 | [0.00000000e+00 6.31500555e-01 8.33347761e-01 2.40727740e-01 6.71879162e-01 2.32727132e-01 nan 3.15720178e-01 3.22578864e-01 0.00000000e+00 7.51066980e-01 0.00000000e+00 0.00000000e+00 nan 0.00000000e+00 3.01090014e-01 0.00000000e+00 0.00000000e+00 6.56592309e-01 0.00000000e+00 3.82317489e-01 2.25385079e-01 0.00000000e+00 nan 0.00000000e+00 2.34975219e-01 0.00000000e+00 0.00000000e+00 7.92710603e-01 6.82508692e-01 9.02369099e-01 0.00000000e+00 5.10019193e-04 4.02361131e-02 0.00000000e+00] | [0.00000000e+00 7.76355941e-01 9.39707165e-01 3.90888278e-01 7.70256989e-01 2.84066636e-01 nan 4.57106724e-01 6.33498392e-01 0.00000000e+00 9.05789013e-01 0.00000000e+00 0.00000000e+00 nan 0.00000000e+00 3.57230962e-01 0.00000000e+00 0.00000000e+00 8.45761217e-01 0.00000000e+00 5.16681541e-01 2.82796479e-01 0.00000000e+00 nan 0.00000000e+00 3.07634724e-01 0.00000000e+00 0.00000000e+00 9.04391068e-01 8.86212453e-01 9.64570665e-01 0.00000000e+00 5.17411580e-04 4.71742075e-02 0.00000000e+00] | 2 |
| 0.6294 | 0.6365 | 0.2695 | 0.3320 | 0.8244 | [0. 0.63840754 0.83879521 0.31781353 0.69394774 0.22324776 nan 0.35012894 0.31369877 0. 0.7683448 0. 0. nan 0. 0.36532292 0. 0. 0.65554136 0. 0.37438724 0.25682621 0. nan 0. 0.23051151 0. 0. 0.81818163 0.7633018 0.91092518 0. 0.00145576 0.10215516 0. ] | [0. 0.76103704 0.95305272 0.43848725 0.78760908 0.25645014 nan 0.48971828 0.61853472 0. 0.90793733 0. 0. nan 0. 0.48772201 0. 0. 0.84205031 0. 0.53308407 0.36285878 0. nan 0. 0.27953916 0. 0. 0.93079576 0.87079757 0.96477884 0. 0.00147054 0.13899972 0. ] | 3 |
| 0.5686 | 0.6122 | 0.2715 | 0.3360 | 0.8256 | [0.00000000e+00 6.38345814e-01 8.56252996e-01 3.07043269e-01 6.87537894e-01 3.06534041e-01 nan 3.84145525e-01 3.19438916e-01 0.00000000e+00 7.57233152e-01 0.00000000e+00 0.00000000e+00 nan 0.00000000e+00 4.06585843e-01 0.00000000e+00 0.00000000e+00 6.47648546e-01 2.91885581e-04 4.00547422e-01 1.97261484e-01 0.00000000e+00 nan 0.00000000e+00 2.20793008e-01 0.00000000e+00 0.00000000e+00 8.19526784e-01 7.19306080e-01 9.20192720e-01 0.00000000e+00 2.23374930e-03 9.77508243e-02 0.00000000e+00] | [0.00000000e+00 7.89438910e-01 9.16367241e-01 4.32251205e-01 7.89740409e-01 4.88566404e-01 nan 5.36825005e-01 6.47787376e-01 0.00000000e+00 9.32641501e-01 0.00000000e+00 0.00000000e+00 nan 0.00000000e+00 4.73813253e-01 0.00000000e+00 0.00000000e+00 9.09004353e-01 2.91885581e-04 4.37175308e-01 2.25663128e-01 0.00000000e+00 nan 0.00000000e+00 2.60992057e-01 0.00000000e+00 0.00000000e+00 9.19328058e-01 9.02898346e-01 9.65529369e-01 0.00000000e+00 2.23984750e-03 1.20880721e-01 0.00000000e+00] | 4 |
| 0.5197 | 0.6268 | 0.2719 | 0.3442 | 0.8180 | [0. 0.62230678 0.81645513 0.18616589 0.66669478 0.30574734 nan 0.36681201 0.31128062 0. 0.76635363 0. 0. nan 0. 0.37874505 0. 0. 0.68193241 0. 0.48867838 0.25809644 0. nan 0. 0.25765818 0. 0. 0.81965205 0.71604385 0.9214592 0. 0.00636635 0.12957446 0. ] | [0. 0.89469845 0.88320521 0.45231002 0.72104833 0.3386303 nan 0.53522723 0.72026843 0. 0.93197124 0. 0. nan 0. 0.45525816 0. 0. 0.87276184 0. 0.60762821 0.29654901 0. nan 0. 0.32162193 0. 0. 0.90797988 0.89199119 0.96388697 0. 0.00646084 0.21171965 0. ] | 5 |
ed6a8d18893f8c204891029453c8de71
bsd-3-clause
['automatic-speech-recognition']
false
Attribution

The [stt_en_citrinet_512_gamma_0_25](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/nemo/models/stt_en_citrinet_512_gamma_0_25) checkpoint by [NVIDIA](https://github.com/NVIDIA), licensed under [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/), was used as the initial checkpoint.
3ce9e48a3a7b7ef6f99fe4fe5814404c
mit
['roberta-base', 'roberta-base-epoch_23']
false
RoBERTa, Intermediate Checkpoint - Epoch 23

This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We trained this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before training) to enable studying the training dynamics of such models, among other possible use-cases. These models were trained as part of a work that studies how simple statistics of the data, such as co-occurrences, affect model predictions, described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_23.
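Based on the checkpoint naming used for this model (`roberta-base-epoch_23`), the 84 checkpoints presumably run from `epoch_0` (the random initialization) through `epoch_83`. A small sketch of enumerating them; the name pattern is an assumption inferred from this card, not something the card states for the other repositories:

```python
# Hypothetical enumeration of the 84 intermediate checkpoints, assuming
# every repository follows the same naming pattern as this one (epoch_23).
checkpoints = [f"roberta-base-epoch_{epoch}" for epoch in range(84)]

print(len(checkpoints))  # → 84: epoch_0 (random init) through epoch_83
print(checkpoints[23])   # → roberta-base-epoch_23 (this model)
```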
b635157c7bd7a2ffe3954eb39524e70a
apache-2.0
['automatic-speech-recognition', 'uk']
false
exp_w2v2t_uk_hubert_s33 Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition using the train split of [Common Voice 7.0 (uk)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
ae04464968f5c18f8270b88d0d724851
apache-2.0
['automatic-speech-recognition', 'sv-SE']
false
exp_w2v2t_sv-se_xlsr-53_s328 Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
6c27ebed91c1e5d059643745cee6da8c
apache-2.0
['translation']
false
deu-msa

* source group: German
* target group: Malay (macrolanguage)
* OPUS readme: [deu-msa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-msa/README.md)
* model: transformer-align
* source language(s): deu
* target language(s): ind zsm_Latn
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-msa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-msa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-msa/opus-2020-06-17.eval.txt)
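Since a sentence-initial language token of the form `>>id<<` is required, input text has to be prefixed with the target-language ID before translation. A minimal sketch (the helper function name is ours, not part of the model):

```python
# Prepend the required target-language token, e.g. ">>ind<<" for Indonesian
# or ">>zsm_Latn<<" for Standard Malay (the valid IDs listed above).
def add_language_token(text: str, target_lang: str) -> str:
    return f">>{target_lang}<< {text}"

print(add_language_token("Wie geht es dir?", "ind"))
# → >>ind<< Wie geht es dir?
```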
9a903a0a3d72273351ee818bce015d8a
apache-2.0
['translation']
false
System Info:
- hf_name: deu-msa
- source_languages: deu
- target_languages: msa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-msa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['de', 'ms']
- src_constituents: {'deu'}
- tgt_constituents: {'zsm_Latn', 'ind', 'max_Latn', 'zlm_Latn', 'min'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-msa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-msa/opus-2020-06-17.test.txt
- src_alpha3: deu
- tgt_alpha3: msa
- short_pair: de-ms
- chrF2_score: 0.607
- bleu: 34.0
- brevity_penalty: 0.954
- ref_len: 3729.0
- src_name: German
- tgt_name: Malay (macrolanguage)
- train_date: 2020-06-17
- src_alpha2: de
- tgt_alpha2: ms
- prefer_old: False
- long_pair: deu-msa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
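As a consistency check on the scores above (a sketch using only the listed numbers): BLEU's brevity penalty is exp(1 - ref_len/hyp_len) when the hypothesis is shorter than the reference, so the reported penalty of 0.954 with ref_len 3729 implies a system output of roughly 3561 tokens:

```python
import math

# Brevity penalty as defined by BLEU: bp = exp(1 - ref_len / hyp_len)
# for hyp_len < ref_len. Solving for hyp_len from the reported values.
bp = 0.954
ref_len = 3729.0

hyp_len = ref_len / (1 - math.log(bp))
print(round(hyp_len))  # → 3561 (implied system output length)

# Round-trip: plugging the implied length back in reproduces the penalty.
assert abs(math.exp(1 - ref_len / hyp_len) - bp) < 1e-9
```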
85ee87c6be4fb73eed54a154a107a591
creativeml-openrail-m
[]
false
<a href="https://colab.research.google.com/drive/1noyBA_JrYO6Lk6cwxsNZ_jdJ-Jtaf82G?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg"></a>

PromptCLUE: a zero-shot learning model for all Chinese tasks

This model is an upgrade of PromptCLUE-base: it was trained further (+50% steps) on more tasks (+50% tasks) and more task types, with newly added task types including paraphrasing, error correction, and question answering. It was pre-trained on a 100-billion-token Chinese corpus, has consumed 1.5 trillion Chinese tokens in total, and was prompt-tuned on hundreds of tasks. For understanding tasks such as classification, sentiment analysis, and extraction, the label scheme can be customized; for a variety of generation tasks, free-form sampled generation is supported.

<a href='https://www.cluebenchmarks.com/clueai.html'>Online Demo</a> &nbsp; | <a href='https://www.clueai.cn'>Use the clueai toolkit and API (large version)</a> &nbsp; | &nbsp; <a href='https://github.com/clue-ai/PromptCLUE'>GitHub project</a>&nbsp; | &nbsp;<a href='https://colab.research.google.com/drive/1noyBA_JrYO6Lk6cwxsNZ_jdJ-Jtaf82G?usp=sharing
1d333ccdba77a427c27a3a10ba1cf613
creativeml-openrail-m
[]
false
Load the model

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("ClueAI/PromptCLUE-base-v1-5")
model = T5ForConditionalGeneration.from_pretrained("ClueAI/PromptCLUE-base-v1-5")
```

Run inference with the model:

```python
import torch
```
77c8f5df8a776dbb92b956670a9ca71f
creativeml-openrail-m
[]
false
```python
# device = torch.device('cpu')
device = torch.device('cuda')
model.to(device)

def preprocess(text):
    return text.replace("\n", "_")

def postprocess(text):
    return text.replace("_", "\n")

def answer(text, sample=False, top_p=0.8):
    '''sample: whether to sample; set to True for generation tasks.
    top_p: between 0 and 1; higher values give more diverse output.'''
    text = preprocess(text)
    encoding = tokenizer(text=[text], truncation=True, padding=True, max_length=768, return_tensors="pt").to(device)
    if not sample:
        out = model.generate(**encoding, return_dict_in_generate=True, output_scores=False, max_length=128, num_beams=4, length_penalty=0.6)
    else:
        out = model.generate(**encoding, return_dict_in_generate=True, output_scores=False, max_length=64, do_sample=True, top_p=top_p)
    out_text = tokenizer.batch_decode(out["sequences"], skip_special_tokens=True)
    return postprocess(out_text[0])
```
904be6c450da1329a61f2118f2bc8ef6
creativeml-openrail-m
[]
false
Paraphrasing (paraphrase) [sample=True]

```bash
Input: 生成与下列文字相同意思的句子: 白云遍地无人扫 答案:
Model output: 白云散去无踪,没人扫。

Input: 用另外的话复述下面的文字: 怎么到至今还不回来,这满地的白云幸好没人打扫。 答案:
Model output: 满地白云无扫处

Input: 改写下面的文字,确保意思相同: 一个如此藐视本国人民民主权利的人,怎么可能捍卫外国人的民权? 答案:
Model output: 对一个如此藐视本国人民民主权利的人,怎么能捍卫外国人的民权?
```
6c4a06986cadbad24a7af5b0a5e77039
creativeml-openrail-m
[]
false
News classification (classify)

```bash
Input: 分类任务: 折价率过低遭抛售基金泰和跌7.15%,证券时报记者 朱景锋本报讯 由于折价率在大盘封基中处于最低水平,基金泰和昨日遭到投资者大举抛售,跌幅达到7.15%,远超大盘。盘面显示,基金泰和随大盘高开,之后开始震荡走低,午后开始加速下行,几乎没有像样反弹。截至收盘时,在沪深300指数仅下跌2.56%的情况下,基金泰和收盘跌幅高达7.15%,在所有封基中跌幅最大,而昨日多数封基跌幅在2%左右。 选项:财经,娱乐,时政,股票 答案:
Model output: 财经
```
a44b18d5ef3925030edae66705ac039f
creativeml-openrail-m
[]
false
Reading comprehension (generate)

```bash
Input: 阅读文章,给出答案: 段落: 港汇指数,全称港元实际汇兑指数(Effective Exchange Rate Index for the Hong Kong Dollar)是由香港政府统计处编制的一项指数,以反映港元与香港主要贸易伙伴之货币的名义有效汇率加权平均数的变动情况。加权比重是按1999年至2000年平均贸易模式所制定,但政府并未有公布详细的计算公式。旧港汇指数基准日为2000年1月1日,基数为100点。由2012年1月3日起,新系列港汇指数 (包括15种货币及以2010年1月 = 100) 已取代旧港汇指数系列。港汇指数的作用,主要是用于反映香港的货品及服务的价格相对于其主要贸易伙伴的变动,并通常被视作反映香港价格竞争力的指标。 问题:港汇指数的加权比重如何制定? 答案:
Model output: 按1999年至2000年平均贸易模式所制定
```
6286375efdef27be78ee4119072c0ab4
creativeml-openrail-m
[]
false
Question generation (generate)

```bash
Input: 问题生成: 中新网2022年9月22日电 22日,商务部召开例行新闻发布会,商务部新闻发言人束珏婷表示,今年1-8月,中国实际使用外资1384亿美元,增长20.2%;其中,欧盟对华投资增长123.7%(含通过自由港投资数据)。这充分表明,包括欧盟在内的外国投资者持续看好中国市场,希望继续深化对华投资合作。 答案:
Model output:
1、商务部发言人束珏婷表示,中国去年1-8月实际使用外资1384亿美元,增长20.2%,其中欧盟对华投资增长多少?
2、欧盟对华投资的增长是否达到预期?
3、欧盟在对华投资合作方面取得什么样的成果?
```
d1fe763fa8486f4302cfa047f087aae7
cc-by-4.0
['conversational', 'transformers']
false
DialoGPT_AfriWOZ (Pidgin)

This is a fine-tuned model of DialoGPT (small) on the AfriWOZ dataset. It is intended to be used as a conversational system in Nigerian Pidgin English. The dataset it is trained on is limited in scope, covering only the restaurant, hotel, taxi, and booking domains. The perplexity achieved on the validation set is 38.52.

* Generation example from an interactive environment:

|Role | Response |
|------|------------|
|User | I hear say restaurant dey here. |
|Bot | I dey find restaurant. |
|User | Abeg you fit tell me which kind chop dey? |
|Bot | I no dey sure right now. |
|User | You do well. Thank you. |
|Bot | I dey find restauarant. |

Please find the information about preprocessing, training and full details of DialoGPT in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT). The paper for this work can be found on arXiv: [https://arxiv.org/pdf/2204.08083.pdf](https://arxiv.org/pdf/2204.08083.pdf)
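A validation perplexity of 38.52 corresponds to a mean cross-entropy loss of about 3.65 nats per token, since perplexity = exp(loss). A quick check using only the number reported above:

```python
import math

# Perplexity reported on the validation set above, and the per-token
# cross-entropy loss it implies (perplexity = exp(loss)).
perplexity = 38.52
loss = math.log(perplexity)
print(round(loss, 2))  # → 3.65

# Round-trip: exponentiating the loss recovers the reported perplexity.
assert abs(math.exp(loss) - perplexity) < 1e-9
```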
5566ca7a9a661bac51c97e4be74b20ee
cc-by-4.0
['conversational', 'transformers']
false
How to use

Now we are ready to try out how the model works as a chatting partner!

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("tosin/dialogpt_afriwoz_pidgin")
model = AutoModelForCausalLM.from_pretrained("tosin/dialogpt_afriwoz_pidgin")
```
cdf5615c9909e4527276c5b8be737271
apache-2.0
['generated_from_trainer']
false
bert-large-cased-sigir-support-refute-no-label-40-2nd-test-LR100-40

This model is a fine-tuned version of [jojoUla/bert-large-cased-sigir-support-refute-no-label-40](https://huggingface.co/jojoUla/bert-large-cased-sigir-support-refute-no-label-40) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.4444
0921fcb0ec566427291dea005d69ce71
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40.0
- mixed_precision_training: Native AMP
d5532c0aa031a530a533f48448a98ef7
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.1928 | 1.0  | 1  | 5.0343 |
| 3.8865 | 2.0  | 2  | 4.7751 |
| 4.0526 | 3.0  | 3  | 2.2212 |
| 2.3444 | 4.0  | 4  | 1.6810 |
| 1.596  | 5.0  | 5  | 1.3135 |
| 1.6805 | 6.0  | 6  | 1.2568 |
| 1.1736 | 7.0  | 7  | 1.5288 |
| 1.2663 | 8.0  | 8  | 1.4556 |
| 1.3703 | 9.0  | 9  | 1.1139 |
| 0.9768 | 10.0 | 10 | 1.0658 |
| 1.0132 | 11.0 | 11 | 1.2556 |
| 0.9896 | 12.0 | 12 | 1.1046 |
| 1.1184 | 13.0 | 13 | 1.0522 |
| 0.8142 | 14.0 | 14 | 1.3122 |
| 0.706  | 15.0 | 15 | 1.0713 |
| 0.7227 | 16.0 | 16 | 1.4111 |
| 0.7169 | 17.0 | 17 | 0.5603 |
| 0.7922 | 18.0 | 18 | 1.0911 |
| 0.7763 | 19.0 | 19 | 0.6882 |
| 0.5832 | 20.0 | 20 | 1.4459 |
| 0.7265 | 21.0 | 21 | 1.5459 |
| 0.7249 | 22.0 | 22 | 0.9200 |
| 0.5397 | 23.0 | 23 | 1.0976 |
| 0.5063 | 24.0 | 24 | 1.1201 |
| 0.6569 | 25.0 | 25 | 1.0701 |
| 0.472  | 26.0 | 26 | 1.7735 |
| 0.6124 | 27.0 | 27 | 1.3597 |
| 0.6042 | 28.0 | 28 | 0.9292 |
| 0.5232 | 29.0 | 29 | 1.4994 |
| 0.4961 | 30.0 | 30 | 1.2059 |
| 0.371  | 31.0 | 31 | 1.2648 |
| 0.4746 | 32.0 | 32 | 1.0907 |
| 0.4901 | 33.0 | 33 | 1.2564 |
| 0.5066 | 34.0 | 34 | 1.9231 |
| 0.6352 | 35.0 | 35 | 1.0160 |
| 0.5672 | 36.0 | 36 | 1.2958 |
| 0.5139 | 37.0 | 37 | 0.9384 |
| 0.5583 | 38.0 | 38 | 1.9518 |
| 0.5443 | 39.0 | 39 | 1.4243 |
| 0.5935 | 40.0 | 40 | 1.3882 |
653340701644768ea493695868472a0d
apache-2.0
['generated_from_trainer']
false
irony_trained_1234567

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set:
- Loss: 1.6580
- F1: 0.6766
be0c6e7ee32d55134c8952f0eaf50eea
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2.6774391860025942e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 1234567
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
ca42ac84437f9960f52ff9b51927d8a6
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6608        | 1.0   | 716  | 0.6057          | 0.6704 |
| 0.5329        | 2.0   | 1432 | 0.8935          | 0.6621 |
| 0.3042        | 3.0   | 2148 | 1.3871          | 0.6822 |
| 0.1769        | 4.0   | 2864 | 1.6580          | 0.6766 |
3d1eea0af2ad3515065e3af02b7fe3d3
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xls-r-300m-turkish-colab

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 2.9701
- Wer: 1.0
805d6c719609e9f2ad1a7ddd5fbc5281
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
cbed8f6a894ad55e518605a3a1c0ff4e
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 4.3108        | 16.0  | 400  | 2.9378          | 1.0 |
| 3.0115        | 32.0  | 800  | 2.9701          | 1.0 |
e2a89bb4f8f78b1b9f7bb0a886a4bfc3
apache-2.0
['generated_from_trainer']
false
distilgpt2-finetuned-katpoems-lm-15-epoch

This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 4.8145
8e25245ca85e4373b3b338863762a933
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15.0
5ccaf89727bea44eaecd9e17d37194d4
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 59 | 4.6495 | | No log | 2.0 | 118 | 4.6555 | | No log | 3.0 | 177 | 4.6696 | | No log | 4.0 | 236 | 4.6930 | | No log | 5.0 | 295 | 4.7132 | | No log | 6.0 | 354 | 4.7185 | | No log | 7.0 | 413 | 4.7444 | | No log | 8.0 | 472 | 4.7611 | | 4.2244 | 9.0 | 531 | 4.7794 | | 4.2244 | 10.0 | 590 | 4.7841 | | 4.2244 | 11.0 | 649 | 4.7929 | | 4.2244 | 12.0 | 708 | 4.8048 | | 4.2244 | 13.0 | 767 | 4.8058 | | 4.2244 | 14.0 | 826 | 4.8124 | | 4.2244 | 15.0 | 885 | 4.8145 |
70fd4041b524b92d520ebaeca76efdd4
apache-2.0
['thai', 'token-classification', 'pos', 'wikipedia', 'dependency-parsing']
false
Model Description This is a BERT model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-base-th-cased](https://huggingface.co/Geotrend/bert-base-th-cased). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
499d636c9bc3dfb0822e48cfb9849678
apache-2.0
['thai', 'token-classification', 'pos', 'wikipedia', 'dependency-parsing']
false
How to Use ```py from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-thai-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-thai-upos") ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/bert-base-thai-upos") ```
43b466ed838815108f0802567e98ed7e
mit
[]
false
uma-meme-style on Stable Diffusion This is the `<uma-meme-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<uma-meme-style> 0](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/unnamed_7_.jpg) ![<uma-meme-style> 1](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/28.jpg) ![<uma-meme-style> 2](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/unnamed_11_.jpg) ![<uma-meme-style> 3](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/unnamed_12_.jpg) ![<uma-meme-style> 4](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/unnamed_1_.png) ![<uma-meme-style> 5](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/22.jpg) ![<uma-meme-style> 6](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/10.jpg) ![<uma-meme-style> 7](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/KakaoTalk_20220904_015246222.jpg) ![<uma-meme-style> 8](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/50.jpg) ![<uma-meme-style> 9](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/unnamed.png) ![<uma-meme-style> 10](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/unnamed_6_.jpg) ![<uma-meme-style> 
11](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/21.jpg) ![<uma-meme-style> 12](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/FbCVln9WIAA74Z2.png) ![<uma-meme-style> 13](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/file.jpg) ![<uma-meme-style> 14](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/tt0.png) ![<uma-meme-style> 15](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/31.jpg) ![<uma-meme-style> 16](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/unnamed-1.jpg) ![<uma-meme-style> 17](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/unnamed.jpg) ![<uma-meme-style> 18](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/unnamed_5_.jpg) ![<uma-meme-style> 19](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/3-30-25.png) ![<uma-meme-style> 20](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/Fb-Pk97aMAIgbYr.png) ![<uma-meme-style> 21](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/2.jpg) ![<uma-meme-style> 22](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/unnamed_2_.png) ![<uma-meme-style> 23](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/6.jpg) ![<uma-meme-style> 24](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/unnamed_1_.jpg) ![<uma-meme-style> 25](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/FZoyWUcXwAE3k2K.png) ![<uma-meme-style> 26](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/unnamed_4_.jpg) ![<uma-meme-style> 
27](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/2022-09-14_13-02-28.png) ![<uma-meme-style> 28](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/16.jpg) ![<uma-meme-style> 29](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/unnamed_9_.jpg) ![<uma-meme-style> 30](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/unnamed_10_.jpg) ![<uma-meme-style> 31](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/4.jpg) ![<uma-meme-style> 32](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/unnamed_3_.jpg) ![<uma-meme-style> 33](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/unnamed_8_.jpg)
442b8e847443ad1bee1325d5ea12aaab
apache-2.0
['generated_from_trainer']
false
mt5-small-MT5-Intento1 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: nan - Rouge1: 3.9645 - Rouge2: 0.8023 - Rougel: 3.8615 - Rougelsum: 3.8591 - Gen Len: 13.7379
d4f79c119ecf3643ed713b93bee203c5
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 0.0 | 1.0 | 6034 | nan | 3.9645 | 0.8023 | 3.8615 | 3.8591 | 13.7379 |
28587f8f2610a256fff09a86df010ef8
apache-2.0
['Image Captioning']
false
Model Description These are model weights originally provided by the authors of the paper [Text-Only Training for Image Captioning using Noise-Injected CLIP](https://arxiv.org/pdf/2211.00575.pdf). Their method aims to train CLIP with only text samples. Therefore they are injecting zero-mean Gaussian Noise into the text embeddings before decoding. In their words: *Specifically, we assume that the visual embedding corresponding to a text embedding lies somewhere within a ball of small radius around the text embedding (see Fig. 1). We would like all text embeddings in this ball to decode to the same caption, which should also correspond to the visual content mapped to this ball. We implement this intuition by adding zero-mean Gaussian noise of STD to the text embedding before decoding it.* The "Noise Level" of 0.001 is equivalent to the Noise Variance, which is the square of the STD. The reported metrics are results of a model with a Noise Variance of 0.016, which the authors unfortunately do not provide in their repository.
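The injection step itself is simple. A dependency-free sketch (the 8-dimensional embedding is a toy stand-in; real CLIP text embeddings are 512- or 768-dimensional) of adding zero-mean Gaussian noise with variance 0.016, i.e. STD sqrt(0.016), to a text embedding before decoding:

```python
import math
import random

NOISE_VARIANCE = 0.016            # the variance reported for this checkpoint
STD = math.sqrt(NOISE_VARIANCE)   # Gaussian STD is the square root of the variance

def inject_noise(embedding, std=STD, seed=None):
    """Return the embedding shifted by zero-mean Gaussian noise of the given STD."""
    rng = random.Random(seed)
    return [x + rng.gauss(0.0, std) for x in embedding]

# Toy 8-dimensional "text embedding" for illustration.
text_embedding = [0.1] * 8
noisy = inject_noise(text_embedding, seed=0)
```

During training, each decoded caption sees a freshly perturbed embedding, which is what lets the decoder treat the whole noise ball around a text embedding as one caption.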
c7a23859b1d423a51764f825bd963b7e
cc-by-4.0
['generated_from_trainer']
false
hing-roberta-finetuned-non-code-mixed-DS This model is a fine-tuned version of [l3cube-pune/hing-roberta](https://huggingface.co/l3cube-pune/hing-roberta) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1286 - Accuracy: 0.6656 - Precision: 0.6575 - Recall: 0.6554 - F1: 0.6556
126fd78327d56c22809d6167c112332b
cc-by-4.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.824279936868144e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 43 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5
df24fd3f44a87145b2aedbe8378b6e38
cc-by-4.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.8233 | 2.0 | 926 | 0.8104 | 0.6656 | 0.6607 | 0.6537 | 0.6555 | | 0.3924 | 3.99 | 1852 | 1.1286 | 0.6656 | 0.6575 | 0.6554 | 0.6556 |
4bc9481787e6d1da3d68438fcb6da061
creativeml-openrail-m
[]
false
<div style="display: flex; flex-direction: row; flex-wrap: wrap"> <a href="https://www.patreon.com/user?u=29466374" target="_blank"> <img src="https://img.shields.io/badge/Patreon-F96854?style=for-the-badge&logo=patreon&logoColor=white" alt="Patreon"/> </a> <a href="https://twitter.com/nerijs" target="_blank"> <img src="https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white" alt="Twitter"/> </a> </div>
a09db4958772194162b2fcbd21b8fa7a
creativeml-openrail-m
[]
false
coralchar-diffusion-v1 Stable Diffusion v1.5 model trained to generate cute character portraits <div style="display: flex; flex-direction: row; flex-wrap: wrap"> <img src="https://s3.amazonaws.com/moonup/production/uploads/1670205150413-6303f37c3926de1f7ec42d3e.png" width="256"> <img src="https://s3.amazonaws.com/moonup/production/uploads/1670205171617-6303f37c3926de1f7ec42d3e.png" width="256"> </div>
333e3028e0b26994a40f27f599720d6b
creativeml-openrail-m
[]
false
How to use - Download the model and use it on your desired UI (Tested on AUTOMATIC1111's) .ckpt and Diffusers version available - Trigger the style in your prompt with the **coralchar** token, look at the next section for more examples - If you want to use the inpainting model, you can use it like a normal v1.5 model
8b067ca31f557f011d7dbc341c55eaa1
creativeml-openrail-m
[]
false
Examples on step-6000 model **a woman wearing blue jeans and a white tank top** Steps: 20, Sampler: DPM++ SDE, CFG scale: 7, Size: 512x768 <img src="https://s3.amazonaws.com/moonup/production/uploads/1670204360798-6303f37c3926de1f7ec42d3e.png" width="512"/> **a man wearing a black puffy vest** Steps: 20, Sampler: DPM++ SDE, CFG scale: 7, Size: 512x768 <img src="https://s3.amazonaws.com/moonup/production/uploads/1670204467592-6303f37c3926de1f7ec42d3e.png" width="512"/>
1401a9cb60fcb9b6daae8bbbada9186a
creativeml-openrail-m
[]
false
Examples on inpainting model **a man wearing a blue puffy vest** Steps: 20, Sampler: DPM++ SDE, CFG scale: 7, Size: 512x768, 0.75 Denoising strength <h2>Original vs step_6000 vs inpainting version</h2> <div style="display: flex; flex-direction: row; flex-wrap: wrap"> <img src="https://s3.amazonaws.com/moonup/production/uploads/1670205036420-6303f37c3926de1f7ec42d3e.png" width="256"/> <img src="https://s3.amazonaws.com/moonup/production/uploads/1670204708270-6303f37c3926de1f7ec42d3e.png" width="256"/> <img src="https://s3.amazonaws.com/moonup/production/uploads/1670204954426-6303f37c3926de1f7ec42d3e.png" width="256"/> </div>
3b1e0014451efbfd02e4f7e2bcc7fe49
creativeml-openrail-m
[]
false
Tips - Best results with 512x768, outputs full body portraits - Also high step count on Euler_a gives good results - Low CFG scale outputs great results - If you want to generate different expressions, generate a base character with txt2img then adjust your outfit and details with inpainting model and use inpainting again to generate different expressions and poses Please consider supporting further research on my Patreon: <a href="https://www.patreon.com/user?u=29466374" target="_blank"> <img src="https://img.shields.io/badge/Patreon-F96854?style=for-the-badge&logo=patreon&logoColor=white" alt="Patreon"/> </a> If you have any question, suggestion for new models or need help in general with SD related stuff, don't hesitate to reach out on Twitter: <a href="https://twitter.com/nerijs" target="_blank"> <img src="https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white" alt="Twitter"/> </a>
d0bb1fc944b9153c2c0bfa699bb017bc
apache-2.0
['whisper-event']
false
Whisper Telugu Large-v2 This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the Telugu data available from multiple publicly available ASR corpuses. It has been fine-tuned as a part of the Whisper fine-tuning sprint.
f1bd0032db3b1f74cde5d47697c1142d
apache-2.0
['whisper-event']
false
Training and evaluation data at Speech Lab, IITM Training Data: CSTD IIIT-H ASR Corpus, ULCA ASR Corpus, Shrutilipi ASR Corpus, Microsoft Research Telugu Corpus (Train+Dev), Babel ASR Corpus, Google/Fleurs (Train+Dev) set. Evaluation Data: Babel Test, Microsoft Research Telugu Corpus Test, Google/Fleurs Test set, OpenSLR.
dad208b53a76b28bb22c664c7707fb69
apache-2.0
['whisper-event']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.75e-05 - train_batch_size: 8 - eval_batch_size: 32 - seed: 22 - optimizer: adamw_bnb_8bit - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 22000 - training_steps: 75000 - mixed_precision_training: True
eb7ae6f7578e7c08ca9eca18b62ffeb2
apache-2.0
['automatic-speech-recognition', 'common_voice', 'generated_from_trainer', 'hf-asr-leaderboard', 'robust-speech-event', 'sv']
false
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - ZH-CN dataset. It achieves the following results on the evaluation set: - Loss: 0.8122 - Wer: 0.8392 - Cer: 0.2059
3072b5dd43e3210229652746415d85ef
apache-2.0
['automatic-speech-recognition', 'common_voice', 'generated_from_trainer', 'hf-asr-leaderboard', 'robust-speech-event', 'sv']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 100.0 - mixed_precision_training: Native AMP
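The `total_train_batch_size` above is not an independent knob: it is the per-device batch size times the gradient accumulation steps (8 × 4 = 32). A toy sketch of the accumulation loop, with plain Python and made-up step counts, showing how the effective batch and the number of optimizer updates fall out:

```python
def simulate_accumulation(num_microbatches, per_device_batch=8, accum_steps=4):
    """Count optimizer updates when gradients are accumulated over accum_steps micro-batches."""
    updates = 0
    for micro in range(1, num_microbatches + 1):
        # backward() would run here on a micro-batch of `per_device_batch` samples;
        # gradients keep summing until the accumulation boundary.
        if micro % accum_steps == 0:
            updates += 1  # optimizer.step() + zero_grad() happen only here
    return updates, per_device_batch * accum_steps

updates, effective_batch = simulate_accumulation(100)
print(effective_batch)  # 32 -> matches total_train_batch_size above
print(updates)          # 25
```

This is why accumulation trades memory for wall-clock time: each update sees 32 samples' worth of gradient while only 8 samples ever sit on the device at once.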
ed4f613af6aef3a972489ab5cb1026ff
apache-2.0
['automatic-speech-recognition', 'common_voice', 'generated_from_trainer', 'hf-asr-leaderboard', 'robust-speech-event', 'sv']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:| | 69.215 | 0.74 | 500 | 74.9751 | 1.0 | 1.0 | | 8.2109 | 1.48 | 1000 | 7.0617 | 1.0 | 1.0 | | 6.4277 | 2.22 | 1500 | 6.3811 | 1.0 | 1.0 | | 6.3513 | 2.95 | 2000 | 6.3061 | 1.0 | 1.0 | | 6.2522 | 3.69 | 2500 | 6.2147 | 1.0 | 1.0 | | 5.9757 | 4.43 | 3000 | 5.7906 | 1.1004 | 0.9924 | | 5.0642 | 5.17 | 3500 | 4.2984 | 1.7729 | 0.8214 | | 4.6346 | 5.91 | 4000 | 3.7129 | 1.8946 | 0.7728 | | 4.267 | 6.65 | 4500 | 3.2177 | 1.7526 | 0.6922 | | 3.9964 | 7.39 | 5000 | 2.8337 | 1.8055 | 0.6546 | | 3.8035 | 8.12 | 5500 | 2.5726 | 2.1851 | 0.6992 | | 3.6273 | 8.86 | 6000 | 2.3391 | 2.1029 | 0.6511 | | 3.5248 | 9.6 | 6500 | 2.1944 | 2.3617 | 0.6859 | | 3.3683 | 10.34 | 7000 | 1.9827 | 2.1014 | 0.6063 | | 3.2411 | 11.08 | 7500 | 1.8610 | 1.6160 | 0.5135 | | 3.1299 | 11.82 | 8000 | 1.7446 | 1.5948 | 0.4946 | | 3.0574 | 12.56 | 8500 | 1.6454 | 1.1291 | 0.4051 | | 2.985 | 13.29 | 9000 | 1.5919 | 1.0673 | 0.3893 | | 2.9573 | 14.03 | 9500 | 1.4903 | 1.0604 | 0.3766 | | 2.8897 | 14.77 | 10000 | 1.4614 | 1.0059 | 0.3653 | | 2.8169 | 15.51 | 10500 | 1.3997 | 1.0030 | 0.3550 | | 2.8155 | 16.25 | 11000 | 1.3444 | 0.9980 | 0.3441 | | 2.7595 | 16.99 | 11500 | 1.2911 | 0.9703 | 0.3325 | | 2.7107 | 17.72 | 12000 | 1.2462 | 0.9565 | 0.3227 | | 2.6358 | 18.46 | 12500 | 1.2466 | 0.9955 | 0.3333 | | 2.5801 | 19.2 | 13000 | 1.2059 | 1.0010 | 0.3226 | | 2.5554 | 19.94 | 13500 | 1.1919 | 1.0094 | 0.3223 | | 2.5314 | 20.68 | 14000 | 1.1703 | 0.9847 | 0.3156 | | 2.509 | 21.42 | 14500 | 1.1733 | 0.9896 | 0.3177 | | 2.4391 | 22.16 | 15000 | 1.1811 | 0.9723 | 0.3164 | | 2.4631 | 22.89 | 15500 | 1.1382 | 0.9698 | 0.3059 | | 2.4414 | 23.63 | 16000 | 1.0893 | 0.9644 | 0.2972 | | 2.3771 | 24.37 | 16500 | 1.0930 | 0.9505 | 0.2954 | | 2.3658 | 25.11 | 17000 | 1.0756 | 0.9609 | 0.2926 | | 2.3215 | 25.85 | 17500 | 1.0512 | 0.9614 | 0.2890 | | 2.3327 | 26.59 | 18000 | 
1.0627 | 1.1984 | 0.3282 | | 2.3055 | 27.33 | 18500 | 1.0582 | 0.9520 | 0.2841 | | 2.299 | 28.06 | 19000 | 1.0356 | 0.9480 | 0.2817 | | 2.2673 | 28.8 | 19500 | 1.0305 | 0.9367 | 0.2771 | | 2.2166 | 29.54 | 20000 | 1.0139 | 0.9223 | 0.2702 | | 2.2378 | 30.28 | 20500 | 1.0095 | 0.9268 | 0.2722 | | 2.2168 | 31.02 | 21000 | 1.0001 | 0.9085 | 0.2691 | | 2.1766 | 31.76 | 21500 | 0.9884 | 0.9050 | 0.2640 | | 2.1715 | 32.5 | 22000 | 0.9730 | 0.9505 | 0.2719 | | 2.1104 | 33.23 | 22500 | 0.9752 | 0.9362 | 0.2656 | | 2.1158 | 33.97 | 23000 | 0.9720 | 0.9263 | 0.2624 | | 2.0718 | 34.71 | 23500 | 0.9573 | 1.0005 | 0.2759 | | 2.0824 | 35.45 | 24000 | 0.9609 | 0.9525 | 0.2643 | | 2.0591 | 36.19 | 24500 | 0.9662 | 0.9570 | 0.2667 | | 2.0768 | 36.93 | 25000 | 0.9528 | 0.9574 | 0.2646 | | 2.0893 | 37.67 | 25500 | 0.9810 | 0.9169 | 0.2612 | | 2.0282 | 38.4 | 26000 | 0.9556 | 0.8877 | 0.2528 | | 1.997 | 39.14 | 26500 | 0.9523 | 0.8723 | 0.2501 | | 2.0209 | 39.88 | 27000 | 0.9542 | 0.8773 | 0.2503 | | 1.987 | 40.62 | 27500 | 0.9427 | 0.8867 | 0.2500 | | 1.9663 | 41.36 | 28000 | 0.9546 | 0.9065 | 0.2546 | | 1.9945 | 42.1 | 28500 | 0.9431 | 0.9119 | 0.2536 | | 1.9604 | 42.84 | 29000 | 0.9367 | 0.9030 | 0.2490 | | 1.933 | 43.57 | 29500 | 0.9071 | 0.8916 | 0.2432 | | 1.9227 | 44.31 | 30000 | 0.9048 | 0.8882 | 0.2428 | | 1.8784 | 45.05 | 30500 | 0.9106 | 0.8991 | 0.2437 | | 1.8844 | 45.79 | 31000 | 0.8996 | 0.8758 | 0.2379 | | 1.8776 | 46.53 | 31500 | 0.9028 | 0.8798 | 0.2395 | | 1.8372 | 47.27 | 32000 | 0.9047 | 0.8778 | 0.2379 | | 1.832 | 48.01 | 32500 | 0.9016 | 0.8941 | 0.2393 | | 1.8154 | 48.74 | 33000 | 0.8915 | 0.8916 | 0.2372 | | 1.8072 | 49.48 | 33500 | 0.8781 | 0.8872 | 0.2365 | | 1.7489 | 50.22 | 34000 | 0.8738 | 0.8956 | 0.2340 | | 1.7928 | 50.96 | 34500 | 0.8684 | 0.8872 | 0.2323 | | 1.7748 | 51.7 | 35000 | 0.8723 | 0.8718 | 0.2321 | | 1.7355 | 52.44 | 35500 | 0.8760 | 0.8842 | 0.2331 | | 1.7167 | 53.18 | 36000 | 0.8746 | 0.8817 | 0.2324 | | 1.7479 | 53.91 | 36500 | 0.8762 | 
0.8753 | 0.2281 | | 1.7428 | 54.65 | 37000 | 0.8733 | 0.8699 | 0.2277 | | 1.7058 | 55.39 | 37500 | 0.8816 | 0.8649 | 0.2263 | | 1.7045 | 56.13 | 38000 | 0.8733 | 0.8689 | 0.2297 | | 1.709 | 56.87 | 38500 | 0.8648 | 0.8654 | 0.2232 | | 1.6799 | 57.61 | 39000 | 0.8717 | 0.8580 | 0.2244 | | 1.664 | 58.35 | 39500 | 0.8653 | 0.8723 | 0.2259 | | 1.6488 | 59.08 | 40000 | 0.8637 | 0.8803 | 0.2271 | | 1.6298 | 59.82 | 40500 | 0.8553 | 0.8768 | 0.2253 | | 1.6185 | 60.56 | 41000 | 0.8512 | 0.8718 | 0.2240 | | 1.574 | 61.3 | 41500 | 0.8579 | 0.8773 | 0.2251 | | 1.6192 | 62.04 | 42000 | 0.8499 | 0.8743 | 0.2242 | | 1.6275 | 62.78 | 42500 | 0.8419 | 0.8758 | 0.2216 | | 1.5697 | 63.52 | 43000 | 0.8446 | 0.8699 | 0.2222 | | 1.5384 | 64.25 | 43500 | 0.8462 | 0.8580 | 0.2200 | | 1.5115 | 64.99 | 44000 | 0.8467 | 0.8674 | 0.2214 | | 1.5547 | 65.73 | 44500 | 0.8505 | 0.8669 | 0.2204 | | 1.5597 | 66.47 | 45000 | 0.8421 | 0.8684 | 0.2192 | | 1.505 | 67.21 | 45500 | 0.8485 | 0.8619 | 0.2187 | | 1.5101 | 67.95 | 46000 | 0.8489 | 0.8649 | 0.2204 | | 1.5199 | 68.69 | 46500 | 0.8407 | 0.8619 | 0.2180 | | 1.5207 | 69.42 | 47000 | 0.8379 | 0.8496 | 0.2163 | | 1.478 | 70.16 | 47500 | 0.8357 | 0.8595 | 0.2163 | | 1.4817 | 70.9 | 48000 | 0.8346 | 0.8496 | 0.2151 | | 1.4827 | 71.64 | 48500 | 0.8362 | 0.8624 | 0.2169 | | 1.4513 | 72.38 | 49000 | 0.8355 | 0.8451 | 0.2137 | | 1.4988 | 73.12 | 49500 | 0.8325 | 0.8624 | 0.2161 | | 1.4267 | 73.85 | 50000 | 0.8396 | 0.8481 | 0.2157 | | 1.4421 | 74.59 | 50500 | 0.8355 | 0.8491 | 0.2122 | | 1.4311 | 75.33 | 51000 | 0.8358 | 0.8476 | 0.2118 | | 1.4174 | 76.07 | 51500 | 0.8289 | 0.8451 | 0.2101 | | 1.4349 | 76.81 | 52000 | 0.8372 | 0.8580 | 0.2140 | | 1.3959 | 77.55 | 52500 | 0.8325 | 0.8436 | 0.2116 | | 1.4087 | 78.29 | 53000 | 0.8351 | 0.8446 | 0.2105 | | 1.415 | 79.03 | 53500 | 0.8363 | 0.8476 | 0.2123 | | 1.4122 | 79.76 | 54000 | 0.8310 | 0.8481 | 0.2112 | | 1.3969 | 80.5 | 54500 | 0.8239 | 0.8446 | 0.2095 | | 1.361 | 81.24 | 55000 | 0.8282 | 0.8427 | 
0.2091 | | 1.3611 | 81.98 | 55500 | 0.8282 | 0.8407 | 0.2092 | | 1.3677 | 82.72 | 56000 | 0.8235 | 0.8436 | 0.2084 | | 1.3361 | 83.46 | 56500 | 0.8231 | 0.8377 | 0.2069 | | 1.3779 | 84.19 | 57000 | 0.8206 | 0.8436 | 0.2070 | | 1.3727 | 84.93 | 57500 | 0.8204 | 0.8392 | 0.2065 | | 1.3317 | 85.67 | 58000 | 0.8207 | 0.8436 | 0.2065 | | 1.3332 | 86.41 | 58500 | 0.8186 | 0.8357 | 0.2055 | | 1.3299 | 87.15 | 59000 | 0.8193 | 0.8417 | 0.2075 | | 1.3129 | 87.89 | 59500 | 0.8183 | 0.8431 | 0.2065 | | 1.3352 | 88.63 | 60000 | 0.8151 | 0.8471 | 0.2062 | | 1.3026 | 89.36 | 60500 | 0.8125 | 0.8486 | 0.2067 | | 1.3468 | 90.1 | 61000 | 0.8124 | 0.8407 | 0.2058 | | 1.3028 | 90.84 | 61500 | 0.8122 | 0.8461 | 0.2051 | | 1.2884 | 91.58 | 62000 | 0.8086 | 0.8427 | 0.2048 | | 1.3005 | 92.32 | 62500 | 0.8110 | 0.8387 | 0.2055 | | 1.2996 | 93.06 | 63000 | 0.8126 | 0.8328 | 0.2057 | | 1.2707 | 93.8 | 63500 | 0.8098 | 0.8402 | 0.2047 | | 1.3026 | 94.53 | 64000 | 0.8097 | 0.8402 | 0.2050 | | 1.2546 | 95.27 | 64500 | 0.8111 | 0.8402 | 0.2055 | | 1.2426 | 96.01 | 65000 | 0.8088 | 0.8372 | 0.2059 | | 1.2869 | 96.75 | 65500 | 0.8093 | 0.8397 | 0.2048 | | 1.2782 | 97.49 | 66000 | 0.8099 | 0.8412 | 0.2049 | | 1.2457 | 98.23 | 66500 | 0.8134 | 0.8412 | 0.2062 | | 1.2967 | 98.97 | 67000 | 0.8115 | 0.8382 | 0.2055 | | 1.2817 | 99.7 | 67500 | 0.8128 | 0.8392 | 0.2063 |
cd01936afeed84f1ae11caa7e64005c3
mit
[]
false
child zombie on Stable Diffusion This is the `<child-zombie>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<child-zombie> 0](https://huggingface.co/sd-concepts-library/child-zombie/resolve/main/concept_images/1.jpeg) ![<child-zombie> 1](https://huggingface.co/sd-concepts-library/child-zombie/resolve/main/concept_images/2.jpeg) ![<child-zombie> 2](https://huggingface.co/sd-concepts-library/child-zombie/resolve/main/concept_images/0.jpeg)
08007e6f92b7ba84383bc736571e4ec1
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1745 - F1: 0.8505
a57d9ef9237f66494d3368b8d91b2c25
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3055 | 1.0 | 835 | 0.1842 | 0.8099 | | 0.1561 | 2.0 | 1670 | 0.1711 | 0.8452 | | 0.1016 | 3.0 | 2505 | 0.1745 | 0.8505 |
0d8317a5e2011fd0c1401d5f7f9817d3
apache-2.0
['summarization', 't5']
false
t5-small-finetuned-billsum-ca_test This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset. It achieves the following results on the evaluation set: - Loss: 2.3376 - Rouge1: 12.6315 - Rouge2: 6.9839 - Rougel: 10.9983 - Rougelsum: 11.9383 - Gen Len: 19.0
e429091ed6821e052a96584f6204c832
apache-2.0
['summarization', 't5']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | No log | 1.0 | 495 | 2.4805 | 9.9389 | 4.1239 | 8.3979 | 9.1599 | 19.0 | | 3.1564 | 2.0 | 990 | 2.3833 | 12.1026 | 6.5196 | 10.5123 | 11.4527 | 19.0 | | 2.66 | 3.0 | 1485 | 2.3496 | 12.5389 | 6.8686 | 10.8798 | 11.8636 | 19.0 | | 2.5671 | 4.0 | 1980 | 2.3376 | 12.6315 | 6.9839 | 10.9983 | 11.9383 | 19.0 |
d26832f66fc66a330712109e9179fcc5
apache-2.0
['generated_from_trainer']
false
all-roberta-large-v1-banking-2-16-5-oos This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.2564 - Accuracy: 0.3009
7f3d7ec4d8216b48e397f32f366f61a9
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.8065 | 1.0 | 1 | 2.5730 | 0.1681 | | 2.2328 | 2.0 | 2 | 2.4625 | 0.2212 | | 1.8783 | 3.0 | 3 | 2.3655 | 0.2478 | | 1.64 | 4.0 | 4 | 2.2942 | 0.2655 | | 1.4937 | 5.0 | 5 | 2.2564 | 0.3009 |
564e14fd2a79fe7bf037a9066510d7ff
apache-2.0
['generated_from_trainer']
false
distilled-mt5-small-00001b This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset. It achieves the following results on the evaluation set: - Loss: 2.8994 - Bleu: 7.5838 - Gen Len: 45.058
31f9013a2be96e21cb7cb7b944e0bd2c
cc-by-4.0
['herbert']
false
HerBERT **[HerBERT](https://en.wikipedia.org/wiki/Zbigniew_Herbert)** is a BERT-based Language Model trained on Polish corpora using Masked Language Modelling (MLM) and Sentence Structural Objective (SSO) with dynamic masking of whole words. For more details, please refer to: [HerBERT: Efficiently Pretrained Transformer-based Language Model for Polish](https://www.aclweb.org/anthology/2021.bsnlp-1.1/). Model training and experiments were conducted with [transformers](https://github.com/huggingface/transformers) in version 2.9.
9005710900b7a7e64abdfb9a4f4eba74
cc-by-4.0
['herbert']
false
Corpus HerBERT was trained on six different corpora available for the Polish language: | Corpus | Tokens | Documents | | :------ | ------: | ------: | | [CCNet Middle](https://github.com/facebookresearch/cc_net) | 3243M | 7.9M | | [CCNet Head](https://github.com/facebookresearch/cc_net) | 2641M | 7.0M | | [National Corpus of Polish](http://nkjp.pl/index.php?page=14&lang=1) | 1357M | 3.9M | | [Open Subtitles](http://opus.nlpl.eu/OpenSubtitles-v2018.php) | 1056M | 1.1M | | [Wikipedia](https://dumps.wikimedia.org/) | 260M | 1.4M | | [Wolne Lektury](https://wolnelektury.pl/) | 41M | 5.5k |
78a0277d824fc28bedcac0eddca4134c
cc-by-4.0
['herbert']
false
Tokenizer The training dataset was tokenized into subwords using a character level byte-pair encoding (``CharBPETokenizer``) with a vocabulary size of 50k tokens. The tokenizer itself was trained with a [tokenizers](https://github.com/huggingface/tokenizers) library. We kindly encourage you to use the ``Fast`` version of the tokenizer, namely ``HerbertTokenizerFast``.
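To make the scheme concrete, here is a tiny, dependency-free sketch of one character-level BPE merge step. The real 50k-token vocabulary was trained with the `tokenizers` library on the full corpus; this toy corpus only illustrates the mechanism:

```python
from collections import Counter

def most_frequent_pair(words):
    """Count adjacent symbol pairs across the corpus and return the most frequent."""
    pairs = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return max(pairs, key=pairs.get)

def merge_pair(words, pair):
    """Apply one BPE merge: fuse every occurrence of `pair` into a single symbol."""
    merged = {}
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

# Toy word-frequency corpus; each word starts as a character sequence
# (hence "character level" BPE).
words = {tuple("lody"): 5, tuple("lato"): 3, tuple("las"): 2}
pair = most_frequent_pair(words)
merged = merge_pair(words, pair)
print(pair)    # first pair reaching the top count of 5
print(merged)  # that pair is fused into one symbol wherever it occurs
```

Repeating this loop until the vocabulary reaches the target size yields the subword inventory the tokenizer then applies greedily at encoding time.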
f74c7f4ea05bf9e71c1fb425540ab274
cc-by-4.0
['herbert']
false
Usage Example code: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("allegro/herbert-base-cased") model = AutoModel.from_pretrained("allegro/herbert-base-cased") output = model( **tokenizer.batch_encode_plus( [ ( "A potem szedł środkiem drogi w kurzawie, bo zamiatał nogami, ślepy dziad prowadzony przez tłustego kundla na sznurku.", "A potem leciał od lasu chłopak z butelką, ale ten ujrzawszy księdza przy drodze okrążył go z dala i biegł na przełaj pól do karczmy." ) ], padding='longest', add_special_tokens=True, return_tensors='pt' ) ) ```
2080f80a1573e4c512267820fdb87554
cc-by-4.0
['herbert']
false
Citation If you use this model, please cite the following paper: ``` @inproceedings{mroczkowski-etal-2021-herbert, title = "{H}er{BERT}: Efficiently Pretrained Transformer-based Language Model for {P}olish", author = "Mroczkowski, Robert and Rybak, Piotr and Wr{\\'o}blewska, Alina and Gawlik, Ireneusz", booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing", month = apr, year = "2021", address = "Kiyv, Ukraine", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.bsnlp-1.1", pages = "1--10", } ```
1692695594276c9486d9d803afeb1a15
cc-by-4.0
['herbert']
false
Authors The model was trained by [**Machine Learning Research Team at Allegro**](https://ml.allegro.tech/) and [**Linguistic Engineering Group at Institute of Computer Science, Polish Academy of Sciences**](http://zil.ipipan.waw.pl/). You can contact us at: <a href="mailto:klejbenchmark@allegro.pl">klejbenchmark@allegro.pl</a>
0c9b9320d552305eefa9a5779a88d615
apache-2.0
['generated_from_keras_callback']
false
augustoortiz/bert-finetuned-squad2 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.2223 - Epoch: 0
1449802c80bf862f731f5b6043dbca6e
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11091, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16
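With `power=1.0` and `cycle=False`, the `PolynomialDecay` schedule above reduces to a straight line from the initial learning rate down to the end value over `decay_steps`. A dependency-free sketch of that formula, using the configured values (the step arguments are just illustrative probe points):

```python
def polynomial_decay(step, initial_lr=2e-05, end_lr=0.0, decay_steps=11091, power=1.0):
    """Learning rate at `step` for a non-cycling polynomial decay schedule."""
    step = min(step, decay_steps)  # clamp: the schedule holds end_lr afterwards
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr

print(polynomial_decay(0))       # 2e-05 (start of training)
print(polynomial_decay(11091))   # 0.0   (fully decayed)
```

With `power=1.0` the curve is linear; a larger `power` would front-load the decay, and a smaller one would delay it.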
fe0cbd493cd11ecf1d36df832dc21e06
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 400 - mixed_precision_training: Native AMP
bc773ed7f54bdd81a6aa0cd8119c9316
cc-by-4.0
['espnet', 'audio', 'text-to-speech']
false
`kan-bayashi/vctk_gst+xvector_conformer_fastspeech2` ♻️ Imported from https://zenodo.org/record/4394608/ This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
c8e8d1fbbb7568a156de4591cbb41951
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.2414
a43517a97f79855152b4953787af4707
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 20 - eval_batch_size: 20 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1
910ba5a546a9c87ef145940b8733957f
apache-2.0
['automatic-speech-recognition', 'ru']
false
exp_w2v2t_ru_no-pretraining_s895 Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
eb637dfc0831a7376d8027a970e674d4
apache-2.0
['generated_from_trainer']
false
all-roberta-large-v1-credit_cards-5-16-5 This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3376 - Accuracy: 0.3186
58e5316c9401ddf7d6ed97f21cedcee6
apache-2.0
['pytorch', 'causal-lm']
false
Model Description GPT-J 6B is a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters. <figure> | Hyperparameter | Value | |----------------------|------------| | \\(n_{parameters}\\) | 6053381344 | | \\(n_{layers}\\) | 28&ast; | | \\(d_{model}\\) | 4096 | | \\(d_{ff}\\) | 16384 | | \\(n_{heads}\\) | 16 | | \\(d_{head}\\) | 256 | | \\(n_{ctx}\\) | 2048 | | \\(n_{vocab}\\) | 50257/50400&dagger; (same tokenizer as GPT-2/3) | | Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) | | RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py
6af80e1adf944bab671fe68bcf7a5ace
apache-2.0
['pytorch', 'causal-lm']
false
L223) | <figcaption><p><strong>&ast;</strong> Each layer consists of one feedforward block and one self-attention block.</p> <p><strong>&dagger;</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure> The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as GPT-2/GPT-3.
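As a rough back-of-the-envelope check, the parameter count can be reconstructed from these figures (weight matrices only; biases and layer norms account for the small remainder versus the reported 6053381344):

```python
# Approximate GPT-J-6B parameter count from the architecture table above.
d_model, d_ff, n_layers, n_vocab = 4096, 16384, 28, 50400

attn = 4 * d_model * d_model        # Q, K, V and output projections
ffn = 2 * d_model * d_ff            # feedforward up- and down-projections
per_layer = attn + ffn
embeddings = 2 * n_vocab * d_model  # input embedding + untied LM head

total = n_layers * per_layer + embeddings
print(total)  # ~6.05e9, within ~0.1% of the reported 6053381344
```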
1dac301bb9dc11ed90459e5c7289103c
apache-2.0
['pytorch', 'causal-lm']
false
Training procedure This model was trained for 402 billion tokens over 383,500 steps on TPU v3-256 pod. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.
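The training objective can be illustrated with a minimal sketch of next-token cross-entropy over hypothetical logits (plain Python, not the actual training code):

```python
import math

def next_token_loss(logits, target):
    """Cross-entropy loss for a single next-token prediction."""
    m = max(logits)                          # subtract max to stabilize softmax
    exps = [math.exp(x - m) for x in logits]
    log_prob = (logits[target] - m) - math.log(sum(exps))
    return -log_prob                         # minimizing this maximizes likelihood

# A confident, correct prediction gives a low loss;
# uniform logits give log(vocab_size).
print(next_token_loss([5.0, 0.0, 0.0], 0))
```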
1b545bb11d9d9a71aa0457907b810e84
apache-2.0
['pytorch', 'causal-lm']
false
Intended Use and Limitations GPT-J learns an inner representation of the English language that can be used to extract features useful for downstream tasks. The model is best at what it was pretrained for, however, which is generating text from a prompt.
0eba029679042915b6d8dd387354e626
apache-2.0
['pytorch', 'causal-lm']
false
How to use This model can be easily loaded using the `AutoModelForCausalLM` functionality:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
```
58e79271d8e311a4be6980e59402e22f
apache-2.0
['pytorch', 'causal-lm']
false
Limitations and Biases The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output. GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
93303ef122fa72515bc759c8bfb9e5fe
apache-2.0
['pytorch', 'causal-lm']
false
Evaluation results <figure> | Model | Public | Training FLOPs | LAMBADA PPL ↓ | LAMBADA Acc ↑ | Winogrande ↑ | Hellaswag ↑ | PIQA ↑ | Dataset Size (GB) | |--------------------------|-------------|----------------|--- |--- |--- |--- |--- |-------------------| | Random Chance | &check; | 0 | ~a lot | ~0% | 50% | 25% | 25% | 0 | | GPT-3 Ada&ddagger; | &cross; | ----- | 9.95 | 51.6% | 52.9% | 43.4% | 70.5% | ----- | | GPT-2 1.5B | &check; | ----- | 10.63 | 51.21% | 59.4% | 50.9% | 70.8% | 40 | | GPT-Neo 1.3B&ddagger; | &check; | 3.0e21 | 7.50 | 57.2% | 55.0% | 48.9% | 71.1% | 825 | | Megatron-2.5B&ast; | &cross; | 2.4e21 | ----- | 61.7% | ----- | ----- | ----- | 174 | | GPT-Neo 2.7B&ddagger; | &check; | 6.8e21 | 5.63 | 62.2% | 56.5% | 55.8% | 73.0% | 825 | | GPT-3 1.3B&ast;&ddagger; | &cross; | 2.4e21 | 5.44 | 63.6% | 58.7% | 54.7% | 75.1% | ~800 | | GPT-3 Babbage&ddagger; | &cross; | ----- | 5.58 | 62.4% | 59.0% | 54.5% | 75.5% | ----- | | Megatron-8.3B&ast; | &cross; | 7.8e21 | ----- | 66.5% | ----- | ----- | ----- | 174 | | GPT-3 2.7B&ast;&ddagger; | &cross; | 4.8e21 | 4.60 | 67.1% | 62.3% | 62.8% | 75.6% | ~800 | | Megatron-11B&dagger; | &check; | 1.0e22 | ----- | ----- | ----- | ----- | ----- | 161 | | **GPT-J 6B&ddagger;** | **&check;** | **1.5e22** | **3.99** | **69.7%** | **65.3%** | **66.1%** | **76.5%** | **825** | | GPT-3 6.7B&ast;&ddagger; | &cross; | 1.2e22 | 4.00 | 70.3% | 64.5% | 67.4% | 78.0% | ~800 | | GPT-3 Curie&ddagger; | &cross; | ----- | 4.00 | 69.3% | 65.6% | 68.5% | 77.9% | ----- | | GPT-3 13B&ast;&ddagger; | &cross; | 2.3e22 | 3.56 | 72.5% | 67.9% | 70.9% | 78.5% | ~800 | | GPT-3 175B&ast;&ddagger; | &cross; | 3.1e23 | 3.00 | 76.2% | 70.2% | 78.9% | 81.0% | ~800 | | GPT-3 Davinci&ddagger; | &cross; | ----- | 3.0 | 75% | 72% | 78% | 80% | ----- | <figcaption><p>Models roughly sorted by performance, or by FLOPs if not available.</p> <p><strong>&ast;</strong> Evaluation numbers reported by their respective authors. 
All other numbers are provided by running <a href="https://github.com/EleutherAI/lm-evaluation-harness/"><code>lm-evaluation-harness</code></a> either with released weights or with API access. Due to subtle implementation differences as well as different zero shot task framing, these might not be directly comparable. See <a href="https://blog.eleuther.ai/gpt3-model-sizes/">this blog post</a> for more details.</p> <p><strong>†</strong> Megatron-11B provides no comparable metrics, and several implementations using the released weights do not reproduce the generation quality and evaluations. (see <a href="https://github.com/huggingface/transformers/pull/10301">1</a> <a href="https://github.com/pytorch/fairseq/issues/2358">2</a> <a href="https://github.com/pytorch/fairseq/issues/2719">3</a>) Thus, evaluation was not attempted.</p> <p><strong>‡</strong> These models have been trained with data which contains possible test set contamination. The OpenAI GPT-3 models failed to deduplicate training data for certain test sets, while the GPT-Neo models as well as this one are trained on the Pile, which has not been deduplicated against any test sets.</p></figcaption></figure>
965951e965a03ed0e95336aa1bf873a5
apache-2.0
['pytorch', 'causal-lm']
false
BibTeX entry To cite this model:

```bibtex
@misc{gpt-j,
  author = {Wang, Ben and Komatsuzaki, Aran},
  title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}},
  howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
  year = 2021,
  month = May
}
```

To cite the codebase that trained this model:

```bibtex
@misc{mesh-transformer-jax,
  author = {Wang, Ben},
  title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}},
  howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
  year = 2021,
  month = May
}
```

If you use this model, we would love to hear about it! Reach out on [GitHub](https://github.com/kingoflolz/mesh-transformer-jax), Discord, or shoot Ben an email.
8ea0e8a9c78372692888782fa451d9ca
apache-2.0
['pytorch', 'causal-lm']
false
Acknowledgements This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha. Thanks to everyone who has helped out one way or another (listed alphabetically): - [James Bradbury](https://twitter.com/jekbradbury) for valuable assistance with debugging JAX issues. - [Stella Biderman](https://www.stellabiderman.com), [Eric Hallahan](https://twitter.com/erichallahan), [Kurumuz](https://github.com/kurumuz/), and [Finetune](https://github.com/finetuneanon/) for converting the model to be compatible with the `transformers` package. - [Leo Gao](https://twitter.com/nabla_theta) for running zero shot evaluations for the baseline models for the table. - [Laurence Golding](https://github.com/researcher2/) for adding some features to the web demo. - [Aran Komatsuzaki](https://twitter.com/arankomatsuzaki) for advice with experiment design and writing the blog posts. - [Janko Prester](https://github.com/jprester/) for creating the web demo frontend.
defd13f21fa8cc452b9274862b1e7218
apache-2.0
['automatic-speech-recognition', 'sv-SE']
false
exp_w2v2t_sv-se_vp-sv_s363 Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
23cdf87f3fa85a87c35a0f2e961a70bf
apache-2.0
['generated_from_trainer']
false
Article_250v5_NER_Model_3Epochs_UNAUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article250v5_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.3250 - Precision: 0.3979 - Recall: 0.4221 - F1: 0.4097 - Accuracy: 0.8779
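As a sanity check, the reported F1 is the harmonic mean of the precision and recall above:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Recomputed from the rounded precision/recall, this agrees with the
# reported 0.4097 up to rounding of the inputs.
print(round(f1_score(0.3979, 0.4221), 4))
```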
ce4fe14cf356307dadff696dbc40bbd2